id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2306.01460 | ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive
Advantages | This paper proposes a step toward approximate Bayesian inference in on-policy
actor-critic deep reinforcement learning. It is implemented through three
changes to the Asynchronous Advantage Actor-Critic (A3C) algorithm: (1)
applying a ReLU function to advantage estimates, (2) spectral normalization of
actor-critic weights, and (3) incorporating \emph{dropout as a Bayesian
approximation}. We prove under standard assumptions that restricting policy
updates to positive advantages optimizes for value by maximizing a lower bound
on the value function plus an additive term. We show that the additive term is
bounded proportional to the Lipschitz constant of the value function, which
offers theoretical grounding for spectral normalization of critic weights.
Finally, our application of dropout corresponds to approximate Bayesian
inference over both the actor and critic parameters, which enables
\textit{adaptive state-aware} exploration around the modes of the actor via
Thompson sampling. We demonstrate significant improvements for median and
interquartile mean metrics over A3C, PPO, SAC, and TD3 on the MuJoCo continuous
control benchmark and improvement over PPO in the challenging ProcGen
generalization benchmark. | Andrew Jesson, Chris Lu, Gunshi Gupta, Nicolas Beltran-Velez, Angelos Filos, Jakob Nicolaus Foerster, Yarin Gal | 2023-06-02T11:37:22Z | http://arxiv.org/abs/2306.01460v4 | # ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
###### Abstract
This paper introduces a novel method for enhancing the effectiveness of on-policy Deep Reinforcement Learning (DRL) algorithms. Three surprisingly simple modifications to the A3C algorithm: (1) processing advantage estimates through a ReLU function, (2) spectral normalization, and (3) dropout, serve to not only improve efficacy but also yield a "cautious" DRL algorithm. Where on-policy algorithms such as Proximal Policy Optimization (PPO) and Asynchronous Advantage Actor-Critic (A3C) do not explicitly account for cautious interaction with the environment, our method integrates caution in two critical ways: (1) by maximizing a lower bound on the value function plus a constant, thereby promoting a _conservative value estimation_, and (2) by incorporating Thompson sampling for cautious exploration. In proving that our algorithm maximizes the lower bound, we also ground Regret Matching Policy Gradients (RMPG), a discrete-action on-policy method for multi-agent reinforcement learning. Our rigorous empirical evaluations across various benchmarks demonstrate our approach's improved performance against existing on-policy algorithms. This research represents a substantial step towards efficacious and cautious DRL algorithms, which are needed to unlock applications to complex, real-world problems.
## 1 Introduction
Deep Reinforcement Learning (DRL) is a paradigm to approximate solutions to complex sequential decision-making problems in domains such as robotics (Ibarz et al., 2021), autonomous driving (Kiran et al., 2021), strategy games (Mnih et al., 2015; Silver et al., 2017; Arulkumaran et al., 2019), and human-computer interaction (Ziegler et al., 2019). In recent years, DRL algorithms have achieved state-of-the-art performance on many challenging benchmarks (Young and Tian, 2019; Lange, 2022; Todorov et al., 2012; Brockman et al., 2016). However, their success in real-world applications does not only depend on their capacity to execute tasks while simultaneously refining the equations defining their action policy. It also hinges on cautious policy execution in the face of finite observations of a world in flux to avoid catastrophic results.
On-policy algorithms, such as Proximal Policy Optimization (PPO) (Schulman et al., 2017) or Asynchronous Advantage Actor-Critic (A3C) (Mnih et al., 2016), incorporate differentiable policies that are updated based on recent interactions with the environment. Such recency bias, and their potential to actively sample informative observations, make on-policy approaches compelling candidates for applications in real-world non-stationary environments. However, neither PPO nor A3C explicitly accounts for cautious environmental interaction. In response, we propose a novel method that explicitly incorporates caution in decision-making in two significant ways: (1) by maximizing a lower-bound on the value function plus a constant to promote algorithmic decision-making under a conservative
estimate of value (Kumar et al., 2020); and (2) by integrating careful exploration around action values with higher estimated value via Thompson sampling (Thompson, 1933). Only three surprisingly simple modifications to the A3C algorithm are needed to achieve this: (1) the lower-bound on value is realized by processing advantage estimates through a ReLU function, (2) the additive constant is regularized by applying spectral normalization to promote conservative estimates of value, and (3) Thompson sampling is enabled by adopting dropout and weight normalization.
Through our thorough empirical assessments on the Gymnasium and Brax MuJoCo benchmarks for continuous control (Brockman et al., 2016; Freeman et al., 2021), we show that our approach consistently outperforms existing on-policy algorithms such as PPO and A3C. Furthermore, our method shows competitive performance to these state-of-the-art on-policy methods in environments found in the MinAtar and ClassicControl benchmarks (Lange, 2022; Young and Tian, 2019). Consequently, this paper offers a novel enhancement to boost the efficacy of on-policy DRL algorithms, underpinned by comprehensive theoretical proof and extensive empirical evidence of its effectiveness. While sufficiently cautious algorithmic interaction with the world is still a distant goal, we hope this research will catalyze the development of further efficacious and careful applications of DRL for solving complex, real-world problems.
## 2 Background
**Notation.** We consider a discounted, \(\mathrm{T}\)-horizon Markov Decision Process (MDP) defined by the tuple \((\mathcal{S},\mathcal{A},\mathrm{P},\mathrm{r},\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathrm{P}\) is the state transition probability, \(\mathrm{r}\) is the immediate reward upon transitioning from state \(\mathbf{s}\) to state \(\mathbf{s}^{\prime}\), and \(\gamma\in[0,1]\) is the discount factor. MDPs provide a framework for modeling sequential decision-making problems, where an agent interacts with an environment over discrete time steps to achieve a goal (Puterman, 2014). Following the notation of Sutton and Barto (2018), we define states at time \(\mathrm{t}\in\mathrm{T}\) by the \(d\)-dimensional, real-valued, random variable, \(\mathbf{S}_{\mathrm{t}}:\Omega\to\mathcal{S}\subseteq\mathbb{R}^{d}\), with observable instances \(\mathbf{s}_{\mathrm{t}}=\mathbf{S}_{\mathrm{t}}(\omega_{\mathrm{t}}):\forall\omega_{\mathrm{t}}\in\Omega\). We define actions by the \(m\)-dimensional random variable \(\mathbf{A}_{\mathrm{t}}:\Omega\to\mathcal{A}\), with observable instances, \(\mathbf{a}_{\mathrm{t}}=\mathbf{A}_{\mathrm{t}}(\omega_{\mathrm{t}}):\forall\omega_{\mathrm{t}}\in\Omega\). Rewards are defined by the continuous-valued random variable, \(\mathrm{R}_{\mathrm{t}}:\Omega\to\mathcal{R}\subseteq\mathbb{R}\), with observable instances, \(\mathrm{r}_{\mathrm{t}}=\mathrm{R}_{\mathrm{t}}(\omega_{\mathrm{t}}):\forall\omega_{\mathrm{t}}\in\Omega\). Let the random variable, \(\mathrm{G}_{\mathrm{t}}\coloneqq\sum_{\mathrm{k}=\mathrm{t}+1}^{\mathrm{T}}\gamma^{\mathrm{k}-1-\mathrm{t}}\mathrm{R}_{\mathrm{k}}\), denote the discounted return. We use the standard definitions for the conditional action distribution/density (policy), \(\pi(\mathbf{a}\mid\mathbf{s})\), the state value function under the policy, \(v_{\pi}(\mathbf{s})\coloneqq\mathbb{E}_{\pi}\left[\mathrm{G}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s}\right]\), and state-action value function under the policy, \(q_{\pi}(\mathbf{s},\mathbf{a})\coloneqq\mathbb{E}_{\pi}\left[\mathrm{G}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{a}\right]\).
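To make the return definition concrete, the short sketch below computes \(\mathrm{G}_{\mathrm{t}}\) for every step of a finite-horizon reward sequence. It is a minimal illustration of the formula above, not code from the paper, and the function and variable names are our own.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = sum_{k=t+1}^{T} gamma^{k-1-t} R_k for every step t.

    `rewards[t]` is interpreted as the reward received after acting at step t,
    so the backward recursion G_t = r_{t+1} + gamma * G_{t+1} applies.
    """
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: three-step episode with rewards 1, 0, 2 and gamma = 0.9
print(discounted_returns([1.0, 0.0, 2.0], gamma=0.9))  # [2.62, 1.8, 2.0]
```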
**On-policy, Actor-critic reinforcement learning.** On-policy, Actor-critic approaches to reinforcement learning are called _policy-gradient_ methods, in that they seek to optimize a policy function, \(\pi(\mathbf{a}\mid\mathbf{s},\mathbf{\theta})\), differentiable concerning parameters, \(\mathbf{\theta}\), to maximize the expected discounted return under the policy, \(v_{\pi}(\mathbf{s})\). On-policy approaches differ from off-policy approaches in that they only use recent observations from the current policy to achieve this objective. Actor-critic methods differ from other policy-gradient methods because they fit an approximate value function (critic), \(v(\mathbf{s},\mathbf{w})\), to the data collected under the policy, in addition to optimizing the policy function (actor). The critic is typically used in actor optimization but not generally for decision-making.
Deep reinforcement learning implements the actor and critic using neural network architectures, where the function parameters correspond to network weights. We denote the parameters of the actor and critic networks as \(\mathbf{\theta}\) and \(\mathbf{w}\), respectively. The output likelihood of the actor makes distributional assumptions informed by characteristics of the action space, \(\mathcal{A}\). For continuous action spaces, the likelihood is commonly an independent multivariate normal distribution with homogeneous noise variance, \(\pi(\mathbf{a}_{\mathrm{t}}\mid\mathbf{s}_{\mathrm{t}},\mathbf{\theta})\sim\mathcal{N}(\mathbf{a}\mid\mathbf{\mu}(\mathbf{s},\mathbf{\theta}),\mathrm{I}\mathbf{\sigma}^{2}(\mathbf{\theta}))\), where \(\mathbf{\sigma}^{2}(\mathbf{\theta})=(\sigma_{1}^{2},\ldots,\sigma_{m}^{2})\) is the vector of inferred action noise variances. For discrete action spaces, the likelihood is often a categorical distribution, \(\pi(\mathbf{a}_{\mathrm{t}}\mid\mathbf{s}_{\mathrm{t}},\mathbf{\theta})\sim\mathrm{Categorical}(\mathbf{a}\mid\mathbf{\mu}(\mathbf{s},\mathbf{\theta}))\). In both cases, the mean parameter of the likelihood, \(\mathbf{\mu}(\mathbf{s},\mathbf{\theta})\), is the \(m\)-dimensional, vector-valued output of a neural network architecture with parameters, \(\mathbf{\theta}\). Critic networks are commonly fit using a mean squared error objective, which corresponds to a univariate normal output likelihood with unit variance, \(p(\mathrm{g}\mid\mathbf{s},\mathbf{w})\sim\mathcal{N}(\mathrm{g}\mid v(\mathbf{s},\mathbf{w}),1)\), where the mean parameter is the approximate value function, \(v(\mathbf{s},\mathbf{w})\), and is given by the scalar-valued output of any neural network architecture with parameters, \(\mathbf{w}\).
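As an illustration of this parameterization, the following PyTorch sketch defines a Gaussian actor with a state-independent log-standard-deviation and a scalar-output critic. The layer widths, activations, and names are assumptions made for the example, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianActor(nn.Module):
    """pi(a | s, theta) = N(mu(s, theta), diag(sigma^2(theta)))."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        # Homogeneous, state-independent action-noise log-standard-deviations.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        return Normal(self.mu(state), self.log_std.exp())

class Critic(nn.Module):
    """v(s, w): scalar value estimate fit with a squared-error objective."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.v = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.v(state).squeeze(-1)
```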
The baseline on-policy, actor-critic policy gradient algorithm seeks to perform gradient ascent with respect to the "performance" function, \(J(\mathbf{\theta})\coloneqq v_{\pi}(\mathbf{s}_{0},\mathbf{\theta})\), where \(v_{\pi}(\mathbf{s}_{0},\mathbf{\theta})\) is the value function
with respect to the parameters \(\mathbf{\theta}\). By the policy gradient theorem (Sutton et al., 1999), we have: \(\nabla_{\mathbf{\theta}}J(\mathbf{\theta})=\nabla_{\mathbf{\theta}}v_{\pi}(\mathbf{s}_{0}) \propto\int_{\mathcal{S}}\rho(\mathbf{s})\int_{\mathcal{A}}q_{\pi}(\mathbf{s}, \mathbf{a})\nabla_{\mathbf{\theta}}\pi(\mathbf{a}\mid\mathbf{s},\mathbf{\theta})d \mathbf{a}d\mathbf{s}\). Sutton and Barto (2018) show that a generalization of this result includes a comparison of the state-action value function, \(q_{\pi}(\mathbf{s},\mathbf{a})\), to an arbitrary baseline that does not vary with the action, \(\mathbf{a}\). When the baseline is the state value function, \(v_{\pi}(\mathbf{s})\), we have an objective in terms of the _advantage function_(Schulman et al., 2015), \(h_{\pi}(\mathbf{s},\mathbf{a})\coloneqq q_{\pi}(\mathbf{s},\mathbf{a})-v_{ \pi}(\mathbf{s})\), namely: \(\nabla_{\mathbf{\theta}}J(\mathbf{\theta})\propto\int_{\mathcal{S}}\rho(\mathbf{s}) \int_{\mathcal{A}}h_{\pi}(\mathbf{s},\mathbf{a})\nabla_{\mathbf{\theta}}\pi( \mathbf{a}\mid\mathbf{s},\mathbf{\theta})d\mathbf{a}d\mathbf{s}\). This formulation in terms of _all actions_ can be further simplified in terms of observed actions and states as: \(\nabla_{\mathbf{\theta}}J(\mathbf{\theta})\propto\mathbb{E}_{\pi}\left[h_{\pi}( \mathbf{S}_{\mathrm{t}},\mathbf{A}_{\mathrm{t}})\nabla_{\mathbf{\theta}}\log\pi( \mathbf{A}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right]\). We use \(\mathbb{E}_{\pi}\) to denote an expectation over states \(\mathbf{S}_{\mathrm{t}}\) and actions \(\mathbf{A}_{\mathrm{t}}\) collected under the policy \(\pi(\mathbf{a}\mid\mathbf{s})\).
In general, because neither the state-action, \(q_{\pi}(\mathbf{s},\mathbf{a})\), nor the state value, \(v_{\pi}(\mathbf{s})\), functions are known, we need an estimator for the advantage function. For compactness, we will focus on the generalized advantage estimator (GAE) proposed by Schulman et al. (2015): \(h(\mathbf{s}_{\mathrm{t}},\mathbf{r}_{\mathrm{t}},\mathbf{w})=\sum_{\mathrm{k=t+1}}^{\mathrm{T}}(\gamma\lambda)^{\mathrm{k}-1-\mathrm{t}}\delta_{\mathrm{k}-1}^{\mathbf{w}}\), where \(0<\lambda\leq 1\), and \(\delta_{\mathrm{t}}^{\mathbf{w}}=\mathbf{r}_{\mathrm{t}}+\gamma v(\mathbf{s}_{\mathrm{t+1}};\mathbf{w})-v(\mathbf{s}_{\mathrm{t}};\mathbf{w})\) is the temporal difference (TD) residual of the value function with discount, \(\gamma\) (Sutton and Barto, 2018). The GAE then yields a low-variance gradient estimator for the policy function: \(\widehat{\nabla_{\mathbf{\theta}}J}(\mathbf{\theta})\coloneqq\mathbb{E}_{\pi}\left[h(\mathbf{S}_{\mathrm{t}},\mathbf{R}_{\mathrm{t}},\mathbf{w})\nabla_{\mathbf{\theta}}\log\pi(\mathbf{A}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right]\). Finally, the actor and critic networks are generally optimized by using mini-batch stochastic gradient descent (Robbins and Monro, 1951) to fit the functions induced by the network weights to a batch of data collected under the current policy, \(\mathcal{D}_{\pi}^{b}=\{\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{r}_{i}\}_{i=1}^{b}\).
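The estimator and the resulting policy-gradient loss can be sketched as follows. This is a schematic rendering of the GAE recursion and the gradient estimator above, with variable names of our own choosing, rather than the authors' implementation.

```python
import torch

def gae(rewards, values, next_value, gamma=0.99, lam=0.95):
    """Generalized advantage estimates for one rollout.

    rewards, values: 1-D tensors of length T collected under the policy;
    next_value: bootstrap value v(s_{T+1}, w), set to 0 if the episode ended.
    This simple version assumes no terminal resets inside the rollout.
    """
    T = rewards.shape[0]
    adv = torch.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        v_next = next_value if t == T - 1 else values[t + 1]
        delta = rewards[t] + gamma * v_next - values[t]   # TD residual delta_t
        last = delta + gamma * lam * last                 # discounted sum of residuals
        adv[t] = last
    return adv

def policy_gradient_loss(log_probs, advantages):
    """Monte-Carlo estimate of -J(theta); advantages are treated as constants."""
    return -(advantages.detach() * log_probs).mean()
```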
## 3 Methods
In this section, we develop our cautious, on-policy actor-critic algorithm. As a reminder, we realize this algorithm by making three simple changes to the A3C algorithm: first, we process advantage estimates through a ReLU function; second, we regularize network weights using spectral normalization; and third, we implement the actor and critic networks as Bayesian Neural Networks to enable Thompson sampling. We provide the theoretical grounding to prove that clipping the advantages during policy optimization results in optimizing a lower bound on the value function plus a constant. We show that under standard assumptions, the constant is equal to the expected, clipped difference in the state value function, \(\gamma v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}(\mathbf{s})\), over all actions, \(\mathbf{a}\), and next-states, \(\mathbf{s}^{\prime}\), under the policy given state, \(\mathbf{s}\), and that we can regularize it using spectral normalization. And finally, we detail how to enable cautious exploration via Thompson sampling by adding dropout and weight decay. The following theorem formalizes the main result of our paper.
**Theorem 3.1**.: _Let \(\mathrm{G}_{\mathrm{t}}\coloneqq\sum_{\mathrm{k}=\mathrm{t}+1}^{\mathrm{T}}\gamma^{\mathrm{k}-1-\mathrm{t}}\mathrm{R}_{\mathrm{k}}\) denote the discounted return. Let \(q_{\pi}(\mathbf{s},\mathbf{a})=\mathbb{E}_{\pi}\left[\mathrm{G}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{a}\right]\), denote the state-action value function, and \(v_{\pi}(\mathbf{s})=\mathbb{E}_{\pi}\left[\mathrm{G}_{\mathrm{t}}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s}\right]\), denote the state value function, under policy \(\pi(\mathbf{a}\mid\mathbf{s},\mathbf{\theta})\). Let \(\left(x\right)^{+}\coloneqq\max(0,x)\). Assume, without loss of generality, that rewards, \(\mathrm{R}_{\mathrm{t}}\), are non-negative. Assume that the gradient of the policy, \(\nabla\pi(\mathbf{a}\mid\mathbf{s},\mathbf{\theta})\), is a conservative vector field. Then, performing gradient ascent with respect to,_
\[\nabla_{\mathbf{\theta}}J(\mathbf{\theta})=\mathbb{E}_{\pi}\left[\left(q_{\pi}(\mathbf{ S}_{\mathrm{t}},\mathbf{A}_{\mathrm{t}})-v_{\pi}(\mathbf{S}_{\mathrm{t}}) \right)^{+}\nabla_{\mathbf{\theta}}\log\pi(\mathbf{A}_{\mathrm{t}}\mid\mathbf{S}_{ \mathrm{t}},\mathbf{\theta})\right], \tag{1}\]
_maximizes a lower-bound, \(v_{\pi}^{*}(\mathbf{s})\), on the state value function, \(v_{\pi}(\mathbf{s})\), plus a constant:_
\[v_{\pi}^{*}(\mathbf{s})\leq v_{\pi}(\mathbf{s})+C(\mathbf{s}), \tag{2}\]
_where, \(C(\mathbf{s})=\iint\left(\gamma v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}(\mathbf{s} )\right)^{+}d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{a})d\Pi(\mathbf{a}\mid\mathbf{S}_{\mathrm{t}}= \mathbf{s})\), is the expected, clipped difference in the state value function, \(\gamma v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}(\mathbf{s})\), over all actions, \(\mathbf{a}\), and next states, \(\mathbf{s}^{\prime}\), under the policy given state, \(\mathbf{s}\). Here, we use \(\int\ldots d\Pi(\mathbf{a}\mid\mathbf{s})\) to denote \(\sum_{\mathbf{a}}\ldots\pi(\mathbf{a}\mid\mathbf{s})\) for discrete action spaces and \(\int\ldots\pi(\mathbf{a}\mid\mathbf{s})d\mathbf{a}\) for continuous action spaces. Similarly, we use \(\int\ldots d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{s},\mathbf{a})\) to denote \(\sum_{\mathbf{s}^{\prime}}\ldots p(\mathbf{s}^{\prime}\mid\mathbf{s},\mathbf{a})\) for discrete state spaces and \(\int\ldots p(\mathbf{s}^{\prime}\mid\mathbf{s},\mathbf{a})d\mathbf{s}^{\prime}\) for continuous state spaces. Proof is provided in Appendix A.1._
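In code, the update of Eq. (1) differs from the standard estimator only in that a ReLU is applied to the advantage estimates before they weight the score function. A minimal sketch (the helper name is ours) follows.

```python
import torch
import torch.nn.functional as F

def vsop_actor_loss(log_probs, advantages):
    """Policy loss whose gradient matches Eq. (1).

    Transitions with non-positive advantage estimates receive zero weight, so
    only actions estimated to be better than the current value move the policy.
    """
    positive_adv = F.relu(advantages).detach()   # (q - v)^+ , treated as a constant
    return -(positive_adv * log_probs).mean()

# Typical use, assuming an actor that returns a torch distribution:
#   dist = actor(states)
#   log_probs = dist.log_prob(actions).sum(-1)
#   vsop_actor_loss(log_probs, advantages).backward()
```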
**Bounding the constant \(C(\mathbf{s})\).** Consider the value function, \(v_{\pi}(\mathbf{s})\), to be \(K\)-Lipschitz continuous, and assume that the expected value of the value function, \(v_{\pi}(\mathbf{s}^{\prime})\), over next-states, \(\mathbf{s}^{\prime}\), is equal to the value function evaluated at the current state, \(v_{\pi}(\mathbf{s})\). Then, when \(\gamma=1\), the constant is bounded proportional to the expected absolute difference between states:
\[\begin{split} C(\mathbf{s})&=\iint\Big{(}v_{\pi}( \mathbf{s}^{\prime})-v_{\pi}(\mathbf{s})\Big{)}^{+}d\mathbb{P}(\mathbf{s}^{ \prime}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{ a})d\Pi(\mathbf{a}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s})\\ &=\frac{1}{2}\iint\Big{(}v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}( \mathbf{s})+\big{|}v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}(\mathbf{s})\big{|} \Big{)}d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s}, \mathbf{A}_{\mathrm{t}}=\mathbf{a})d\Pi(\mathbf{a}\mid\mathbf{S}_{\mathrm{t}}= \mathbf{s})\\ &=\frac{1}{2}\iint|v_{\pi}(\mathbf{s}^{\prime})-v_{\pi}(\mathbf{s })\big{|}d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{S}_{\mathrm{t}}=\mathbf{s },\mathbf{A}_{\mathrm{t}}=\mathbf{a})d\Pi(\mathbf{a}\mid\mathbf{S}_{\mathrm{t }}=\mathbf{s})\\ &\leq\frac{1}{2}\iint K\big{|}\big{|}\mathbf{s}^{\prime}- \mathbf{s}\big{|}\big{|}d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{S}_{ \mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{a})d\Pi(\mathbf{a}\mid \mathbf{S}_{\mathrm{t}}=\mathbf{s}).\end{split} \tag{3}\]
This interpretation motivates using spectral normalization (Miyato et al., 2018) of the value function estimator weights, \(v(\mathbf{s},\mathbf{w})\), which regulates the Lipschitz constant, \(K\), of the estimator and can improve performance in the off-policy reinforcement learning setting (Bjorck et al., 2021; Gogianu et al., 2021). Moreover, when using the generalized advantage estimator with the same assumptions, the constant is given by: \(C(\mathbf{s})=\frac{1}{2}\iint\big{|}\gamma\lambda v_{\pi}(\mathbf{s}^{\prime })-v_{\pi}(\mathbf{s})\big{|}d\mathbb{P}(\mathbf{s}^{\prime}\mid\mathbf{S}_{ \mathrm{t}}=\mathbf{s},\mathbf{A}_{\mathrm{t}}=\mathbf{a})d\Pi(\mathbf{a}\mid \mathbf{S}_{\mathrm{t}}=\mathbf{s})\). Since \(\gamma\lambda<1\), the GAE also serves to regularize the constant.
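In practice, spectral normalization of the critic can be applied with standard library utilities; the sketch below shows one way to wrap every linear layer of a PyTorch critic and is an illustration rather than the authors' exact configuration.

```python
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def spectrally_normalize(critic: nn.Module) -> nn.Module:
    """Wrap every Linear layer so its weight matrix is rescaled to have
    spectral norm (largest singular value) at most 1, which constrains the
    Lipschitz constant K of the value estimator."""
    for name, module in critic.named_children():
        if isinstance(module, nn.Linear):
            setattr(critic, name, spectral_norm(module))
        else:
            spectrally_normalize(module)  # recurse into containers
    return critic
```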
**Cautious exploration.** We propose Bayesian inference over the actor and critic parameters to enable cautious exploration via Thompson sampling (Thompson, 1933). This involves introducing posterior distributions over the policy parameters, \(q(\boldsymbol{\Theta}\mid\mathcal{D}_{n-1})\), and value function estimator parameters, \(q(\mathbf{W}\mid\mathcal{D}_{n-1})\). Here, \(\mathcal{D}_{n-1}=\{\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{r}_{i}\}_{i=1}^{ \mathcal{T}_{n-1}}\) is data collected under the policy, \(\pi(\mathbf{a}\mid\mathbf{s},\boldsymbol{\Theta}_{n-1})\), over a set of horizons, \(\mathcal{T}_{n-1}=\mathrm{T}_{1}^{n-1}\cup\mathrm{T}_{2}^{n-1}\cup\ldots\). In general, any inference technique is permissible. In Algorithm 1, we outline the procedure for the case of approximate inference using dropout Bayesian Neural Networks (BNNs) following Gal and Ghahramani (2016). For a dropout BNN, the posterior distribution for the policy parameters is of the form \(q(\boldsymbol{\theta}\mid\widehat{\boldsymbol{\theta}},p)\), where \(\widehat{\boldsymbol{\theta}}\) is the expected value of the parameters, and \(p\) is the dropout rate. Similarly, the posterior distribution for the value function parameters is of the form \(q(\mathbf{w}\mid\widehat{\mathbf{w}},p)\), where \(\widehat{\mathbf{w}}\) is the expected value of the parameters, and \(p\) is the dropout rate. We optimize each dropout BNN by minimizing the Kullback-Leibler divergence between a prior distribution and its approximate posterior.
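With dropout layers in the actor, Thompson sampling reduces to keeping dropout active at action-selection time, so that each environment step uses one draw of parameters from the approximate posterior. The sketch below is a minimal illustration; the class name, layer sizes, and dropout rate are our assumptions.

```python
import torch
import torch.nn as nn

class DropoutActor(nn.Module):
    """Gaussian policy with dropout, so q(theta | theta_hat, p) can be sampled
    simply by running the network in train mode (dropout active)."""
    def __init__(self, state_dim, action_dim, hidden=256, p=0.02):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def act(self, state, thompson=True):
        # Train mode keeps dropout on: one forward pass corresponds to one
        # sampled set of actor parameters, i.e. Thompson sampling.
        self.train(thompson)
        with torch.no_grad():
            mean = self.body(state)
            return torch.distributions.Normal(mean, self.log_std.exp()).sample()
```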
We term this method VSOP: for Variational [b]ayes, Spectral-normalized, On-Policy reinforcement learning. Algorithm 1 details VSOP for dropout BNNs.
```
0: initial state, \(\mathbf{s}^{\prime}\), environment, \(p(\mathbf{s}^{\prime},\mathbf{r}\mid\mathbf{s},\mathbf{a})\), rollout buffer, \(\mathcal{D}\), initial actor parameters, \(\widehat{\boldsymbol{\theta}}\), initial critic parameters, \(\widehat{\mathbf{w}}\), dropout rate, \(p\), learning rate, \(\eta\), minibatch size, \(b\).
1:while true do\(\triangleright\) reset rollout buffer
2:\(\mathcal{D}\leftarrow\emptyset\)
3:while acting do\(\triangleright\) interact with the environment
4:\(\mathbf{s}\leftarrow\mathbf{s}^{\prime}\)\(\triangleright\) update current state
5:\(\boldsymbol{\theta}\sim q(\boldsymbol{\theta}\mid\widehat{\boldsymbol{\theta}},p)\)if TS else\(\boldsymbol{\theta}\leftarrow\widehat{\boldsymbol{\theta}}\)\(\triangleright\) sample actor params if Thompson sampling (TS)
6:\(\mathbf{a}\sim\pi(\mathbf{a}\mid\mathbf{s},\boldsymbol{\theta})\)\(\triangleright\) sample action from policy
7:\(\mathbf{s}^{\prime},\mathbf{r}\sim p(\mathbf{s}^{\prime},\mathbf{r}\mid\mathbf{s}, \mathbf{a})\)\(\triangleright\) sample next state and reward from environment
8:\(\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{s},\mathbf{a},\mathbf{r})\}\)\(\triangleright\) update rollout buffer
9:\(\mathbf{w}^{*}\leftarrow\widehat{\mathbf{w}}\)\(\triangleright\) freeze critic weights for advantage estimates
10:\(\beta\leftarrow(1-p)/\left(2|\mathcal{D}|\right)\)\(\triangleright\) set parameter precision
11:while fitting do\(\triangleright\) update actor and critic
12:\(\{\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{r}_{i}\}_{i=1}^{b}\sim\mathcal{D}\)\(\triangleright\) sample minibatch from rollout buffer
13:\(\widetilde{\mathbf{w}}\sim q(\mathbf{w}\mid\mathbf{w}^{*},p)\)if TS else\(\widetilde{\mathbf{w}}\leftarrow\mathbf{w}^{*}\)\(\triangleright\) sample advantage params if TS
14:\(\boldsymbol{\theta}\sim q(\boldsymbol{\theta}\mid\widehat{\boldsymbol{\theta}},p)\)\(\triangleright\) sample actor parameters
15:\(\widehat{\boldsymbol{\theta}}\leftarrow\widehat{\boldsymbol{\theta}}-\eta\frac{1}{b}\sum_{i=1}^{b}h^{+}( \mathbf{s}_{i},\mathbf{r}_{i},\widetilde{\mathbf{w}})\nabla_{\boldsymbol{ \theta}}\log\pi(\mathbf{a}_{i}\mid\mathbf{s}_{i},\boldsymbol{\theta})+2\beta \boldsymbol{\theta}\)\(\triangleright\) update actor
16:\(\mathbf{w}\sim q(\mathbf{w}\mid\widehat{\mathbf{w}},p)\)\(\triangleright\) sample critic parameters
17:\(\widehat{\mathbf{w}}\leftarrow\widehat{\mathbf{w}}-\eta\frac{1}{b}\sum_{i=1}^{b}\nabla_{\mathbf{w}}\log p(g(\mathbf{s}_{i},\mathbf{r}_{i},\widetilde{\mathbf{w}})\mid\mathbf{s}_{i},\mathbf{w})+2\beta\mathbf{w}\)\(\triangleright\) update critic
```
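To connect lines 10, 15, and 17 of Algorithm 1, the sketch below shows one "fitting" iteration in which the precision term \(\beta=(1-p)/(2|\mathcal{D}|)\) enters as an L2 weight-decay contribution to the gradient step. The use of plain SGD, the batch layout, and all names are our assumptions for illustration only.

```python
import torch

def vsop_minibatch_update(actor, critic, batch, lr=3e-4, dropout_p=0.02, buffer_size=2048):
    """One fitting iteration of Algorithm 1 (lines 12-17), sketched.

    batch: dict with tensors 'states', 'actions', 'advantages', 'returns'
    collected under the current policy. Dropout inside actor/critic (both in
    train mode) plays the role of sampling parameters from the posterior, and
    actor(states) is assumed to return a torch.distributions.Distribution.
    """
    beta = (1.0 - dropout_p) / (2.0 * buffer_size)        # line 10: parameter precision

    dist = actor(batch["states"])                          # dropout active => sampled theta
    log_probs = dist.log_prob(batch["actions"]).sum(-1)
    actor_loss = -(torch.relu(batch["advantages"]).detach() * log_probs).mean()

    values = critic(batch["states"])                       # dropout active => sampled w
    critic_loss = 0.5 * ((batch["returns"] - values) ** 2).mean()

    # The 2*beta*theta and 2*beta*w terms enter as weight decay on the step.
    for net, loss in ((actor, actor_loss), (critic, critic_loss)):
        net.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in net.parameters():
                p -= lr * (p.grad + 2.0 * beta * p)
    return actor_loss.item(), critic_loss.item()
```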
## 4 Related Works
### On-policy methods
VSOP is an on-policy RL algorithm. Table 1 compares the gradient of the performance function, \(\nabla J(\mathbf{\theta})\), for VSOP with those for relevant on-policy algorithms. We discuss each algorithm below.
**Mirror Learning.**_Trust Region Policy Optimization (TRPO)_[15] is an on-policy, actor-critic method that improves upon the baseline policy gradient method by incorporating a constraint on the maximum size of policy updates. TRPO takes small steps toward improvement and limits the step size to ensure that the new policy does not deviate significantly from the old policy. TRPO achieves this by optimizing a surrogate objective function that approximates the expected reward under the new policy while imposing a constraint on the KL divergence between the new and old policies. TRPO is effective in various high-dimensional and continuous control tasks. Proximal Policy Optimization (PPO)_[15], like TRPO, improves upon the baseline policy gradient method by constraining the maximum size of policy updates. However, instead of using a KL divergence constraint, PPO employs a clipped surrogate objective function to limit the size of policy updates. PPO simplifies the optimization procedure compared to TRPO, making it more computationally efficient and easier to implement. While TRPO and PPO constrain policy updates based on the ratio between the new and old policies, VSOP constrains policy updates according to the sign of the estimated advantage function. As such, PPO and TRPO are instances of the _mirror learning_ framework Kuba et al. (2022), whereas VSOP does not inherit the same theoretical guarantees. Lu et al. (2022) explores the Mirror Learning space by meta-learning a "drift" function. They term their immediate result Learned Policy Optimization (LPO). Through its analysis, they arrive at _Discovered Policy Optimisation (DPO)_, a novel, closed-form RL algorithm.
**Regret Matching Policy Gradient (RMPG)**[16] is inspired by an objective called regret policy gradient (RPG), which maximizes a lower-bound on the advantages: \((h(\mathbf{s},\mathbf{a}))^{+}\leq h(\mathbf{s},\mathbf{a})\). RPG directly optimizes the policy for an estimator of the advantage lower-bound, denoted as \(\nabla_{\mathbf{\theta}}J^{\text{RPG}}(\mathbf{\theta})\). RMPG, being inspired by RPG, has a different objective, \(\nabla_{\mathbf{\theta}}J^{\text{RMPG}}(\mathbf{\theta})\). In both cases, \(q(\mathbf{s},\mathbf{a},\mathbf{w})\) is a parametric estimator of the state-action value function, \(q_{\pi}(\mathbf{s},\mathbf{a})\). RMPG has demonstrated improved sample efficiency and stability in learning compared to standard policy gradient methods. VSOP is closely related to RMPG; however, we provide the missing theoretical foundations to ground RMPG (Appendix A.1), extend RMPG from the _all actions_ formulation making it more suitable for continuous control (Appendix A.2), and employ the GAE rather than the state-action value function estimator, \(q(\mathbf{s},\mathbf{a},\mathbf{w})\).
**Risk Sensitive Reinforcement Learning.** Instead of optimizing expected value, risk-sensitive RL methods optimize a measure of risk. Tamar et al. (2015) propose the risk-averse _CVaR-PG_ which seeks to minimize the Conditional Value at Risk (CVaR), \(\Phi(\theta)\coloneqq\mathds{E}_{\pi}\left[\mathrm{G_{t}}\mid\mathrm{G_{t}} \leq\nu_{\alpha}\right]\), where \(\nu_{\alpha}\) is the \(\alpha\)-quantile of the return, \(\mathrm{G_{t}}\), distribution under the policy, \(\pi(\mathbf{a}\mid\mathbf{s},\mathbf{\theta})\). Relatedly, Tang et al. (2020) have used the CVaR as a baseline function for standard policy updates. By focusing only on
\begin{table}
\begin{tabular}{l l} \hline \hline Method & \(\nabla J(\mathbf{\theta})\) \\ \hline A3C & \(\mathbb{E}_{\pi}\left[h_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})\nabla \log\pi(\mathbf{A_{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right];\) & \(h_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})=q_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})-v_{\pi}(\mathbf{S}_{\mathrm{t}})\) \\
**VSOP** & \(\mathbb{E}_{\pi}\left[h_{\pi}^{+}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}}) \nabla\log\pi(\mathbf{A_{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right];\) & \(h_{\pi}^{+}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})=\max\left(0,h_{\pi}(\mathbf{ S}_{\mathrm{t}},\mathbf{A_{t}})\right)\) \\ RMPG & \(\mathbb{E}_{\pi}\left[\int h_{\pi}^{+}(\mathbf{S}_{\mathrm{t}},\mathbf{a}) \nabla d\Pi(\mathbf{a}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right];\) & \(\rho(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}},\mathbf{\theta})=\frac{\pi(\mathbf{A_{t} }\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})}{\pi(\mathbf{A_{t}}\mid\mathbf{S}_{ \mathrm{t}},\mathbf{\theta}_{\mathrm{old}})}\) \\ TRPO & \(\mathbb{E}_{\pi}\left[\min\left(h_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}}) \nabla\rho(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}},\mathbf{\theta}),\text{clip} \Big{(}h_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})\nabla\rho(\mathbf{S}_{ \mathrm{t}},\mathbf{A_{t}},\mathbf{\theta}),1-\epsilon,1+\epsilon\Big{)}\right)\right]\) \\ DPO & \(\mathbb{E}_{\pi}\left[\nabla\begin{cases}\big{(}h_{\pi}(\rho(\mathbf{\theta})-1)-a \tanh(h_{\pi}(\rho(\mathbf{\theta})-1)/a)\big{)}^{+}&h_{\pi}(\mathbf{S}_{\mathrm{ t}},\mathbf{A_{t}})\geq 0\\ \big{(}h_{\pi}\log(\rho(\mathbf{\theta}))-b\tanh(h_{\pi}\log(\rho(\mathbf{\theta})/b) \big{)}^{+}&h_{\pi}(\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})<0\end{cases}\right]\) \\ CVaR & \(\mathbb{E}_{\pi}\left[\left(\nu_{\alpha}-\mathrm{G_{t}}\right)^{+}\nabla\log\pi( \mathbf{A_{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right];\) & \(\nu_{\alpha}\coloneqq\alpha\)-quantile of return, \(\mathrm{G_{t}}\) \\ RSPG & \(\mathbb{E}_{\pi}\left[\left(\mathrm{G_{t}}-\nu_{\alpha}\right)^{+}\nabla\log\pi( \mathbf{A_{t}}\mid\mathbf{S}_{\mathrm{t}},\mathbf{\theta})\right];\) & \(\mathrm{G_{t}}\coloneqq\sum_{k=t+1}^{T}\gamma^{k-1-t}\mathrm{R_{k}}\) \\ EPOpt & \(\mathbb{E}_{\pi}\left[\mathds{1}\big{(}\mathrm{G_{t}}\leq\nu_{\alpha}\big{)} \nabla J(\theta,\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})\right];\) & \(J(\theta,\mathbf{S}_{\mathrm{t}},\mathbf{A_{t}})\) on-policy perf. function \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of performance functions for on-policy methods
the worst-case trajectories, CVaR-PG is susceptible to "blindness to success," thus Greenberg et al. (2022) propose a Cross-entropy Soft-Risk algorithm (CeSoR) to address this. Kenton et al. (2019) and Filos et al. (2022) also propose uncertainty-aware, risk-averse methods. For model-based policy gradient methods, Rajeswaran et al. (2016) propose _Ensemble Policy Optimization (EPOpt)_, which incorporates restricting policy updates to be risk-averse based on the CVaR and uses ensembles to sample hypothesized models. In contrast to the above risk-averse methods, Petersen et al. (2019) present _Risk Seeking Policy Gradient (RSPG)_, which focuses on maximizing best-case performance by only performing gradient updates when rewards exceed a specified quantile of the reward distribution. Prashanth et al. (2022) provide a comprehensive discussion on risk-sensitive RL.
### Off-policy methods
_Self Imitation Learning (SIL)_(Oh et al., 2018) is a hybrid method that uses clipped advantage estimates to improve the performance of on-policy algorithms such as PPO and A2C by learning from its successful off-policy trajectories. By leveraging experience replay, SIL encourages the agent to imitate its high-reward actions. _Self Imitation Advantage Learning (SIAL)_(Ferret et al., 2020) extends SIL to the off-policy domain. SIAL uses the clipped advantage function to weigh the importance of different actions during self-imitation, enabling the agent to focus on actions that yield higher long-term rewards. Importantly, even though SIL and SIAL only update policies when advantage estimates are positive, they differ from VSOP in that they are off-policy algorithms that learn from successful past trajectories and optimize different objectives based on max-entropy reinforcement learning (Aghasadeghi and Bretl, 2011; Haarnoja et al., 2018).
### Thompson Sampling in Deep Reinforcement Learning
Thompson sampling has been extensively explored in conventional and Deep Q-Learning (Strens, 2000; Wang et al., 2005; Osband et al., 2016; Moerland et al., 2017; Azizzadenesheli et al., 2018) to improve exploration and sample efficiency. Clements et al. (2019) and Nikolov et al. (2018) propose similar sampling-based exploration strategies for Deep Q-Learning. Jiang et al. propose a Thompson sampling strategy based on an ensemble of quantile estimators of the state-action value distribution. In the context of _policy gradient_ methods, related Upper Confidence Bound (UCB) (Ciosek et al., 2019) and Hamiltonian Monte-Carlo (HMC) (Xu and Fekri, 2022) approaches are proposed for off-policy Soft Actor-Critic (SAC) (Haarnoja et al., 2018), and Henaff et al. proposes an elliptical episodic reward for general use. Igl et al. (2019) propose Selective Noise Injection using fixed dropout masks to sample policies and then actions, but stop short of formalizing this as Thompson sampling. Similarly for Hausknecht and Wagenet (2022). We believe our work is the first to formalize and show the benefit of Thompson sampling for on-policy actor-critic methods.
## 5 Experiments
We comprehensively evaluate VSOP against on-policy RL methods across various domains, including continuous and discrete action spaces and diverse dimensionalities in both the action and observation spaces. Furthermore, we evaluate our method using both PyTorch (Paszke et al., 2019) and JAX (Bradbury et al., 2018) frameworks. In Section 5.1, we compare VSOP to baseline implementations of PPO, A3C, and RMPG on the Gymnasium (Brockman et al., 2016) implementation of MuJoCo (Todorov et al., 2012) for continuous control (Section 5.1.1). In this setting, we further ablate the effect that positive advantages, spectral normalization, and Thompson sampling each has on performance (Section 5.1.2), investigate the relationship between Thompson sampling and asynchronous parallelization (Appendix C.1), show that spectral normalization and Thompson sampling also have non-negligible positive effects for PPO (Appendix C.2), and offer comparison to off-policy approaches like SAC (Haarnoja et al., 2018) and Twin Delayed DDPG (TD3) (Fujimoto et al., 2018) (Section 5.1.3). In Section 5.2, we exploit the fast iteration cycles offered by vectorized JAX implementations and the gymnax framework (Lange, 2022) to perform fair comparisons of VSOP, PPO, A2C, and DPO under equal hyper-parameter search budgets.
### Gymnasium MuJoCo
For this evaluation, we build off of Huang et al. (2022)'s CleanRL package which provides reproducible, user-friendly implementations of state-of-the-art reinforcement learning algorithms using PyTorch Paszke et al. (2019), Gymnasium Brockman et al. (2016); Todorov et al. (2012), and Weights & Biases (Biases, 2018). Overall, several code-level optimizations for PPO reproducibility (Engstrom et al., 2020; Andrychowicz et al., 2021) are superfluous for our method in this setting. For example, we omit advantage normalization, value loss clipping (Schulman et al., 2017), gradient clipping, and modification of the default Adam (Kingma and Ba, 2014) epsilon parameter as they either do not lead to an appreciable difference in performance or have a slightly negative effect. However, we find that orthogonal weight initialization, learning rate annealing, reward scaling/clipping, and observation normalization/clipping have non-negligible positive effects on performance Engstrom et al. (2020); Andrychowicz et al. (2021). In addition to adding dropout, weight decay regularization, and spectral normalization, we also look at model architecture modifications not present in the CleanRL implementation: layer width, number of hidden layers, layer activation, layer normalization Ba et al. (2016), and residual connections. We find that ReLU activation functions (Nair and Hinton, 2010), increasing layer width to 256, and a dropout rate of 0.01-0.04 are beneficial. We find that network depth and residual connections are benign overall. In contrast to recent findings for offline data in off-policy reinforcement learning (Ball et al., 2023), layer normalization -- whether applied to the actor, the critic, or both -- is detrimental to performance. We give full details in Appendix B.1.
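Two of the retained code-level choices, orthogonal weight initialization and online observation normalization with clipping, can be sketched generically as follows; this is not the CleanRL code, and the constants are illustrative.

```python
import numpy as np
import torch.nn as nn

def orthogonal_init(module, gain=2 ** 0.5):
    """Orthogonal weights and zero biases for every Linear layer."""
    for m in module.modules():
        if isinstance(m, nn.Linear):
            nn.init.orthogonal_(m.weight, gain=gain)
            nn.init.zeros_(m.bias)

class RunningObsNorm:
    """Normalize observations with running mean/variance, then clip."""
    def __init__(self, shape, clip=10.0, eps=1e-8):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps
        self.clip = clip

    def __call__(self, obs):
        # Welford-style running update followed by standardization.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.var += (delta * (obs - self.mean) - self.var) / self.count
        z = (obs - self.mean) / np.sqrt(self.var + 1e-8)
        return np.clip(z, -self.clip, self.clip)
```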
#### 5.1.1 Comparison to on-policy baselines.
First, we compare tuned VSOP to baseline implementations: PPO, A2C, and RMPG. We use the CleanRL (Huang et al., 2022) implementation of PPO, the StableBaselines3 (Raffin et al., 2021) hyper-parameter settings for A2C, and the VSOP optimal hyper-params for RMPG. Figure 1 summarizes these results. VSOP improves over baseline PPO in five environments, matches its performance in four environments, and is worse in just one environment, Pusher. VSOP improves over A3C in all environments but Pusher, where performance is statistically equal. Finally, VSOP improves over RMPG in all environments.
#### 5.1.2 Ablation of mechanisms.
Next, we investigate the influence of our four proposed mechanisms on the performance of VSOP. For reference, the mechanisms are positive advantages, single-action setting, spectral normalization, and Thompson sampling. Figure 2 summarizes these results, where we see that positive advantages and operating in the single-action regime impact performance on MuJoCo significantly. Spectral normalization and Thompson sampling also influence performance on MuJoCo positively, especially in high-dimensional action and observation space settings such as Humanoid, Humanoid Stand-Up,
Figure 1: Gymnasium-MuJoCo. Comparing VSOP to on-policy baseline algorithms. Here, VSOP improves over baseline PPO in 5 environments, matches its performance in 4 environments, and is worse in just 1 environment. VSOP improves over A2C in all environments but Pusher, where performance is statistically equal. Finally, VSOP improves over RMPG in all environments.
and Ant. The performance gains for spectral normalization align with results given by Bjorck et al. (2021) and Gogianu et al. (2021) for DDPG (Lillicrap et al., 2015), DRQ (Kostrikov et al., 2020), Dreamer (Hafner et al., 2019), DQN (Wang et al., 2016) and C51 (Bellemare et al., 2017).
#### 5.1.3 Closing the gap to off-policy methods
Interestingly, we see that applying spectral normalization and dropout to PPO also yields an improvement. We call this augmentation VSPPO and provide detailed analysis in Appendix C.2. In Figure 3, we compare VSOP and VSPPO to SAC and TD3. We close the performance gap significantly for environments such as Humanoid, Half-Cheetah, Ant, and Humanoid Stand-up.
### Gymnax Environments
PureJaxRL (Lu et al., 2022) uses Gymnax (Lange, 2022) and Jax (Bradbury et al., 2018) to enable vectorization, which facilitates principled hyper-parameter tuning. Using it, we explore several environments and compare VSOP, PPO, A3C, and DPO. We use Bayesian hyper-parameter optimization (Snoek et al., 2012) and give each algorithm a search budget of 100 steps. We search over hyper-parameters such as the learning rate, number of update epochs, number of mini-batches in
Figure 3: Mujoco continuous control benchmark comparison to SAC and TD3
Figure 2: Comparing the effect of VSOP mechanisms on Mujoco continuous control performance. Using the single action framework and updating the policy only on positive advantage estimates have the largest effects, followed by spectral normalization, and finally Thompson sampling. Green solid lines (VSOP) show proposed, optimized method. Yellow dashed lines (no Thomp. samp.) show VSOP without Thompson sampling. Red dash dot lines (no spect. norm.) show VSOP without spectral normalization. Blue dotted lines (RMPG) show the “all actions” approach. Purple dash dot lines (with neg. advantages) show VSOP without restricting policy updates to positive advantages.
an update epoch, the GAE \(\lambda\) parameter, the max gradient norm, and the width of the network. We give full implementation details in Appendix B.2. Table 2 shows the overall ranking of each method. VSOP is competitive with DPO and improves over PPO and A3C.
Figure 4 summarizes the results for **Classic Control**. The performance of each method is in general statistically equal, but VSOP shows a significant gain on MountainCar Continuous.
Figure 5 summarizes the results for **MinAtar** (Young and Tian, 2019; Lange, 2022). Mean episodic return and 68% CI over 20 random seeds are shown for VSOP (Blue), PPO (Orange), A3C (Green), and DPO (Red). Methods are hyper-parameter tuned using Bayesian Optimization with 100 search steps. p-values for the last episode with respect to VSOP are shown in brackets. VSOP performs well on Breakout and SpaceInvaders.
Figure 6 summarizes the results for **Brax MuJoCo** (Freeman et al., 2021; Todorov et al., 2012). We perform paired t-tests for the last episode between each method and VSOP. We threshold at a p-value of 0.1 to indicate significance. VSOP significantly outperforms A3C in all environments. VSOP significantly outperforms PPO in four of nine environments (InvertedDoublePendulum, Pusher, Reacher, and Walker2d), is statistically equivalent in two environments (Hopper and HumanoidStandUp), and is significantly less effective in three environments (Ant, HalfCheetah, and Humanoid). VSOP outperforms DPO on Ant, is statistically equivalent in four environments (HumanoidStandUp,
\begin{table}
\begin{tabular}{l|c c c|c} \hline \hline
**Method** & **Brax-MuJoCo** & **MinAtar** & **Classic Control** & **Avg. Rank** \\ \hline DPO & **1.33** & **1.75** & 1.25 & 1.44 \\ VSOP (Ours) & 1.78 & 2.50 & **1.00** & 1.76 \\ PPO & 2.00 & 2.25 & 1.25 & 1.83 \\ A3C & 4.00 & 2.25 & 1.25 & 2.50 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rank scores (lower is better) for VSOP, DPO, PPO, and A3C on Brax-MuJoCo, MinAtar, and Classic Control. Methods are ranked from 1 to 4 based on statistically significant differences (paired t-test with p-value 0.1) between mean last episode returns. Ties are given the same rank, and the proceeding score will be the last rank plus the number of additional methods.
Figure 4: Classic Control Environments (Lange, 2022). Mean episodic return and 68% CI over 20 random seeds are shown for VSOP (Blue), PPO (Orange), A3C (Green), and DPO (Red). Each method is hyper-parameter tuned using Bayesian Optimization with 100 search steps. Paired t-test p-values for last episode with respect to VSOP shown in brackets. Significant improvement is seen for VSOP compared to all other methods on MountainCar Continuous.
Figure 5: MinAtar Environments (Young and Tian, 2019). Mean episodic return and 68% CI over 20 random seeds are shown for VSOP (Blue), PPO (Orange), A3C (Green), and DPO (Red). Methods are hyper-parameter tuned using Bayesian Optimization with 100 search steps. p-values for last episode with respect to VSOP shown in brackets. VSOP performs well on Breakout and SpaceInvaders.
Pusher, Reacher, and Walker2d), but is significantly less effective in four environments (HalfCheetah, Hopper, Humanoid, and InvertedDoublePendulum). Overall, VSOP outperforms A3C and PPO and is competitive with DPO.
## 6 Conclusion
We have presented a novel approach for improving the performance of on-policy DRL algorithms by incorporating cautious interaction. Our method, realized through simple modifications to the A3C algorithm, optimizes a lower bound on value plus a constant and integrates exploration via Thompson sampling. We provide a theoretical justification for our approach by demonstrating that our algorithm optimizes this lower bound. Our empirical evaluations across several diverse benchmarks confirm our approach's improved performance compared to existing on-policy algorithms. Although achieving sufficiently cautious algorithmic interaction with the world remains a distant goal, our research constitutes a significant stride toward this objective. We trust that our work will catalyze further advancements in the field, propelling the development of more cautious and efficacious DRL applications in resolving complex, real-world problems.
## 7 Broader Impact
Algorithmic decision-making is becoming increasingly present in many areas of our life. While this has the potential for benefit, it is also known to automate and perpetuate historical patterns that
Figure 6: Brax-MuJoCo Environments (Freeman et al., 2021; Todorov et al., 2012). Mean episodic return and 68% CI over 20 random seeds are shown for VSOP (Blue), PPO (Orange), A3C (Green), and DPO (red). Each method is hyper-parameter tuned using Bayesian Optimization (Snoek et al., 2012) with a budget of 100 search steps. Paired t-test p-values for last episode with respect to VSOP shown in brackets. VSOP generally out performs PPO and A3C and is competitive with DPO.
are often unjust and discriminatory (Buolamwini and Gebru, 2018; Noble, 2018; Benjamin, 2020; Birhane, 2021). We believe that cautious interaction is a necessary feature for the type of deployed algorithmic decision-making systems the RL community envisions, but that technological solutions alone will not suffice.
## 8 Acknowledgements
AJ would like to thank Luisa Zintgraf and Panagiotis Tigas for the crash course in reinforcement learning. The authors would like to thank everyone who engaged with this Twitter thread. Specifically, we would like to thank Johan Ferret for highlighting Self-Imitation Advantage Learning, Wilka Carvalho for highlighting Self-Imitation Learning, Nathan Grinsztajn for highlighting Risk Seeking Policy Gradients, Ohad Rubin for highlighting Discovered Policy Optimization, and Marc Lanctot for the detailed discussion on Regret Matching Policy Gradients. The authors would like to thank Jannik Kossen for brainstorming the title. Finally, the authors thank Jacob Beck and all anonymous reviewers for their valuable feedback and suggestions.
|
2310.13782 | Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL
Shader Images | Knowledge distillation (KD) has been a popular and effective method for model
compression. One important assumption of KD is that the original training
dataset is always available. However, this is not always the case due to
privacy concerns and more. In recent years, "data-free" KD has emerged as a
growing research topic which focuses on the scenario of performing KD when no
data is provided. Many methods rely on a generator network to synthesize
examples for distillation (which can be difficult to train) and can frequently
produce images that are visually similar to the original dataset, which raises
questions surrounding whether privacy is completely preserved. In this work, we
propose a new approach to data-free KD that utilizes unnatural OpenGL images,
combined with large amounts of data augmentation and adversarial attacks, to
train a student network. We demonstrate that our approach achieves
state-of-the-art results for a variety of datasets/networks and is more stable
than existing generator-based data-free KD methods. Source code will be
available in the future. | Logan Frank, Jim Davis | 2023-10-20T19:28:50Z | http://arxiv.org/abs/2310.13782v1 | # Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
###### Abstract
Knowledge distillation (KD) has been a popular and effective method for model compression. One important assumption of KD is that the original training dataset is always available. However, this is not always the case due to privacy concerns and more. In recent years, "data-free" KD has emerged as a growing research topic which focuses on the scenario of performing KD when no data is provided. Many methods rely on a generator network to synthesize examples for distillation (which can be difficult to train) and can frequently produce images that are visually similar to the original dataset, which raises questions surrounding whether privacy is completely preserved. In this work, we propose a new approach to data-free KD that utilizes unnatural OpenGL images, combined with large amounts of data augmentation and adversarial attacks, to train a student network. We demonstrate that our approach achieves state-of-the-art results for a variety of datasets/networks and is more stable than existing generator-based data-free KD methods. Source code will be available in the future.
## 1 Introduction
Neural networks have become a dominant force in machine learning (ML) [16, 25, 58]. Enabled by modern compute hardware, the size of the largest available networks has continued to increase with time [14, 32, 33]. Coinciding with the desire to increase model capacity, there has been a growing interest in deploying small, but well-performing, networks on edge devices [51, 59]. Utilizing neural networks on such devices is often difficult as there are typically strict resource constraints such as memory or power consumption. Unfortunately, models frequently have to sacrifice performance in order to satisfy these imposed limitations. This often points practitioners towards considering existing methods that enable a smaller network to perform as well as a larger network all while retaining its lean characteristic.
Paradigms that address the aforementioned goal are called model compression techniques, which as the name implies, aim to compress a large and complex neural network that performs well on some downstream task (classification, object detection, etc.) into one with a more compact and efficient form. One such method is _knowledge distillation_ (KD) [27], where the goal is to transfer the information stored inside a cumbersome "teacher" network to a more compact "student" network (which can be a completely different architecture). This is accomplished by utilizing the soft-target outputs (or internal features) of the teacher network to guide the student towards mimicking the output responses of the teacher when presented with similar inputs. It has even been shown that performance gains can be achieved by distilling between a teacher and student that are the exact same network architecture (_self-distillation_) [19]. Other works have built upon the standard KD approach by applying adversarial attacks to the original data for exploiting decision boundary information in the teacher model, which resulted in improved student accuracy [26, 55].
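For reference, the soft-target objective of [27] is usually written as a temperature-scaled KL divergence between teacher and student outputs; the sketch below shows this standard formulation in PyTorch (the temperature value is an arbitrary example).

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation loss of Hinton et al. [27].

    Both logit tensors have shape (batch, num_classes). The T^2 factor keeps
    gradient magnitudes comparable across temperatures.
    """
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)
```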
While KD has been a remarkably popular mechanism for creating lightweight models, one particular drawback that it (along with other model compression techniques [21, 29] and KD derivatives [26, 55]) suffers from is that standard KD relies on the strong assumption that the original training data (or similar distribution data) will be available at distillation. However, this is not often the case as privacy concerns continue to grow regarding the use and transfer
Figure 1: Example TwiGL OpenGL shader images.
of data. Moreover, the data could be kept as proprietary in-house information by a company, or, simply put, the dataset may be too large to transfer or store in a reasonable manner. To circumvent this drawback, many approaches have been proposed in the realm of _data-free knowledge distillation_ (DFKD), which utilizes the teacher as a means to generate synthetic examples to orchestrate the distillation process. These methods typically involve generating examples by either sampling from distributions modeled off prior information contained inherently in the teacher (_e.g._, batch norm [28] statistics) [40, 60] or an additional generator network [22] trained concurrently with distillation [4, 9, 11]. Incidentally, the synthetic examples generated by these approaches often closely resemble or give insight into the original data [64, 65]. This begs the question as to whether or not these DFKD approaches are truly privacy-preserving. Furthermore, the potential inclusion of a generator network can complicate the KD process as there is a completely different set of hyperparameters to be managed as well as the potential issues of mode collapse or non-convergence [48].
In this work, we present a novel approach to DFKD that differs from existing methods in that it does not utilize an additional generator network or attempt to extract naturalistic synthetic examples from the teacher. Instead, we propose to 1) construct a dataset of unnatural synthetic images using OpenGL shaders, 2) adversarially perturb them to identify the decision boundaries in the teacher model, and 3) transfer this knowledge to the student using standard KD [27]. Experiments show significant improvements over existing DFKD approaches for multiple datasets and network architectures, all while being completely privacy-preserving with respect to the original dataset. We additionally show qualitatively that our distilled models can better anonymize the original data embedded in the teacher. Our contributions are summarized as follows:
1. A new framework for data-free knowledge distillation that exploits synthetic imagery and adversarial perturbations rather than utilizing a generator network.
2. The proposed approach utilizes the standard knowledge distillation regime and can easily be incorporated into more advanced distillation techniques.
3. Our method is completely privacy-preserving to the original dataset and furthermore enables a path for anonymizing the data embedded in the teacher model.
We begin with a review of related work in Sect. 2. The components of our proposed DFKD approach are described in Sect. 3. Lastly, extensive experiments demonstrating our method are presented in Sect. 4.
## 2 Related Work
In recent years, many works have been proposed in the areas of KD, DFKD, ML privacy, and adversarial examples.
**Knowledge Distillation.** The transferring of knowledge from a large network to a smaller network was introduced in [5] and was further refined and coined as "knowledge distillation" in [27]. The general KD framework outlined in [27] trains a student network to match the temperature-scaled [23] soft outputs from a larger teacher network using entropy-based loss functions. Since then, several works have investigated what properties influence the success of KD [3, 10, 57] and others have proposed structural improvements to the seminal approach [45, 49, 53, 54]. Notably, [3] argued that KD can be viewed as "function matching" and showed that applying mixup [66] to the input distillation images results in improved student performance. They also observed that some knowledge can be transferred to a student network using out-of-domain data, albeit at a significant performance loss compared to employing in-domain data. Furthermore, KD has been coupled with other model compression techniques [39] and applied to other areas such as semi-supervised learning [62], multi-exit architectures [42], and more. Rather than distilling from a large cumbersome network to a smaller network, self-distillation repeatedly transfers knowledge to a student that is the same architecture as the teacher [19].
**Data-Free Knowledge Distillation.** In standard KD, it is assumed that the original training dataset is available. However this is not always the case, which has motivated a series of data-free KD approaches that attempt to transfer knowledge when no data is available. This line of work can be separated into two categories based on whether they utilize a generator network [4, 9, 11, 18, 65] or inherent teacher network statistics [60, 64, 40] to synthesize examples that may be beneficial for KD. The first truly data-free approach was proposed in [40], where they modelled Dirichlet distributions at the output of the teacher and synthesized examples to match these distributions using backpropagation. In [9], the teacher is treated as a fixed discriminator while a generator is trained to output images that maximize the one-hot cross entropy loss for the teacher. A similar concept is proposed in [11] with an added constraint that minimizes the divergence between the generated example's features and the teacher's learned batch norm layer [28] statistics.
Later, we will compare to the relevant approaches of Contrastive Model Inversion (CMI) [18] and Pseudo Replay Enhanced DFKD (PRE) [4]. Both CMI and PRE rely on a generator network to synthesize examples for transferring knowledge from teacher to student. More specifically, CMI utilizes the framework presented in [11], which consists of three different loss components, with accompanied hyperparameter weight coefficients, that guide the generator's outputs. They introduced a novel contrastive loss component in addition to the aforementioned losses that forces the generator to output new examples that differ from previously synthesized examples that are stored in a memory bank. As for PRE, they utilize the approach outlined in [9],
consisting of four loss terms (two of which are similar to CMI), also associated with multiple hyperparameter weight coefficients, used to train the generator network. Rather than using a memory bank, PRE utilized an additional variational autoencoder [31] to remember previously synthesized examples created by the novel-view generator network.
Our work differs significantly from previous methods in that we neither utilize an additional generator network nor attempt to synthesize naturalistic examples using the teacher. We do leverage the teacher to obtain information about its decision boundaries; however, the samples we obtain remain unnatural in appearance.
**ML Privacy.** In recent years, data privacy and the inherent privacy of ML models have become major concerns. It has been shown that private training examples can be extracted from MLP networks [24], large language models [7], and diffusion models [6]. Although DFKD is motivated by privacy issues surrounding the original dataset, many approaches can generate examples that are visually similar or have artifacts similar to examples from the original data [18, 64, 65, 9, 11]. This raises an argument as to whether privacy is completely preserved in these methods.
**Adversarial Examples.** Another major concern for neural networks is adversarial examples [52]. These are examples that have been intentionally perturbed to appear unchanged, but have been manipulated by an adversarial attack to be classified by a model as something different (_e.g_., an image of a "dog" that was perturbed and classified as "guacamole"). Numerous adversarial attacks have been proposed with varying benefits and capabilities [8, 35, 52, 38]. To combat these attacks, several defenses [41, 61, 44] and detection [1, 36] methods have been introduced over the years. Rather than attempting to be robust against or detect such inputs, adversarial examples were shown to provide accuracy improvements for standard KD in [26, 55]. These approaches used adversarial attacks on _real_ data to create slightly modified examples that helped identify the decision boundaries in the teacher network. We employ adversarial attacks for a similar goal, however we utilize pairs of adversarial examples to better outline the decision boundary and also include an additional stronger/deeper attack. Furthermore, only standard KD with the original _in-domain_ training dataset is considered in [55, 26], whereas we utilize synthetic _out-of-domain_ examples in a different KD regime where access to the original data is prohibited.
## 3 Method
In this section, we describe the main components of our approach: 1) the creation of a synthetic dataset, 2) the role of data augmentation on that dataset, 3) how to utilize adversarial perturbations on the dataset to identify the decision boundaries of the teacher, and lastly, 4) the method for transferring knowledge from teacher to student.
### OpenGL Shader Image Dataset
Our focus for this work is to transfer knowledge from a pretrained network to a freshly initialized network _without_ accessing the original dataset. We instead want to create a new dataset with the added constraints that there is _no_ additional generator network used for synthesizing these images nor any other mechanism for attempting to recover "desirable" images from the teacher. Therefore, we leverage procedural image programs (using OpenGL [50]) for rendering synthetic images to construct the dataset we will use for KD. Utilizing the approach of [2], we first synthesize several images for each of the available 1089 TwiGL shaders [56]. Examples of these shader images are shown in Fig. 1. As some of these shaders produced images that were either constant (_i.e_., containing all one color, such as all black or all white) or simple (_i.e_., containing only two colors or few colored pixels), we remove such images when filtering the rendered shader outputs in order to create an initial set of synthetic images with as much diversity as possible. With this scheme, we can synthesize a near-infinite amount of unnatural, _out-of-domain_ images to construct a dataset that can be leveraged for KD, as will be shown.
Given a teacher network \(\mathcal{F}_{t}\) pretrained on some dataset with \(\mathcal{C}=\{c_{1},...,c_{R}\}\) classes and a filtered synthetic dataset \(\mathcal{D}_{S}\), we pass every example in \(\mathcal{D}_{S}\) through \(\mathcal{F}_{t}\) to obtain an initial teacher prediction and aggregate all predictions. To form the final synthetic dataset \(\mathcal{D}_{K}\) that will be used for KD, we randomly select examples from \(\mathcal{D}_{S}\) based on their teacher predictions. For each class \(c_{i}\) in the teacher's dataset, we uniformly sample examples from \(\mathcal{C}_{-i}\) (the set of classes \(\mathcal{C}\) without \(c_{i}\)) to meet a desired number of synthetic images \(N_{i}\) per \(c_{i}\) where
\[\mathcal{C}_{-i}=\{\ c_{j}\ \ \forall\ \ j\in\mathcal{C}\ \ \ s.t.\ \ \ j\neq i\ \} \tag{1}\]
and the set of examples \(X_{j}\) assigned to each \(c_{j}\in\mathcal{C}_{-i}\) are
\[X_{j}=\{\ x_{k}\ \ \forall\ \ x_{k}\in\mathcal{D}_{S}\ \ \ s.t.\ \ \ \mathcal{F}_{t}(x_{k})=c_{j}\ \} \tag{2}\]
In other words, we are selecting examples for \(c_{i}\) that are predicted as any other label besides \(c_{i}\). If uniformly sampling across \(\mathcal{C}_{-i}\) does not meet the specified number of examples per class (_i.e_., \(N_{i}\) is not evenly divisible by \(R-1\)), then the remaining examples are randomly sampled without replacement until \(N_{i}\) examples are obtained. For all images collected for class \(c_{i}\), we assign them an associated target label \(t=c_{i}\). Note that we do allow repeats between classes (_i.e_., a specific image can be assigned to both \(c_{1}\) and \(c_{2}\)).
Collecting \(N_{i}\) shader images per class for the specific pretrained teacher network creates the base synthetic distillation dataset \(\mathcal{D}_{K}=\{(x_{1},t_{1}),...,(x_{N},t_{N})\}\) where
\[\mathcal{F}_{t}(x_{j})\neq t_{j}\ \ \forall\ \ (x_{j},t_{j})\in\mathcal{D}_{K} \tag{3}\]
Later, we will discuss why it is important that all examples not be classified as their associated target label by default, and further describe how \(\mathcal{D}_{K}\) is used for distilling knowledge from the associated pretrained teacher network to a new, freshly initialized student network.
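To make the selection procedure concrete, the sketch below shows one way \(\mathcal{D}_{K}\) could be assembled from teacher predictions in PyTorch. This is only an illustration under the assumption that the filtered shader images fit in a single tensor batch; the function and variable names (`build_distillation_set`, `shader_images`, `n_per_class`) are ours and not taken from the original implementation.

```python
import random
from collections import defaultdict

import torch


@torch.no_grad()
def build_distillation_set(teacher, shader_images, num_classes, n_per_class):
    """Assign to each class c_i only images the teacher predicts as some *other* class."""
    teacher.eval()
    preds = teacher(shader_images).argmax(dim=1).tolist()  # initial teacher predictions

    # Group image indices by the teacher's predicted label.
    by_pred = defaultdict(list)
    for idx, p in enumerate(preds):
        by_pred[p].append(idx)

    dataset = []  # list of (image, target) pairs forming D_K
    for c_i in range(num_classes):
        others = [c_j for c_j in range(num_classes) if c_j != c_i]
        chosen = []
        # Uniformly sample across the other classes first ...
        per_other = n_per_class // (num_classes - 1)
        for c_j in others:
            pool = by_pred[c_j]
            chosen += random.sample(pool, min(per_other, len(pool)))
        # ... then top up at random (without replacement) until N_i examples are reached.
        already = set(chosen)
        leftovers = [i for c_j in others for i in by_pred[c_j] if i not in already]
        random.shuffle(leftovers)
        chosen += leftovers[: n_per_class - len(chosen)]
        dataset += [(shader_images[i], c_i) for i in chosen]
    return dataset
```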
### Data Augmentation
As mentioned, one of the benefits of our method for creating a synthetic dataset is the ability to render an unbounded number of images per shader code, enabling near-infinite dataset sizes if desired. However, storing hundreds of thousands to potentially millions of images quickly becomes impractical. A clear solution for increasing dataset diversity without an over-inflated dataset is data augmentation. This allows us to artificially create more examples during distillation; furthermore, augmented examples may let us explore regions of the teacher's feature space that the original data could not reach on its own.
In ordinary fully-supervised training, data augmentations should be "label-preserving" [20]. However, we have no such constraint, as there is no notion of a "label" (or any other semantic meaning) for our OpenGL shader images. Thus, we can employ large amounts of data augmentation that would normally not be considered. Later, we will describe our complete data augmentation regime for KD training and additionally show that data augmentation provides a strong benefit to our approach that would likely not be obtained simply by rendering additional examples.
### Decision Boundary Exploitation
To transfer the most amount of information possible to the student, we utilize targeted adversarial attacks on our synthetic examples to identify the decision boundaries in the teacher network. As will be described below, our attack is based on the Basic Iterative Method (BIM) [35].
Beginning with a synthetic example \(x_{j}\) having target label \(t_{j}\) where \(\mathcal{F}_{t}(x_{j})\neq t_{j}\), the goal of our adversarial attack is to perturb the image "from the outside in", starting at an arbitrary label and ending at the desired label \(t_{j}\). This allows us to identify the decision boundary between \(t_{j}\) and all other classes in the teacher \(\mathcal{F}_{t}\). Thus, it is important that the augmented synthetic example be initially classified by the teacher as any label besides \(t_{j}\). Generally speaking, our attack perturbs an example until it has crossed into the decision space of \(t_{j}\) and its maximum (argmax) softmax score is below a specified threshold (to ensure the example remains somewhat close to the boundary). In addition to the final adversarial image obtained from a successful attack, we also take the example from the iteration just before the final image crossed the boundary. This creates a pair of examples consisting of the "post"-success example (the final image) and the "pre"-success example (the second-to-last image). This pre-success image is also required to have a maximum softmax score below the specified threshold, but to be classified as anything besides \(t_{j}\).
After conducting our adversarial attack on a batch of synthetic images, we filter out (remove) examples for which either of the following holds: 1) the image is not classified as its assigned target at the end of the adversarial attack, or 2) the final (post-success) image or the pre-success image fails to meet the softmax threshold. This ensures that the examples we use to guide distillation are beneficial in identifying some meaningful part of the teacher's feature space, such as the decision boundary.
To aid the adversarial attack process, we propose adding a Bold Driver heuristic to the attack step size, making it adaptive and increasing its ability to create adversarial examples that meet our conditions. Thus, if after a particular attack iteration the example has not reached the desired target label, the step size is increased. If the example has crossed the decision boundary to the desired label but the argmax softmax value is above the specified threshold, the example is reverted to its previous iteration and the step size is decreased. Finally, if an example has met the desired final conditions (correct classification label and argmax softmax below the threshold), but its pre-success counterpart does not meet the softmax threshold condition, the image is reverted to the previous iteration and the step size is decreased (to move the pre-success example closer to the decision boundary).
Beyond identifying the decision boundary in the teacher model, we find that a second attack, which perturbs the pair of boundary examples "deeper" into the decision space of their respective classes, leads to further performance gains. This attack simply clones the boundary examples and performs a targeted BIM attack, where the target labels are the examples' teacher predictions. We find that a single attack step with as little as \(\epsilon=1\) is sufficient, with no noticeable benefit from employing a larger \(\epsilon\) or more iterations. The full adversarial perturbation process is shown in Fig. 2 for an arbitrary class \(c\).
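A simplified sketch of the two attacks is given below, assuming a PyTorch classifier and images on a 0–255 pixel scale so that \(\epsilon\) and the step size \(\alpha\) match the values quoted in Sect. 4. The adaptive Bold Driver step-size control and the batch filtering of Sect. 3.3 are omitted for brevity, and all names (`border_attack`, `deeper_attack`) are illustrative rather than taken from the original code.

```python
import torch
import torch.nn.functional as F


def border_attack(teacher, x, target, eps=10.0, alpha=1.0, max_iters=12, thresh=0.95):
    """Targeted L_inf BIM toward `target`, tracking the iterates just before and just
    after the decision boundary is crossed (the "pre"/"post"-success pair)."""
    x0 = x.detach()
    x_adv = x0.clone()
    pre = x0.clone()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(teacher(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Targeted attack: step against the gradient of the target-class loss.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x0 + (x_adv - x0).clamp(-eps, eps)  # stay inside the L_inf ball
            probs = F.softmax(teacher(x_adv), dim=1)
            conf, pred = probs.max(dim=1)
            done = (pred == target) & (conf < thresh)   # crossed, but near the boundary
            if done.all():
                break
            # Keep the latest iterate that is still *not* classified as the target.
            keep = (pred != target).view(-1, *([1] * (x.dim() - 1)))
            pre = torch.where(keep, x_adv, pre)
    return pre.detach(), x_adv.detach()


def deeper_attack(teacher, x, eps=1.0):
    """Single-step targeted attack pushing boundary examples deeper into the decision
    region of their current teacher prediction."""
    x = x.detach().requires_grad_(True)
    logits = teacher(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    grad, = torch.autograd.grad(loss, x)
    return (x - eps * grad.sign()).detach()
```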
With a curated synthetic dataset ready to be used for knowledge distillation on a specific teacher network, and a method for altering the data to identify the decision boundaries in the teacher, we will next discuss our approach for distilling knowledge from the teacher to the student.
Figure 2: Decision boundary exploitation adversarial attack in the teacher feature space for an arbitrary class \(c\). The four \(\times\) marker types (distinguished by color) represent the original synthetic examples, “post”-success examples, “pre”-success examples, and “deeper” examples, respectively. Best viewed in color.
### Knowledge Distillation
In this section, we describe our method for transferring knowledge from teacher to student, which is based upon the standard approach for KD [27]. Given a pretrained teacher network \(\mathcal{F}_{t}\) and a randomly initialized student network \(\mathcal{F}_{s}\), an example is first augmented then perturbed using \(\mathcal{F}_{t}\). After completing the adversarial attack process, the newly perturbed example is passed forward through both networks to produce the teacher and student softmax distributions \(p_{t}\) and \(p_{s}\), respectively. The knowledge distillation loss is computed on these softmax scores as
\[\mathcal{L}(p_{t}\mid p_{s})=\sum_{i\in\mathcal{C}}\big[\,p_{t(i)}\log p_{t(i)}-p_{t(i)}\log p_{s(i)}\,\big] \tag{4}\]
which is simply the KL-divergence. Like [27], we also include a temperature parameter \(\tau\) to adjust the entropy of the output softmax distributions from the teacher and student networks before they are used to compute the loss (_i.e_., \(p_{t}\propto\exp(\frac{\log p_{t}}{\tau})\) and \(p_{s}\propto\exp(\frac{\log p_{s}}{\tau})\)).
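In PyTorch-style code, the loss above can be written as follows. This is only a sketch: the function name is ours, and whether the original implementation additionally rescales gradients by \(\tau^{2}\) (as is common following [27]) is not stated in the text.

```python
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, tau=20.0):
    """Eq. (4): KL divergence between temperature-scaled teacher and student softmax outputs."""
    p_t = F.softmax(teacher_logits / tau, dim=1)
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    # "batchmean" sums over classes and averages over the batch, matching the sum in Eq. (4).
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```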
We find that by combining our teacher-specific synthetic dataset with heavy data augmentation and our proposed adversarial attacks, knowledge can be sufficiently transferred from a teacher to a student using the standard KD approach.
## 4 Experiments
We first conducted experiments in a self-distillation setting, followed by a typical large teacher to small student KD scenario. Next, we compared to other relevant DFKD approaches, and then investigated the various components of our approach via ablation and sensitivity studies. Lastly, we conducted a qualitative privacy study on various models.
**Datasets and Networks.** We employed three established image classification datasets for our evaluation: CIFAR10 (C10) [34], CIFAR100 (C100) [34], and Tiny ImageNet (Tiny ImgNet) [15], which contain 10, 100, and 200 classes, respectively. For all datasets, we randomly sampled 10% of the training examples class-wise for validation. The remaining training examples were used to train the teacher networks, with the validation set used to select the best model.
A variety of network architectures are considered for each of the datasets, consisting of ResNets (RN) [25], Wide ResNets (WRN) [46], ResNeXts (RX) [63], MobileNetV2 (MNv2) [47], and ShuffleNetV2 (SNv2) [37]. Specific variants for each dataset will be discussed in the appropriate sections below. We trained each selected network and dataset combination using cross entropy to establish a baseline for each model and report those results in the tables.
**Baseline Training Details.** To train our teachers and reference baseline models, we used SGD with momentum (0.9) and weight decay (1e-4) and a one-hot cross-entropy loss. All networks were trained for 400 epochs with a batch size of 256 and a half-period cosine learning rate scheduler, beginning with an initial value of 0.1. Data augmentation consisted of RandAugment [12] (\(n=2,m=14\)), random horizontal flipping, and random cropping with padding. The epoch yielding the best validation accuracy was used to select the final model.
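As an illustration, the baseline recipe above corresponds to roughly the following optimizer and scheduler setup, assuming the scheduler is stepped once per epoch; the function name is ours and the exact details of the original training script may differ.

```python
import torch


def make_optimizer_and_scheduler(model, epochs=400, lr=0.1):
    """SGD (momentum 0.9, weight decay 1e-4) with a half-period cosine decay from lr to 0."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler
```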
**Distillation Training Details.** For our DFKD experiments, we similarly employed SGD with momentum (0.9) and weight decay (1e-4) for our student networks, and utilized the standard KD loss (\(\tau=20\)) as defined previously [27]. Distillation was performed for 400 epochs for all datasets using a half-period cosine learning rate scheduler with an initial value of 0.1, and batch sizes of 128 for CIFAR10/100 and 64 for Tiny ImageNet. Our synthetic datasets consisted of 25K OpenGL shader images for each of the datasets. We additionally experimented with increasing the number of synthetic examples used on CIFAR10 in our sensitivity studies. Our standard data augmentation regime for KD consisted of RandAugment (\(n=4,m=14\)), random elastic transform, random inversion, random horizontal flipping, and random cropping with padding. We also included mixup [66] with \(\alpha\sim\mathcal{U}(0,1)\), similar to [3]. As previously stated, we included more augmentation for KD to add a large amount of data diversity into the process.
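A possible torchvision realization of this augmentation regime is sketched below for 32×32 inputs (it requires a recent torchvision with `RandAugment` and `ElasticTransform`). The exact transform implementations and parameters in the original code may differ; note that mixup is applied to the inputs only, since the teacher provides soft targets on the mixed images.

```python
import numpy as np
import torch
from torchvision import transforms as T

# Distillation-time augmentation: RandAugment(n=4, m=14), elastic transform, random
# inversion, horizontal flip, and random crop with padding (sized here for CIFAR).
kd_augment = T.Compose([
    T.RandAugment(num_ops=4, magnitude=14),
    T.ElasticTransform(),
    T.RandomInvert(p=0.5),
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
])


def mixup_inputs(x):
    """Mix images within a batch; the mixing strength alpha is itself drawn from U(0,1)."""
    alpha = float(np.random.uniform(1e-3, 1.0))
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm]
```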
The last major component of our approach is the adversarial attacks. The first ("border") attack for identifying decision boundaries in the teacher network is a targeted \(L_{\infty}\) BIM attack with \(\epsilon=10\), an initial step size of \(\alpha=1\), and a maximum of 12 iterations. The softmax threshold for this first attack is \(0.95\), with other values considered later in our sensitivity studies. After identifying the decision boundary and removing examples that did not meet the filtering criteria (Sect. 3.3), the second ("deeper") adversarial attack is then applied, which is a targeted \(L_{\infty}\) BIM attack with \(\epsilon=\alpha=1\) for a single step (no softmax threshold).
**Evaluation.** Three separate runs were performed for each distillation experiment using sequential seed values of 1, 2, and 3, which were made more complex using MD5 as suggested in [13, 30, 43]. The mean and standard deviation of the three trials are reported. The student model from the last epoch of distillation was selected for evaluation as validation checking _cannot_ be performed in this _data-free_ context.
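The exact seed-hashing convention of [13, 30, 43] is not spelled out here; one plausible reading, shown purely for illustration, is to hash the small sequential seed with MD5 and reduce the digest to a 32-bit integer.

```python
import hashlib


def md5_seed(seed: int) -> int:
    """Turn a small sequential seed (1, 2, 3, ...) into a well-spread 32-bit RNG seed."""
    digest = hashlib.md5(str(seed).encode("utf-8")).hexdigest()
    return int(digest, 16) % (2 ** 32)


# e.g. torch.manual_seed(md5_seed(1)); np.random.seed(md5_seed(1)); random.seed(md5_seed(1))
```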
### Results
The experimental results are shown in Table 1 for self-distillation, Table 2 for distilling from large teacher to smaller student, Table 3 for comparisons to related approaches, and Table 4 for ablation and sensitivity studies. Our qualitative privacy study is presented in Sect. 4.2.
**Self-Distillation.** To evaluate how well a network can transfer its knowledge to another network that is the same architecture, we chose one smaller network and one larger network for each dataset. This consisted of MNv2 and
WRN22x8 for CIFAR10, RN18 and RX29-64x4 for CIFAR100, and RN18 and RX29-64x8 for Tiny ImageNet. Results of this experiment are shown in Table 1. We see that for all datasets and network architectures, knowledge can be transferred from a pretrained network to a freshly initialized one using our approach, albeit at a small performance loss (which is typical for DFKD). Our method still achieves competitive performance levels, typically falling within 2% of the network's baseline accuracy. The accuracy degradation seen in our distilled students can be lessened if we include more synthetic examples or train for more epochs, as will be shown later. Furthermore, we will also revisit these self-distillation models later in our privacy studies.
**Distilling Down.** In most cases, KD is performed between a large, pretrained teacher model and a much smaller, freshly initialized student. Here, we employed teacher\(\rightarrow\)student pairs of RX29-64x4\(\rightarrow\)MNv2 and WRN22x8\(\rightarrow\)RN18 for CIFAR10, RN50\(\rightarrow\)MNv2 and RX29-64x4\(\rightarrow\)RN18 for CIFAR100, and WRN28x10\(\rightarrow\)SNv2 and RX29-64x8\(\rightarrow\)RN18 for Tiny ImageNet. We additionally trained a teacher RX50-32x8 model with mixup (\(\alpha=0.2\)) and distilled it to a RN18 for each of the datasets to investigate the behavior of our approach when an even stronger teacher is employed. In the future, we plan to include one teacher\(\rightarrow\)student pair for the standard ImageNet dataset; this experiment would use BiT-M-ResNet 152x2 [33] as a teacher (with pre-trained weights taken from [3]) and RN50 as a student, similar to [3], to demonstrate how our approach can scale up to larger datasets, with the pair trained for 600 epochs on 50K synthetic OpenGL images. Results for the three current datasets are reported in Table 2.
Knowledge can still be transferred from the large teacher to the small student using our proposed method. However, we observe a larger gap between the baseline cross entropy student and the distilled student compared to the self-distillation results. It is likely that, because of either the change in network architecture or the complexity of the larger teacher models, more synthetic examples, even stronger data augmentation, or more distillation epochs are needed to narrow this gap.
**Comparisons.** Next, we compared to the existing DFKD methods of Contrastive Model Inversion (CMI) [18] and Pseudo Replay Enhanced DFKD (PRE) [4]. We include the reported RN34\(\rightarrow\)RN18 results of CMI and PRE from [4], which used their own teacher models and optimal hyperparameter settings (denoted as CMI* and PRE* in the table). However, for a more direct comparison, we also ran the code for CMI/PRE with the same hyperparameter settings and report those scores for our own teacher\(\rightarrow\)student pairs as a dual-comparison (denoted as CMI and PRE in the table). When implementing these approaches, we used _their code_ available on GitHub and the hyperparameters they specify for these datasets. Moreover, the authors only provide one set of hyperparameter values which we used for all experiments. We emphasize that we _do not alter the code in any form_ besides simply replacing their teachers with ours and removing validation-based checkpointing provided in their code (as this is _data-free_ KD where there is no ground truth imagery). In the case of Tiny ImageNet, we did need to decrease the batch size due to memory, however we proportionally increased the number of iterations for each epoch to account for this. Besides adjusting the batch size in this one situation, all hyperparameter settings are _as specified_ in the respective works.
Comparisons of CMI, PRE, and our method for CIFAR10, CIFAR100, and Tiny ImageNet are shown in Table 3, with the best scores emphasized in **bold**. We examined scenarios of self-distillation from teacher to a freshly initialized teacher and distilling from teacher to a smaller, freshly initialized student. We used WRN22x8, RX29-64x4, and RX29-64x8 as the teacher models for CIFAR10, CIFAR100, and Tiny ImageNet, respectively, and RN18 as the student network for all three datasets. Thus, our teacher networks are different from those used in the original works for CMI and PRE, but the student network is exactly the same. As mentioned, we do include the results for CMI and PRE reported in [4] (including their reported baseline network scores) which used a RN34\(\rightarrow\)RN18 network pair for all three datasets. Note that [4] also observed difficulties employing CMI to Tiny ImageNet, thus they did not provide any scores for that dataset.
As can be seen in Table 3, our approach significantly
\begin{table}
\begin{tabular}{c||c||c||c} \hline & & \multicolumn{2}{c}{Accuracy} \\ Dataset & Model & T / S & T \(\rightarrow\) S \\ \hline \hline \multirow{2}{*}{C10} & MNv2 & 95.05 & 93.86\(\pm\)1.0 \\ & WRN22x8 & 96.21 & 95.18\(\pm\)0.09 \\ \hline \multirow{2}{*}{C100} & RN18 & 74.30 & 72.56\(\pm\)0.23 \\ & RX29-64x4 & 77.82 & 76.67\(\pm\)0.04 \\ \hline Tiny & RN18 & 61.81 & 60.59\(\pm\)0.03 \\ & RX29-64x8 & 64.16 & 62.09\(\pm\)0.24 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy on various datasets when self-distilling from two networks of the same architecture using our approach.
\begin{table}
\begin{tabular}{c||c|c||c||c||c} \hline & & \multicolumn{2}{c||}{Accuracy} \\ Dataset & Teacher & Student & T & S & T \(\rightarrow\) S \\ \hline \hline \multirow{4}{*}{C10} & RX29-64x4 & MNv2 & 95.98 & 95.05 & 91.89\(\pm\)0.15 \\ & WRN22x8 & RN18 & 96.21 & 95.23 & 94.12\(\pm\)0.05 \\ & RX50-32x8 & RN18 & 96.90 & 95.23 & 93.93\(\pm\)0.14 \\ \hline \multirow{4}{*}{C100} & RN50 & MNv2 & 77.44 & 76.15 & 71.99\(\pm\)0.13 \\ & RX29-64x4 & RN18 & 77.82 & 74.30 & 71.73\(\pm\)0.13 \\ & RX50-32x8 & RN18 & 81.54 & 74.30 & 72.92\(\pm\)0.18 \\ \hline \multirow{4}{*}{Tiny} & WRN28x10 & SNv2 & 65.13 & 63.33 & 59.10\(\pm\)0.20 \\ & RX29-64x8 & RN18 & 64.16 & 61.81 & 56.05\(\pm\)0.22 \\ \cline{1-1} & RX50-32x8 & RN18 & 71.02 & 61.81 & 54.37\(\pm\)0.33 \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy on various datasets when distilling from a large teacher to a smaller student using our approach.
outperforms CMI and PRE on all datasets and distillation scenarios, both for the values reported in [4] and for the scores we obtained. It could be argued that we did not search for the optimal hyperparameter values of the other approaches with respect to our teacher networks. However, we reiterate that we used the settings provided by the authors; the resulting performance drop suggests that substantial hyperparameter tuning is required whenever different teacher networks are employed [17]. Our approach, on the other hand, requires very little hyperparameter tuning and, as will be shown in the next set of experiments, is much more robust to the choice of settings. Furthermore, the scores we obtained with our approach have significantly lower variability than CMI and PRE, as shown by the standard deviations of the results.
**Ablation & Sensitivity Studies.** In this section, we conducted three different studies: an ablation study on our adversarial attack, a sensitivity study on the attack choices and settings, and a sensitivity study on the dataset and training settings. For these experiments, we specifically focused on CIFAR10 with a WRN22x8 teacher and a RN18 student. Results of all three studies are presented in Table 4.
For the attack ablation study, we observed that even when both adversarial attacks ("border" and "deeper") are _completely removed_ (_i.e._, just performing KD with the synthetic examples and data augmentation), information can still be transferred from the teacher to the student in a data-free manner. This suggests that employing in-domain versus out-of-domain data for standard KD does not necessarily matter as long as the dataset uniformly samples the teacher's output labels. Thus, in the case of the out-of-domain experiment in [3], it is likely that the teacher's predictions on the out-of-domain data were long-tail imbalanced and therefore did not sample the teacher's decision space sufficiently for performing KD. Furthermore, we see that our approach requires both the first border attack and the second deeper attack to reach the best possible performance. Additionally, including the "pre"-success examples in the first border attack yields greater performance than using the "post"-success examples alone. However, removing the filtering conditions outlined in Sect. 3.3 does not produce significantly different results from our full approach.
Next, we investigated the impact of our adversarial attack choices. We altered the first border attack to be a Projected Gradient Descent (PGD) attack (_i.e._, a random uniform perturbation of \(\epsilon=10\) was applied to each example before the attack was conducted) and saw much worse results than with BIM. We observed that the initial perturbation of PGD tends to push most examples toward being initially classified as one of a few classes, likely leading our attack to mostly identify the decision boundary between these few labels and the targets. As for the values of \(\epsilon\) and the softmax threshold of the first border attack (the only hyperparameters of our method), we see that our approach is fairly robust to these choices. In comparison to the generator-based methods (Table 3), our approach is much less sensitive to its settings.
To conclude our ablation and sensitivity studies, we examined the sensitivity of our approach with respect to how much input data augmentation is utilized, which examples are used in the adversarial attacks and in the distillation loss, how many synthetic examples are used during distillation, and how long the student network is trained. For the data augmentations, we experimented with three differ
\begin{table}
\begin{tabular}{c|c|c} \hline Study & Experiment & T \(\rightarrow\) S \\ \hline \hline - & Full Approach & 94.12\(\pm\)0.05 \\ \hline \multirow{5}{*}{Ablation} & (\(-\)) Both Adversarial Attacks & 93.07\(\pm\)0.27 \\ & (\(-\)) “Border” First Attack Examples & 93.38\(\pm\)0.15 \\ & (\(-\)) “Deeper” Second Attack Examples & 93.01\(\pm\)0.11 \\ & (\(-\)) “Pre”-Success Examples & 93.10\(\pm\)0.17 \\ & (\(-\)) Filter Conditions & 94.19\(\pm\)0.19 \\ \hline \hline \multirow{7}{*}{Attack} & PGD-based Attack & 91.62\(\pm\)0.70 \\ \cline{2-3} & \(\epsilon=6\) Initial Attack Epsilon & 94.15\(\pm\)0.17 \\ \cline{2-3} & \(\epsilon=14\) Initial Attack Epsilon & 94.12\(\pm\)0.22 \\ \cline{2-3} & \(\epsilon=4\) Second Attack Epsilon & 94.23\(\pm\)0.16 \\ \cline{2-3} & No Softmax Threshold & 94.04\(\pm\)0.15 \\ \cline{2-3} & 0.50 Softmax Threshold & 93.92\(\pm\)0.18 \\ \cline{2-3} & 0.75 Softmax Threshold & 94.11\(\pm\)0.31 \\ \hline \hline \multirow{12}{*}{Dataset} & Minimal Data Augmentation w/ Mixup & 93.57\(\pm\)0.19 \\ & No Data Augmentation w/ Mixup & 93.03\(\pm\)0.14 \\ \cline{2-3} & Standard Data Augmentation w/o Mixup & 92.92\(\pm\)1.68 \\ \cline{2-3} & Minimal Data Augmentation w/o Mixup & 89.40\(\pm\)0.67 \\ \cline{2-3} & No Data Augmentation w/o Mixup & 34.67\(\pm\)5.11 \\ \cline{2-3} & Attack with Normal \& Mixup & 94.23\(\pm\)0.13 \\ \cline{2-3} & Distill with Normal, Mixup \& Adversarial & 94.37\(\pm\)0.05 \\ \cline{2-3} & 500 Examples per Class & 62.83\(\pm\)3.50 \\ \cline{2-3} & 1000 Examples per Class & 90.93\(\pm\)0.60 \\ \cline{2-3} & 5000 Examples per Class & 94.59\(\pm\)0.21 \\ \cline{2-3} & 200 Distillation Epochs & 92.32\(\pm\)0.22 \\ \cline{2-3} & 800 Distillation Epochs & 94.61\(\pm\)0.11 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation and sensitivity studies on CIFAR10 with a WRN22x8 teacher and RN18 student.
\begin{table}
\begin{tabular}{c||c|c|c|c|c} \hline & & \multicolumn{4}{c}{Accuracy} \\ Dataset & Method & T & S & T\({}_{A}\)\(\rightarrow\)S\({}_{A}\) & T\({}_{A}\)\(\rightarrow\)S\({}_{B}\) \\ \hline \hline \multirow{5}{*}{C10} & CMI* & 95.40 & 95.20 & - & 82.40\(\pm\)0.07 \\ & PRE* & \(\star\) & \(\star\) & - & 87.40\(\pm\)3.21 \\ \cline{2-3} & CMI & 96.21 & 95.23 & 76.74\(\pm\)3.47 & 64.69\(\pm\)3.68 \\ \cline{2-3} & PRE & \(\star\) & \(\star\) & 87.69\(\pm\)3.30 & 60.67\(\pm\)0.67 \\ \cline{2-3} & Ours & \(\star\) & \(\star\) & **95.18\(\pm\)0.09** & **94.12\(\pm\)0.05** \\ \hline \multirow{5}{*}{C100} & CMI* & 77.90 & 77.1 & - & 55.20\(\pm\)0.91 \\ & PRE* & \(\star\) & \(\star\) & - & 70.20\(\pm\)3.33 \\ \cline{2-3} & CMI & 77.82 & 74.30 & - & 46.60\(\pm\)0.73 \\ \cline{2-3} & PRE & \(\star\) & \(\star\) & 56.41\(\pm\)3.62 & 41.39\(\pm\)4.44 \\ \cline{2-3} & Ours & \(\star\) & \(\star\) & **76.67\(\pm\)0.04** & **71.73\(\pm\)0.13** \\ \hline \multirow{5}{*}{Tiny ImgNet} & CMI* & 71.20 & 64.90 & - & - \\ \cline{2-3} & PRE* & \(\star\) & \(\star\) & - & 46.30\(\pm\)3.32 \\ \cline{2-3} & CMI & 64.16 & 61.81 & 33.80\(\pm\)0.24 & 30.81\(\pm\)0.50 \\ \cline{2-3} & PRE & \(\star\) & \(\star\) & 9.14\(\pm\)3.75 & 8.59\(\pm\)5.12 \\ \cline{2-3} & Ours & \(\star\) & \(\star\) & **62.09\(\pm\)0.24** & **56.05\(\pm\)0.22** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of related DFKD approaches on CIFAR10, CIFAR100, and Tiny ImageNet. Results on RN34\(\rightarrow\)RN18 obtained from [4] are denoted as CMI* and PRE*.
ent levels of data augmentation: standard (Sect. 4, Distillation Training Details), minimal (no RandAugment, random elastic transform, or random inversion), and none. Additionally, we investigated the effect of removing mixup from our approach. From Table 4, it is clear how crucial data augmentation and mixup are to our method. When no data augmentation is applied, mixup alone enables our approach to achieve over 90% accuracy on CIFAR10, which is about 60% higher than with no data augmentation and no mixup. Conversely, when standard data augmentation is used but mixup is not, we see performance similar to using mixup alone. Combining strong data augmentation and mixup together achieves the best performance.
Next, we examined the effects of using more types of examples in the adversarial attacks and in the distillation loss. By including the original data augmented synthetic examples (pre-mixup) with the mixup examples in the adversarial attacks, performance gains can be seen. Furthermore, when including both the original data augmented synthetic examples and the mixup examples (pre-attack) with the adversarially perturbed examples in the KD loss computation, performance gains can also be seen. We also observed that sizable performance gains can be obtained by increasing either the number of synthetic examples per class or the number of epochs for performing KD. Through this sensitivity study, we show that, while we obtained state-of-the-art results in our previous experiments, there are several avenues for increasing performance scores even further. Thus, in our CIFAR100 and Tiny ImageNet experiments and in our "distilling down" experiments, it is likely that we could bridge the gap between our distilled students and the baseline cross entropy student by increasing the number of synthetic examples and conducting more distillation epochs.
### Privacy
Lastly, as recent works have shown that information about the training data can be extracted from various models [6, 7, 24], we conducted a qualitative privacy examination on our teacher models and distilled students in the context of model inversion and data extraction. For this experiment, we synthesized images using DeepInversion [64] applied to three MobileNetV2 models trained on CIFAR10. One is the teacher network that was trained with the original dataset, one is a student network that was self-distilled from the aforementioned teacher using the original CIFAR10 data, and the last is a student network that was self-distilled from the aforementioned teacher using our approach. As mentioned in [64], their approach for model inversion can be sensitive to hyperparameters, thus we examined over 1000 different hyperparameter configurations and selected examples from the configuration yielding the lowest overall loss. These images are presented in Fig. 3, where all images shown were predicted correctly by their respective model.
It is clear that recognizable images of a horse, airplane, truck, and bird could be extracted from the teacher model. Furthermore, we are also able to obtain slightly less detailed, but still recognizable, images of a horse, ship, truck, and bird from the CIFAR10-distilled student. However, when we look at the examples obtained from our synthetic distilled student model, we see that they are largely incoherent. In a couple of cases one can see vague depictions of what could be a cat or a car, but most detail is missing. For the most part, images extracted from our distilled students appear to be abstract textures containing no recognizable objects. Thus, our approach could serve as a method for anonymizing the data that is naturally embedded into deep neural networks through standard supervised training and ordinary KD.
## 5 Conclusion
We proposed a novel approach to data-free knowledge distillation that utilizes procedurally rendered OpenGL shader images, combined with heavy data augmentation and adversarial attacks, to transfer knowledge from a teacher network to a new student model. Our work is straightforward to implement and built upon standard KD, making it extendable to more advanced methods. Experiments demonstrated improved and more stable results over relevant DFKD approaches, establishing new state-of-the-art scores for multiple datasets. Additionally, we showed that our technique can better anonymize the data embedded in the teacher. While we presented results that are superior to other methods, we also see opportunities for improvements (as shown in Table 4) which we will explore in the future.
**Acknowledgements.** This research was supported by the U.S. Air Force Research Laboratory under Contract #GRT00054740 (Release #AFRL-2023-5248). We would additionally like to thank Skylar Wurster for his assistance.
Figure 3: Extracted CIFAR10 images from a MobileNetV2 teacher model (top row), a CIFAR10-distilled MobileNetV2 student model (middle row), and an OpenGL-distilled MobileNetV2 student model (bottom row).
2307.01749 | A numerical method for wave-structure interactions in the Boussinesq regime | The goal of this work is to study waves interacting with partially immersed objects allowed to move freely in the vertical direction, and in a regime in which the propagation of the waves is described by the one dimensional Boussinesq-Abbott system. The problem can be reduced to a transmission problem for this Boussinesq system, in which the transmission conditions between the components of the domain at the left and at the right of the object are determined through the resolution of coupled forced ODEs in time satisfied by the vertical displacement of the object and the average discharge in the portion of the fluid located under the object. We propose a new extended formulation in which these ODEs are complemented by two other forced ODEs satisfied by the trace of the surface elevation at the contact points. The interest of this new extended formulation is that the forcing terms are easy to compute numerically and that the surface elevation at the contact points is furnished for free. Based on this formulation, we propose a second order scheme that involves a generalization of the MacCormack scheme with nonlocal flux and a source term, which is coupled to a second order Heun scheme for the ODEs. In order to validate this scheme, several explicit solutions for this wave-structure interaction problem are derived and can serve as benchmark for future codes. As a byproduct, our method provides a second order scheme for the generation of waves at the entrance of the numerical domain for the Boussinesq-Abbott system. | Geoffrey Beck, David Lannes, Lisl Weynans | 2023-07-04T14:39:40Z | http://arxiv.org/abs/2307.01749v1 |
# A numerical method for wave-structure interactions in the Boussinesq regime
###### Abstract.
The goal of this work is to study waves interacting with partially immersed objects allowed to move freely in the vertical direction, and in a regime in which the propagation of the waves is described by the one dimensional Boussinesq-Abbott system. The problem can be reduced to a transmission problem for this Boussinesq system, in which the transmission conditions between the components of the domain at the left and at the right of the object are determined through the resolution of coupled forced ODEs in time satisfied by the vertical displacement of the object and the average discharge in the portion of the fluid located under the object. We propose a new extended formulation in which these ODEs are complemented by two other forced ODEs satisfied by the trace of the surface elevation at the contact points. The interest of this new extended formulation is that the forcing terms are easy to compute numerically and that the surface elevation at the contact points is furnished for free. Based on this formulation, we propose a second order scheme that involves a generalization of the MacCormack scheme with nonlocal flux and a source term, which is coupled to a second order Heun scheme for the ODEs. In order to validate this scheme, several explicit solutions for this wave-structure interaction problem are derived and can serve as benchmark for future codes. As a byproduct, our method provides a second order scheme for the generation of waves at the entrance of the numerical domain for the Boussinesq-Abbott system.
**Mathematics Subject Classification:** 35G61, 35Q35, 74F10, 65M08
**Keywords:** Wave-structure interactions, Initial boundary value problems, Boussinesq system, Numerical analysis
## 1. Introduction
### Presentation of the problem
While the first studies of the interactions of waves with floating structures go back at least to John's paper [20], or to the phenomenological integro-differential equation derived by Cummins to describe the linear motion of floating structures [11], this research field has become increasingly active in recent years. A first reason for this renewed interest is the development of renewable marine energies as one of the tools for the energy transition. Indeed, several offshore wind-turbine and wave-energy convertor devices involve partially immersed structures [3].
A second reason for the recent mathematical activity on wave-structure interactions is that it has become technically feasible thanks to recent progress on the mathematical understanding of the propagation of water waves. The initial value problem in domains without boundaries (\(\mathbb{R}^{d}\) or \(\mathbb{T}^{d}\)) is now well understood for the full water waves (also called free-surface Euler) equations, as well as for asymptotic models in shallow water (such as the nonlinear shallow water equations, the Boussinesq systems, and the Serre-Green-Naghdi equations). Recently, the initial value problem has also been studied in domains with a boundary. When the fluid domain
is delimited by vertical sidewalls, the water waves equations have been studied in [2]; in the case of non-vertical sidewalls, this problem has been considered in [34, 29] for the water waves equations, and in [25] for the shallow water and Green-Naghdi equations. The initial boundary value problem, in which one imposes initial and boundary data, has also been investigated for the Boussinesq equations [10, 26]. These advances make it more realistic to address the issues raised by the presence of a partially immersed object.
From the numerical point of view, efficient numerical codes based on shallow water models have been developed recently and can be used to address realistic submersion issues (see for instance [35, 14]); here also, it is now a reasonable prospect to address the specific difficulties raised by wave-structure interactions.
The present paper is a contribution to the theoretical and numerical understanding of these interactions, inasmuch as it provides a precise description of the motion of a partially immersed object allowed to move freely in the vertical direction under the action of waves described by a nonlinear dispersive model (the standard Boussinesq-Abbott system), see Figure 1. It has to be considered as a partial (affirmative) answer to the wider question: can the efficient modelling of waves based on shallow water models be extended to allow the presence of floating structures? If this turns out to be true, the gain in computational time would make it possible to investigate the behavior of many floating structures (the so-called farms of wave-energy convertors or offshore wind turbines), as well as their impact on the wave fields, which can have significant consequences in coastal regions. Answering such a question is out of reach for the CFD methods that can be used to describe the behavior of a single wave-energy convertor, and also, to a lesser extent, for potential methods (see for instance [13, 17]). On the other hand, the linear methods based on Cummins' equation used in commercial software such as Wamit neglect the nonlinear effects that can be important [32], especially in shallow water, and are unable to provide a precise description of the impact of a wave-farm on the wave-field.
The presence of a floating structure in a shallow water model can be taken into account following the approach proposed in [24] where the horizontal plane is decomposed into two regions: the interior region (below the floating object), and the exterior region (below the free surface waves). In the exterior regions, the standard
Figure 1. The floating object
(depth integrated) shallow water model is used, while in the interior region, an additional pressure term is present. This pressure term corresponds to the pressure exerted by the fluid on the object (which eventually makes it move through Newton's equations), and can be understood as the Lagrange multiplier associated with the constraint that, under the object, the surface elevation of the waves must by definition coincide with the bottom of the object. It is possible to relax this constraint by approximating the pressure term by a pseudo-compressible relaxation; one can then use the same kind of asymptotic preserving schemes as for the low-Mach limit in compressible gases. This approach has been used in the present context in [15, 16], and is also relevant for other instances of partially congested flows [33, 12, 5]. In this paper, we rather consider the original (non-relaxed) problem, which requires a precise understanding of the coupling between the interior and exterior regions.
It turns out that this wave-structure interaction problem can be reduced to an initial boundary value problem for the wave model in the exterior region, with non standard boundary (or transmission) conditions. In the case where the horizontal dimension is \(d=1\), the object has vertical walls located at \(x=\pm\ell\) as in Figure 1 and is only allowed to move vertically, and if the wave model is given by the nonlinear shallow water equations, it was shown in [24] that this transmission problem takes the form (in dimensionless variables, see Section 2 for details),
\[\begin{cases}\partial_{t}\zeta+\ \partial_{x}q=0,\\ \partial_{t}q+\varepsilon\partial_{x}(\frac{1}{h}q^{2})+h\partial_{x}\zeta=0,\end{cases}\qquad\text{ on }(-\infty,-\ell)\cup(\ell,+\infty),\]
where \(\zeta\) is the elevation of the surface and \(q\) the horizontal discharge. Using the notation \(\llbracket q\rrbracket=q(\ell)-q(-\ell)\) and \(\langle q\rangle=\frac{1}{2}(q(-\ell)+q(\ell))\), the transmission conditions are given by
\[\llbracket q\rrbracket=-2\ell\dot{\delta}\quad\text{ and }\quad\langle q \rangle=\langle q_{\text{i}}\rangle\]
where the function \(\delta\) and \(\langle q_{\text{i}}\rangle\) (representing respectively the vertical displacement of the object and the mean discharge under the object) solve an ODE of the form
\[\frac{d}{dt}\theta=F(\theta,\zeta_{|_{x=-\ell}},\zeta_{|_{x=\ell}}),\]
with \(\theta=(\delta,\dot{\delta},\langle q_{\text{i}}\rangle)\) and \(F\) a smooth function of no importance at this stage of the discussion. The coupling acts in two ways: it is necessary to known \(\theta\) to solve the transmission problem for \((\zeta,q)\), and it is necessary to know the solution \((\zeta,q)\) to determine the forcing term \(F(\theta,\zeta_{|_{x=-\ell}},\zeta_{|_{x=\ell}})\) in the ODE for \(\theta\). A key point in the mathematical analysis of this problem is the regularity of the traces \(\zeta_{|_{x=\pm\ell}}\). Such a control is furnished by the construction of a Kreiss symmetrizer, as shown in [19] where general initial boundary value problems, possibly with a free boundary, are considered for a wide class of hyperbolic systems; they include the above transmission problem as well as the more complex free boundary problem one has to deal with when the lateral boundaries of the object are not vertical.
Numerically, the evaluation of the traces \(\zeta_{|_{x=\pm\ell}}\) also requires careful treatment, which relies on the Riemann invariants associated with the nonlinear shallow water equations [24]; we also refer to [6] for a higher order scheme, to [9] where a wave-energy device is simulated using this approach (the oscillating water column), and to [36] where controllability issues were also addressed. The more complex case of an object freely floating and with non-vertical walls (and therefore nontrivial dynamics
for the contact points) has been solved theoretically in [19], and numerically in [18] using ALE methods to treat the evolution of the contact points.
A variant of the above wave-structure interaction problem for the viscous nonlinear shallow water equations was also considered in [28] and an extension to the case of horizontal dimension \(d=2\) with radial symmetry has been considered theoretically in [7] and the so-called decay test (or return to equilibrium) investigated in the same configuration under an additional assumption of linearity in [8]. Let us also mention [31] where the dynamics of trapped air pockets are studied.
We propose here an extension in another direction. The principal drawback of the nonlinear shallow water equations is that they neglect dispersive effects, which play an important role in several situations of interest (they allow, for instance, the existence of solitary waves). The simplest models that generalize the nonlinear shallow water equations by adding dispersive terms are the Boussinesq equations (see [22] for a recent review on shallow water models). Replacing the nonlinear shallow water equations by the so-called Boussinesq-Abbott system in the above example, one obtains the following transmission problem (see Section 2 for more details),
\[\begin{cases}\partial_{t}\zeta+\ \partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\varepsilon\partial_{x}(\tfrac {1}{h}q^{2})+h\partial_{x}\zeta=0,\end{cases}\qquad\text{on }(-\infty,-\ell)\cup(\ell,+\infty),\]
and with transmission conditions
\[\llbracket q\rrbracket=-2\ell\dot{\delta}\quad\text{ and }\quad\langle q \rangle=\langle q_{\text{i}}\rangle\]
where the function \(\delta\) and \(\langle q_{\text{i}}\rangle\) (representing respectively the vertical displacement of the object and the mean discharge under the object) solve an ODE of the form
\[\frac{d}{dt}\theta=F_{\kappa}(\theta,\zeta_{|_{x=-\ell}},\zeta_{|_{x=\ell}}, \kappa^{2}(\partial_{t}^{2}\zeta)_{|_{x=-\ell}},\kappa^{2}(\partial_{t}^{2} \zeta)_{|_{x=\ell}}),\]
with \(\theta=(\langle q_{\text{i}}\rangle,\dot{\delta},\delta)^{\text{T}}\) and \(F_{\kappa}\) a smooth function of no importance at this stage of the discussion.
The differences with the nondispersive case considered above are the operator \((1-\kappa^{2}\partial_{x}^{2})\) applied in front of \(\partial_{t}q\) in the evolution equations and a contribution of the trace of \(\partial_{t}^{2}\zeta\) to the forcing term in the ODE for \(\theta\). Formally, the nondispersive case is obtained by setting \(\kappa=0\), but the mathematical and numerical differences are considerable. Contrary to the hyperbolic case mentioned above, there is no general theory for initial boundary value problems associated with nonlinear dispersive systems, and this is why several approximations have been used to bypass this issue. In [6], wave-structure interactions were computed using a Boussinesq model, but the issue at the boundary was avoided by using the (dispersionless) nonlinear shallow water equations in a small region around the object; in [30] the behavior at the boundary was approximated at second order using Bessel expansions and matched asymptotics; in [21], Boussinesq type equations were computed in the whole domain, neglecting the singularities of the surface elevation and of the discharge at the contact line, while the presence of the object is taken into account by adding an additional pressure term in the interior region. There are also approximate methods based on sponge layers and artificial source terms which are often used to generate waves at the entrance of the numerical domain [37]. Such methods are far too rough to be used in the present case, where a precise description of the waves at the contact points is needed; indeed, as shown in [10], the behavior at the
contact points can be quite complex and exhibit dispersive boundary layers. This is why a new method to handle non-homogeneous initial boundary value problems for the Boussinesq equations was proposed in [26] and numerically implemented with an order 1 scheme.
The approach used in the present paper allows us to treat the issues related to the initial boundary value problem for Boussinesq-type equations without any approximation; a byproduct of independent interest of the present paper is that it furnishes a second order method for the generation of waves at the numerical boundary of the fluid domain for the Boussinesq equations, hereby complementing the first order generation scheme of [26].
As for the hyperbolic case discussed previously, the control of the traces of \(\zeta\) (and a fortiori of \(\partial_{t}^{2}\zeta\)) is a key ingredient of the analysis, both from the PDE and numerical perspectives; however, due to the presence of dispersion, there are no such things as a Kreiss symmetrizer or Riemann invariants to help us. To solve these issues, we propose in this paper an _extended formulation_ of the equations. After remarking that the traces \(\zeta_{|_{x=\pm\ell}}\) solve a second order forced ODE, we introduce new unknowns \(\underline{\zeta}_{\pm}\) defined as the solutions of this ODE. This allows us to replace \(\zeta_{|_{x=\pm\ell}}\) by \(\underline{\zeta}_{\pm}\) in the forcing term \(F_{\kappa}\) in the ODE for \(\theta\), hereby avoiding the computation of the traces. The resulting extended formulation just consists in replacing the above ODE for \(\theta\) by the higher dimensional ODE
\[\frac{d}{dt}\Theta=\mathcal{G}(\Theta,(R_{1}\mathfrak{f}_{\text{sw}})_{|_{x= \pm\ell}})\]
with \(\Theta=(\langle q_{\mathfrak{i}}\rangle,\dot{\delta},\dot{\underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-},\delta,\underline{\zeta}_{+},\underline{\zeta}_{-})^{\text{T}}\) and \(\mathcal{G}\) a smooth mapping. This ODE is forced by the terms \((R_{1}\mathfrak{f}_{\text{sw}})_{|_{x=\pm\ell}}\), which depend on the solution \((\zeta,q)\) of the Boussinesq equations in the fluid domain; the precise meaning of these terms will be given in Section 2, the important point being that the control of the traces of the quantities \(R_{1}\mathfrak{f}_{\text{sw}}\) at \(x=\pm\ell\) does not raise any theoretical or numerical difficulty.
The second step of our approach consists in transforming this extended _transmission_ problem into an _initial value_ problem coupled with forced ODEs: this means that we no longer have to deal with the boundary conditions (which are automatically propagated by the flow). This new formulation can be written as a system of conservation laws with nonlocal flux and an exponentially localized source term,
\[\partial_{t}U+\partial_{x}(\mathfrak{F}_{\kappa}(U))=\mathcal{S}_{\pm}( \Theta,(R_{1}\mathfrak{f}_{\text{sw}})_{|_{x=\pm\ell}})\mathfrak{b}(x\mp\ell )\quad\text{in}\quad\quad\pm(\ell,\infty),\]
where \(\mathfrak{F}_{\kappa}\) denotes the nonlocal flux, while \(\mathcal{S}_{\pm}\) is a smooth function of its arguments and \(\mathfrak{b}\) an exponentially localized function. The above ODE for \(\Theta\) allows one to compute the source term in this system of nonlocal conservation laws; conversely, the resolution of this system allows one to compute the forcing terms \((R_{1}\mathfrak{f}_{\text{sw}})_{|_{x=\pm\ell}}\) in the ODE for \(\Theta\): the coupling acts therefore both ways.
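The nonlocality of the flux \(\mathfrak{F}_{\kappa}\) comes from the elliptic operator \((1-\kappa^{2}\partial_{x}^{2})\), which has to be inverted at every time step to recover \(\partial_{t}q\). On a uniform grid with a centered second-order stencil this reduces to a tridiagonal solve, as the sketch below illustrates. This is only an illustration under simplifying assumptions (homogeneous Dirichlet conditions at the ends of a truncated domain); the correct treatment at the contact points \(x=\pm\ell\) is precisely the object of the following sections.

```python
import numpy as np
from scipy.linalg import solve_banded


def invert_dispersion(rhs, kappa, dx):
    """Solve (1 - kappa^2 d_xx) w = rhs on a uniform grid (tridiagonal system)."""
    n = rhs.size
    r = (kappa / dx) ** 2
    ab = np.zeros((3, n))
    ab[0, 1:] = -r            # super-diagonal
    ab[1, :] = 1.0 + 2.0 * r  # main diagonal
    ab[2, :-1] = -r           # sub-diagonal
    return solve_banded((1, 1), ab, rhs)
```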
One of the advantages of this new formulation is that we can implement on it a second order scheme that couples a MacCormack predictor corrector scheme (generalized to handle nonlocal fluxes and a source term) for the computation of the waves, and a second order Heun scheme for the computation of the forced ODEs. We also exhibit several exact explicit solutions that we use to study the convergence of our code and that are of independent interest.
### Organization of the paper
In Section 2, we derive the formulation of the problem our numerical scheme is based on: we first recall in §2.1 the reduction of [4] to a transmission problem for the Boussinesq equations on the two connected components of the exterior region, and then show in §2.2 that the traces of the surface elevation at the contact points satisfy a forced second order ODE that we use to write the new augmented formulation of the transmission problem in §2.3; this transmission problem is finally rewritten as an initial value problem in §2.4.
The numerical schemes are presented in Section 3. The initial value problem obtained in the previous section is a set of two conservation equations with nonlocal flux and an exponentially decaying source term whose coefficient is found by solving a set of forced second order ODEs. We propose two numerical schemes based on an abstract formulation of these equations. The first one, described in §3.2, is of first order and is an adaptation of the Lax-Friedrichs scheme to the present context. The second one, studied in §3.3, is of second order. It is based on the MacCormack predictor-corrector scheme for the two conservation PDEs (with adaptations to handle the nonlocal flux and the source term), and on a Heun scheme for the ODE part.
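In abstract form, one coupled step of the second order scheme can be sketched as follows. Here `flux`, `source`, `G` and `forcing` stand for the nonlocal flux \(\mathfrak{F}_{\kappa}\), the localized source term, the ODE right-hand side \(\mathcal{G}\) and the trace forcing; their concrete expressions (given in Appendix A) are not reproduced, the periodic wrap via `np.roll` merely stands in for the actual boundary treatment, and all names are ours rather than the paper's.

```python
import numpy as np


def maccormack_heun_step(U, Theta, dt, dx, flux, source, G, forcing):
    """One abstract step coupling MacCormack (for d_t U + d_x F(U) = S) with Heun (for d_t Theta = G)."""
    # Predictor: forward spatial differences and an explicit Euler step.
    F0 = flux(U)
    U_star = U - dt / dx * (np.roll(F0, -1, axis=-1) - F0) + dt * source(Theta)
    Theta_star = Theta + dt * G(Theta, forcing(U))
    # Corrector: backward differences on the predicted state, then average with U.
    F1 = flux(U_star)
    U_next = 0.5 * (U + U_star - dt / dx * (F1 - np.roll(F1, 1, axis=-1)) + dt * source(Theta_star))
    Theta_next = Theta + 0.5 * dt * (G(Theta, forcing(U)) + G(Theta_star, forcing(U_star)))
    return U_next, Theta_next
```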
Numerical simulation are then presented in Section 4. We investigate several configurations exploring different aspects of the coupling between the Boussinesq equations and the forced ODEs used in the transmission conditions. Wave generation is considered in SS4.1, and is of independent interest as it provides a way to generate waves at the entrance of the numerical domain for the Boussinesq equations. The return to equilibrium test in which an object oscillates vertically after being released from an out of equilibrium position is studied in SS4.2; in the linear case, an explicit solution is exhibited and computed via Laplace transforms, and this solution is used to assess the precision of our scheme. Interactions of waves with a fixed object are then investigated in SS4.3; here also, an exact solution is derived in the linear case and used for validation. The most general configuration of waves interacting with an object allowed to move freely in the vertical direction is then considered in SS4.4.
Throughout this article, we work with an abstract and concise formulation of the equations. The precise equations, with the expressions of the various coefficients involved, is postponed to Appendix A.
### Notation
- The horizontal axis \(\mathbb{R}\) is decomposed throughout this paper into an _interior region_\(\mathcal{I}=(-\ell,\ell)\) and an _exterior region_\(\mathcal{E}=\mathcal{E}_{+}\cup\mathcal{E}_{-}\) with \(\mathcal{E}_{-}=(-\infty,-\ell)\) and \(\mathcal{E}_{+}=(\ell,\infty)\), and two _contact points_\(x=\pm\ell\).
- For any function \(f\in C(\overline{\mathcal{E}})\), we denote
\[f_{\pm}=f_{|_{x=\pm\ell}},\qquad[\![f]\!]=f_{+}-f_{-}\quad\text{ and }\quad \langle f\rangle=\frac{1}{2}\big{(}f_{+}+f_{-}\big{)}.\]
- If \(f\in C^{1}([0,T])\), we sometimes use the notation \(\dot{f}=\frac{d}{dt}f\).
- We denote by \(\mathfrak{f}_{\text{sw}}\) the momentum flux associated with the nonlinear shallow water equations,
\[\mathfrak{f}_{\text{sw}}=\left(\varepsilon\frac{q^{2}}{h}+\frac{h^{2}-1}{2 \varepsilon}\right). \tag{1}\]
## 2. An augmented formulation of the wave-structure interaction equations
The goal of this section is to derive the augmented formulation of the wave-structure equations that we shall use in Section 3 to propose numerical schemes. We first sketch in §2.1 the main steps of the analysis of [4] that led to a formulation of the problem as a transmission problem between the two connected components of the fluid domain, with transmission conditions determined through the resolution of an ODE forced by a source term involving the traces at the contact points of the surface elevation and of their second order time derivatives. We then remark in §2.2 that these traces themselves solve a second order ODE, forced by a source term which is easier to compute. This observation is the key ingredient that allows us to derive in §2.3 an augmented formulation. It has the same structure as the formulation derived in §2.1, namely, it is a transmission problem coupled with a forced ODE. The crucial difference is that this ODE no longer requires the computation of the traces of the surface elevation at the contact points and that it can easily be solved numerically. Finally, we show in §2.4 that this augmented transmission problem can be rewritten as an initial value problem, which is the structure the numerical schemes of Section 3 are based on.
### Reduction to a transmission problem coupled with scalar ODEs
We recall here the main steps of the derivation of the equations describing the interactions of a partially immersed object with one-dimensional waves in a regime where these waves can correctly be described by the Boussinesq-Abbott equations (see [22]); the object is assumed to have vertical sidewalls and can be either fixed, in forced vertical motion, or allowed to float freely in the vertical direction under the action of the waves.
In dimensionless variables, the equations involve two coefficients \(\varepsilon\) and \(\mu\), respectively called nonlinearity and shallowness parameters, and that are defined as
\[\varepsilon=\frac{\text{typical amplitude of the waves}}{\text{typical depth}}\quad\text{ and }\quad\mu=\Big{(}\frac{\text{typical depth}}{\text{typical horizontal scale}}\Big{)}^{2};\]
in the _weakly nonlinear shallow water regime_ in which the Boussinesq-Abbott equations are known to provide a good approximation of the motion of the waves, one has
\[\mu\ll 1\quad\text{ and }\quad\varepsilon=O(\mu);\]
these conditions are assumed throughout this article. For the sake of conciseness, we also introduce the parameter \(\kappa\) as
\[\kappa=\big{(}\frac{\mu}{3}\big{)}^{1/2};\]
this parameter plays an important role as it measures the size of the dispersive boundary layers that appear in the analysis of mixed initial boundary-value problems for the Boussinesq equations, which are a dispersive perturbation of a hyperbolic system [10].
As displayed in Figure 1, in dimensionless coordinates, the surface of the fluid is parametrized at time \(t\) by the function \(x\in\mathbb{R}\mapsto\varepsilon\zeta(t,x)\), and the horizontal discharge (the vertical integral of the horizontal component of the velocity field) at time \(t\) and position \(x\) is denoted \(q(t,x)\). We also sometimes denote by \(h\) the water
depth, \(h=1+\varepsilon\zeta\). Finally, we denote by \(\underline{P}(t,x)\) the pressure at the surface of the fluid, namely, \(\underline{P}=P(t,x,\varepsilon\zeta(t,x))\) if \(P\) denotes the pressure field in the fluid.
Regarding the solid object, we denote by \(\pm\ell\) the position of its vertical sidewalls and by \(\varepsilon\zeta_{\mathrm{w}}\) the parametrization on \((-\ell,\ell)\) of its bottom (the subscript "w" stands for "wetted part"); we also denote by \(\varepsilon\delta(t)\) the vertical deviation of the object from its equilibrium position, and by \(h_{\mathrm{eq}}\) the water depth at rest. These quantities are related through
\[\zeta_{\mathrm{w}}(t,x)=\delta(t)+\frac{1}{\varepsilon}\big{(}h_{\mathrm{eq} }(x)-1\big{)}.\]
**N.B.** For the sake of simplicity, we assume throughout this article that the center of mass is located at \(\{x=0\}\) and that \(h_{\mathrm{eq}}(x)\) is an even function.
The Boussinesq-Abbott equations for the motion of the waves are given for \(t>0\), \(x\in\mathbb{R}\) by
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\varepsilon\partial_{x}\big{(} \tfrac{1}{h}q^{2}\big{)}+h\partial_{x}\zeta=-\tfrac{1}{\varepsilon}h\partial _{x}\underline{P}\end{cases} \tag{2}\]
(with \(h=1+\varepsilon\zeta\)). We now have to distinguish between the _exterior_ region \(\mathcal{E}=(-\infty,-\ell)\cup(\ell,\infty)\) where the surface of the water is in contact with the air and the _interior_ region \(\mathcal{I}=(-\ell,\ell)\) where it is in contact with the object,
* In the _exterior_ region \(\mathcal{E}\), the surface elevation \(\zeta\) is free, but the surface pressure \(\underline{P}\) is constrained, assumed to be equal to the (constant) atmospheric pressure \(P_{\mathrm{atm}}\), \[\underline{P}(t,x)=P_{\mathrm{atm}}\quad\text{ for }t>0,\quad x\in\mathcal{E};\] the right-hand side in the second equation of (2) therefore vanishes.
* In the _interior_ region \(\mathcal{I}\), it is the reverse: the surface elevation is constrained because it has to coincide with the bottom of the object, \[\zeta(t,x)=\zeta_{\mathrm{w}}(t,x)\quad\text{ for }t>0,\quad x\in\mathcal{I},\] but there is no constraint on the surface pressure \(\underline{P}\) which, under the general approach of [24], can be understood as the Lagrange multiplier associated with the constraint on the surface elevation. Plugging the constraint equation into the first equation of (2), one directly gets that \[q(t,x)=-x\dot{\delta}+\langle q_{\mathrm{i}}\rangle(t),\] where \(\langle q_{\mathrm{i}}\rangle\) is a time dependent function corresponding to the average discharge over the interior region. Using this relation and applying \(\partial_{x}\) to the second equation in (2) provides an elliptic equation for \(\underline{P}\), \[-\partial_{x}\big{(}\frac{1}{\varepsilon}h_{\mathrm{w}}\partial_{x}\underline{P}\big{)}=-\ddot{\delta}+\partial_{x}\big{[}h_{\mathrm{w}}\partial_{x}\zeta_{\mathrm{w}}+\varepsilon\partial_{x}\big{(}\frac{1}{h_{\mathrm{w}}}(-x\dot{\delta}+\langle q_{\mathrm{i}}\rangle)^{2}\big{)}\big{]},\] for \(x\in(-\ell,\ell)\) and with \(h_{\mathrm{w}}=h_{\mathrm{eq}}+\varepsilon\delta\). If we know the boundary values of \(\underline{P}\) at \(x=-\ell+0\) and \(x=\ell-0\), this elliptic equation can be solved and it provides an expression for \(\underline{P}\) in terms of \(h_{\mathrm{eq}}\), \(\delta\), \(\langle q_{\mathrm{i}}\rangle\) and of this boundary data. Using this expression in the second equation of (2) then provides an expression for \(\frac{d}{dt}\langle q_{\mathrm{i}}\rangle\) in terms of the same quantities.
We also need coupling conditions at the contact points \(x=\mp\ell\) between the exterior and interior region. There are two of them,
* Continuity of the horizontal discharge. Taking into account the expression of the discharge derived above in the interior region, this condition yields (3) \[q(t,-\ell-0)=\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle(t)\quad\text{ and }\quad q(t,\ell+0)=-\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle(t).\]
* Conservation of the total energy. Imposing conservation of the total (i.e., fluid+solid) energy classically provides the boundary data needed to solve the elliptic equation derived for the surface pressure in the interior region [28, 7, 10]. We refer to [4] for the derivation of these boundary data in the present context, but do not provide it here explicitly for the sake of conciseness.
To summarize, we have the standard Boussinesq-Abbott equations in the exterior region, with boundary conditions on the discharge \(q\) at \(\mp\ell\) that are given in terms of two functions of time, namely, \(\langle q_{\mathrm{i}}\rangle\) and \(\delta\). As said above, the fact that the elliptic equation for the pressure in the interior region has been solved provides an evolution equation for \(\langle q_{\mathrm{i}}\rangle\); the last thing to do is therefore to determine \(\delta\). If the object is fixed or in forced motion, then \(\delta\) is given; otherwise, it is of course given by Newton's equation. The three cases can be considered simultaneously by allowing an external force to be applied to the solid (if the solid is fixed or in forced motion, this external force \(F_{\mathrm{ext}}\) represents the vertical force exerted on the solid to maintain it fixed or with the desired motion). The outcome of this analysis, as shown1 in Theorem 3.1 of [4], is that the wave-structure interaction problem under consideration can be reduced to a transmission problem. Using the notations
Footnote 1: The presence of the external force is not taken into account in that reference. It is however straightforward to add it in Newton’s equation; note that in the present dimensionless setting, the force has been nondimensionalized by \(2\ell\rho g\).
\[\langle f\rangle=\frac{1}{2}\big{(}f(\ell)+f(-\ell)\big{)}\quad\text{ and }\quad \llbracket f\rrbracket=f(\ell)-f(-\ell)\]
for all \(f\in C((-\infty,-\ell]\cup[\ell,\infty))\), this transmission problem can be written
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\mathfrak{f}_{\mathrm{ sw}}=0\end{cases}\quad\quad\text{for }\quad t>0,\quad x\in\mathcal{E} \tag{4}\]
where \(\mathfrak{f}_{\mathrm{sw}}\) is the shallow water momentum flux given by (1), and with transmission conditions across the floating object given by
\[\langle q\rangle=\langle q_{\mathrm{i}}\rangle\quad\text{ and }\quad\llbracket q \rrbracket=-2\ell\dot{\delta}, \tag{5}\]
where \(\langle q_{\mathrm{i}}\rangle\) and \(\delta\) are functions of time solving
\[\alpha(\varepsilon\delta)\frac{d}{dt}\langle q_{\mathrm{i}}\rangle+\varepsilon\alpha^{\prime}(\varepsilon\delta)\dot{\delta}\langle q_{\mathrm{i}}\rangle =-\frac{1}{2\ell}\llbracket\zeta+\mathfrak{G}\rrbracket, \tag{6}\] \[\tau_{\kappa}(\varepsilon\delta)^{2}\ddot{\delta}+\delta-\varepsilon\beta(\varepsilon\delta)\dot{\delta}^{2}-\varepsilon\frac{1}{2}\alpha^{\prime}(\varepsilon\delta)\langle q_{\mathrm{i}}\rangle^{2} =\langle\zeta+\mathfrak{G}\rangle+F_{\mathrm{ext}}, \tag{7}\]
where \(\mathfrak{G}\) is the function defined on \(\mathcal{E}\) by
\[\mathfrak{G}=\varepsilon\frac{1}{2}\frac{q^{2}}{h^{2}}-\kappa^{2}\frac{1}{h} \partial_{x}\partial_{t}q, \tag{8}\]
and where the explicit expressions of the functions \(\alpha\), \(\tau_{\kappa}\) and \(\beta\), of no importance at this point of the discussion, are provided in §A.1 of Appendix A. We just want to emphasize that the coefficient \(\tau_{\kappa}(\varepsilon\delta)^{2}\) in front of \(\ddot{\delta}\) in (7) takes into account the
contribution of the added mass effect (when a solid moves in a fluid, not only must it accelerate its own mass but also the mass of the fluid around it).
The initial value problem corresponding to (4)-(7) is studied and solved in [4]. Its structure is that of a transmission problem coupled with a set of ODEs on \(\langle q_{\mathrm{i}}\rangle\) and \(\delta\). This coupling acts both ways: on the one hand, it is necessary to know \(\langle q_{\mathrm{i}}\rangle\) and \(\delta\) in order to solve the transmission problem (4)-(5); on the other hand, one needs to know the solution \((\zeta,q)\) of this transmission problem to compute the source term in the right-hand side of (6)-(7). From the numerical point of view, this last step is not easy to treat since one has to compute the numerical traces of \(\zeta\) and \(\partial_{t}\partial_{x}q\) at the contact points \(x=\pm\ell\). The key ingredient we propose here to overcome this difficulty is to work with an augmented formulation of the problem, with additional functions of time involved in the system of ODEs for \(\delta\) and \(\langle q_{\mathrm{i}}\rangle\), but where the computation of such traces is no longer needed.
### The trace equations
The source terms in the right-hand sides of (6)-(7) involve the trace of \(\zeta+\mathfrak{G}\) at \(x=\pm\ell\), with \(\mathfrak{G}\) given by (8). Since (5) implies that \(q_{|_{x=\pm\ell}}=\mp\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle\) and remarking that one deduces from the first equation of (4) that \(\partial_{x}\partial_{t}q=-\partial_{t}^{2}\zeta\), we have
\[\mathfrak{G}_{|_{x=\pm\ell}}=\varepsilon\frac{1}{2}\Big{(}\frac{\mp\ell\dot{ \delta}+\langle q_{\mathrm{i}}\rangle}{1+\varepsilon\zeta_{\pm}}\Big{)}^{2}+ \kappa^{2}\frac{1}{1+\varepsilon\zeta_{\pm}}\ddot{\zeta}_{\pm},\]
with \(\zeta_{\pm}:=\zeta_{|_{x=\pm\ell}}\). The difficulty therefore lies in the computation of the traces of \(\zeta\) at \(x=\pm\ell\) and of their second time derivatives. The augmented formulation consists in treating \(\zeta_{\pm}\) as new unknown functions of time instead of obtaining them by taking the traces of \(\zeta\) at the contact points. This is made possible by the following proposition, which provides a second order ODE satisfied by \(\zeta_{+}\) and \(\zeta_{-}\). This requires first the introduction of the Dirichlet and Neumann inverses of the operator \((1-\kappa^{2}\partial_{x}^{2})\) on \(\mathcal{E}\), respectively denoted by \(R_{0}\) and \(R_{1}\). They are defined for all \(F\in L^{2}(\mathcal{E})\) by
\[R_{0}F=u\quad\text{ with }\begin{cases}(1-\kappa^{2}\partial_{x}^{2})u=F \quad\text{ on }\mathcal{E},\\ u_{|_{x=\pm\ell}}=0,\end{cases}\]
and
\[R_{1}F=v\quad\text{ with }\begin{cases}(1-\kappa^{2}\partial_{x}^{2})v=F \quad\text{ on }\mathcal{E},\\ \partial_{x}v_{|_{x=\pm\ell}}=0.\end{cases} \tag{9}\]
We can now state the following proposition. Note that the ODEs satisfied by \(\zeta_{\pm}\) only make sense in the presence of dispersion (\(\kappa>0\)).
**Proposition 2.1**.: _Let \(f\) and \(g\) be two continuous functions of time. If \((\zeta,q)\) is a smooth solution to_
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\mathfrak{f}_{\mathrm{ sw}}=0,\end{cases}\quad t>0,\quad x\in\mathcal{E} \tag{10}\]
_with \(\mathfrak{f}_{\mathrm{sw}}\) as in (1) and with transmission conditions_
\[\langle q\rangle(t)=f(t)\quad\text{ and }\quad\llbracket q\rrbracket(t)=2g(t), \qquad t>0 \tag{11}\]
_then \(\zeta_{\pm}=\zeta_{|_{x=\pm\ell}}\) solve the ODEs_
\[\begin{cases}\partial_{t}^{2}\zeta_{+}+\frac{1}{\kappa^{2}}\zeta_{+}+\frac{ \varepsilon}{\kappa^{2}}\big{(}\frac{1}{2}\zeta_{+}^{2}+\frac{(f+g)^{2}}{1+ \varepsilon\zeta_{+}}\big{)}=\frac{1}{\kappa^{2}}(R_{1}\mathsf{f}_{\rm sw})_{ +}+\frac{1}{\kappa}(\dot{f}+\dot{g}),\\ \partial_{t}^{2}\zeta_{-}+\frac{1}{\kappa^{2}}\zeta_{-}+\frac{ \varepsilon}{\kappa^{2}}\big{(}\frac{1}{2}\zeta_{-}^{2}+\frac{(f-g)^{2}}{1+ \varepsilon\zeta_{-}}\big{)}=\frac{1}{\kappa^{2}}(R_{1}\mathsf{f}_{\rm sw})_{ -}-\frac{1}{\kappa}(\dot{f}-\dot{g}),\end{cases} \tag{12}\]
_where we used the notation \((R_{1}\mathsf{f}_{\rm sw})_{\pm}=(R_{1}\mathsf{f}_{\rm sw})_{|_{x=\pm\ell}}\)._
Proof.: Applying \(R_{0}\) to the second equation in (10) and using the boundary condition (11), one gets
\[\partial_{t}q+R_{0}\partial_{x}\mathsf{f}_{\rm sw}=(\dot{f}\pm\dot{g})\exp(- \frac{|x\mp\ell|}{\kappa})\quad\text{ on }\quad\mathcal{E}^{\pm}.\]
Remarking further that \(R_{0}\partial_{x}=\partial_{x}R_{1}\), the problem is therefore reduced to
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ \partial_{t}q+\partial_{x}R_{1}\mathsf{f}_{\rm sw}=(\dot{f}\pm\dot{g})\exp(- \frac{|x\mp\ell|}{\kappa}).\end{cases} \tag{13}\]
Differentiating with respect to \(x\) the second equation of (13) and using the fact that \(\partial_{t}\partial_{x}q=-\partial_{t}^{2}\zeta\), one gets
\[-\partial_{t}^{2}\zeta+\partial_{x}^{2}R_{1}\mathsf{f}_{\rm sw}=\mp\frac{1}{ \kappa}(\dot{f}\pm\dot{g})\exp\big{(}-\frac{1}{\kappa}(|x\mp\ell|)\big{)}.\]
Since moreover \(\partial_{x}^{2}=-\frac{1}{\kappa^{2}}(1-\kappa^{2}\partial_{x}^{2})+\frac{1 }{\kappa^{2}}\), we deduce that
\[-\partial_{t}^{2}\zeta-\frac{1}{\kappa^{2}}\mathsf{f}_{\rm sw}+\frac{1}{ \kappa^{2}}R_{1}\mathsf{f}_{\rm sw}=\mp\frac{1}{\kappa}(\dot{f}\pm\dot{g})\exp \big{(}-\frac{1}{\kappa}(|x\mp\ell|)\big{)}.\]
Taking the trace at \(x=\pm\ell\), and substituting \(\mathsf{f}_{\rm sw}|_{x=\pm\ell}=\zeta_{\pm}+\varepsilon\big{(}\frac{1}{2} \zeta_{\pm}^{2}+\frac{(f\pm g)^{2}}{1+\varepsilon\zeta_{\pm}}\big{)}\), we obtain the equations stated in the proposition.
### The augmented formulation
Proposition 2.1 can be applied to the wave-structure interaction system (4)-(7) with \(f=\langle q_{\rm i}\rangle\) and \(g=-\ell\dot{\delta}\). Together with (6)-(7), this shows that \(\langle q_{\rm i}\rangle\), \(\delta\), \(\zeta_{+}\) and \(\zeta_{-}\) solve the second order differential system
\[\mathcal{M}[\varepsilon\delta,\varepsilon\zeta_{\pm}]\frac{d}{dt}\begin{pmatrix}\langle q_{\rm i}\rangle\\ \dot{\delta}\\ \dot{\zeta}_{+}\\ \dot{\zeta}_{-}\end{pmatrix}+\begin{pmatrix}\frac{1}{2\ell}\llbracket\zeta\rrbracket\\ \delta-\langle\zeta\rangle\\ \zeta_{+}\\ \zeta_{-}\end{pmatrix}=\varepsilon\boldsymbol{\mathfrak{Q}}[\varepsilon\delta,\varepsilon\zeta_{\pm}](\langle q_{\rm i}\rangle,\dot{\delta},\zeta_{\pm})+\begin{pmatrix}0\\ F_{\rm ext}\\ (R_{1}\mathsf{f}_{\rm sw})_{+}\\ (R_{1}\mathsf{f}_{\rm sw})_{-}\end{pmatrix}, \tag{14}\]
where \(\mathcal{M}[\varepsilon\delta,\varepsilon\zeta_{\pm}]\) is the invertible matrix
\[\mathcal{M}[\varepsilon\delta,\varepsilon\zeta_{\pm}]:=\left(\begin{array}{ ccc}\alpha(\varepsilon\delta)&0&\frac{\kappa^{2}}{2\ell}\frac{1}{1+\varepsilon\zeta_{+}}&- \frac{\kappa^{2}}{2\ell}\frac{1}{1+\varepsilon\zeta_{-}}\\ 0&\tau_{\kappa}(\varepsilon\delta)^{2}&-\frac{1}{2}\frac{\kappa^{2}}{1+ \varepsilon\zeta_{+}}&-\frac{1}{2}\frac{\kappa^{2}}{1+\varepsilon\zeta_{-}} \\ \hline-\kappa&\ell\kappa&\kappa^{2}&0\\ \kappa&\ell\kappa&0&\kappa^{2}\end{array}\right), \tag{15}\]
while \(\boldsymbol{\mathfrak{Q}}[\varepsilon\delta,\varepsilon\zeta_{\pm}](\langle q_{\rm i}\rangle,\dot{\delta},\zeta_{\pm})\) is a four-dimensional vector whose entries are quadratic forms in \((\langle q_{\rm i}\rangle,\dot{\delta},\zeta_{\pm})\) with coefficients depending on \(\varepsilon\delta\), \(\varepsilon\zeta_{+}\) and \(\varepsilon\zeta_{-}\) (the exact expression of these terms is of no importance at this point, and we refer the reader to Appendix A). The second order differential system (14) can classically be transformed into a first order ODE on \(\langle q_{\rm i}\rangle\), \(\dot{\delta}\), \(\dot{\zeta}_{+}\), \(\dot{\zeta}_{-}\), \(\delta\), \(\zeta_{+}\) and \(\zeta_{-}\) with forcing terms \((R_{1}\mathsf{f}_{\rm sw})_{\pm}\) and \(F_{\rm ext}\) (see Appendix A.2). The augmented formulation is obtained
by replacing \(\zeta_{+}\) and \(\zeta_{-}\) by two additional unknowns \(\underline{\zeta}_{+}\) and \(\underline{\zeta}_{-}\) in this first order ODE. It reads therefore
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\mathsf{f}_{\mathrm{sw }}=0\end{cases}\qquad\text{ for }\quad t>0,\quad x\in\mathcal{E}, \tag{16}\]
where \(\mathsf{f}_{\mathrm{sw}}\) is as in (1), and with transmission conditions across the floating object given by
\[\langle q\rangle=\langle q_{\mathrm{i}}\rangle\quad\text{ and }\quad\llbracket q \rrbracket=-2\ell\dot{\delta}, \tag{17}\]
where \(\langle q_{\mathrm{i}}\rangle\) and \(\delta\) are functions of time determined by the first order ODE
\[\frac{d}{dt}\Theta=\mathcal{G}\big{(}\Theta,(R_{1}\mathsf{f}_{\mathrm{sw}})_ {+},(R_{1}\mathsf{f}_{\mathrm{sw}})_{-},F_{\mathrm{ext}}\big{)}, \tag{18}\]
with \(\Theta:=\big{(}\langle q_{\mathrm{i}}\rangle,\dot{\delta},\dot{\underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-},\delta,\underline{\zeta}_{+},\underline{\zeta}_{-}\big{)}^{\mathrm{T}}\) and where \(\mathcal{G}\) is a smooth function of its arguments whose exact expression is given in Appendix A. It is a consequence of Proposition 2.2 below that if the initial data for \(\underline{\zeta}_{\pm}\) and \(\dot{\underline{\zeta}}_{\pm}\) are chosen appropriately, then \(\zeta_{\pm}=\underline{\zeta}_{\pm}\) and \(\dot{\zeta}_{\pm}=\dot{\underline{\zeta}}_{\pm}\) for all times, as expected.
_Remark 2.1_.: The difference between the augmented formulation (16)-(18) and the original formulation (4)-(7) lies in the ODE used to determine the functions \(\delta\) and \(\langle q_{\mathrm{i}}\rangle\) involved in the transmission conditions. In the original formulation, one has a first order 3-dimensional ODE (on \(\delta\), \(\dot{\delta}\) and \(\langle q_{\mathrm{i}}\rangle\)), which is forced by \(F_{\mathrm{ext}}\), \((R_{1}\mathsf{f}_{\mathrm{sw}})_{|_{x=\pm\ell}}\), \(\zeta_{|_{x=\pm\ell}}\) and \(\frac{d^{2}}{dt^{2}}\zeta_{|_{x=\pm\ell}}\). In the augmented formulation, the ODE is of higher dimension, namely, it is a first order 7-dimensional ODE (on \(\delta\), \(\dot{\delta}\), \(\langle q_{\mathrm{i}}\rangle\), \(\underline{\zeta}_{\pm}\) and \(\dot{\underline{\zeta}}_{\pm}\)), but it is forced only by \(F_{\mathrm{ext}}\) and \((R_{1}\mathsf{f}_{\mathrm{sw}})_{|_{x=\pm\ell}}\). These two quantities do not raise any difficulty since \(F_{\mathrm{ext}}\) is a given external force and \((R_{1}\mathsf{f}_{\mathrm{sw}})_{|_{x=\pm\ell}}\) can easily be computed numerically (see §3.1.2 below), contrary to the traces of \(\zeta\) and \(\partial_{t}^{2}\zeta\) at the contact points that appear in the original formulation and that are very delicate to compute.
### Transformation into an initial value problem
We reformulate in this section the wave-structure _transmission_ problem (16)-(18) in the form of an _initial value_ problem that is easier to handle from a numerical point of view. This formulation is the new augmented formulation (with additional variables \(\underline{\zeta}_{\pm}\)) we shall base our numerical schemes on. In [4] (see also the lecture notes [23]), the well-posedness of the standard formulation (4)-(7) is proved, and it could similarly be obtained for the augmented formulation; for the sake of conciseness, we do not give here such a result and just prove that both formulations have the same regular solutions, and that the additional variables \(\underline{\zeta}_{\pm}\) coincide with the traces \(\zeta_{|_{x=\pm\ell}}\) under certain compatibility conditions on the initial data. We use the following notation for the source term in the reformulated momentum equation,
\[\mathcal{S}_{\pm}\big{(}\Theta,(R_{1}\mathsf{f}_{\mathrm{sw}})_{\pm},F_{ \mathrm{ext}}\big{)}:=\mathcal{G}_{1}\big{(}\Theta,(R_{1}\mathsf{f}_{\mathrm{ sw}})_{\pm},F_{\mathrm{ext}}\big{)}\mp\ell\mathcal{G}_{2}\big{(}\Theta,(R_{1} \mathsf{f}_{\mathrm{sw}})_{\pm},F_{\mathrm{ext}}\big{)}, \tag{19}\]
where \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) denote the first two components of the mapping \(\mathcal{G}\) in the right-hand side of the ODE (18). We also recall that we denote \(\mathcal{E}^{-}=(-\infty,-\ell)\) and \(\mathcal{E}^{+}=(\ell,\infty)\) the two connected components of the fluid domain \(\mathcal{E}\).
**Proposition 2.2**.: _Let \((\zeta,q)\) and \(\Theta=\big{(}\langle q_{\rm i}\rangle,\dot{\delta},\dot{\zeta}_{+},\dot{\zeta}_{- },\delta,\zeta_{+},\zeta_{-}\big{)}^{\rm T}\) be a regular solution to the transmission problem (16)-(18) with initial data \(U=(\zeta^{\rm in},q^{\rm in})\) and \(\Theta^{\rm in}\). Then \((\zeta,q)\) and \(\Theta\) also solve the initial value problem_
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ \partial_{t}q+\partial_{x}R_{1}\mathsf{f}_{\rm sw}=\mathcal{S}_{\pm}\big{(} \Theta,(R_{1}\mathsf{f}_{\rm sw})_{\pm},F_{\rm ext}\big{)}\exp(-\frac{1}{ \kappa}|x\mp\ell|)\end{cases}\qquad\text{ on }\mathcal{E}^{\pm}, \tag{20}\]
_and_
\[\frac{d}{dt}\Theta=\mathcal{G}\big{(}\Theta,(R_{1}\mathsf{f}_{\rm sw})_{+},(R _{1}\mathsf{f}_{\rm sw})_{-},F_{\rm ext}\big{)}. \tag{21}\]
_The converse is true, provided that the initial data satisfy the compatibility conditions_
\[\langle q^{\rm in}\rangle=\Theta^{\rm in}_{1},\qquad[\![q^{\rm in}]\!]=-2\ell \Theta^{\rm in}_{2}. \tag{22}\]
_If moreover the initial data also satisfy_
\[\zeta^{\rm in}_{|_{x=\ell}}=\Theta^{\rm in}_{6},\qquad\zeta^{\rm in}_{|_{x=- \ell}}=\Theta^{\rm in}_{7},\qquad-(\partial_{x}q^{\rm in})_{|_{x=\ell}}= \Theta^{\rm in}_{3},\qquad-(\partial_{x}q^{\rm in})_{|_{x=-\ell}}=\Theta^{\rm in }_{4}, \tag{23}\]
_then for all times, one has \(\zeta_{|_{x=\pm\ell}}=\underline{\zeta}_{\pm}\)._
_Remark 2.2_.: The proposition deals with the most general situation to cover in a unified way all the situations considered in this article. It can be simplified in various cases, as shown in Appendix A. For instance,
* When the object is freely floating, one takes \(F_{\rm ext}\equiv 0\).
* When the data and the object are symmetric with respect to the vertical axis \(x=0\), then \(\langle q_{\rm i}\rangle=0\).
* If the object is in forced motion, i.e. if \(\delta\equiv\delta_{\rm forced}\) for some given function \(\delta_{\rm forced}\), then the ODE (21) can be reduced to a 5-dimensional ODE on \(\big{(}\langle q_{\rm i}\rangle,\dot{\underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-},\underline{\zeta}_{+},\underline{\zeta}_{-}\big{)}^{\rm T}\). Note that in this situation, an external force is needed to maintain the object fixed (the exact expression of this force is derived in Remark A.3).
Proof.: Let us first prove the direct implication. Proceeding as in the proof of Proposition 2.1, we can rewrite the second equation of (16) on each component \(\mathcal{E}^{\pm}\) of \(\mathcal{E}\) under the form
\[\partial_{t}q+\partial_{x}R_{1}\mathsf{f}_{\rm sw}=\frac{d}{dt}(q_{|_{x=\pm \ell}})\exp(-\frac{1}{\kappa}|x\mp\ell|)\quad\text{ on }\quad\mathcal{E}^{\pm}.\]
From the transmission conditions (17), we have \(q_{|_{x=\pm\ell}}=\mp\ell\dot{\delta}+\langle q_{\rm i}\rangle\) so that the result follows from the observation that, owing to (18), one has
\[\frac{d}{dt}\langle q_{\rm i}\rangle=\mathcal{G}_{1}(\Theta,(R_{1}\mathsf{f} _{\rm sw})_{\pm},F_{\rm ext})\quad\text{ and }\quad\frac{d}{dt}\dot{\delta}=\mathcal{G}_{2}(\Theta,(R_{1}\mathsf{f}_{\rm sw })_{\pm},F_{\rm ext}).\]
Conversely, if \((\zeta,q)\) solves (20), it suffices to apply \((1-\kappa^{2}\partial_{x}^{2})\) to the second equation to show that \((\zeta,q)\) solves (16). The equation (18) on \(\Theta\) is the same as (21), so that the only thing we need to prove is that the transmission conditions (17) hold. Taking the trace of the second equation of (20) at the contact points and taking the average and the jump, we find that
\[\frac{d}{dt}\langle q\rangle=\mathcal{G}_{1}(\Theta,(R_{1}\mathsf{f}_{\rm sw })_{\pm},F_{\rm ext})\quad\text{ and }\quad\frac{d}{dt}[\![q]\!]=-2\ell\mathcal{G}_{2}(\Theta,(R_{1}\mathsf{f}_{ \rm sw})_{\pm},F_{\rm ext}),\]
or equivalently (from the definition of \(\mathcal{G}\)),
\[\frac{d}{dt}\langle q\rangle=\frac{d}{dt}\langle q_{\mathrm{i}}\rangle\quad\text{ and }\quad\frac{d}{dt}\llbracket q\rrbracket=\frac{d}{dt}\big{(}-2\ell\dot{\delta} \big{)}.\]
This shows that the time derivatives of the transmission conditions (17) are satisfied; the compatibility conditions (22) show moreover that the transmission conditions are satisfied at \(t=0\). They are therefore satisfied for all times.
For the last assertion, we can use Proposition 2.1 to show that \(\zeta_{|_{x=\pm\ell}}\) and \(\underline{\zeta}_{\pm}\) satisfy the same second order ODE in time. The additional condition (23) ensures that these initial data and the initial value of the first time derivative coincide (we also used the first equation of (16) to substitute \(\frac{d}{dt}(\zeta_{|_{x=\pm\ell}})=-(\partial_{x}q)_{|_{x=\pm\ell}}\)). They are therefore identical for all times.
## 3. Numerical schemes
We present in this section one first order and one second order numerical scheme for the resolution of the augmented formulations derived in this article. We explain these schemes for the general formulation (20)-(21). We recall that these equations are conservation laws with a nonlocal flux and an exponentially localized source term,
\[\partial_{t}U+\partial_{x}\big{(}\mathfrak{F}_{\kappa}(U)\big{)}=\mathcal{S}_ {\pm}(\Theta,(R_{1}\mathfrak{f}_{\mathrm{sw}})_{-},(R_{1}\mathfrak{f}_{ \mathrm{sw}})_{+},F_{\mathrm{ext}})\mathfrak{b}(x\mp\ell)\qquad\text{ in }\mathcal{E}_{\pm} \tag{24}\]
with \(U=(\zeta,q)^{T}\) and where \(\mathfrak{F}_{\kappa}(U)\) is the nonlocal flux given by
\[\mathfrak{F}_{\kappa}(U)=\big{(}q,R_{1}\mathfrak{f}_{\mathrm{sw}}\big{)}^{T}, \tag{25}\]
while the source terms \(\mathcal{S}_{\pm}\) are as in (19) and \(\mathfrak{b}\) is the shape of the source term
\[\mathfrak{b}(x)=\big{(}0,\exp(-\frac{|x|}{\kappa})\big{)}^{\mathrm{T}}. \tag{26}\]
The quantity \(\Theta\) is defined as \(\Theta=\big{(}\langle q_{\mathrm{i}}\rangle,\dot{\delta},\dot{\zeta}_{+},\dot {\zeta}_{-},\delta,\zeta_{+},\zeta_{-}\big{)}^{\mathrm{T}}\) and solves a system of 7 first order ODEs forced by \((R_{1}\mathfrak{f}_{\mathrm{sw}})_{+}\), \((R_{1}\mathfrak{f}_{\mathrm{sw}})_{-}\) and \(F_{\mathrm{ext}}\),
\[\frac{d}{dt}\Theta=\mathcal{G}\big{(}\Theta,(R_{1}\mathfrak{f}_{\mathrm{sw}}) _{+},(R_{1}\mathfrak{f}_{\mathrm{sw}})_{-},F_{\mathrm{ext}}\big{)}. \tag{27}\]
_Remark 3.1_.: As explained in Remark 2.2, in some of the examples considered in this paper, the ODE (27) can be reduced to a possibly lower dimensional ODE; we refer to Appendix A where such simplifications are derived.
### Notations
We gather here the main notations used to write our numerical schemes. We first set our notations for the discretized quantities, and then explain how we define the discrete version of the nonlocal operator \(R_{1}\) defined in (9).
Figure 2. Space discretization
#### 3.1.1. Discretization
We denote by \(\Delta x\) the mesh size and decompose the two components \(\mathcal{E}^{-}\) and \(\mathcal{E}^{+}\) of the exterior domain into a disjoint union of cells (see figure 2),
\[\mathcal{E}^{-}=\Big{(}\bigcup_{i=-\infty}^{-1}\mathcal{C}_{i}\Big{)}\cup \mathcal{C}_{0^{-}}\quad\text{ and }\quad\mathcal{E}^{+}=\mathcal{C}_{0^{+}}\cup\Big{(}\bigcup_{i=1}^{ \infty}\mathcal{C}_{i}\Big{)}\]
with
\[\mathcal{C}_{i}=(x_{i-1/2},x_{i+1/2})\text{ if }i\neq 0\quad\text{ and }\quad \mathcal{C}_{0^{-}}=(x_{-1/2},-\ell),\qquad\mathcal{C}_{0^{+}}=(\ell,x_{1/2})\]
and where
\[x_{i+1/2}=-\ell+(i+1/2)\Delta x\quad\text{ if }i<0\quad\text{ and }\quad x_{i-1/2}=\ell+(i-1/2)\Delta x\quad\text{ if }i>0.\]
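As an illustration, the following minimal sketch (Python/NumPy; the function name and the truncation to \(N\) cells are ours) builds the interfaces and cell centres of a truncated version of the right component \(\mathcal{E}^{+}\); the left component is obtained symmetrically.

```python
import numpy as np

def right_grid(ell, dx, N):
    """Interfaces and cell centres for a truncated right component E^+:
    a half-cell C_{0^+} = (ell, x_{1/2}) followed by the cells C_1, ..., C_N."""
    interfaces = ell + (np.arange(N + 1) + 0.5) * dx   # x_{1/2}, ..., x_{N+1/2}
    centres = ell + np.arange(1, N + 1) * dx           # centres of C_1, ..., C_N
    return interfaces, centres
```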
_Remark 3.2_.: Of course, the numerical domain is of finite size but we work with large enough domains so that the influence of the left and right boundaries of the numerical domain is not seen in the computations near the solid object. For the sake of clarity, we do not mention these boundaries in the presentation of the numerical scheme.
We also denote by \(\Delta t>0\) the time step and by
\[U^{n}=U(n\Delta t),\qquad\Theta^{n}=\Theta(n\Delta t)\quad\text{ and }\quad F _{\text{ext}}^{n}=F_{\text{ext}}(n\Delta t)\]
the values of \(U=(\zeta,q)^{\text{T}}\), of the \(\mathbb{R}^{7}\)-valued vector \(\Theta\) involved in the ODE (27), and of the external force \(F_{\text{ext}}\) at each time step. We further denote by \(U_{i}^{n}\) (\(i\in\big{(}\mathbb{Z}\backslash\{0\}\big{)}\cup\{0^{-},0^{+}\}\)) the approximation of \(U^{n}\) in the middle of the cell \(\mathcal{C}_{i}\) furnished by the numerical scheme.
#### 3.1.2. About the nonlocal operator \(R_{1}\)
The equations (24)-(27) involve the quantities \(R_{1}\mathfrak{f}_{\text{sw}}\) and \((R_{1}\mathfrak{f}_{\text{sw}})_{\pm}\), where we recall that \(R_{1}\) is the inverse of \((1-\kappa^{2}\partial_{x}^{2})\) on \(\mathcal{E}^{-}\cup\mathcal{E}^{+}\) with Neumann boundary condition at \(\pm\ell\), as defined in (9), and that \((R_{1}\mathfrak{f}_{\text{sw}})_{\pm}\) stands for the trace of \(R_{1}\mathfrak{f}_{\text{sw}}\) at \(\pm\ell\).
We keep the same notation \(R_{1}\) for the discrete inverse of the operator \((1-\kappa^{2}\partial_{x}^{2})\) with homogeneous Neumann condition at the boundary. We use here a standard centered second order finite difference approximation for the discretization of \(\partial_{x}^{2}\). More precisely, if \(F=(f_{i})_{|i|\geq 1}\), we denote by \(R_{1}F\) the vector \(R_{1}F=V\) where \(V=(v_{i})_{|i|\geq 1}\) is given by the resolution of the equations
\[v_{i}-\kappa^{2}\frac{v_{i+1}-2v_{i}+v_{i-1}}{\Delta x^{2}}=f_{i},\qquad|i|\geq 2\]
while, for \(i=\pm 1\) a second order discretization of the Neumann boundary condition leads to
\[v_{-1}-\kappa^{2}\frac{2}{3}\frac{v_{-2}-v_{-1}}{\Delta x^{2}}=f_{-1}\quad \text{ and }\quad v_{1}-\kappa^{2}\frac{2}{3}\frac{v_{2}-v_{1}}{\Delta x^{2}}=f_{1}.\]
Similarly, we still denote by \((R_{1}F)_{\pm}\) the discrete version of the traces of \(R_{1}F\) at the boundaries; they are naturally defined by the second order approximation
\[\big{(}R_{1}F\big{)}_{-}=\frac{4}{3}v_{-1}-\frac{1}{3}v_{-2}\quad\text{ and }\quad\big{(}R_{1}F\big{)}_{+}=\frac{4}{3}v_{1}-\frac{1}{3}v_{2}. \tag{28}\]
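To make this operator concrete, here is a minimal sketch (Python, with NumPy and SciPy; the function name and the far-end closure are ours) of the resolution of the above tridiagonal system on one truncated component of the exterior domain, together with the trace formula (28). The far end of the truncated domain is closed with a homogeneous Dirichlet row, which is harmless when the computational domain is large enough.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def apply_R1(f, dx, kappa):
    """Discrete Neumann inverse of (1 - kappa^2 d_xx) on one component of the
    exterior domain; f = (f_1, ..., f_N), ordered away from the contact point.
    Returns the vector v = R_1 f and its trace at the contact point, cf. (28)."""
    N = len(f)
    r = kappa**2 / dx**2
    main = np.full(N, 1.0 + 2.0 * r)
    lower = np.full(N - 1, -r)
    upper = np.full(N - 1, -r)
    # second-order Neumann row at the cell next to the contact point:
    # v_1 - kappa^2 * (2/3) * (v_2 - v_1) / dx^2 = f_1
    main[0] = 1.0 + (2.0 / 3.0) * r
    upper[0] = -(2.0 / 3.0) * r
    A = diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
    v = spsolve(A, np.asarray(f, dtype=float))
    trace = (4.0 * v[0] - v[1]) / 3.0   # (R_1 F)_± from (28)
    return v, trace
```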
### A first order scheme
We propose here an adaptation of the Lax-Friedrichs scheme for the conservation laws with nonlocal flux (24). This scheme is an extension of the scheme used in [26] for the numerical simulation of the Boussinesq equations with generating boundary condition (i.e. with data on \(\zeta\) at the entrance of the numerical domain). It reads
\[\frac{U_{i}^{n+1}-U_{i}^{n}}{\Delta t}+\frac{1}{\Delta x}\big{(}\mathfrak{F}_{ \kappa,i+1/2}^{n}-\mathfrak{F}_{\kappa,i-1/2}^{n}\big{)}=\mathcal{S}_{\pm}^{n} \mathfrak{b}_{i},\qquad\pm i\geq 1,\quad n\geq 0, \tag{29}\]
with
\[\mathcal{S}_{\pm}^{n}=\mathcal{S}_{\pm}(\Theta^{n},(R_{1}\mathfrak{f}_{\rm sw }^{n})_{+},(R_{1}\mathfrak{f}_{\rm sw}^{n})_{-},F_{\rm ext}^{n})\quad\text{ and }\quad \mathfrak{b}_{i}=\mathfrak{b}(i\Delta x)\quad\text{ if }\pm i>0; \tag{30}\]
the discrete fluxes correspond to the Lax-Friedrichs scheme,
\[\begin{cases}\mathfrak{F}_{\kappa,i+1/2}^{n}=\frac{1}{2}\big{(}\mathfrak{F}_{ \kappa,i+1}^{n}+\mathfrak{F}_{\kappa,i}^{n}\big{)}-\frac{\Delta x}{2\Delta t} \big{(}U_{i+1}^{n}-U_{i}^{n}\big{)}&\text{if }i\leq-2,\\ \mathfrak{F}_{\kappa,i-1/2}^{n}=\frac{1}{2}\big{(}\mathfrak{F}_{\kappa,i}^{n} +\mathfrak{F}_{\kappa,i-1}^{n}\big{)}-\frac{\Delta x}{2\Delta t}\big{(}U_{i} ^{n}-U_{i-1}^{n}\big{)}&\text{if }i\geq 2,\end{cases} \tag{31}\]
with the notations
\[\mathfrak{F}_{\kappa,i}^{n}=\big{(}q_{i}^{n},(R_{1}\mathfrak{f}_{\rm sw}^{n}) _{i}\big{)}^{\rm T}\quad\text{ and }\quad\mathfrak{f}_{\rm sw}^{n}=\mathfrak{f}_{\rm sw}(U^{n});\]
finally, for \(i=\pm 1\), we must adapt (31) in the following way,
\[\begin{cases}\mathfrak{F}_{\kappa,-1/2}^{n}=\frac{1}{2}\big{(}\mathfrak{F}_{ \kappa,0^{-}}^{n}+\mathfrak{F}_{\kappa,-1}^{n}\big{)}-\frac{\Delta x}{2\Delta t }\big{(}U_{0^{-}}^{n}-U_{-1}^{n}\big{)}\\ \mathfrak{F}_{\kappa,1/2}^{n}=\frac{1}{2}\big{(}\mathfrak{F}_{\kappa,1}^{n}+ \mathfrak{F}_{\kappa,0^{+}}^{n}\big{)}-\frac{\Delta x}{2\Delta t}\big{(}U_{1} ^{n}-U_{0^{+}}^{n}\big{)}\end{cases} \tag{32}\]
with
\[\mathfrak{F}_{\kappa,0^{\pm}}^{n}=\big{(}q_{0^{\pm}}^{n},(R_{1}\mathfrak{f}_{ \rm sw}^{n})_{\pm}\big{)}^{\rm T}; \tag{33}\]
the component \((R_{1}\mathfrak{f}_{\rm sw}^{n})_{\pm}\) is computed according to (28), but we still need to define \(q_{0^{\pm}}^{n}\). By definition \(q_{0^{\pm}}^{n}\) is the approximation at time \(n\Delta t\) of the trace of the discharge \(q\) at \(\pm\ell\). From the transmission conditions (17) of the continuous problem, we have \(q_{|_{x=\pm\ell}}=\langle q_{i}\rangle\mp\ell\dot{\delta}\). Recalling also that \(\langle q_{i}\rangle\) and \(\dot{\delta}\) are respectively the first and second components of \(\Theta\), this relation can be rewritten \(q_{|_{x=\pm\ell}}=\Theta_{1}\mp\ell\Theta_{2}\). At the discrete level, this leads to the following definition for \(q_{0^{\pm}}^{n}\),
\[q_{0^{\pm}}^{n}=\Theta_{1}^{n}\mp\ell\Theta_{2}^{n}. \tag{34}\]
The equation (27) is discretized with a first-order explicit Euler scheme:
\[\frac{\Theta^{n+1}-\Theta^{n}}{\Delta t}=\mathcal{G}\big{(}\Theta^{n},(R_{1} \mathfrak{f}_{\rm sw}^{n})_{+},(R_{1}\mathfrak{f}_{\rm sw}^{n})_{-},F_{\rm ext }^{n}\big{)}. \tag{35}\]
The equations (29)-(35) furnish an induction relation that allows one to compute \(U^{n+1}\) and \(\Theta^{n+1}\) in terms of \(U^{n}\) and \(\Theta^{n}\). It needs of course to be initialized with initial data, which are taken of the form
\[U_{i}^{0}=(\zeta^{\rm in}_{i},q^{\rm in}_{i})^{\rm T}\qquad(i\in\big{(}\mathbb{Z}\backslash\{0\}\big{)}\cup\{0^{-},0^{+}\}), \tag{36}\]
with \(\zeta^{\rm in}\) and \(q^{\rm in}\) describing the initial wave field in the exterior domain, and
\[\Theta^{0}=\big{(}\langle q_{i}\rangle^{\rm in},\delta^{(1)},\underline{\zeta }_{+}^{(1)},\underline{\zeta}_{-}^{(1)},\delta^{(0)},\underline{\zeta}_{+}^{(0 )},\underline{\zeta}_{-}^{(0)}\big{)}^{\rm T} \tag{37}\]
satisfies the discrete version of the compatibility conditions of Proposition 2.2, namely,
\[\langle q^{\rm in}\rangle=\langle q_{i}\rangle^{\rm in},\qquad\llbracket q^{\rm in }\rrbracket=-2\ell\delta^{(1)},\qquad\underline{\zeta}_{\pm}^{(0)}=\zeta_{ \pm}^{\rm in},\qquad\underline{\zeta}_{\pm}^{(1)}=-(\partial_{x}q^{\rm in})_ {\pm}. \tag{38}\]
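To fix ideas, here is a minimal sketch of one time step of this scheme on the right component only (Python/NumPy; it reuses `apply_R1` from the sketch of §3.1.2 above). The callables `G` and `S_plus` stand for the right-hand side \(\mathcal{G}\) of (27) and the amplitude \(\mathcal{S}_{+}\) of (19), whose explicit expressions are given in Appendix A; their signatures are simplified here to the symmetric, right-component setting. Taking the elevation in the boundary half-cell from the augmented unknown \(\underline{\zeta}_{+}\) carried by \(\Theta\), and freezing the last cell as a crude far-field closure, are our own simplifications.

```python
import numpy as np

def fsw(zeta, q, eps):
    """Shallow water momentum flux (1)."""
    h = 1.0 + eps * zeta
    return eps * q**2 / h + (h**2 - 1.0) / (2.0 * eps)

def lf_step_right(zeta, q, Theta, dt, dx, eps, kappa, ell, G, S_plus, Fext):
    """One step of the first-order scheme (29)-(35) on the cells C_1, ..., C_N
    of E^+ (the last cell is left untouched as a far-field closure)."""
    # nonlocal part of the flux and its trace at x = +ell
    R1f, R1f_plus = apply_R1(fsw(zeta, q, eps), dx, kappa)
    # boundary half-cell: discharge from (34), elevation from Theta (our choice)
    q0 = Theta[0] - ell * Theta[1]
    zeta0 = Theta[5]
    # stacked cell values and fluxes, boundary half-cell first, cf. (33)
    Uz = np.concatenate(([zeta0], zeta)); Fz = np.concatenate(([q0], q))
    Uq = np.concatenate(([q0], q)); Fq = np.concatenate(([R1f_plus], R1f))
    lam = dx / (2.0 * dt)
    # Lax-Friedrichs interface fluxes (31)-(32)
    Fz_half = 0.5 * (Fz[1:] + Fz[:-1]) - lam * (Uz[1:] - Uz[:-1])
    Fq_half = 0.5 * (Fq[1:] + Fq[:-1]) - lam * (Uq[1:] - Uq[:-1])
    # exponentially localized source (26), (30); only the momentum equation is forced
    xc = ell + dx * np.arange(1, len(zeta) + 1)
    b = np.exp(-(xc - ell) / kappa)
    S = S_plus(Theta, R1f_plus, Fext)
    # conservative update (29) of the cells C_1, ..., C_{N-1}
    zeta_new, q_new = zeta.copy(), q.copy()
    zeta_new[:-1] = zeta[:-1] - dt / dx * (Fz_half[1:] - Fz_half[:-1])
    q_new[:-1] = (q[:-1] - dt / dx * (Fq_half[1:] - Fq_half[:-1])
                  + dt * S * b[:-1])
    # explicit Euler step (35) for the ODE unknowns
    Theta_new = Theta + dt * G(Theta, R1f_plus, Fext)
    return zeta_new, q_new, Theta_new
```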
### A second order scheme
We propose here an adaptation of the MacCormack scheme for the conservation laws with nonlocal flux (24), coupled with a second-order Heun integration scheme for the system of 7 first-order ODEs (27). Both are predictor-corrector schemes. We use the same notations as in the previous subsection and can decompose the scheme into four main steps:
- _Prediction step for the MacCormack scheme_. This reads
\[\frac{U_{i}^{n,*}-U_{i}^{n}}{\Delta t}+\frac{1}{\Delta x}\big{(}\mathfrak{F}_{ \kappa,i}^{n}-\mathfrak{F}_{\kappa,i-1}^{n}\big{)}=\mathcal{S}_{+}^{n} \mathfrak{b}_{i},\qquad i>1,\quad n\geq 0, \tag{39}\]
with \(U_{i}^{n,*}=(\zeta_{i}^{n,*},q_{i}^{n,*})^{\mathrm{T}}\). We use a symmetric scheme with respect to \(x=0\) so that, for negative values of \(i\), we use a forward rather than backward derivative for the flux,
\[\frac{U_{i}^{n,*}-U_{i}^{n}}{\Delta t}+\frac{1}{\Delta x}\big{(}\mathfrak{F}_{ \kappa,i+1}^{n}-\mathfrak{F}_{\kappa,i}^{n}\big{)}=\mathcal{S}_{-}^{n} \mathfrak{b}_{i},\qquad i<-1,\quad n\geq 0, \tag{40}\]
For \(i=1\) and \(i=-1\) it reads
\[\frac{U_{1}^{n,*}-U_{1}^{n}}{\Delta t}+\frac{1}{\Delta x}\big{(}\mathfrak{F}_{\kappa,1}^{n}-\mathfrak{F}_{\kappa,0^{+}}^{n}\big{)} =\mathcal{S}_{+}^{n}\mathfrak{b}_{1}, \tag{41}\] \[\frac{U_{-1}^{n,*}-U_{-1}^{n}}{\Delta t}+\frac{1}{\Delta x}\big{(}\mathfrak{F}_{\kappa,0^{-}}^{n}-\mathfrak{F}_{\kappa,-1}^{n}\big{)} =\mathcal{S}_{-}^{n}\mathfrak{b}_{-1}, \tag{42}\]
for \(n\geq 0\) and with \(\mathfrak{F}_{\kappa,0^{\pm}}^{n}\) as in (33).
- _Prediction step for the Heun scheme_. This step is similar to a first-order explicit Euler scheme,
\[\frac{\Theta^{n,*}-\Theta^{n}}{\Delta t}=\mathcal{G}\big{(}\Theta^{n},(R_{1} \mathfrak{f}_{\mathrm{sw}}^{n})_{+},(R_{1}\mathfrak{f}_{\mathrm{sw}}^{n})_{- },F_{\mathrm{ext}}^{n}\big{)}. \tag{43}\]
- _Corrector step for the MacCormack scheme_. With the quantities computed in the previous steps, we define
\[\mathfrak{f}_{\mathrm{sw}}^{n,*}=\mathfrak{f}_{\mathrm{sw}}(U^{n,*})\quad \text{ and }\quad q_{0^{\pm}}^{n,*}=\Theta_{1}^{n,*}\mp\ell\Theta_{2}^{n,*} \tag{44}\]
as well as an intermediate non-local flux and an intermediate source term,
\[\mathfrak{F}_{\kappa,i}^{n,*} =\big{(}q_{i}^{n,*},(R_{1}\mathfrak{f}_{\mathrm{sw}}^{n,*})_{i}\big{)}^{\mathrm{T}}\qquad|i|\geq 1,\quad n\geq 0, \tag{45}\] \[\mathcal{S}_{\pm}^{n,*} =\mathcal{S}_{\pm}(\Theta^{n,*},(R_{1}\mathfrak{f}_{\mathrm{sw}}^{n,*})_{+},(R_{1}\mathfrak{f}_{\mathrm{sw}}^{n,*})_{-},F_{\mathrm{ext}}^{n}) \quad n\geq 0. \tag{46}\]
The correction step for the MacCormack scheme then reads
\[\frac{U_{i}^{n+1}-U_{i}^{n}}{\Delta t}+\frac{\mathfrak{F}_{\kappa,i}^{n}- \mathfrak{F}_{\kappa,i-1}^{n}+\mathfrak{F}_{\kappa,i+1}^{n,*}-\mathfrak{F}_{ \kappa,i}^{n,*}}{2\Delta x}=\frac{\mathcal{S}_{+}^{n}+\mathcal{S}_{+}^{n,*}}{2 }\mathfrak{b}_{i}\qquad i\geq 1, \tag{47}\]
for \(n\geq 0\). Here again, we take a symmetric scheme so that for \(i\leq-1\), we take a forward difference of \(\mathfrak{F}^{n}\) and a backward difference of \(\mathfrak{F}^{n,*}\),
\[\frac{U_{i}^{n+1}-U_{i}^{n}}{\Delta t}+\frac{\mathfrak{F}_{\kappa,i+1}^{n}- \mathfrak{F}_{\kappa,i}^{n}+\mathfrak{F}_{\kappa,i}^{n,*}-\mathfrak{F}_{ \kappa,i-1}^{n,*}}{2\Delta x}=\frac{\mathcal{S}_{-}^{n}+\mathcal{S}_{-}^{n,*}}{ 2}\mathfrak{b}_{i}\qquad i\leq-1; \tag{48}\]
in particular, there is no need to define the boundary values \(\mathfrak{F}_{\kappa,0^{\pm}}^{n,*}\) of the intermediate flux.
- _Corrector step for the Heun scheme_. This reads, for \(n\geq 0\),
\[\frac{\Theta^{n+1}-\Theta^{n}}{\Delta t}=\frac{\mathcal{G}\big{(}\Theta^{n},(R_{ 1}\mathfrak{f}_{\mathrm{sw}}^{n})_{\pm},F_{\mathrm{ext}}^{n}\big{)}+\mathcal{G }\big{(}\Theta^{n,*},(R_{1}\mathfrak{f}_{\mathrm{sw}}^{n,*})_{\pm},F_{\mathrm{ ext}}^{n+1}\big{)}}{2}. \tag{49}\]
The initial data have the same form as for the first order scheme described in the previous subsection.
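A sketch of one step of the second-order scheme, with the same conventions, helper functions, and simplifications as the first-order sketch above, reads as follows; note that, as pointed out above, no boundary value of the intermediate flux is needed in the corrector.

```python
import numpy as np

def mc_heun_step_right(zeta, q, Theta, dt, dx, eps, kappa, ell,
                       G, S_plus, Fext_n, Fext_np1):
    """One step of the second-order scheme (39)-(49) on C_1, ..., C_N of E^+
    (reuses fsw and apply_R1 from the previous sketches)."""
    # --- predictor for the PDE part, backward flux difference (39)/(41) ---
    R1f, R1f_p = apply_R1(fsw(zeta, q, eps), dx, kappa)
    q0 = Theta[0] - ell * Theta[1]
    Fz = np.concatenate(([q0], q))
    Fq = np.concatenate(([R1f_p], R1f))
    xc = ell + dx * np.arange(1, len(zeta) + 1)
    b = np.exp(-(xc - ell) / kappa)
    S = S_plus(Theta, R1f_p, Fext_n)
    zeta_s = zeta - dt / dx * np.diff(Fz)
    q_s = q - dt / dx * np.diff(Fq) + dt * S * b
    # --- predictor for the ODE part (43): explicit Euler ---
    Theta_s = Theta + dt * G(Theta, R1f_p, Fext_n)
    # --- corrector for the PDE part (47): forward difference of predicted flux ---
    R1f_s, R1f_ps = apply_R1(fsw(zeta_s, q_s, eps), dx, kappa)
    S_s = S_plus(Theta_s, R1f_ps, Fext_n)
    zeta_new, q_new = zeta.copy(), q.copy()
    zeta_new[:-1] = zeta[:-1] - dt / (2 * dx) * (
        np.diff(Fz)[:-1] + (q_s[1:] - q_s[:-1]))
    q_new[:-1] = (q[:-1] - dt / (2 * dx) * (
        np.diff(Fq)[:-1] + (R1f_s[1:] - R1f_s[:-1]))
        + dt * 0.5 * (S + S_s) * b[:-1])
    # --- corrector for the ODE part (49): Heun average ---
    Theta_new = Theta + 0.5 * dt * (G(Theta, R1f_p, Fext_n)
                                    + G(Theta_s, R1f_ps, Fext_np1))
    return zeta_new, q_new, Theta_new
```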
## 4. Numerical simulations
We have seen in §2.1 that the wave-structure interaction problem under consideration in this paper can be reduced to a transmission problem potentially coupled to two forced ODEs for the vertical displacement \(\delta\) of the object and the mean discharge \(\langle q_{i}\rangle\) under the object.
We first consider in §4.1 a situation where this coupling is absent. This corresponds to the case where a wave is generated in a wave tank by moving the object vertically with a prescribed motion. This example is of particular interest since it provides an efficient way to generate waves for the Boussinesq equations at the entrance of a numerical domain if we have at our disposal time series of the horizontal discharge at the boundary, thereby extending the result of [26] where data on the surface elevation were used.
We then consider in §4.2 the return to equilibrium problem (also called decay test or drop test by engineers), which consists in releasing an object from an out of equilibrium position and observing its oscillations. These examples involve the coupling of the transmission problem with the ODE on \(\delta\). In the linear case, we are able to derive exact explicit solutions that we compute to check the numerical convergence of our scheme; the nonlinear case is then investigated and the importance of the dispersive effects pointed out by comparing with simulations based on the nonlinear shallow water equations instead of the Boussinesq system.
We then investigate in §4.3 a configuration where the transmission problem is coupled to the interior discharge \(\langle q_{\mathrm{i}}\rangle\), namely, the interaction of waves with a fixed partially immersed object. Here again, we derive an explicit exact solution in the linear case that we use to check that this coupling is also treated at second order. The nonlinear case is then considered.
Finally, a configuration involving the most general coupling (with both \(\delta\) and \(\langle q_{i}\rangle\)) is considered in §4.4; it consists in the interaction of a solitary wave with an object freely floating in the vertical direction.
### Wave generation
The first physical configuration we consider consists in creating waves in a fluid initially at rest by moving up and down a partially immersed object. By symmetry, it is enough to consider the waves in the right component \(\mathcal{E}^{+}=(\ell,\infty)\) of the fluid domain. As shown in §A.3 of Appendix A, the mathematical formulation of this problem is a particular case of the following initial boundary value problem with boundary condition on the discharge \(q\), namely,
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,&t>0,\quad x>\ell\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\mathfrak{f}_{\mathrm{ sw}}=0,&t>0,\quad x>\ell\end{cases} \tag{50}\]
with \(\mathfrak{f}_{\mathrm{sw}}\) as in (1) and with boundary condition
\[q_{|_{x=\ell}}(t)=g(t),\qquad t>0 \tag{51}\]
and initial condition
\[(\zeta,q)_{|_{t=0}}(x)=(\zeta^{\rm in},q^{\rm in})(x),\qquad x>\ell, \tag{52}\]
and where \(g\), \(\zeta^{\rm in}\) and \(q^{\rm in}\) are some given functions satisfying the compatibility condition
\[q^{\rm in}(x=\ell)=g(t=0), \tag{53}\]
which is obviously necessary to obtain solutions that are continuous at the origin in time and space. This problem is in some sense symmetric to the one considered in [26], where a boundary condition on \(\zeta\) rather than \(q\) was considered and where a first order scheme was proposed.
_Remark 4.1_.: For the wave generation problem, one has \((\zeta^{\rm in},q^{\rm in})=(0,0)\) and \(g(t)=-\ell\dot{\delta}_{\rm forced}\), where \(\delta_{\rm forced}\) is the prescribed vertical displacement of the center of mass of the object.
Contrary to the other physical configurations we consider in this article, the wave generation problem (or more generally, the initial boundary value problem (50)-(53)) does not require the resolution of an ODE to determine the boundary data on the discharge. The formulation as an initial value problem given in Proposition 2.2 then reduces to
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ \partial_{t}q+\partial_{x}R_{1}\mathfrak{f}_{\rm sw}=\mathcal{S}_{+}(t)\exp( -\frac{x-\ell}{\kappa})\end{cases}\qquad\text{with}\quad\mathcal{S}_{+}(t)= \dot{g}(t), \tag{54}\]
for \(x>\ell\) and with initial condition (52) satisfying (53).
The numerical schemes presented in §3.2 and §3.3 can be simplified by skipping the second and fourth steps related to the Heun scheme, and by taking simply, for the first-order scheme:
\[\mathcal{S}^{n}=\frac{g^{n+1}-g^{n}}{\Delta t},\]
and for the second-order scheme:
\[\mathcal{S}^{n}=\frac{g^{n+1}-g^{n-1}}{2\Delta t}\quad\text{ and }\quad\mathcal{S}^ {n,*}=\frac{g^{n+2}-g^{n}}{2\Delta t}.\]
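If the boundary data are available as samples \(g^{n}=g(n\Delta t)\) stored in an array, these amplitudes can be computed as in the following small helper (Python; names are ours, and the centered formula requires \(n\geq 1\), the first step being handled with the one-sided formula or the known initial data):

```python
def source_first_order(g, n, dt):
    # S^n = (g^{n+1} - g^n) / dt
    return (g[n + 1] - g[n]) / dt

def source_second_order(g, n, dt):
    # S^n = (g^{n+1} - g^{n-1}) / (2 dt)  and  S^{n,*} = (g^{n+2} - g^n) / (2 dt)
    return (g[n + 1] - g[n - 1]) / (2 * dt), (g[n + 2] - g[n]) / (2 * dt)
```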
The wave generation problem gives us the opportunity to validate our numerical code with a nonlinear case. The Boussinesq-Abbott equations admit solitary wave solutions of the form
\[(\zeta,q)(t,x)=\big{(}\zeta_{c}(x-x_{0}-ct),c\zeta_{c}(x-x_{0}-ct)\big{)},\]
where \(c>0\), \(x_{0}\in\mathbb{R}\) and \(\zeta_{c}\) is a smooth, even and rapidly decaying function. These solutions can be used to test the precision of the code. For the Boussinesq-Abbott equations, there is no explicit formula for \(\zeta_{c}\) and it is determined by the resolution of a nonlinear second order ODE, namely,
\[c^{2}\kappa^{2}\zeta_{c}^{\prime\prime}-c^{2}\frac{\zeta_{c}}{1+\varepsilon\zeta_{c}}+\frac{\varepsilon^{2}\zeta_{c}^{2}+2\varepsilon\zeta_{c}}{2\varepsilon}=0, \tag{55}\]
with
\[c^{2}=\frac{\varepsilon}{6}\frac{3\zeta_{\max}^{2}+\varepsilon\zeta_{\max}^{3 }}{\zeta_{\max}-\frac{\ln(1+\varepsilon\zeta_{\max})}{\varepsilon}}\]
(see for instance [26] for more details on the computations); these formulas furnish a family of solitary waves parametrized by their maximal amplitude \(\zeta_{\max}\). Solving the above ODE with a standard high precision ODE solver provides us with a solution to (50) that we use to assess the precision of the numerical solution obtained with our numerical scheme for (54) with discharge boundary data \(g(t)=c\zeta_{c}(0-x_{0}-ct)\) and initial data \((\zeta^{\mathrm{in}},q^{\mathrm{in}})=(\zeta_{c}(x-x_{0}-0),c\zeta_{c}(x-x_{0}-0))\).
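As an illustration, the reference profile and its speed can be produced with a few lines of Python (SciPy's `solve_ivp`; the integration length, tolerances and function name are ours). Since the profile is the homoclinic orbit of (55), starting exactly at the crest makes the far tail numerically delicate, so tight tolerances and a moderate integration length are used in practice.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solitary_wave(zeta_max, eps, kappa, xi_max=20.0, n_pts=2001):
    """Solitary wave of maximal amplitude zeta_max: speed c and profile zeta_c
    on [0, xi_max], obtained by integrating the ODE (55) from the crest."""
    c2 = (eps / 6.0) * (3.0 * zeta_max**2 + eps * zeta_max**3) / (
        zeta_max - np.log(1.0 + eps * zeta_max) / eps)
    def rhs(xi, y):
        z, dz = y
        d2z = (c2 * z / (1.0 + eps * z) - (z + 0.5 * eps * z**2)) / (c2 * kappa**2)
        return [dz, d2z]
    xi = np.linspace(0.0, xi_max, n_pts)
    sol = solve_ivp(rhs, (0.0, xi_max), [zeta_max, 0.0], t_eval=xi,
                    rtol=1e-10, atol=1e-12)
    return np.sqrt(c2), xi, sol.y[0]   # the profile is even: zeta_c(-xi) = zeta_c(xi)
```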
_Remark 4.2_.: For very fine meshes, spurious oscillations may appear. These oscillations are reminiscent of the oscillations that appear when using dispersive schemes (such as the Lax-Wendroff or MacCormack schemes) to simulate shock waves. Flux-limiter methods are typically used to control this phenomenon [27]. Here, these oscillations are created at the boundary, whose position is fixed, and we use a very simple and efficient method consisting in adding an artificial viscosity on a finite number \(n_{0}\) of cells near the boundary. More precisely, in the right component of the fluid domain (the left component is treated symmetrically) we add the following term to the right-hand side of the first component of (48),
\[\nu\frac{\Delta x}{\Delta t}\big{(}\zeta^{n}_{i+1}-2\zeta^{n}_{i}+\zeta^{n}_{i -1}\big{)}\qquad 1\leq i\leq n_{0}\]
with \(\nu>0\) a fixed coefficient, that we take equal to \(2.136\). This corresponds to an artificial viscosity \(\nu\frac{(\Delta x)^{3}}{\Delta t}\partial_{x}^{2}\zeta\); for a fixed ratio \(\Delta x/\Delta t\), this viscosity is of order \(2\) and therefore does not alter the overall second order of the MacCormack scheme.
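Concretely, if `zeta_new` stores the elevation already updated by (48) over the cells \(C_{1},C_{2},\dots\) of \(\mathcal{E}^{+}\) and `zeta_old` stacks the old values with the boundary half-cell first, the correction can be applied as in the following sketch (the number \(n_{0}\) of damped cells is left as a parameter since it is not specified above):

```python
def add_boundary_viscosity(zeta_new, zeta_old, dx, dt, n0, nu=2.136):
    """Artificial viscosity of Remark 4.2 on the n0 cells nearest the object.
    zeta_old[0] is the boundary half-cell at time n, zeta_old[i] the cell C_i;
    zeta_new[i-1] is the already-updated value in C_i at time n+1."""
    for i in range(1, n0 + 1):
        lap = zeta_old[i + 1] - 2.0 * zeta_old[i] + zeta_old[i - 1]
        # the term nu*(dx/dt)*lap is added to the right-hand side of (48),
        # hence it enters the update multiplied by dt
        zeta_new[i - 1] += dt * (nu * dx / dt) * lap
    return zeta_new
```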
We choose for this test \(\zeta_{\mathrm{max}}=1\) and \(3\kappa^{2}=\varepsilon=0.3\). Once \(c\) is computed, we solve the differential equation (55) with a high order numerical method in order to obtain our reference solution. The size of the computational domain is \(L=6\). The space step is computed as \(\Delta_{x}=L/N\), with \(N=200,240,300,400\). We take a constant time step \(\Delta_{t}=0.8\,\Delta_{x}\). The maximum of the soliton is initially located to the left of the computational domain, at \(x=-L/2\), so that the initial datum in the computational domain is almost zero, and the soliton then propagates into it. The numerical results at final time \(T_{f}=20\) are presented for both schemes on Figures 3 and 4, showing respectively a first-order and a second-order convergence.
These results correspond to a final time at which the soliton has completely entered the computational domain, so that the influence of the dispersive boundary layer due to the generating condition at the left side of the domain is nearly zero. However, if one chose a final time at which the soliton is still entering the computational domain, one would notice that the error for the variable \(\zeta\) is only first-order in the vicinity of the left boundary of the computational domain, while it is still second-order for the variable \(q\). This first-order error is probably due to a lack of accuracy in the numerical evaluation of the spatial derivative of \(q\) near the left boundary.
### Return to equilibrium
We consider here the return to equilibrium problem (also called decay test), which consists in dropping the floating object from an out of equilibrium position and letting it oscillate vertically and stabilize towards its equilibrium position. This problem is of practical importance, because it is used by engineers to characterize some buoyancy properties of the solid, and of theoretical interest, because it leads to simpler equations than the general wave-structure equations. For instance, in the nonlinear non dispersive case (\(\varepsilon\neq 0\), \(\kappa=0\)), it is possible to show that the dimensionless vertical displacement \(\varepsilon\delta\) of the solid with respect to its equilibrium position is fully described by a second order nonlinear scalar ODE [24, 4] and that in the linear dispersive case (\(\varepsilon=0\), \(\kappa\neq 0\)) it is governed by
a second order linear integro-differential equation [4]. In the nondispersive case, similar equations have also been derived in the presence of viscosity [28] in the linear case, as well as in the \(2D\) radial and partially linear case [8]. In the presence of nonlinear _and_ dispersive effects (\(\varepsilon\neq 0\), \(\kappa\neq 0\)), it does not seem possible to derive such a simple equation for the motion of the solid and the wave-structure equations must therefore be solved.
As for the wave generation problem, there is a symmetry in this problem which allows one to consider only the right part \(\mathcal{E}^{+}\) of the fluid domain, and the governing wave-structure interaction equations reduce to an initial boundary value problem of the form (50) with \(g=-\ell\dot{\delta}\). The difference is that the vertical displacement \(\delta\) is no longer a given function but is found through the resolution of Newton's equation (see (7) for its general expression). Since this equation involves the trace of \(\zeta\) at the contact point, we have to work with the augmented formulation provided by Proposition 2.2. Since in this particular case one has \(F_{\rm ext}\equiv 0\), \(\langle q_{i}\rangle\equiv 0\) and \(\zeta_{+}=\zeta_{-}\), the 7-dimensional ODE on \(\Theta\) can be reduced to a 4-dimensional ODE (see §A.3 in Appendix A). The interest of this test case is that, since the interior discharge identically vanishes, it allows us to investigate specifically the coupling between the waves and the vertical displacement of the object. We first consider in §4.2.1 the linear case, for which explicit solutions exist and can be used to investigate the precision of the code, and then show in §4.2.2 some simulations in the nonlinear case.

Figure 3. Solitary wave with generating condition on \(q\), \(L^{\infty}\) error for \(3\kappa^{2}=0.3\) with first-order scheme, left: \(\zeta\), right: \(q\).

Figure 4. Solitary wave with generating condition on \(q\), \(L^{\infty}\) error for \(3\kappa^{2}=0.3\) with second-order scheme, left: \(\zeta\), right: \(q\).
#### 4.2.1. Convergence error in the linear case
We first consider the linear case (\(\varepsilon=0\)) since in this case, it was shown in [4] that the evolution of \(\delta\) can be found by solving a linear second order integro-differential equation, namely,
\[\big{(}\tau_{\kappa}(0)^{2}+\ell\kappa\big{)}\ddot{\delta}+\ell\mathcal{K}_{ \kappa}^{1}\ast\dot{\delta}+\delta=0,\]
with initial conditions \(\delta(0)=\delta_{0}\) and \(\dot{\delta}(0)=0\) and where \(\tau_{\kappa}(\cdot)\) is defined in Appendix A and the kernel \(\mathcal{K}_{\kappa}^{1}\) is given in terms of the first Bessel function \(J_{1}\) by the relation
\[\mathcal{K}_{\kappa}^{1}(t)=\frac{1}{t}J_{1}\big{(}\frac{t}{\kappa}\big{)}.\]
The solution of this integro-differential equation is given explicitly by taking the Laplace transform (denoted with a hat),
\[\widehat{\delta}(s)=\frac{\tau_{\kappa}(0)^{2}s+\ell\sqrt{1+\kappa^{2}s^{2}}} {\tau_{\kappa}(0)^{2}s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}\delta_{0},\qquad s \in\mathbb{C}_{0}, \tag{56}\]
where \(\mathbb{C}_{0}\) is the half-plane of complex numbers \(s\) such that \(\Re s>0\). The vertical displacement deduced from the exact formula (56), denoted \(\delta_{\text{exact}}\), is compared with the displacement \(\delta\) found by solving the wave-structure equations using the numerical schemes presented in Section 3. In order to discard possible numerical errors in the computation of the inverse Laplace transform, we apply to (56) two different inversion methods (the Euler and Talbot methods [1]) and require that they match up to \(10^{-4}\) before considering the resulting solution as an exact reference for our convergence studies.
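For completeness, here is a minimal inversion of (56) based on the fixed-Talbot quadrature of Abate and Valkó (Python/NumPy; it is a stand-in for the Euler/Talbot pair of [1], and the function names are ours). The value of \(\tau_{\kappa}(0)\) must be taken from Appendix A and is passed as a parameter; the principal branch of the square root is assumed, and since \(\hat{\delta}\) has branch points at \(s=\pm i/\kappa\) the result should indeed be cross-checked against a second method or a different number of nodes, as described above.

```python
import numpy as np

def delta_hat(s, ell, kappa, tau0, delta0=1.0):
    """Laplace transform (56); tau0 = tau_kappa(0), to be taken from Appendix A."""
    root = np.sqrt(1.0 + (kappa * s) ** 2)
    return (tau0**2 * s + ell * root) / (tau0**2 * s**2 + s * ell * root + 1.0) * delta0

def talbot_inversion(F, t, M=64):
    """Fixed-Talbot approximation of the inverse Laplace transform of F at t > 0."""
    r = 2.0 * M / (5.0 * t)
    theta = np.pi * np.arange(1, M) / M
    cot = 1.0 / np.tan(theta)
    s = r * theta * (cot + 1j)
    sigma = theta + (theta * cot - 1.0) * cot
    terms = np.exp(t * s) * F(s) * (1.0 + 1j * sigma)
    return (r / M) * (0.5 * np.exp(r * t) * F(r) + np.sum(terms.real))

# example usage (ell, kappa, tau0, delta0 and t to be supplied):
# delta_exact = talbot_inversion(lambda s: delta_hat(s, ell, kappa, tau0, delta0), t)
```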
In our numerical tests we chose \(3\kappa^{2}=0.3\) or \(3\kappa^{2}=0.1\), \(h_{eq}=1-0.3\), \(\ell=4\) and the size of the computational domain \(L=30\). The space steps \(\Delta_{x}\) were computed as \(\Delta_{x}=(L-\ell)/(N+1)\), with \(N=300,400,500,600\) for the first-order scheme and \(N=60,120,240,320\) for the second-order scheme. The time step was computed as \(\Delta_{t}=0.9\Delta_{x}\). The numerical results at final time \(T_{f}=15\) computed with the first-order and the second-order schemes show respectively a first-order convergence (see Figure 5) and a second-order convergence (see Figure 6). On Figure 7 we compare the numerical results for the two schemes for \(3\kappa^{2}=0.3\) and \(N=100\), showing evidence that it is advantageous to use the second-order scheme.
#### 4.2.2. The nonlinear case
We do not have any exact solution to compare with in the nonlinear case, and we therefore use mesh-convergence to study the order of our schemes. The reference solution is computed with a very refined mesh: \(N=2400\). We chose \(\epsilon=0.3\), \(3\kappa^{2}=0.3\) or \(3\kappa^{2}=0.1\), \(h_{eq}=1-0.3\), \(l=4\) and the size of the computational domain \(L=30\). The space steps \(\Delta_{x}\) were computed as \(\Delta_{x}=(L-l)/(N+1)\), with \(N=160,200,240,300,400\) for the first-order scheme and \(N=120,160,200\) for the second-order scheme. The meshes are defined so that the points of the coarse meshes always coincide with the points of the very refined mesh of the reference solution. The time step was computed as \(\Delta_{t}=0.7\Delta_{x}\). The numerical results at final time \(T_{f}=15\) computed with the first-order and the
second-order schemes show respectively a first-order convergence, see Figure 8 and a second-order convergence, see Figure 9.
We perform another test to study qualitatively the influence of the dispersion on the nonlinear decay test. We compare the trajectories obtained for different values of \(\kappa\) with the trajectory obtained in the non dispersive case (\(\kappa=0\)). In the latter case, it was shown in [4] that the evolution of \(\delta\) can be found, under some smallness assumptions, by solving a second order differential equation of the form
\[\tau_{0}(\varepsilon\delta)^{2}\ddot{\delta}+\ell\dot{\delta}+\delta+ \varepsilon B(\varepsilon\delta)\dot{\delta}^{2}=0, \tag{57}\]
where \(B(\cdot)\) is a smooth function whose exact expression can be found in Corollary 4.3 of [4]. This nondispersive solution is used as reference to illustrate the contribution of the dispersive terms. On Figure 10 we compare this solution with the numerical results for \(\varepsilon=0.3\), \(h_{eq}=1-0.3\), \(l=0.25\) and various values of \(3\kappa^{2}\) ranging from \(0.05\) to \(1\).
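The nondispersive reference trajectory used in Figure 10 solves the second order ODE (57); a minimal sketch using scipy is given below, where `tau0_sq` and `B` are placeholder callables standing for \(\tau_{0}(\varepsilon\delta)^{2}\) and \(B(\varepsilon\delta)\), whose exact expressions are given in Corollary 4.3 of [4].

```python
from scipy.integrate import solve_ivp

def nondispersive_decay(eps, ell, tau0_sq, B, delta0, t_final=15.0):
    # first order system for (57): y = (delta, delta_dot)
    def rhs(t, y):
        d, dd = y
        acc = -(ell * dd + d + eps * B(eps * d) * dd**2) / tau0_sq(eps * d)
        return [dd, acc]
    return solve_ivp(rhs, (0.0, t_final), [delta0, 0.0], max_step=1e-2)
```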
Figure 5. Return to equilibrium, linear case: convergence results for \(\delta\) with the first-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 6. Return to equilibrium, linear case: convergence results for \(\delta\) with the second-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
### Waves interacting with a fixed object
We consider here waves that are interacting with a fixed partially immersed object. This is a particular example of prescribed motion (\(\delta\equiv 0\)), but contrary to the wave generation problem, the waves are not assumed to be symmetric with respect to the central axis \(\{x=0\}\). It follows that the interior discharge \(\langle q_{\mathrm{i}}\rangle\) does not vanish identically and that it must be found by solving the ODE (6). From a mathematical point of view, this physical configuration is in some sense symmetric to the return to equilibrium problem: it allows one to focus on the coupling of the fluid equation with the dynamics of the interior discharge \(\langle q_{\mathrm{i}}\rangle\) (since \(\delta\equiv 0\)), while for the return to equilibrium problem the coupling was only with the vertical displacement \(\delta\) (since in that case \(\langle q_{\mathrm{i}}\rangle\equiv 0\)). In this case, the 7-dimensional ODE on \(\Theta\) of the augmented formulation given in Proposition 2.2 can be reduced to a 5-dimensional ODE, as explained in §A.3 of Appendix A. We first study in §4.3.1 the linear case for which
Figure 8. Return to equilibrium, non-linear case: mesh-convergence results for \(\delta\) with the first-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 7. Return to equilibrium, linear case: temporal evolution of \(\delta\), comparison between the first-order scheme and the second-order scheme, \(3\kappa^{2}=0.3\) and \(N=100\).
we exhibit a family of explicit solutions that we use to validate our code; the nonlinear case is then considered in §4.3.2.
#### 4.3.1. Convergence error in the linear case
In order to investigate the ability of our scheme to correctly describe the coupling of the Boussinesq-Abbott equation with the average interior discharge \(\langle q_{\mathrm{i}}\rangle\) we exhibit an explicit solution of the equations in the linear case (\(\varepsilon=0\)). In that case, the wave-structure equations (4)-(7) take the form
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\zeta=0\end{cases} \quad\text{ in }\quad\mathcal{E}^{\pm} \tag{58}\]
with transmission conditions
\[\llbracket q\rrbracket=0\quad\text{ and }\quad\langle q\rangle=\langle q_{ \mathrm{i}}\rangle \tag{59}\]
Figure 10. Return to equilibrium, non-linear case: temporal evolution of \(\delta\) for \(\epsilon=0.3\), comparison between results with the second-order scheme obtained with \(N=400\), different values of \(\mu=3\kappa^{2}\), and the exact solution in the non dispersive case.
Figure 9. Return to equilibrium, non-linear case: mesh-convergence results for \(\delta\) with the second-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
and where \(\langle q_{\mathrm{i}}\rangle\) solves the forced ODE
\[\alpha(0)\frac{d}{dt}\langle q_{\mathrm{i}}\rangle=-\frac{1}{2\ell}\llbracket \zeta+\kappa^{2}\partial_{t}^{2}\zeta\rrbracket. \tag{60}\]
A family of exact solutions which are periodic in time is given in the following proposition (which can be checked with basic computations omitted here).
**Proposition 4.1**.: _Let \(k\neq 0\) and \(\omega\neq 0\) satisfy the dispersion relation_
\[\omega^{2}=\frac{k^{2}}{1+\kappa^{2}k^{2}}.\]
_For all \((\zeta_{+}^{c},\zeta_{-}^{c},\zeta^{s},q_{+}^{s},q_{-}^{s},q^{c})\in\mathbb{R} ^{6}\), the functions \((\zeta,q)\) defined in \(\mathcal{E}^{\pm}\) by_
\[\begin{cases}\zeta(x\mp\ell,t)=&\frac{k}{2}\big{[}(\zeta_{\pm}^{c}+q^{c})\cos (kx-\omega t)+(\zeta_{\pm}^{c}-q^{c})\cos(kx+\omega t)\\ &+(\zeta^{s}+q_{\pm}^{s})\sin(kx-\omega t)+(\zeta^{s}-q_{\pm}^{s})\sin(kx+ \omega t)\big{]},\\ q(x\mp\ell,t)=&\frac{\omega}{2}\big{[}(\zeta_{\pm}^{c}+q^{c})\cos(kx-\omega t )-(\zeta_{\pm}^{c}-q^{c})\cos(kx+\omega t)\\ &+(\zeta^{s}+q_{\pm}^{s})\sin(kx-\omega t)-(\zeta^{s}-q_{\pm}^{s})\sin(kx+ \omega t)\big{]},\end{cases}\]
_solve (58)-(59) with initial data_
\[\begin{cases}\zeta^{\mathrm{in}}(x\pm\ell)=k\zeta_{\pm}^{c}\cos(kx)+k\zeta^{s }\sin(kx),&x\in\mathcal{E}_{\pm}\\ q^{\mathrm{in}}(x\pm\ell)=\omega q^{c}\cos(kx)+\omega q_{\pm}^{s}\sin(kx),&x\in \mathcal{E}_{\pm}\end{cases}\]
_and with_
\[\langle q_{\mathrm{i}}\rangle(t)=\omega\big{[}q^{c}\cos(\omega t)-\zeta^{s} \sin(\omega t)\big{]}.\]
_If moreover_
\[q^{c}=-\frac{1}{2\ell\alpha(0)k}(q_{+}^{s}-q_{-}^{s}),\quad\text{ and }\quad \zeta^{s}=\frac{1}{2\ell\alpha(0)k}(\zeta_{+}^{c}-\zeta_{-}^{c})\]
_then (60) is also satisfied, with initial data \(\langle q_{\mathrm{i}}\rangle^{\mathrm{in}}=\omega q^{c}\)._
In our numerical tests we chose \(3\kappa^{2}=0.3\) or \(3\kappa^{2}=0.1\), \(h_{eq}=1-0.2\), \(l=1\), \(k=2\) and the size of the computational domain \(L=10\). The space steps \(\Delta_{x}\) were computed as \(\Delta_{x}=(L-l)/(N+1)\), with \(N=200,240,300,360,400\) for the second-order scheme. The time step was computed as \(\Delta_{t}=0.9\Delta_{x}\). To impose the exact solution on both left and right outer boundaries we use the wave generation method described in §4.1. The numerical results at final time \(T_{f}=1\) show a second-order convergence, see Figure 11. On Figure 12 one can observe the shape of this exact solution, computed with \(N=400\) points.
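A small helper summarizing this test case is sketched below: given \(k\) and \(\kappa\), it returns \(\omega\) from the dispersion relation of Proposition 4.1, the compatibility values of \(q^{c}\) and \(\zeta^{s}\), and the exact interior discharge \(\langle q_{\rm i}\rangle(t)\); `alpha0` stands for \(\alpha(0)\) and the remaining arguments are the free constants \(\zeta_{\pm}^{c}\), \(q_{\pm}^{s}\).

```python
import numpy as np

def linear_reference(k, kappa, ell, alpha0, zc_p, zc_m, qs_p, qs_m):
    omega = abs(k) / np.sqrt(1 + kappa**2 * k**2)     # dispersion relation
    qc = -(qs_p - qs_m) / (2 * ell * alpha0 * k)      # compatibility condition on q^c
    zs = (zc_p - zc_m) / (2 * ell * alpha0 * k)       # compatibility condition on zeta^s
    qi = lambda t: omega * (qc * np.cos(omega * t) - zs * np.sin(omega * t))
    return omega, qc, zs, qi
```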
#### 4.3.2. The nonlinear case
In the absence of an explicit solution in the nonlinear case, we use mesh-convergence to study the precision of our schemes. In this test the initial condition is the solitary wave described in §4.1, with \(\zeta_{\max}=0.2\), centered at \(x=-15\), at the left side of the fixed object. The size of the computational domain is \(L=30\). For this test the reference solution is computed with a very refined mesh: \(N=2400\). We chose \(\epsilon=0.3\), \(3\kappa^{2}=0.3\) or \(3\kappa^{2}=0.1\), \(h_{eq}=1-0.3\), \(l=4\). The space steps \(\Delta_{x}\) were computed as \(\Delta_{x}=(L-l)/(N+1)\), with \(N=160,200,240,300,400\) for the first-order scheme and \(N=100,120,160\) for the second-order scheme. The meshes are defined so that the points of the coarse meshes always coincide with the points of the very refined mesh of the reference solution. The time step was computed as \(\Delta_{t}=0.7\Delta_{x}\). The numerical results at final time \(T_{f}=20\) computed with the first-order and the second-order schemes show respectively a first-order
convergence for \(\langle q_{\rm i}\rangle\), see Figure 13 and a second-order convergence, see Figure 14. On Figure 15 one can observe the shape of the numerical solution, computed with \(N=400\) points.
see Figures 19, 20 and 21. On Figure 22 one can observe the shape of the numerical solution, computed with \(N=400\) points. A comparison between Figure 22 and Figure 15 shows that the profiles of the reflected and transmitted waves differ. In particular, when the object is allowed to move, the reflected and transmitted waves are preceded by a depression wave that is not present when the object is fixed.
## Appendix A. Explicit expression of the ODE on \(\Theta\)

The augmented formulation involves the variable \(\Theta=\big{(}\langle q_{\rm i}\rangle,\dot{\delta},\dot{\underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-},\delta,\underline{\zeta}_{+},\underline{\zeta}_{-}\big{)}^{\rm T}\), which satisfies a first order ODE that we wrote in abstract form as
\[\frac{d}{dt}\Theta=\mathcal{G}\big{(}\Theta,(R_{1}\mathfrak{f}_{\rm sw})_{+},(R _{1}\mathfrak{f}_{\rm sw})_{-},F_{\rm ext}\big{)}, \tag{61}\]
where \(\mathcal{G}\) is a smooth function of its arguments. The goal of this section is to derive the explicit expression of the mapping \(\mathcal{G}\) which is used for our numerical computations. We first provide in §A.1 the explicit expression of various coefficients that appear in the wave-structure equations and then derive in §A.2 the explicit expression of the mapping \(\mathcal{G}\) in the most general case. We then point out the simplifications that can be performed when the object is fixed or in forced motion (in §A.3) and when the system has a symmetry with respect to the vertical axis \(\{x=0\}\) (in §A.4).
**N.B.** We recall that for the sake of simplicity, we assume throughout this article that \(h_{\rm eq}(x)\) is an even function.
Figure 16. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\delta\) with the first-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 15. Interaction with a fixed object, non-linear case: profile of solutions, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right), computed with the second-order scheme and \(N=400\).
### Explicit expressions of some coefficients
We did not make explicit in the main text of the article most of the constants that appear in the wave-structure interaction equations studied in this paper and derived in [4] because they were not relevant for the mathematical and numerical analysis of these equations. Of course, they are necessary for realistic simulations of wave-structure interactions and we provide them here. Let us first recall that the configuration under consideration is a floating object with vertical sidewalls located at \(x=\pm\ell\) that can only move in the vertical direction. In dimensionless variables, we denote by \(h_{\rm eq}(x)\) the water depth below the object at equilibrium and by \(\varepsilon\delta(t)\) the displacement of the object at time \(t\) from its equilibrium position, so that the water depth under the object at time \(t\) is \(h_{\rm eq}(x)+\varepsilon\delta(t)\). The dimensionless mass \(m\) of the object can be defined through Archimedes' principle,
\[m=\frac{1}{2\ell}\int_{-\ell}^{\ell}(1-h_{\rm eq})\]
Figure 17. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\langle q_{\rm i}\rangle\) with the first-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 18. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\zeta_{+}\) with the first-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
and the formulas below will also involve two scalar functions \(\alpha(\varepsilon\delta)\) and \(\beta(\varepsilon\delta)\) defined as
\[\alpha(\varepsilon\delta)=\frac{1}{2\ell}\int_{-\ell}^{\ell}\frac{1}{h_{\rm eq} (x)+\varepsilon\delta}\mathrm{d}x\quad\text{ and }\quad\beta(\varepsilon\delta)=\frac{1}{2}\frac{1}{2\ell}\int_{-\ell}^{ \ell}\frac{x^{2}}{(h_{\rm eq}(x)+\varepsilon\delta)^{2}}\mathrm{d}x;\]
the quantity \(\tau_{\kappa}(\varepsilon\delta)^{2}\) that appears in Newton's equation is given by
\[\tau_{\kappa}(\varepsilon\delta)^{2}=3\kappa^{2}m+\frac{1}{2\ell}\int_{-\ell }^{\ell}\frac{x^{2}}{h_{\rm eq}+\varepsilon\delta}\mathrm{d}x+\kappa^{2} \langle\frac{1}{h_{\rm eq}+\varepsilon\delta}\rangle.\]
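In practice these coefficients are evaluated by numerical quadrature. The sketch below (using scipy, with `h_eq` a user-supplied callable for \(h_{\rm eq}(x)\)) follows the definitions above, the bracket \(\langle\cdot\rangle\) being the average \(\frac{1}{2\ell}\int_{-\ell}^{\ell}\cdot\,\mathrm{d}x\).

```python
from scipy.integrate import quad

def average(f, ell):
    # <f> = (1/2l) * integral of f over (-l, l)
    return quad(f, -ell, ell)[0] / (2 * ell)

def coefficients(h_eq, ell, kappa, eps_delta):
    h = lambda x: h_eq(x) + eps_delta
    m = average(lambda x: 1.0 - h_eq(x), ell)                  # Archimedes' principle
    alpha = average(lambda x: 1.0 / h(x), ell)
    beta = 0.5 * average(lambda x: x**2 / h(x)**2, ell)
    tau_sq = (3 * kappa**2 * m + average(lambda x: x**2 / h(x), ell)
              + kappa**2 * average(lambda x: 1.0 / h(x), ell))
    return m, alpha, beta, tau_sq
```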
Figure 19. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\delta\) with the second-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 20. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\langle q_{\rm i}\rangle\) with the second-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
### The general case
As seen in (14), the first four components of \(\Theta\) satisfy
\[\mathcal{M}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\frac{d}{dt}\begin{pmatrix}\langle q_{\rm i}\rangle\\ \dot{\delta}\\ \dot{\underline{\zeta}}_{+}\\ \dot{\underline{\zeta}}_{-}\end{pmatrix}+\begin{pmatrix}\frac{1}{2\ell}[\underline{\zeta}]\\ \delta-\langle\underline{\zeta}\rangle\\ \underline{\zeta}_{+}\\ \underline{\zeta}_{-}\end{pmatrix}=\varepsilon\boldsymbol{\mathfrak{Q}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}](\langle q_{\rm i}\rangle,\dot{\delta},\underline{\zeta}_{\pm})+\begin{pmatrix}0\\ F_{\text{ext}}\\ (R_{1}\mathfrak{f}_{\rm sw})_{+}\\ (R_{1}\mathfrak{f}_{\rm sw})_{-}\end{pmatrix} \tag{62}\]
where \(\langle\underline{\zeta}\rangle:=\frac{1}{2}(\underline{\zeta}_{+}+\underline{\zeta}_{-})\), \([\underline{\zeta}]:=\underline{\zeta}_{+}-\underline{\zeta}_{-}\), and \(\mathcal{M}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\) is the invertible matrix
\[\mathcal{M}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]:=\left(\begin{array}{cc|cc}\alpha(\varepsilon\delta)&0&\frac{\kappa^{2}}{2\ell}\frac{1}{\underline{h}_{+}}&-\frac{\kappa^{2}}{2\ell}\frac{1}{\underline{h}_{-}}\\ 0&\tau_{\kappa}(\varepsilon\delta)^{2}&-\frac{1}{2}\frac{\kappa^{2}}{\underline{h}_{+}}&-\frac{1}{2}\frac{\kappa^{2}}{\underline{h}_{-}}\\ \hline-\kappa&\ell\kappa&\kappa^{2}&0\\ \kappa&\ell\kappa&0&\kappa^{2}\end{array}\right), \tag{63}\]
Figure 21. Interaction with a freely floating object, non-linear case: mesh-convergence results for \(\zeta_{+}\) with the second-order scheme, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right).
Figure 22. Interaction with a freely floating object, non-linear case: profile of solutions, \(3\kappa^{2}=0.1\) (left) and \(3\kappa^{2}=0.3\) (right), computed with the second-order scheme and \(N=400\).
with \(\underline{h}_{\pm}=1+\varepsilon\underline{\zeta}_{\pm}\); simple computations also show that the quadratic term \(\boldsymbol{\mathfrak{Q}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\) is of the form
\[\boldsymbol{\mathfrak{Q}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm} ]=\big{(}\boldsymbol{\mathfrak{Q}}_{\mathrm{i}}[\varepsilon\delta,\varepsilon \underline{\zeta}_{\pm}],\boldsymbol{\mathfrak{Q}}_{\delta}[\varepsilon\delta, \varepsilon\underline{\zeta}_{\pm}],\boldsymbol{\mathfrak{Q}}_{+}[ \varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}],\boldsymbol{\mathfrak{Q }}_{-}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\big{)}^{\mathrm{T}}\]
with (writing simply \(\boldsymbol{\mathfrak{Q}}_{\mathrm{i}}=\boldsymbol{\mathfrak{Q}}_{\mathrm{i}}[ \varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\), etc.)
\[\boldsymbol{\mathfrak{Q}}_{\mathrm{i}}(\langle q_{\mathrm{i}} \rangle,\dot{\delta},\underline{\zeta}_{\pm}) =-\alpha^{\prime}(\varepsilon\delta)\langle q_{\mathrm{i}}\rangle \dot{\delta}-\frac{1}{4\ell}\big{[}\Big{(}\frac{-\ell\dot{\delta}+\langle q_{ \mathrm{i}}\rangle}{\underline{h}_{+}}\Big{)}^{2}-\Big{(}\frac{\ell\dot{ \delta}+\langle q_{\mathrm{i}}\rangle}{\underline{h}_{-}}\Big{)}^{2}\big{]},\] \[\boldsymbol{\mathfrak{Q}}_{\delta}(\langle q_{\mathrm{i}}\rangle, \dot{\delta},\underline{\zeta}_{\pm}) =\beta(\varepsilon\delta)\dot{\delta}^{2}+\frac{1}{2}\alpha^{ \prime}(\varepsilon\delta)\langle q_{\mathrm{i}}\rangle^{2}+\frac{1}{4}\big{[} \Big{(}\frac{-\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle}{\underline{h}_{ +}}\Big{)}^{2}+\Big{(}\frac{\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle}{ \underline{h}_{-}}\Big{)}^{2}\big{]},\] \[\boldsymbol{\mathfrak{Q}}_{+}(\langle q_{\mathrm{i}}\rangle, \dot{\delta},\underline{\zeta}_{\pm}) =\frac{1}{2}\underline{\zeta}_{+}^{2}-\frac{1}{\underline{h}_{+}} \big{(}-\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle\big{)}^{2},\] \[\boldsymbol{\mathfrak{Q}}_{-}(\langle q_{\mathrm{i}}\rangle, \dot{\delta},\underline{\zeta}_{\pm}) =\frac{1}{2}\underline{\zeta}_{-}^{2}-\frac{1}{\underline{h}_{-}} \big{(}\ell\dot{\delta}+\langle q_{\mathrm{i}}\rangle\big{)}^{2}.\]
The matrix \(\mathcal{M}\) is a \(4\times 4\) matrix whose inverse is quite complicated; we therefore transform it into a block-triangular matrix by multiplying (62) by the matrix
\[\left(\begin{array}{cc|cc}1&0&-\frac{1}{2\ell}\frac{1}{\underline{h}_{+}}&\frac{1}{2\ell}\frac{1}{\underline{h}_{-}}\\ 0&1&\frac{1}{2}\frac{1}{\underline{h}_{+}}&\frac{1}{2}\frac{1}{\underline{h}_{-}}\\ \hline 0&0&1&0\\ 0&0&0&1\end{array}\right);\]
the resulting equation takes the form
\[\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\frac{d}{dt}\begin{pmatrix}\langle q_{\mathrm{i}}\rangle\\ \dot{\delta}\\ \dot{\underline{\zeta}}_{+}\\ \dot{\underline{\zeta}}_{-}\end{pmatrix}+\begin{pmatrix}0\\ \delta\\ \underline{\zeta}_{+}\\ \underline{\zeta}_{-}\end{pmatrix}=\varepsilon\widetilde{\boldsymbol{\mathfrak{Q}}}(\langle q_{\mathrm{i}}\rangle,\dot{\delta},\underline{\zeta}_{\pm})+\left(\begin{array}{c}-\frac{1}{2\ell}\big{[}\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\big{]}\\ \langle\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\rangle+F_{\mathrm{ext}}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{+}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{-}\end{array}\right) \tag{64}\]
where the matrix \(\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\) is block-triangular,
\[\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]:=\left(\begin{array}{cc|cc}\alpha(\varepsilon\delta)+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle&-\frac{\kappa}{2}\big{[}\frac{1}{\underline{h}}\big{]}&0&0\\ -\frac{\kappa}{2}\big{[}\frac{1}{\underline{h}}\big{]}&\tau_{\kappa}(\varepsilon\delta)^{2}+\kappa\ell\langle\frac{1}{\underline{h}}\rangle&0&0\\ \hline-\kappa&\ell\kappa&\kappa^{2}&0\\ \kappa&\ell\kappa&0&\kappa^{2}\end{array}\right), \tag{65}\]
and the components \(\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{i}}\), \(\widetilde{\boldsymbol{\mathfrak{Q}}}_{\delta}\) and \(\widetilde{\boldsymbol{\mathfrak{Q}}}_{\pm}\) are given by
\[\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{i}}(\langle q_{ \mathrm{i}}\rangle, \dot{\delta}, \underline{\zeta}_{\pm})=-\frac{1}{4\ell}\big{[}\frac{1}{ \underline{h}}\underline{\zeta}^{2}\big{]}\] \[+\frac{1}{4}\ell\big{[}\frac{1}{\underline{h}^{2}}\big{]}\dot{ \delta}^{2}+\frac{1}{4\ell}\big{[}\frac{1}{\underline{h}^{2}}\big{]}\langle q_{ \mathrm{i}}\rangle^{2}-\big{(}\alpha^{\prime}(\varepsilon\delta)+\langle \frac{1}{\underline{h}^{2}}\rangle\big{)}\dot{\delta}\langle q_{\mathrm{i}}\rangle,\] \[\widetilde{\boldsymbol{\mathfrak{Q}}}_{\delta}(\langle q_{\mathrm{i}}\rangle, \dot{\delta}, \underline{\zeta}_{\pm})=\frac{1}{2}\langle\frac{1}{\underline{h}} \underline{\zeta}^{2}\rangle\] \[+\big{(}\beta(\varepsilon\delta)-\frac{1}{2}\ell^{2}\langle\frac{1 }{\underline{h}^{2}}\rangle\big{)}\dot{\delta}^{2}+\frac{1}{2}\big{(}\alpha^{ \prime}(\varepsilon\delta)-\langle\frac{1}{\underline{h}^{2}}\rangle\big{)} \langle q_{\mathrm{i}}\rangle^{2}+\frac{1}{2}\ell\big{[}\frac{1}{\underline{h}^{2} }\big{]}\dot{\delta}\langle q_{\mathrm{i}}\rangle,\]
and
\[\widetilde{\mathfrak{Q}}_{+}(\langle q_{\mathfrak{i}}\rangle,\dot{ \delta},\underline{\zeta}_{\pm}) =-\frac{1}{2}\underline{\zeta}_{+}{}^{2}-\frac{1}{\underline{h}_{+}} \langle q_{\mathfrak{i}}\rangle^{2}-\ell^{2}\frac{1}{\underline{h}_{+}}\dot{ \delta}^{2}+2\ell\frac{1}{\underline{h}_{+}}\langle q_{\mathfrak{i}}\rangle \dot{\delta},\] \[\widetilde{\mathfrak{Q}}_{-}(\langle q_{\mathfrak{i}}\rangle,\dot{ \delta},\underline{\zeta}_{\pm}) =-\frac{1}{2}\underline{\zeta}_{-}^{2}-\frac{1}{\underline{h}_{-}} \langle q_{\mathfrak{i}}\rangle^{2}-\ell^{2}\frac{1}{\underline{h}_{-}}\dot{ \delta}^{2}-2\ell\frac{1}{\underline{h}_{-}}\langle q_{\mathfrak{i}}\rangle \dot{\delta}.\]
Since by definition of \(\mathcal{G}\) one has
\[\frac{d}{dt}\big{(}\langle q_{\mathfrak{i}}\rangle,\dot{\delta},\dot{ \underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-}\big{)}^{\mathrm{T}}= \mathcal{G}_{\mathrm{I}}(\Theta,(R_{1}\mathfrak{f}_{\mathrm{sw}})_{\pm},F_{ \mathrm{ext}}) \tag{66}\]
with the notation \(\mathcal{G}_{\mathrm{I}}:=\big{(}\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{G}_{ 3},\mathcal{G}_{4}\big{)}^{\mathrm{T}}\); we deduce from (64) that
\[\mathcal{G}_{\mathrm{I}}(\Theta,(R_{1}\mathfrak{f}_{\mathrm{sw}})_{\pm},F_{\mathrm{ext}})=\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]^{-1}\Big{[}-\begin{pmatrix}0\\ \delta\\ \underline{\zeta}_{+}\\ \underline{\zeta}_{-}\end{pmatrix}+\varepsilon\widetilde{\boldsymbol{\mathfrak{Q}}}(\langle q_{\mathrm{i}}\rangle,\dot{\delta},\underline{\zeta}_{\pm})+\begin{pmatrix}-\frac{1}{2\ell}\big{[}\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\big{]}\\ \langle\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\rangle+F_{\mathrm{ext}}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{+}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{-}\end{pmatrix}\Big{]},\]
while
\[\mathcal{G}_{5}(\Theta)=\dot{\delta},\qquad\mathcal{G}_{6}(\Theta)=\dot{ \underline{\zeta}_{+}},\qquad\mathcal{G}_{7}(\Theta)=\dot{\underline{\zeta}_{ -}}.\]
_Remark A.1_.: For the numerical computations, we use the explicit expression for the inverse of the matrix \(\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\), namely,
\[\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm} ]^{-1}=\left(\begin{array}{ccc}-\frac{4}{D}\big{(}\tau_{\kappa}(\varepsilon \delta)^{2}+\kappa\ell\langle\frac{1}{\underline{h}}\rangle\big{)}&-\frac{2 \kappa}{D}\big{[}\frac{1}{\underline{h}}\big{]}&0&0\\ -\frac{2\kappa}{D}\big{[}\frac{1}{\underline{h}}\big{]}&-\frac{4}{D}\big{(} \alpha(\varepsilon\delta)+\frac{\kappa}{L}\langle\frac{1}{\underline{h}} \rangle\big{)}&0&0\\ -\frac{4}{\kappa D}\big{(}\tau_{\kappa}(\varepsilon\delta)^{2}+\kappa\ell \frac{1}{\underline{h}_{+}}\big{)}&\frac{4}{\kappa D}\big{(}\kappa\frac{1}{ \underline{h}_{+}}+\ell\alpha(\varepsilon\delta)\big{)}&\frac{1}{\kappa^{2}} &0\\ \frac{4}{\kappa D}\big{(}\tau_{\kappa}(\varepsilon\delta)^{2}+\kappa\ell \frac{1}{\underline{h}_{+}}\big{)}&\frac{4}{\kappa D}\big{(}\kappa\frac{1}{ \underline{h}_{+}}+\ell\alpha(\varepsilon\delta)\big{)}&0&\frac{1}{\kappa^{2}} \end{array}\right) \tag{67}\]
with
\[D=-4\big{(}\alpha(\varepsilon\delta)+\frac{\kappa}{\ell}\langle\frac{1}{ \underline{h}}\rangle\big{)}\times\big{(}\tau_{\kappa}(\varepsilon\delta)^{2 }+\kappa\ell\langle\frac{1}{\underline{h}}\rangle\big{)}+\kappa^{2}\big{[} \frac{1}{\underline{h}}\big{]}^{2}. \tag{68}\]
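For completeness, the linear step hidden in (64) can also be carried out by assembling \(\widetilde{\mathcal{M}}[\varepsilon\delta,\varepsilon\underline{\zeta}_{\pm}]\) from (65) and solving the \(4\times 4\) system with a generic solver, which is equivalent to using the explicit inverse (67) up to rounding errors. In the sketch below, `alpha` and `tau_sq` are the coefficients of §A.1 and `hp`, `hm` stand for \(\underline{h}_{\pm}\).

```python
import numpy as np

def assemble_M_tilde(alpha, tau_sq, ell, kappa, hp, hm):
    mean_inv_h = 0.5 * (1.0 / hp + 1.0 / hm)   # <1/h>
    jump_inv_h = 1.0 / hp - 1.0 / hm           # [1/h]
    M = np.zeros((4, 4))
    M[0, 0] = alpha + (kappa / ell) * mean_inv_h
    M[0, 1] = M[1, 0] = -(kappa / 2.0) * jump_inv_h
    M[1, 1] = tau_sq + kappa * ell * mean_inv_h
    M[2, 0], M[2, 1], M[2, 2] = -kappa, ell * kappa, kappa**2
    M[3, 0], M[3, 1], M[3, 3] = kappa, ell * kappa, kappa**2
    return M

def G_I(M, rhs):
    # rhs: bracketed right-hand side of (64) evaluated at the current state
    return np.linalg.solve(M, rhs)
```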
### The case of an object fixed or in forced motion
When the object is fixed or in forced motion, the position of the center of mass is known and \(\delta\) is therefore equal to some given function \(\delta_{\mathrm{forced}}\) (\(\delta_{\mathrm{forced}}\equiv 0\) if the solid is fixed). The ODE (61) can then be reduced to an ODE on \(\mathbb{R}^{5}\) instead of \(\mathbb{R}^{7}\). The variable \(\Theta\) now stands for \(\Theta:=\big{(}\langle q_{\mathrm{i}}\rangle,\dot{\underline{\zeta}}_{+},\dot{\underline{\zeta}}_{-},\underline{\zeta}_{+},\underline{\zeta}_{-}\big{)}^{\mathrm{T}}\) and (64) can be simplified into
\[\widetilde{\mathcal{M}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}]\frac{d}{dt}\begin{pmatrix}\langle q_{\mathrm{i}}\rangle\\ \dot{\underline{\zeta}}_{+}\\ \dot{\underline{\zeta}}_{-}\end{pmatrix}+\begin{pmatrix}0\\ \underline{\zeta}_{+}\\ \underline{\zeta}_{-}\end{pmatrix}=\varepsilon\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\underline{\zeta}_{\pm})+\begin{pmatrix}-\frac{1}{2\ell}\big{[}\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\big{]}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{+}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{-}\end{pmatrix}+\begin{pmatrix}\frac{1}{2}\kappa\big{[}\frac{1}{\underline{h}}\big{]}\\ -\ell\kappa\\ -\ell\kappa\end{pmatrix}\ddot{\delta}_{\mathrm{forced}}, \tag{69}\]
where the matrix \(\widetilde{\mathcal{M}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}]\) is block-triangular,
\[\widetilde{\mathcal{M}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}] :=\left(\begin{array}{c|c}\alpha(\varepsilon\delta_{\mathrm{forced}})+\frac{ \kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle&0&0\\ -\kappa&\kappa^{2}&0\\ \kappa&0&\kappa^{2}\end{array}\right), \tag{70}\]
and
\[\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\underline{\zeta}_{\pm}):=\begin{pmatrix}\widetilde{\mathfrak{Q}}_{\mathrm{i}}[\varepsilon\delta_{\mathrm{forced}},\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\dot{\delta}_{\mathrm{forced}},\underline{\zeta}_{\pm})\\ \widetilde{\mathfrak{Q}}_{+}[\varepsilon\delta_{\mathrm{forced}},\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\dot{\delta}_{\mathrm{forced}},\underline{\zeta}_{\pm})\\ \widetilde{\mathfrak{Q}}_{-}[\varepsilon\delta_{\mathrm{forced}},\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\dot{\delta}_{\mathrm{forced}},\underline{\zeta}_{\pm})\end{pmatrix},\]
with \(\widetilde{\mathfrak{Q}}_{\mathrm{i}}\) and \(\widetilde{\mathfrak{Q}}_{\pm}\) as in the previous section.
_Remark A.2_.: We have made explicit the dependence of \(\widetilde{\mathcal{M}}_{\mathrm{forced}}\) and \(\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{forced}}\) on the time variable \(t\) because \(\delta_{\mathrm{forced}}\) is now an explicit function of time, and therefore brings a non-autonomous contribution to the ODE for \(\Theta\) (except of course if the object is fixed, in which case \(\delta_{\mathrm{forced}}\equiv 0\)).
With \(\mathcal{G}_{\mathrm{I}}\) now being three dimensional (but with an extra dependence on \(t\)), \(\mathcal{G}_{\mathrm{I}}:=\big{(}\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{G}_ {3}\big{)}^{\mathrm{T}}\), we have therefore
\[\frac{d}{dt}\big{(}\langle q_{\mathrm{i}}\rangle,\dot{\underline{\zeta}}_{+}, \dot{\underline{\zeta}}_{-}\big{)}^{\mathrm{T}}=\mathcal{G}_{\mathrm{I}}(t, \Theta,(R_{\mathrm{I}}\mathfrak{f}_{\mathrm{sw}})_{\pm}) \tag{71}\]
and

\[\mathcal{G}_{\mathrm{I}}(t,\Theta,(R_{1}\mathfrak{f}_{\mathrm{sw}})_{\pm})=\widetilde{\mathcal{M}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}]^{-1}\times\Big{[}-\begin{pmatrix}0\\ \underline{\zeta}_{+}\\ \underline{\zeta}_{-}\end{pmatrix}+\varepsilon\widetilde{\boldsymbol{\mathfrak{Q}}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}](\langle q_{\mathrm{i}}\rangle,\underline{\zeta}_{\pm})+\begin{pmatrix}-\frac{1}{2\ell}\big{[}\frac{1}{\underline{h}}R_{1}\mathfrak{f}_{\mathrm{sw}}\big{]}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{+}\\ (R_{1}\mathfrak{f}_{\mathrm{sw}})_{-}\end{pmatrix}+\begin{pmatrix}\frac{1}{2}\kappa\big{[}\frac{1}{\underline{h}}\big{]}\\ -\ell\kappa\\ -\ell\kappa\end{pmatrix}\ddot{\delta}_{\mathrm{forced}}\Big{]},\]
while \(\mathcal{G}_{4}(\Theta)=\dot{\underline{\zeta}}_{+}\), \(\mathcal{G}_{5}(\Theta)=\dot{\underline{\zeta}}_{-}\) and
\[\widetilde{\mathcal{M}}_{\mathrm{forced}}[t,\varepsilon\underline{\zeta}_{\pm}]^{-1}=\left(\begin{array}{ccc}\frac{1}{\alpha+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle}&0&0\\ \frac{1}{\kappa}\frac{1}{\alpha+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle}&\frac{1}{\kappa^{2}}&0\\ -\frac{1}{\kappa}\frac{1}{\alpha+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle}&0&\frac{1}{\kappa^{2}}\end{array}\right).\]
_Remark A.3_.: The second component of (66) (evolution equation on \(\delta\)) does not appear any longer in (71) but remains of course valid. It can be used to answer the following control problem: what external force should we apply to the object so that the vertical displacement of its center of gravity coincides with \(\delta_{\mathrm{forced}}\)? The answer is given explicitly by (writing \(\mathfrak{Q}_{\delta}=\mathfrak{Q}_{\delta}[\varepsilon\delta_{\mathrm{forced}},\varepsilon\underline{\zeta}_{\pm}]\), etc.)
\[F_{\mathrm{ext}}= \delta-\langle\frac{1}{h}R_{\mathrm{I}}\mathfrak{f}_{\mathrm{sw}} \rangle-\varepsilon\mathfrak{Q}_{\delta}(\langle q_{\mathrm{i}}\rangle,\dot{ \delta}_{\mathrm{forced}},\underline{\zeta}_{\pm})\] \[-\frac{1}{4}\frac{D_{\mathrm{forced}}}{\alpha(\varepsilon \delta_{\mathrm{forced}})+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}} \rangle}\ddot{\delta}_{\mathrm{forced}}\] \[-\frac{1}{2}\frac{\kappa}{\alpha(\varepsilon\delta_{\mathrm{forced }})+\frac{\kappa}{\ell}\langle\frac{1}{\underline{h}}\rangle}\llbracket \frac{1}{\underline{h}}\rrbracket\Big{[}\varepsilon\mathfrak{Q}_{\mathrm{i}}( \langle q_{\mathrm{i}}\rangle,\dot{\delta}_{\mathrm{forced}},\underline{\zeta}_{ \pm})-\frac{1}{2\ell}\llbracket\frac{1}{\underline{h}}R_{\mathrm{I}} \mathfrak{f}_{\mathrm{sw}}\rrbracket\Big{]},\]
where \(D_{\rm forced}\) is deduced from the expression given for \(D\) in (68) by substituting \(\delta_{\rm forced}\) for \(\delta\).
### Simplifications in the symmetric case
When the object is symmetric with respect to the vertical axis \(\{x=0\}\) (i.e. if \(h_{\rm eq}\) is an even function), as assumed throughout this article, it is possible to consider symmetric flows for which \(\zeta\) is an even function, \(q\) is odd, and \(\langle q_{\rm i}\rangle\equiv 0\) (such conditions are propagated by the equations from the initial data). This is for instance the case for waves generated by a floating object in a fluid initially at rest. By symmetry, the augmented transmission problem (16)-(18) reduces to an augmented initial boundary value problem on the half-line \(\mathcal{E}^{+}=(\ell,\infty)\),
\[\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\varepsilon\partial_{x}\big{(} \frac{1}{h}q^{2}\big{)}+h\partial_{x}\zeta=0\end{cases}\qquad\text{for}\quad \ t>0,\quad x\in(\ell,\infty) \tag{72}\]
with boundary condition
\[q_{|_{x=\ell}}=-\ell\dot{\delta}, \tag{73}\]
where \(\delta\) is a function of time determined by the first order ODE
\[\frac{d}{dt}\Theta=\mathcal{G}\big{(}\Theta,(R_{1}\mathrm{f}_{\rm sw})_{+},F _{\rm ext}\big{)}, \tag{74}\]
with \(\Theta:=\big{(}\dot{\delta},\dot{\underline{\zeta}}_{+},\delta,\underline{\zeta}_{+}\big{)}^{\rm T}\) and where \(\mathcal{G}=(\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{G}_{3},\mathcal{G}_{4})^{\rm T}\) is given by
\[\begin{pmatrix}\mathcal{G}_{1}\\ \mathcal{G}_{2}\end{pmatrix}=\mathcal{M}_{\rm sym}[\varepsilon\delta,\varepsilon\underline{\zeta}_{+}]^{-1}\Big{[}-\begin{pmatrix}\delta\\ \underline{\zeta}_{+}\end{pmatrix}+\varepsilon\begin{pmatrix}\mathfrak{L}_{\delta}^{\rm sym}[\varepsilon\delta,\varepsilon\underline{\zeta}_{+}](\dot{\delta},\underline{\zeta}_{+})\\ \mathfrak{L}_{+}^{\rm sym}[\varepsilon\delta,\varepsilon\underline{\zeta}_{+}](\dot{\delta},\underline{\zeta}_{+})\end{pmatrix}+\begin{pmatrix}\frac{1}{\underline{h}_{+}}(R_{1}\mathfrak{f}_{\rm sw})_{+}+F_{\rm ext}\\ (R_{1}\mathfrak{f}_{\rm sw})_{+}\end{pmatrix}\Big{]}\]
and \(\mathcal{G}_{3}=\dot{\delta}\), \(\mathcal{G}_{4}=\dot{\zeta_{+}}\), and with
\[\mathcal{M}_{\rm sym}[\varepsilon\delta,\varepsilon\zeta_{+}]=\begin{pmatrix} \tau_{\kappa}(\varepsilon\delta)^{2}+\kappa\ell\frac{1}{h_{+}}&0\\ \ell\kappa&\kappa^{2}\end{pmatrix},\]
while \(\mathfrak{L}_{\delta}^{\rm sym}\) and \(\mathfrak{L}_{+}^{\rm sym}\) are obtained by replacing \(\langle q_{\rm i}\rangle=0\) and \(\zeta_{-}=\zeta_{+}\) in the formulas derived above for \(\widetilde{\mathfrak{Q}}_{\delta}\) and \(\widetilde{\mathfrak{Q}}_{+}\).
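To make the coupling explicit, the sketch below shows one time step (forward Euler, for readability only) of the symmetric problem: the ODE (74) is advanced using the boundary trace produced by the fluid solver, and the updated \(\dot{\delta}\) then supplies the boundary condition (73) for the next fluid step. Here `fluid_step` and `boundary_trace` are placeholders for the Boussinesq-Abbott solver of §3, and the state is stored in NumPy arrays.

```python
import numpy as np

def coupled_step(theta, zeta, q, dt, ell, G, fluid_step, boundary_trace, F_ext=0.0):
    # theta = (delta_dot, zeta_dot_+, delta, zeta_+); G is the map of (74)
    R1f_plus = boundary_trace(zeta, q)                    # trace (R_1 f_sw)_+ at x = l
    theta = theta + dt * np.asarray(G(theta, R1f_plus, F_ext))
    q_boundary = -ell * theta[0]                          # boundary condition (73)
    zeta, q = fluid_step(zeta, q, q_boundary, dt)         # advance the fluid unknowns
    return theta, zeta, q
```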
Two physical situations of particular interest fit into the symmetric framework and are investigated in this paper:
* The return to equilibrium. An object is released from an out of equilibrium position in a fluid initially at rest. This situation is described by (72)-(74) with \(F_{\rm ext}=0\) and initial conditions \((\zeta,q)_{|_{t=0}}=(0,0)\) and \((\dot{\delta},\dot{\zeta}_{+},\delta,\zeta_{+})_{|_{t=0}}=(0,0,\delta^{\rm in},0)\).
* Wave generation. Waves are generated in a fluid initially at rest by moving the object up and down with a prescribed motion \(\delta_{\rm forced}\). The problem then reduces to an initial boundary value problem with boundary condition on \(q\), namely, \(q_{|_{x=\ell}}=g\), with \(g=-\ell\dot{\delta}_{\rm forced}\). This boundary data is explicitly given and does not require the resolution of a first order ODE as the other problems considered here.
## Acknowledgment
D. L. was partially supported by the grant ANR-18-CE40-0027-01 Singflows.
2306.17202 | An end-to-end framework for gene expression classification by integrating a background knowledge graph: application to cancer prognosis prediction | Biological data may be separated into primary data, such as gene expression, and secondary data, such as pathways and protein-protein interactions. Methods using secondary data to enhance the analysis of primary data are promising, because secondary data have background information that is not included in primary data. In this study, we proposed an end-to-end framework to integrally handle secondary data to construct a classification model for primary data. We applied this framework to cancer prognosis prediction using gene expression data and a biological network. Cross-validation results indicated that our model achieved higher accuracy compared with a deep neural network model without background biological network information. Experiments conducted in patient groups by cancer type showed improvement in ROC-area under the curve for many groups. Visualizations of high accuracy cancer types identified contributing genes and pathways by enrichment analysis. Known biomarkers and novel biomarker candidates were identified through these experiments. | Kazuma Inoue, Ryosuke Kojima, Mayumi Kamada, Yasushi Okuno | 2023-06-29T11:20:47Z | http://arxiv.org/abs/2306.17202v1 | An end-to-end framework for gene expression classification by integrating a background knowledge graph: application to cancer prognosis prediction
###### Abstract
**Motivation:** Biological data may be separated into primary data, such as gene expression, and secondary data, such as pathways and protein-protein interactions. Methods using secondary data to enhance the analysis of primary data are promising, because secondary data have background information that is not included in primary data. In this study, we proposed an end-to-end framework to integrally handle secondary data to construct a classification model for primary data. We applied this framework to cancer prognosis prediction using gene expression data and a biological network.
**Results:** Cross-validation results indicated that our model achieved higher accuracy compared with a deep neural network model without background biological network information. Experiments conducted in patient groups by cancer type showed improvement in ROC-area under the curve for many groups. Visualizations of high accuracy cancer types identified contributing genes and pathways by enrichment analysis. Known biomarkers and novel biomarker candidates were identified through these experiments.
**Availability:** This framework is available at [https://github.com/clinfo/SLGCN_cancer_prognosis](https://github.com/clinfo/SLGCN_cancer_prognosis).
**Contact:** [email protected], [email protected]
Supplementary information: Supplementary data are available at _Bioinformatics_ online.
## 1 Introduction
Biological systems are supported by interactions between various molecules that conduct biological activities (Aloy, P. and Russell, R., 2008; Braun, P. and Gingras, A.C., 2012). To comprehensively understand biological systems, integrating various types of data, such as experimental omics and the literature, is important. In general, such data can be categorized into primary and secondary data. The former may be defined as data directly measured from experiments, such as gene expression data, obtained from a patient sample. The latter refers to data obtained by analyzing and aggregating primary data, such as signal transduction pathways and protein-protein interactions (PPIs). The secondary data obtained by analyzing primary data is disseminated through publications and databases, which may be reused as background knowledge for further research. Studies related to primary data analysis focus on differences in genomic variants and gene expression among samples. Recently, the use of deep learning models for such primary data has been reported (Issa, N.T. et al., 2021). For example, prognosis prediction from gene expression data using deep learning was reported (Poinion, O.B. et al., 2021). In contrast, advanced analysis of secondary data often focuses on molecular networks, such as pathways and PPIs. It addresses issues, such as link prediction, subgraph extraction, and
topology classification (Abdel-Hafiz, M. et al., 2022; Bamunu Mudiyanselage, T. et al., 2022). With recent developments in deep learning, graph neural networks (GNN) have achieved state-of-the-art performance on many tasks related to the analysis of secondary data (Defferrard, M. et al., 2016; Hamilton, W. et al., 2017; Kipf, T.N. and Welling, M., 2017).
Approaches to complement secondary data using primary data have also been reported. Examples include methods to obtain subnetworks related to breast cancer metastasis using gene expression data and detect important nodes on the graph representing molecular networks using microarray data (Chuang, H.Y. et al., 2007; Emig, D. et al., 2013). In contrast, approaches that use secondary data to enhance primary data analysis exist (Chereda, H. et al., 2019, 2021; Ramirez, R. et al., 2020, 2021). For example, using cancer pathways as secondary data, cancer prognosis was predicted from gene expression data (Zheng, X. et al., 2020, 2021). The use of secondary data to enhance primary data analysis is a promising approach, as it enables one to leverage the vast amount of background information that is not included in the primary data. However, previous methods used separately processed secondary data as additional features of the primary data; thus, these methods lacked an end-to-end framework to efficiently integrate these data types.
We propose a novel deep learning framework to integrally analyze secondary data, such as a molecular network, to construct a prediction model from primary data, such as omics data. This enables a prediction at individual levels using background information of the biological network. We apply the framework to cancer prognosis prediction using gene expression data and a biomolecular network.
Prognosis in cancer varies considerably from patient to patient, in part, because of genetic differences. Therefore, various deep learning methods to predict cancer patient survival based on genetic information have been developed (Ching, T. et al., 2018; Huang, Z. et al., 2020; Katzman, J.L. et al., 2018; Pavageau, M. et al., 2021). In this study, we applied our framework to the problem of predicting prognosis considering the background network. Specifically, our proposed framework consists of two parts: a GNN, which trains molecular interactions represented by a graph as background knowledge, and a deep neural network (DNN), which predicts patient survival based on gene expression data that varies for each individual. This model predicts patient survival over specific years by utilizing biomolecular interaction information and gene expression data from an individual patient. We evaluated our model using The Cancer Genome Atlas (TCGA) database and confirmed the effectiveness of the proposed framework by comparison with conventional models.
## 2 Methods
### Knowledge graph construction
In this study, a knowledge graph representing molecular interaction information was constructed using the Pathway Commons database (Cerami, E.G. et al., 2011), which is a large-scale dataset containing biological pathways and interactions from various datasets. Each entry primarily consists of two biomolecules and the relationship between them. The knowledge graph consists of nodes and edges representing biomolecules and the relationships between them. There were 13 relationship types, such as "phosphorylation" and "chemical affects."
### Individual cancer patient information
Individual cancer patient information for 33 different cancers was obtained from TCGA. The gene expression data for cancer patients estimated from RNA-Seq were acquired from recount2 (Collado-Torres, L. et al., 2017) and log-transformed from transcripts per million using the natural logarithm. recount2 is an online resource that compiles gene expression data obtained from many research projects, including TCGA. Here, we selected only cancer-related genes: the genes listed in the Molecular Signatures Database (MSigDB) (Liberzon, A. et al., 2015) and the LM22 immune gene signatures (Newman, A.M. et al., 2015) were selected. Genes that did not exist among the knowledge graph nodes were eliminated. The expression data for 4,448 genes were used. We also obtained clinical information for each patient, such as overall survival and cancer type, which was summarized by Liu et al. (Liu, J. et al., 2018). The cancer types were encoded into 33-dimensional one-hot vectors and used as features for each patient.
### Sample selection and labeling
Samples suitable for verification were selected. Patients were divided based on the median number of censored days (819 days). Samples with prolonged survival were considered to have responded to treatment with drugs or surgery. We excluded 364 samples who survived over 3,595 days, which corresponded to the top 5% in order of survival time among censored patients with more than 819 days as well as deceased patients. The remaining 10,823 samples were labeled as alive or dead within the verified years. We predicted 1- to 5-year patient survival. For the "n"-year prediction (\(n\) = [1, 2,..., 5]), the samples censored within "n" years (\(n\) * 365 days) were excluded because their survival state at "n" years could not be determined. If patients survived over "n" years, a label "1" was assigned, and if they did not, a label "0" was assigned. Sample details are shown in Supplementary Table 1.
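The selection and labeling rule can be written compactly; the pandas sketch below assumes a clinical table with hypothetical columns `days` (overall survival or censoring time in days) and `dead` (1 if deceased, 0 if censored).

```python
import pandas as pd

def label_samples(clinical: pd.DataFrame, n_years: int) -> pd.DataFrame:
    horizon = n_years * 365
    df = clinical[clinical["days"] <= 3595]                    # drop samples surviving over 3,595 days
    # censored before the horizon: survival state unknown, exclude
    df = df[~((df["dead"] == 0) & (df["days"] < horizon))].copy()
    df["label"] = (df["days"] >= horizon).astype(int)          # 1 = survived over n years
    return df
```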
### Model architecture
Our proposed framework consisted of two parts: the GNN for calculating molecular interaction features and the DNN for predicting patient prognosis using gene expression data.
#### 2.4.1 GNN part
In the GNN part, the knowledge graph representing molecular interaction information was used as the input, and latent vectors for each graph node were obtained from the training step. The knowledge graph \(G\) represents the molecular interactions as background knowledge and is described as \(G=\left(V,E\right)\), in which \(V\) and \(E\) are finite sets of nodes and edges, respectively. Here, an edge is described as \((u,r,v)\), where \(u\) and \(v\) are graph nodes and \(r\) is the relation between them. The GNN part consisted of three layers of a Graph Isomorphism Network (GIN) (Xu, K. et al., 2019), a Conv Block, and a Concatenate Block. The latent vector \(\textbf{z}_{u}\) of each node \(u\) was calculated as follows.
\[\textbf{z}_{u}=\ GNN(u,G) \tag{1}\]
The calculation formula for the latent vectors in the GIN layer was defined by formula (2).
\[\textbf{z}^{\ell+1}=\sigma\left(\sum_{r}\left(\textbf{W}_{r}\,\sigma\left(\mathrm{GINConv}\big{(}\textbf{z}^{\ell},\mathcal{G}_{(r)}\big{)}\right)+\textbf{b}_{r}\right)\right) \tag{2}\]
\(\textbf{z}^{\ell}\) represents a matrix consisting of the node vectors in the \(\ell\)-th layer, \(\textbf{W}_{r}\) and \(\textbf{b}_{r}\) are relation-specific parameters, and \(\mathcal{G}_{(r)}\) is the subgraph having only the edges of relation \(r\).
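A minimal PyTorch Geometric sketch of the relation-wise GIN layer of formula (2) is shown below: one `GINConv` per relation type is applied to the subgraph of that relation, and the results are combined with relation-specific linear maps. The layer sizes and the use of PyTorch Geometric are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv

class RelationalGINLayer(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.convs = nn.ModuleList(
            GINConv(nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)))
            for _ in range(num_relations)
        )
        self.W = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_relations))

    def forward(self, z, edge_index, edge_type):
        out = 0
        for r, (conv, lin) in enumerate(zip(self.convs, self.W)):
            sub_edges = edge_index[:, edge_type == r]     # edges of relation r only
            out = out + lin(torch.relu(conv(z, sub_edges)))
        return torch.relu(out)                            # next-layer node vectors
```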
#### 2.4.2 Individual DNN part
For the DNN part, cancer patient prognosis was predicted using the latent vectors calculated in the GNN part and the clinical information for each patient. The input dataset \(D\) was defined as \(D=\left\{\left(\textbf{X}_{1},y_{1}\right),\left(\textbf{X}_{2},y_{2}\right),\ldots,\left(\textbf{X}_{N},y_{N}\right)\right\}\). Here, \(\textbf{X}_{i}=\left\{(u,\textbf{f}_{u})\right\}\), where \(u\) is a gene and \(\textbf{f}_{u}\) is the expression value for gene \(u\) of patient \(i\). \(y_{i}\) is a survival label: if sample \(i\) died within "n" years, \(y_{i}=0\); otherwise \(y_{i}=1\).
The output in the DNN part was as follows.
\[\hat{y}_{i}=NN(\textbf{X}_{i}(\textbf{z}_{u}:\ u\in V)) \tag{3}\]
The DNN part consisted of the two-modal block, the aggregation block, and the multi-modal block. First, the two-modal block was applied to all genes having expression features, combining the node vector \(\textbf{z}_{u}\) with the gene expression feature \(\textbf{f}_{u}\). The output was defined as follows:
\[\textbf{s}_{u}=\textbf{z}_{u}+\textbf{W}\textbf{f}_{u} \tag{4}\]
\(W\) was a parameter matrix of this neural network.
Second, the output in the aggregation block was calculated using \(\textbf{s}_{u}\),
\[t=h(\sum_{u}g(\textbf{s}_{u})\ ) \tag{5}\]
in which \(g\) and \(h\) were implemented as multi-layer perceptrons (Zaheer, M. et al., 2017). The activation function in the middle layers was the Exponential Linear Unit, and batch normalization was introduced. A prediction value \(\hat{y}\) was calculated using \(t\) and the sample vector \(\textbf{e}_{i}\). Sample vectors represent patient clinical information; in this study, we included the cancer types, so they were 33-dimensional one-hot vectors.
The correct labels were 0 or 1, so the proposed framework performs binary classification. The activation function in the output layer is a sigmoid function, so the prediction value \(\hat{y}\) was a real number between 0 and 1. We conducted a 5-fold cross-validation for each verification year for accuracy evaluations.
\[\hat{y}_{i}=\ Sigmoid(MNN(t,\textbf{e}_{i})) \tag{6}\]
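A PyTorch sketch of the DNN part described by formulas (3)-(6) is given below: a deep-sets style aggregation over genes followed by a multi-modal head combining the aggregated vector with the cancer-type one-hot vector. Dimensions and layer counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrognosisHead(nn.Module):
    def __init__(self, dim, n_cancer_types=33):
        super().__init__()
        self.W = nn.Linear(1, dim, bias=False)          # maps f_u into the node-vector space
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.ELU(), nn.Linear(dim, dim))
        self.h = nn.Sequential(nn.Linear(dim, dim), nn.ELU())
        self.head = nn.Linear(dim + n_cancer_types, 1)  # multi-modal block

    def forward(self, z_u, f_u, e_i):
        # z_u: [G, dim] node vectors, f_u: [G] expression values, e_i: [n_cancer_types]
        s_u = z_u + self.W(f_u.unsqueeze(-1))           # two-modal block, formula (4)
        t = self.h(self.g(s_u).sum(dim=0))              # aggregation block, formula (5)
        return torch.sigmoid(self.head(torch.cat([t, e_i])))   # formula (6)
```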
### Learning methods
Model learning was conducted in two steps. In the first step, the link prediction was adapted with the graph to pre-train the GNN. Then, fine-tuning for the whole network including the DNN part was performed. We could select whether these two steps were connected or not. When
Figure 1: **Model architecture.** There are three inputs: a knowledge graph, patient gene expression data, and cancer type. The model has two parts: the GNN part and the DNN part. Prognoses are predicted in the DNN part. Feedback learning and visualization of feature contribution can be conducted.
they are connected, it is designated end-to-end, and fine-tuning is conducted throughout the entire model. Link prediction is a method for learning the probability that an edge exists between two nodes; in other words, it predicts whether an edge exists between two nodes. By pre-training the GNN part with link prediction, the node representations capture the graph structure representing biomolecular interaction information. The loss function of the link prediction was defined as follows:
\[L_{\rm pre}=-\ell_{+}(u,v)\ -\ \ell_{-}(u,v^{\prime}) \tag{7}\]
where \((u,v)\) was sampled from \(E\) at random and \(v^{\prime}\) was sampled from \(V\) at random.
\[\ell_{+}(u,v)\ =\ log(\sigma(u^{T}v)) \tag{8}\]
\[\ell_{-}(u,v^{\prime})\ =\ log(\sigma(-u^{T}v^{\prime})) \tag{9}\]
When a link with a relation was predicted, a weight matrix **W** was used and the DistMult model (Yang, B. et al., 2014) was employed, which is a widely used knowledge graph embedding method for completing relations between nodes.
\[\ell_{+}(u,r,v)\ =\ log(\sigma(u^{T}\textbf{W}_{r}v)) \tag{10}\]
\[\ell_{-}(u,r,v^{\prime})\ =\ log(\sigma(-u^{T}\textbf{W}_{r}v^{\prime})) \tag{11}\]
Fine-tuning was done by backward computation using a loss function of \(\hat{y}\) and \(y\). In the DNN part, a sigmoid function was employed as an activation function in the output layer and binary cross-entropy was used as a loss function.
\[L_{\rm fine}(\hat{y},y)=-y\,log\,\hat{y}-(1-y)\,log\,(1-\hat{y}) \tag{12}\]
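The two losses can be sketched as follows: a negative-sampling link prediction loss with DistMult scoring for pre-training (formulas (7)-(11)), and binary cross-entropy for fine-tuning (formula (12)). Representing the relation matrix by a diagonal (a vector `w_r`) is the usual DistMult parameterization and an assumption here.

```python
import torch
import torch.nn.functional as F

def distmult_score(u, w_r, v):
    # u^T diag(w_r) v, computed over the last dimension
    return (u * w_r * v).sum(-1)

def link_prediction_loss(u, w_r, v_pos, v_neg):
    # positive edge (u, r, v_pos) and randomly sampled negative node v_neg
    pos = torch.log(torch.sigmoid(distmult_score(u, w_r, v_pos)))
    neg = torch.log(torch.sigmoid(-distmult_score(u, w_r, v_neg)))
    return -(pos + neg).mean()

def fine_tuning_loss(y_hat, y):
    # binary cross-entropy, formula (12)
    return F.binary_cross_entropy(y_hat, y)
```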
### Model evaluation
In this study, 1- to 5-year patient survival predictions were performed and we evaluated these prediction models by a 5-fold cross-validation. Using the final epoch on the training set of each fold, the area under the curve (AUC) of the test set was calculated, and mean AUC values were used for the evaluation. We used scikit-learn v0.24.2 (Pedregosa, F. et al., 2011) and pytorch v1.7.0 (Paszke, A. et al., 2019) for model implementation and evaluation. Our framework has two modes: end-to-end or not. The end-to-end mode learns the GNN part and the DNN part at once. The non-end-to-end mode conducts pre-training of the GNN part and then updates only the DNN part weights. In other words, the end-to-end mode can update graph node features depending on the DNN prediction.
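The evaluation protocol can be summarized as follows: a stratified 5-fold split and the ROC-AUC of each test fold computed at the final training epoch. `train_model` and `predict` are placeholders for the framework's training and inference routines; `X` and `y` are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(X, y, train_model, predict, n_splits=5, seed=0):
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        model = train_model(X[tr], y[tr])                       # fit on the training fold
        aucs.append(roc_auc_score(y[te], predict(model, X[te])))
    return float(np.mean(aucs)), aucs
```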
### Prediction interpretation
To determine which features contribute to survival prediction, we used the integrated gradients (IG) (Sundararajan, M. et al., 2017) method to visualize the features contribution. The IG of the patient \(i\) was defined by using baseline \(x^{\prime}\) and input features \(x\). A baseline \(x^{\prime}\) is a standard value when the model determines their contributions and it is used when their feature values are 0. Thus, the IG of a patient \(i\) is defined as follows:
\[IG_{i}(x)=(x_{i}-x^{\prime})\ \
## 3 Results

We selected the end-to-end model, in which pre-training was 50 epochs, and analyzed the results. For all verification years, our proposed framework (GNN \(+\) DNN) outperformed the conventional model (DNN only). It is important to note that our model (GNN \(+\) DNN) achieved accuracy equivalent to the model with the DNN part and cancer types. In addition, when cancer types were added to our model, the performance was the highest.
Fig. 2 shows how the ROC-AUC of the 5-year models changed with the number of pre-training epochs. Our model can be trained either end-to-end or not: in the former case the two parts are learned jointly, and in the latter the GNN part is not updated and only the DNN part is trained. In addition, we varied the number of pre-training epochs and found that the predictions were more accurate and more stable when the end-to-end method was selected.
### Accuracy differences by cancer type
In this section, we considered how the accuracy for each cancer type changed with or without the GNN part. Fig. 3 shows the accuracy comparison of the two models for the 3-year prediction. The total number of cancer types used for learning and prediction was 33. In this experiment, some of the AUCs could not be calculated because of the numbers of surviving and deceased patients in each cross-validation fold. Fig. 3 shows that our model resulted in higher ROC-AUC values for many cancer types compared with the conventional model. For all verification years, 83-95% of the cancer types had improved ROC-AUC values compared with the conventional model. In particular, adrenocortical carcinoma (ACC), kidney renal papillary cell carcinoma (KIRP), brain lower grade glioma (LGG), and mesothelioma (MESO) had more accurate predictions compared with the others. We confirmed this tendency for all verification years. We found no correlation between accuracy improvement and the corresponding patient death ratios or sample sizes, and no specific associations with clinical features or onset organs were observed.
### Visualization of the feature contributions using integrated gradients
We used the IG method to examine the contributed inputs to the prediction, including nodes in the GNN part and features in the DNN part.
#### 3.3.1 IG analysis: graph nodes
We calculated the IG values for the graph nodes in the GNN part and determined the relationship between the characteristics of the graph nodes and the IG values. Centrality measures the degree to which a graph node is central and acts as a hub in the knowledge graph. Degree centrality represents the number of edges incident to a node. Fig. 4 shows a positive correlation (R \(=\) 0.727) between the IG values and the degree centralities of the graph nodes. Other centrality measures, such as closeness centrality, also exhibited the same tendency (details in Supplementary Fig. 1). This shows that the nodes with high IG values were high centrality nodes in the knowledge graph. Nodes with high centrality are important for the interaction with various molecules. We confirmed that these crucial nodes contributed to prediction.
We performed an enrichment analysis using the top 100/300/500/1000 average IG values for the graph nodes using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) (Huang, D.W. et al., 2007) as well as gene sets from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (Kanehisa, M. and Goto, S. 2000; Kanehisa, M. et al., 2016) (Table 2 and Supplementary Table 2). Table 2 shows the result for the top 10 pathways for the top 100 nodes, in which cancer-associated pathways were identified. This indicates that our model learned that the nodes representing cancer-associated biomolecules in the graph contributed to prediction.
| year | DNN | DNN + cancer types | GNN + DNN | GNN + DNN + cancer types |
| --- | --- | --- | --- | --- |
| 1 | 0.6312 | 0.7425 | 0.7382 | 0.7585 |
| 2 | 0.6168 | 0.7672 | 0.7678 | 0.7596 |
| 3 | 0.6188 | 0.7624 | 0.7733 | 0.7890 |
| 4 | 0.6173 | 0.7674 | 0.7714 | 0.7900 |
| 5 | 0.5971 | 0.7644 | 0.7581 | 0.7850 |

Table 1: ROC-AUC of the four models for all verification years. The column "year" represents the verification year; the models predict whether patients died within each verification year.
Figure 2: **Accuracy transitions in the 5-year model with or without pre-training in the GNN part.** The vertical axis and horizontal axis represent the AUC and pre-training epochs, respectively. Blue boxes represent the end-to-end model AUC and orange boxes represent the non-end-to-end model.
#### 3.3.2 IG analysis: graph nodes for individual cancer types
For all 33 cancer types, we made node lists consisting of the nodes whose IG values ranked in the top 1500 and compared them for differences among the cancer types. Focusing on the four cancer types (ACC, KIRP, LGG, and MESO) for which the addition of the GNN part significantly improved accuracy in section 3.2, we examined the effect of the graph in detail.
We performed a t-test for each listed node between one cancer type and the other 32 cancer types for the nodes exhibiting high IG values (p-value \(<\) 0.05). For KIRP, IG values for the 17 nodes in Fig. 5 were significantly different from those of the other cancer types. These nodes were connected in the input graph. CHEBI:2504 indicates an aflatoxin node that acts as a hub node. Aflatoxin is a carcinogen, particularly in the liver, and its relevance to kidney cancer is known (Bbosa, G.S. et al., 2013; Li, H. et al., 2018; Marchese, S. et al., 2018). In addition, 56 nodes showed significant differences for MESO (Supplementary Table 3). For these nodes, we conducted an enrichment analysis using the KEGG pathway database. The results indicated that the nodes were enriched in the Mitogen-activated Protein Kinase (MAPK) signaling pathway. A series of events (FGF2 induction, MAPK pathway activation, and MMP1 induction) is known to be important for epithelial-to-mesenchymal transition (EMT) signaling in MESO (Schell, K. et al., 2018; Ramundo, V. et al., 2021). For ACC, six nodes were identified as significantly different nodes (Supplementary Table 3). SF1 is known as a diagnostic marker for ACC (Almeida, M.Q. et al., 2010; Ehrhund, A. et al., 2012).
| Rank | KEGG Term | Count | P-value |
| --- | --- | --- | --- |
| 1 | Pathways in cancer | 49 | 4.61E-29 |
| 2 | Kaposi sarcoma-associated herpesvirus infection | 33 | 4.48E-27 |
| 3 | Hepatitis B | 31 | 6.41E-27 |
| 4 | Lipid and atherosclerosis | 32 | 2.31E-24 |
| 5 | Prostate cancer | 24 | 2.71E-23 |

Table 2: Top KEGG pathways from the enrichment analysis of the graph nodes with the top 100 average IG values.
#### 3.3.3 IG analysis; the gene expression
The top 200 gene lists for each cancer were obtained by calculating the IG values for the gene expression data. The top 200 genes consisted of the top 100 death-contributing genes and the top 100 survival-associated genes. For the top 200 genes, we identified differences among the models and cancer types as well as between genes with cancer type-specific IG values.
We compared these gene lists among the three models (DNN, DNN \(+\) cancer types, GNN \(+\) DNN). Fig. 6 shows a Venn diagram comparing the three model gene lists for LGG. The genes were altered when the GNN part was added and this tendency was confirmed in almost all cancer types. It revealed that gene expression features were changed and contained genes that were associated with prognosis, whereas the accuracy improved because of the molecular interaction information. We confirmed that the ROC-AUC was improved by the GNN part in the previous section (Table 1). The genes listed only in the GNN \(+\) DNN part may contribute to high accuracy prediction.
Next, we investigated cancer-specific relevant genes by comparing the gene lists of the GNN \(+\) DNN model among the various cancer types. FURIN, PLAC8, PBK, and LMNB1 were listed only for LGG. FURIN and PBK are known prognostic factors and their increased expression is associated with poor prognosis in LGG (Feng, T. et al., 2021; Zhou, B. and Gao, S. 2021). These results indicate that genes uniquely listed in each cancer type are related to prognosis.
For the top 200 IG genes of the GNN \(+\) DNN model, we obtained the genes whose IG values were significantly different (p-value \(<\) 0.05) from those of the other cancer types. In MESO, the IG values of ITGA10 and COL4A1 were significantly high. These two genes are related to extracellular matrix (ECM) receptor interaction; the ECM is a non-cellular component that contributes to tissue morphogenesis and differentiation. The ECM is associated with the growth of MESO cells and is considered a potential treatment target (Pass, H.I. et al., 2005; Tajima, K. et al., 2010). In LGG, 81 genes exhibited significantly high IG values (Supplementary Table 4). An enrichment analysis was performed for these genes (Table 3), and several were associated with neurological diseases, such as spinocerebellar degeneration, prion disease, and Alzheimer's disease. These results indicate that our model utilized features clinically relevant to cancer prognosis and clinical conditions for each cancer type. In addition, the results suggest that the graph nodes and genes contributing to the prediction may represent novel biomarkers.
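The enrichment p-values reported here and in Table 3 are standard over-representation statistics. A minimal version can be reproduced with a hypergeometric test, sketched below with made-up set sizes; the specific enrichment tool and background gene set used in this study are not assumed.

```python
# Sketch of a pathway over-representation test (hypergeometric), with hypothetical counts.
from scipy.stats import hypergeom

N = 20000   # background: number of annotated genes (assumed)
K = 150     # genes annotated to the pathway (e.g., a KEGG term; assumed)
n = 81      # size of the query set (e.g., high-IG genes for LGG)
k = 7       # query genes that fall in the pathway

# P(X >= k) when drawing n genes without replacement from N, of which K are in the pathway
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.2e}")
```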
## 4 Conclusion
We proposed a new end-to-end framework that integrates primary data with secondary data represented as a graph, and applied it to the prediction of cancer patient prognosis. We regarded biomolecular interactions as a background knowledge graph and individual gene expression data as primary data. Compared with a conventional prediction model for cancer patient prognosis, our framework improved prediction accuracy by combining the biomolecular interaction information with the individual gene expression data. Moreover, the IG method enabled us to visualize which graph nodes and genes had prognostic value.
Fig. 5: **The KIRP subgraph.** The nodes exhibiting significant differences in a t-test. The colors represent IG values and their sizes indicate their degree centralities.
Fig. 6: **A Venn diagram comparing the top 200 genes of the three models.** The red circle represents the GNN \(+\) DNN part, the green represents the DNN part, and the blue represents the DNN \(+\) cancer types.
\begin{table}
\begin{tabular}{c c c c} \hline Rank & KEGG Term & count & P-value \\ \hline
1 & Prion disease & 7 & 2.17E-3 \\ & Pathways of neurodegeneration & & \\
2 & – multiple diseases & 8 & 8.52E-3 \\
3 & Parkinson disease & 6 & 1.00E-2 \\
4 & Alzheimer disease & 7 & 1.13E-2 \\
5 & Huntington disease & 6 & 1.74E-2 \\
6 & Proteasome & 3 & 2.19E-2 \\
7 & Amyotrophic lateral sclerosis & 6 & 3.38E-2 \\
8 & Spinocerebellar sclerosis & 4 & 3.41E-2 \\
9 & Epstein-Barr virus infection & 4 & 7.93E-2 \\ & Glycosaminoglycan & & \\
10 & biosynthesis – chondroitin & 2 & 9.54E-2 \\ & sulfate / dermatan sulfate & & \\ \hline \end{tabular}
\end{table}
Table 3: The enrichment analysis results for the 81 high IG value genes for LGG.
Our results showed that the graph nodes and gene expression features with high IG values contributed to the predictions and were consistent with known biological information, such as cancer-related pathways and prognostic factors. We also obtained new genes with high IG values, which may represent novel biomarker candidates. In addition, our model without cancer type labels achieved a similar prediction accuracy. These results suggest that our model captures potential biological knowledge for the target disease; thus, our framework may also be useful for other diseases.
To apply these findings in the clinic, higher accuracy is needed. Rich clinical information improves prediction accuracy (Huang, S. et al., 2014), but we used only cancer type as patient information. To improve accuracy for clinical applications, additional clinical information, such as cancer stage, sex, age, and medical history, is needed.
## Funding
This work was supported by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Application of Molecular Dynamics Simulation to Precision Medicine Using Big Data Integration System for Drug Discovery, JPMXP1020202021) and by the Cabinet Office, Government of Japan,
Public/Private R&D Investment Strategic Expansion Program (PRISM). This research was also supported by JST Moonshot R&D Grant Number JPMJMS2024 and JSPS KAKENHI Grant Number 21H03537, Japan.
|
2307.07338 | Collisions of red giants in galactic nuclei | In stellar-dense environments, stars can collide with each other. For
collisions close to a supermassive black hole (SMBH), the collisional kinetic
energy can be so large that the colliding stars can be completely destroyed,
potentially releasing an amount of energy comparable to that of a supernova.
Such violent collisions, which we call BH-driven disruptive collisions (BDCs),
have been examined mostly analytically, with the non-linear hydrodynamical
effects being left largely unstudied. Using the moving-mesh hydrodynamics code
{\small AREPO}, we investigate high-velocity ($>10^{3}$ km/s) collisions
between 1M$_{\odot}$ giants with varying radii, impact parameters, and initial
approaching velocities, and estimate their observables. Very strong shocks
across the collision surface efficiently convert $\gtrsim10\%$ of the initial
kinetic energy into radiation energy. The outcome is a gas cloud expanding
supersonically, homologously, and quasi-spherically, generating a flare with a
peak luminosity $\simeq 10^{41}-10^{44}$ erg/s in the extreme UV band ($\simeq
10$ eV). The luminosity decreases approximately following a power-law
$t^{-0.7}$ initially, then $t^{-0.4}$ after $t\simeq$10 days at which point it
would be bright in the optical band ($\lesssim 1$eV). Subsequent, and possibly
even brighter, emission would be generated due to the accretion of the gas
cloud onto the nearby SMBH, possibly lasting up to multi-year timescales. This
inevitable BH-collision product interaction can contribute to the growth of BHs
at all mass scales, in particular, seed BHs at high redshifts. Furthermore, the
proximity of the events to the central BH makes them a potential tool for
probing the existence of dormant BHs, even very massive ones which cannot be
probed by tidal disruption events. | Taeho Ryu, Pau Amaro Seoane, Andrew M. Taylor, Sebastian T. Ohlmann | 2023-07-14T13:37:01Z | http://arxiv.org/abs/2307.07338v4 | # Collisions of red giants in galactic nuclei
###### Abstract
In stellar-dense environments, stars can collide with each other. For collisions close to a supermassive black hole (SMBH), the collisional kinetic energy can be so large that the colliding stars can be completely destroyed, potentially releasing an amount of energy comparable to that of a supernova. Such violent collisions, which we call BH-driven disruptive collisions (BDCs), have been examined mostly analytically, with the non-linear hydrodynamical effects being left largely unstudied. Using the moving-mesh hydrodynamics code arepo, we investigate high-velocity (\(>10^{3}\) km/s) collisions between \(1\rm M_{\odot}\) giants with varying radii, impact parameters, and initial approaching velocities, and estimate their observables. Very strong shocks across the collision surface efficiently convert \(\gtrsim 10\%\) of the initial kinetic energy into radiation energy. The outcome is a gas cloud expanding supersonically, homologously, and quasi-spherically, generating a flare with a peak luminosity \(\simeq 10^{41}-10^{44}\) erg/s in the extreme UV band (\(\simeq 10\) eV). The luminosity decreases approximately following a power-law \(t^{-0.7}\) initially, then \(t^{-0.4}\) after \(t\simeq\)10 days at which point it would be bright in the optical band (\(\lesssim 1\)eV). Subsequent, and possibly even brighter, emission would be generated due to the accretion of the gas cloud onto the nearby SMBH, possibly lasting up to multi-year timescales. This inevitable BH-collision product interaction can contribute to the growth of BHs at all mass scales, in particular, seed BHs at high redshifts. Furthermore, the proximity of the events to the central BH makes them a potential tool for probing the existence of dormant BHs, even very massive ones which cannot be probed by tidal disruption events.
keywords:
## 1 Introduction
Dynamical interactions between stars in stellar-dense environments, e.g., globular clusters and galactic centers, play a crucial role in driving the evolution of the host and determining its thermodynamic state (Hut et al., 1992). If the stellar density is sufficiently high, stars can collide with relative velocities comparable to the dispersion velocity of the host. In globular clusters, up to 40% of main-sequence stars in the core would undergo a collision during the lifetime of the cluster (Hills and Day, 1976). For clusters with very high number densities (\(\gtrsim 10^{7}\) pc\({}^{-3}\)), a star may suffer multiple such collisions (Dale and Davies, 2006).
Galactic centers are extreme environments where stars are densely packed (e.g., \(10^{6}-10^{7}\) pc\({}^{-3}\) for nuclear clusters, Neumayer et al., 2020 and references therein) around a supermassive black hole (SMBH). Because the relative velocity between stars near the SMBH is roughly the Keplerian speed \(\propto r^{-0.5}\), stars near the BH would collide at very high speeds (e.g., \(v_{\rm rel}\gtrsim 2000\)km/s within \(\simeq 0.1\) pc around a \(10^{7}\) M\({}_{\odot}\) BH). If the kinetic energy of the collision (\(\gtrsim 10^{50}\) erg for a collision between two stars with mass \(M_{\star}=1\) M\({}_{\odot}\) and \(v_{\rm rel}\gtrsim 2000\) km/s) is greater than the binding energy of the stars (\(10^{48}-10^{49}\) erg for \(M_{\star}=1\) M\({}_{\odot}\)), the stars would be completely destroyed, leaving behind an expanding gas cloud. If even a small fraction of the collisional kinetic energy is converted into radiation, the high-velocity collision can generate a bright electromagnetic transient from the Galactic nucleus region.
The total rates of such events between main-sequence stars have been estimated to be \(10^{-4}-10^{-5}\) yr\({}^{-1}\) galaxy\({}^{-1}\)(Rose et al., 2020; Amaro Seoane, 2023; Rose et al., 2023) if the core is fully relaxed to the Bahcall-Wolf power-law \(\propto r^{-7/4}\)(Bahcall and Wolf, 1976)1. The rate for collisions between giants could be higher due to larger cross-sections (Amaro Seoane, 2023). However, if collisions continuously deplete the inner part of the stellar-density cusp, the rate would become smaller, e.g., \(\simeq 10^{-5}-10^{-7}\) yr\({}^{-1}\) galaxy\({}^{-1}\) for main-sequence stars, depending on the assumption of the stellar influx into the center
(Balberg and Yassur, 2023). Since these powerful collisions essentially destroy stars in galactic center environments, these events can affect the frequency of other types of nuclear transients. For example, Balberg and Yassur (2023) suggests that high-velocity collisions can almost completely suppress extreme mass-ratio inspirals.
High-velocity collisions between main-sequence stars (e.g., Benz and Hills, 1987, 1992; Lai et al., 1993; Rauch, 1999; Freitag and Benz, 2005) have been studied using numerical simulations, focusing on the mass ejection and the impact of such collisions on the thermodynamic state of the host, rather than their observation signatures. Recently, Amaro Seoane (2023b) analytically investigated the observables of high-velocity collisions between stars of various types in galactic nuclei. They found that the peak luminosity of high-velocity collisions can be as high as \(10^{44}\eta_{\rm rad}\) erg/s. Here, \(\eta_{\rm rad}\) is one of the determining factors which measures how efficiently the initial kinetic energy is converted into radiation energy. If \(\eta_{\rm rad}\) is of order unity, the peak luminosity can be comparable to different types of nuclear transients, such as tidal disruption events. However, \(\eta_{\rm rad}\) in their work was left as a free parameter because evaluating \(\eta_{\rm rad}\) involves non-linear hydrodynamics effects such as shocks, which cannot be done analytically.
In this paper, we investigate the hydrodynamics of high-velocity collisions, or black hole-driven disruptive collisions (BDCs), between 1 \(\rm M_{\odot}\) giants and numerically estimate the radiation conversion efficiency and their observables, using the moving-mesh hydrodynamics code AREPO (Springel, 2010; Weinberger et al., 2020; Pakmor et al., 2016). In the simulations, we consider collisions between two identical 1 \(\rm M_{\odot}\) giants with four different radii (\(R_{\star}=10\) R\({}_{\odot}\), 20 R\({}_{\odot}\), 50 R\({}_{\odot}\), and 100 R\({}_{\odot}\)), four impact parameters (\(b=0.04\)\(R_{\star}\), 0.2 \(R_{\star}\), 0.4 \(R_{\star}\), and 0.8 \(R_{\star}\)), and three initial approaching velocities (\(v_{\rm rel}=10^{4}\) km/s, \(5\times 10^{3}\) km/s, and \(2.5\times 10^{3}\) km/s). The largest approaching speed corresponds to roughly the largest relative velocity for stellar collisions near the BH, i.e., the Keplerian velocity at the smallest possible distance from the BH where at least two stars exist for a typical stellar density around a massive BH assuming the Bahcall-Wolf power law: \(r\simeq 10^{-5}\) pc for a \(10^{5}\)\(\rm M_{\odot}\) BH, \(\simeq 10^{-4}\) pc for a \(10^{6}\)\(\rm M_{\odot}\) BH, and \(\simeq 10^{-3}\) pc for a \(10^{7}\)\(\rm M_{\odot}\) BH. Because collisions with lower relative velocities are expected to create fainter transients, our simulations with the largest \(v_{\rm rel}\) would provide an upper limit for the luminosity and total radiated energy of these events.
This paper is organized as follows. We describe our methods in § 2, including the code description (§ 2.1), stellar models (§ 2.2), and initial conditions (§ 2.3). Then we present our results in § 3 and discuss astrophysical implications for the collisions in § 4. Finally, we summarize and conclude in § 5.
## 2 Methods
### Code
We perform a suite of 3D hydrodynamic simulations of BDCs between red giants using the massively parallel gravity and magneto-hydrodynamics moving-mesh code AREPO (Springel, 2010; Pakmor et al., 2016; Weinberger et al., 2020). The code inherits the advantages of two widely used hydrodynamical schemes, the Lagrangian smoothed-particle method and the Eulerian finite-volume method, allowing for an accurate treatment of supersonic flows and shock capturing without introducing artificial viscosity, and for low advection errors. We use an ideal equation of state that takes into account radiation pressure, assuming local thermodynamic equilibrium,
\[P=\frac{\rho k_{\rm B}T}{\mu m_{\rm p}}+\frac{4\sigma}{3c}T^{4}, \tag{1}\]
where \(P\) is the total pressure, \(\rho\) the density, \(k_{\rm B}\) the Boltzmann constant, \(T\) the temperature, \(\mu=0.62\) the mean molecular weight, \(m_{\rm p}\) the proton mass, and \(\sigma\) the Stefan-Boltzmann constant.
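For reference, Equation 1 can be evaluated directly in CGS units. The snippet below is a standalone check of the gas-plus-radiation pressure, not part of AREPO.

```python
# Gas + radiation pressure of Equation (1), in CGS units.
K_B   = 1.380649e-16    # Boltzmann constant [erg/K]
M_P   = 1.67262192e-24  # proton mass [g]
SIGMA = 5.670374e-5     # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
C     = 2.99792458e10   # speed of light [cm/s]
MU    = 0.62            # mean molecular weight adopted in the paper

def total_pressure(rho, T):
    """Ideal-gas plus radiation pressure [erg/cm^3] for density rho [g/cm^3] and temperature T [K]."""
    p_gas = rho * K_B * T / (MU * M_P)
    p_rad = 4.0 * SIGMA * T**4 / (3.0 * C)
    return p_gas + p_rad

# e.g., conditions similar to the shocked cloud interior at t ~ 1 day (Section 3.2.1)
print(f"P = {total_pressure(1e-8, 2e5):.3e} erg/cm^3")
```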
### Stellar model
We adopt the internal structure of giants evolved using the 1D stellar evolution code MESA (version r22.05.1; Paxton et al., 2011, 2013) to model giants in 3D. The star has an initial mass \(M_{\star}=1\)\(\rm M_{\odot}\) and a metallicity of \(Z=0.02\). We treat the mixing processes and winds following Choi et al. (2016). More specifically, we model convection using the mixing length theory with a mixing length parameter of 1.81. We adopt the Ledoux (1947) criterion to determine the boundary of the convective regions and an exponential overshoot prescription (Herwig, 2000) with parameters \(f=0.016\) and \(f_{0}=0.008\) at the top of the core and \(f=0.0174\), \(f_{0}=0.0087\) at the bottom of the hydrogen-burning shell. Semiconvection is treated following Langer et al. (1983) with an efficiency factor of 0.1. We allow the star on the red giant branch to lose mass via winds following the prescription from Reimers (1975) with a scaling factor of 0.1.
Figure 1 shows the evolution of the 1 \(\rm M_{\odot}\) star in a Hertzsprung-Russell diagram until it reaches the tip of the red-giant branch. We take the giants at four different evolutionary stages where their radii are \(R_{\star}\simeq 10\), 20, 50, and 100 \(\rm R_{\odot}\) (indicated by the star symbols in the figure).
We construct 3D giants from the 1D giant models using the method developed in Ohlmann et al. (2017) with \(10^{6}\) cells. Modeling the entire giant with gas cells is computationally expensive given very steep density gradients. So instead, we model the inner part of the star with a point particle, representing effectively the core. Furthermore, we place gas cells on top of it such that the internal structure above the core matches with the MESA model while the entire star stays in hydrostatic equilibrium. The point particle interacts only gravitationally with gas: it only gravitationally pulls the envelope which is cancelled
Figure 1: Evolution of a 1 \(\rm M_{\odot}\) star in a Hertzsprung–Russell diagram. The color bar shows the age of the star. The four star symbols mark the four giant models adopted for collision experiments: (from smallest to largest symbols) \(R_{\star}=9\), 20, 50, and 100 \(\rm R_{\odot}\).
by the pressure gradient of the gas when the star is in isolation. We choose the size of the region modelled by the point particle ("point particle radius") to be 5% of the stellar radius. The point particle radius is in fact greater than the size of the core (\(R\simeq 0.02\) R\({}_{\odot}\)). This choice is justified by the fact that the mass of the core is effectively the same as the enclosed mass within \(\simeq 0.05\)\(R_{\star}\) (vertical dotted lines), as illustrated in Figure 2. This means the total binding energy inside our 3D giants is essentially the same as it would be if the point particle radius were exactly the core radius. With this choice of the point particle radius, we reduce computational costs significantly while losing only a small fraction of the total energy budget inside the star.
We then relax the 3D stars fully in isolation, which usually takes 5 - 10 stellar dynamical times (\(\sqrt{R_{\star}^{3}/G\ M_{\star}}\)). Figure 3 shows the radial density of the fully relaxed stars above the point particle (_top_ panel) and their errors (_bottom_ panel) relative to the MESA models. The relative errors of the density of the inner part of the stars, where most of the mass is concentrated, are less than a few %. Although the errors at the surface are relatively large, the deviation of such small masses at the surface, corresponding to the plateau at the end of each line in Figure 2, should not affect our results.
We performed resolution tests for nearly head-on collisions between giants with \(R_{\star}=100\) R\({}_{\odot}\) at different resolutions. The choice of collision parameters is motivated by the fact that the impact of the shock in such a collision is the strongest (see Figure 8), which requires the highest resolution. We first constructed giants with \(N=2.5\times 10^{5}\), \(5\times 10^{5}\), \(10^{6}\), \(2\times 10^{6}\), and \(4\times 10^{6}\) cells and performed the collision experiments. We find that the results have already converged very well when \(N\geq 10^{6}\): the conversion factor \(\eta_{\rm rad}\), defined in Equation 8, differs by less than 1%. In fact, the difference in \(\eta_{\rm rad}\) between \(N\leq 5\times 10^{5}\) and \(N=10^{6}\) is already reasonably small, \(\lesssim 20\%\) for \(N=2.5\times 10^{5}\) and \(\lesssim 10\%\) for \(N=5\times 10^{5}\), relative to the cases with \(N\geq 10^{6}\). Furthermore, we confirmed that the total energy is conserved to within \(\lesssim 1\%\) until the end of the simulations.
### Initial conditions
We place two identical stars, initially separated by 10 \(R_{\star}\), on a hyperbolic orbit with some relative velocity at infinity \(v_{\rm rel}\). So it takes 10 \(R_{\star}/v_{\rm rel}\simeq(0.1-1)\) days, depending on \(R_{\star}\) and \(v_{\rm rel}\), until the two stars collide. We note that the time is measured since collision in this paper: accordingly, the initial time of the simulations is \(t\simeq-(0.1-1)\) days. Those stars are embedded in a low-density background medium with a density of \(10^{-18}\) g/cm\({}^{3}\) and a temperature of \(10^{4}\) K. The background density is comparable to the density of the interstellar medium at the Galactic center, ranging between \(10^{5}\) and \(10^{6}\) particles per cm\({}^{3}\) (Gillessen et al., 2019), at Galactic center distances that dominate the collision rate (see Amaro Seoane, 2023). We discuss the impact of the background density and temperature on the properties of collision products in § 4.1. Our fiducial model is the near-head-on collision between the two 10 R\({}_{\odot}\) giants initially approaching each other at \(v_{\rm rel}=10^{4}\) km/s with an impact parameter \(b=0.04\)\(R_{\star}\). Here, \(b=0.04\)\(R_{\star}\) is the smallest possible impact parameter given the softening length of the point particle: in other words, the gravity of the point particles becomes inaccurate at the closest approach distance with \(b<0.04\)\(R_{\star}\). For this giant, we additionally consider off-axis collisions with larger impact parame-
\begin{table}
\begin{tabular}{c c c c c} \hline Model number & Mass & Radius & \(v_{\rm rel}\) & Impact parameter \(b\) \\ - & M\({}_{\odot}\) & R\({}_{\odot}\) & 1000 km/s & \(R_{\star}\) \\ \hline
1 & 1 & 10 & 10 & 0.04 \\
2 & 1 & 10 & 10 & 0.2 \\
3 & 1 & 10 & 10 & 0.4 \\
4 & 1 & 10 & 10 & 0.8 \\
5 & 1 & 10 & 5 & 0.04 \\
6 & 1 & 10 & 2.5 & 0.04 \\
7 & 1 & 20 & 10 & 0.04 \\
8 & 1 & 50 & 10 & 0.04 \\
9 & 1 & 100 & 10 & 0.04 \\ \hline \end{tabular}
\end{table}
Table 1: Initial parameters: (from left to right) model number, stellar mass, stellar radius, relative velocity \(v_{\rm rel}\) at infinity, and impact parameter \(b\).
Figure 3: The radial density profile (_top_) of the giants with four different radii relaxed for five - ten stellar dynamical times and the relative error with respect to the MESA models (_bottom_), as a function of radius from the core. The dashed grey lines in the _top_ panel show the density profiles of the MESA models. The density profiles of the 3D stars match well with the MESA models within a few % except for those at the stellar surface.
Figure 2: Enclosed mass as a function of radius for the four giants with \(R_{\star}=10\) R\({}_{\odot}\) (black), 20 R\({}_{\odot}\) (red), 50 R\({}_{\odot}\) (blue), 100 R\({}_{\odot}\) (green). The vertical dotted lines, sharing the same color, indicate the size of the region modelled using a point particle. Although the point particle size is greater than the size of the core (\(R\simeq 0.02\) R\({}_{\odot}\)), given the flat mass-radius relation between the core radius and the point particle radius, we essentially retain the total energy budget inside the star above the core with significantly low computational costs.
ters, \(b=\)0.2, 0.4, and 0.8 \(R_{\star}\), and two additional \(v_{\rm rel}=2500\) and 5000 km/s, to study the dependence of the impact parameter and the collision velocity, respectively. For larger giants, we only consider the near head-on collisions with \(v_{\rm rel}=10^{4}\) km/s. The initial parameters of the models are summarized in Table 1.
## 3 Result
### Overview
We provide an overview of the evolution of the collision product using our fiducial model, e.g., head-on collision between the two 10 R\({}_{\odot}\) giants. We present in Figure 4 (from _top_ to _bottom_) the density \(\rho\), the temperature \(T\), the Mach number \({\cal M}\), and the speed in the mid-plane at four different times in our fiducial model.
Initially, the two stars approach at \(v_{\rm rel}\simeq 10^{4}\) km/s (\(1^{\rm st}\) column). After their first contact, the envelopes are continuously compressed due to the converging motion. Along the contact surface (the pronounced narrow feature across the center in the \(2^{\rm nd}\) column, dubbed "shock surface"), pressure gradients are built up and the temperature is raised above \(10^{7}\) K due to adiabatic compression. As later incoming gas collides supersonically with the pressure wall, shocks are created. Some of the very hot gas in the shock surface escapes radially perpendicular to the collision axis (or along the shock surface) with an opening angle of \(\simeq 30^{\circ}\) and speeds of a few thousand km/s, which is not particularly high compared to the rest. At the strongest compression, a significant fraction of the kinetic energy is converted into heat energy (\(\gtrsim 30\%\)), which is already a few orders of magnitude greater than the total binding energy of the stars. When the pressure gradient exceeds the ram pressure, the compressed gas bounces off and expands quasi-spherically and homologously at supersonic speeds (see \(3^{\rm rd}\) and \(4^{\rm th}\) column panels in Figure 4). On top of the expanding motion, the converted heat energy continuously drives the outer part of the gas cloud to expand by the PdV work, meaning that some of the heat energy is converted back into kinetic energy. At the same time, the outer edge of the cloud supersonically collides with the background medium. This has two effects. First, mass piles up at the boundary between the gas cloud and the background medium, reducing the kinetic energy of the expansion front. Second, shocks are created, which dissipate the kinetic energy of the expansion front into heat energy. As a result of both effects, the expansion front slows down.
### Evolution of expanding cloud - parameter dependence
#### 3.2.1 Fiducial case
To describe the evolution of the expanding gas more quantitatively, we show in Figure 5 the spherically-averaged density \(\rho\) and (mass-weighted) temperature \(T\), the expansion speed \(v^{\rm r}\), and the area-weighted average of the optical depth \(\tau\) over the solid angle for our fiducial model as a function of distance from the collision point at five logarithmically sampled times between 1 and 30 days after collision. The density \(\rho\) (_top-left_) and the temperature \(T\) (_top-right_) of the inner regions of the expanding gas cloud are nearly constant. As the cloud expands adiabatically, the overall level of \(\rho\) and \(T\) drops while maintaining its slope: \(\rho\simeq 10^{-8}\) g/cm\({}^{3}\) at \(t\simeq 1\) day to \(10^{-12}\) g/cm\({}^{3}\) at \(t\simeq 30\) days, and \(T\simeq 2\times 10^{5}\) K at \(t=1\) day to \(5\times 10^{3}\) K at \(t\simeq 30\) days, at which point the cloud is cooler than the background medium. \(\rho\) and \(T\) outside the flat region decay towards the outer edge with a different steepness: the density drops following a power law of \(\propto r^{-\lambda}\) with \(\lambda\simeq 12-13\) upon collision, gradually decreasing to \(\lambda\simeq 8\) at \(t\simeq 30\) days. The temperature, however, decays more like \(\propto r^{-1}\) at \(1\lesssim t\lesssim 30\) days. The decaying slopes of \(\rho\) and \(T\) depend on \(R_{\star}\), \(b\), and \(v_{\rm rel}\), but the dependence of the slope of \(T\) is generally stronger. \({\rm dln}\,\rho/{\rm dln}\,r\) is almost the same, independent of \(R_{\star}\), whereas \(-{\rm dln}\,T/{\rm dln}\,r\) tends to be larger for larger \(R_{\star}\) (e.g., \(\lambda\simeq 2-3\) for \(R_{\star}=100\) R\({}_{\odot}\)). \({\rm dln}\,T/{\rm dln}\,r\) is steeper for larger \(b\) (e.g., \(\lambda\simeq 2-3\) for \(b=0.8\)\(R_{\star}\)), while \({\rm dln}\,\rho/{\rm dln}\,r\) is only slightly less steep for larger \(b\) (e.g., \(\lambda\simeq 12\) for \(b=0.8\)\(R_{\star}\)). The dependence of the slopes on \(v_{\rm rel}\) is relatively weak: \(\lambda\) for \(\rho\) is almost the same for \(2500~{\rm km/s}\leq v_{\rm rel}\leq 10^{4}~{\rm km/s}\) and \(\lambda\) for \(T\) is slightly larger for smaller \(v_{\rm rel}\) (e.g., \(\lambda\simeq 1-1.5\) for \(v_{\rm rel}=2500\) km/s).
As shown in the _bottom-left_ panel of Figure 5, the cloud expands homologously, i.e., \(v^{\rm r}\propto r\) or constant \(v^{\rm r}\) at the same mass coordinate, which is also found in all other models. Right after the collision, the maximum expansion velocity at the outer edge is greater than the initial relative velocity by a factor of \(\simeq 5\) and stays constant. The period of time with a constant peak \(v^{\rm r}\) is very brief for this particular model (\(\lesssim 0.1\) days). However the constant maximum \(v^{\rm r}\) phase is longer for collisions with larger \(R_{\star}\), which is illustrated in the _bottom-right_ panel of Figure 6. After the constant maximum \(v^{\rm r}\) phase, the peak expansion velocity continuously decreases due to the interactions with the background medium.
The gas cloud is initially optically thick. The optical depth to the center is \(\tau\gtrsim 10^{5}\) at \(t\simeq 1\) day, as demonstrated in the _bottom-right_ panel of Figure 5. As it expands and cools, \(\tau\) decreases following a power-law of \(t^{-7/3}\) (see the _bottom-right_ panel of Figure 6), indicating that the entire cloud will become optically thin within 7 - 8 months, consistent with the analytic estimate by Amaro Seoane (2023b). The nearly flat \(\tau\) inside the cloud indicates that the transition from optically thick to completely optically thin may be prompt.
#### 3.2.2 Comparison between models
To further demonstrate the dependence on the stellar radius \(R_{\star}\), the impact parameter \(b\), and the initial relative velocity \(v_{\rm rel}\), we compare in Figure 6 the evolution of the same four quantities shown in Figure 5 between different models. For a proper comparison, we estimate \(\overline{\rho}\) as the average density within the volume enclosing 75% of the gas mass and \(\overline{T}\) as the mass-weighted average of \(T\) within the same volume. As shown in the _top_ panels, \(\overline{\rho}\) and \(\overline{T}\) decrease over time, following power-laws of \(t^{-3}\) and \(t^{-1}\), respectively, almost independently of \(R_{\star}\) and \(b\) except for \(\overline{T}\) with \(v_{\rm rel}=2.5\times 10^{3}\) km/s. The \(t^{-3}\) power-law for \(\overline{\rho}\) is expected from an homologous expansion: \(\rho\propto(v^{\rm r}t)^{-3}\propto t^{-3}\). As the \(t^{-1}\)-scaling relation for \(\overline{T}\) suggests, the total (radiation + gas) internal energy at a given mass coordinate decreases like \(t^{-1}\). The significant deviation from the \(t^{-1}\) power-law for \(v_{\rm rel}=2500\) km/s indicates that there is continuous energy exchange between gas at different mass shells. Unlike other cases where the radiation energy is dominant, in this case, the gas internal energy is comparable to the radiation energy and the total internal energy drops like \(\propto t^{-4/3}\), resulting in a non-power law decay curve for \(\overline{T}\). Although each of the two quantities, \(\overline{\rho}\) and \(\overline{T}\), tends to follow a single power-law, the degree to which their magnitudes depend on \(R_{\star}\), \(b\), and \(v_{\rm rel}\) is different. \(\overline{\rho}\) has a very weak dependence on \(b\) and \(R_{\star}\). \(\overline{T}\) is insensitive to \(b\) and
weakly depends on \(R_{\star}\): it is only a factor of 1.5 greater for \(R_{\star}=100\) R\({}_{\odot}\) than for \(R_{\star}=10\) R\({}_{\odot}\).
\(v_{\rm peak}^{\rm r}\) stays constant upon collision at \((3-6)\times v_{\rm rel}\). The constant-\(v_{\rm peak}^{\rm r}\) phase lasts longer for cases involving stronger shocks (e.g., larger \(R_{\star}\) for given \(b\) and \(v_{\rm rel}\)). Eventually, \(v_{\rm peak}^{\rm r}\) decreases over time because of the interactions with the background medium, following a power-law of \(t^{-1/3}\) for all models. In particular, the peak expansion speed with varying \(R_{\star}\) tends to asymptote to a single value at later times. As \(b\) and \(v_{\rm rel}\) decrease, \(v_{\rm peak}^{\rm r}\) is smaller at a given time. But the difference is at most a factor of 3 for the collision parameters considered.
As explained for our fiducial model above, the optical depth is initially high at collision, \(\tau>O(10^{6})\). The optical depth for most cases gradually decreases as the gas cloud expands, following a power-law
Figure 4: Density \(\rho\) (_top_), temperature \(T\) (_top-middle_), Mach number \(\mathcal{M}\) (_bottom-middle_), and speed \(v\) (_bottom_) of gas in a nearly head-on (\(b=0.04R_{\star}\)) collision between two giants with \(R_{\star}=10\) R\({}_{\odot}\) at four different times, \(t=-0.07\) days (before collision), 0 days (at collision), 1 day and 30 days (after collision). The red dots in each panel indicate the location of the cores. The white contour lines in the _top_ panels for \(\rho\) show the location of the photosphere at which the radially integrated optical depth \(\simeq 1\), and those in the _bottom-middle_ panels for \(\mathcal{M}\) mark the boundaries at \(\mathcal{M}\simeq 1\). The arrows in the _bottom_ panels indicate the direction of gas motion. Initially, the two stars start to move towards each other with \(v_{\rm rel}=10^{4}\) km/s (_left_). At collision, very steep pressure gradients are built up at the collision surface and strong shocks are created when the incoming gas collides with the pressure barrier (_left-middle_). The gas bounces off and expands quasi-spherically and homologously at supersonic speeds (_right-middle_ and _right_).
\(t^{-7/3}\), which is expected from the scaling relations of \(\rho\) and \(v_{\rm peak}^{\rm r}\): \(\tau\propto\rho R_{\rm peak}\propto t^{-3}t^{2/3}\propto t^{-7/3}\), where \(R_{\rm peak}\) is the location of the peak expansion speed, \(R_{\rm peak}\simeq v_{\rm peak}^{\rm r}t\propto t^{2/3}\). The deviation from the \(t^{-7/3}\) power-law relation becomes more significant as the collisions happen at lower \(v_{\rm rel}\) and higher \(b\).
#### 3.2.3 Fitting formulae
Combining all the scaling relations, we find that the average density \(\overline{\rho}(t)\), mass-weighted average temperature \(\overline{T}(t)\), peak expansion velocity \(v_{\rm peak}^{\rm r}(t)\), size of the outer edge \(R_{\rm peak}(t)\), and radial expansion speed \(v^{\rm r}(r,t)\) at \(t>5\) days can be well described by the following analytic expressions,
\[\overline{\rho}(t)=6\times 10^{-10}\ {\rm g/cm^{3}}\left(\frac{t}{1\ {\rm day}}\right)^{-3}\left(\frac{v_{\rm rel}}{10^{4}\ {\rm km/s}}\right)^{-3}, \tag{2}\]
\[\overline{T}(t)=1.5\times 10^{5}\ {\rm K}\left(\frac{t}{1\ {\rm day}}\right)^{-1}\tan^{-1}\!\left(\sqrt{R_{\star}/10\ {\rm R}_{\odot}}\right)\quad{\rm for}\ 5\times 10^{3}\ {\rm km/s}\lesssim v_{\rm rel}\leq 10^{4}\ {\rm km/s}, \tag{3}\]
with the corresponding expressions for \(v_{\rm peak}^{\rm r}(t)\), \(R_{\rm peak}(t)\), and \(v^{\rm r}(r,t)\) following the scalings described above, i.e., \(v_{\rm peak}^{\rm r}\propto t^{-1/3}\), \(R_{\rm peak}\propto t^{2/3}\), and homologous \(v^{\rm r}\propto r\) at fixed \(t\),
where the expression for \(R_{\rm peak}\) is found by analytically integrating \(v^{\rm r}(r,t)\) over time. Note that \(\overline{\rho}\) decays faster than expected from the expression \(3M_{\rm gas}/(4\pi R_{\rm peak}^{3})\propto t^{-2}\) because \(\overline{\rho}\) follows the homologous relation whereas the peak expansion speed slows down, so the outer edge expands more slowly than expected for homologous expansion.
Note that we do not include the term describing the dependence on \(R_{\star}\) in most of the expressions above because of their very weak \(R_{\star}\)-dependence. On the other hand, the \(v_{\rm rel}\)-dependence is omitted from Equation 3 for \(\overline{T}\) because there are too few models with varying \(v_{\rm rel}\) for a reliable fit. Instead, we have specified the range of \(v_{\rm rel}\) over which the equation is valid.
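A small helper evaluating the recoverable fits (Equations 2 and 3) is sketched below; it should be read as an order-of-magnitude convenience under the stated validity range, not as the complete set of fitting formulae.

```python
# Sketch: evaluate the fitting formulae (Equations 2 and 3) for t > 5 days.
import numpy as np

def rho_bar(t_day, v_rel_kms=1e4):
    """Average cloud density [g/cm^3] from Equation (2)."""
    return 6e-10 * t_day**-3 * (v_rel_kms / 1e4)**-3

def T_bar(t_day, R_star_rsun=10.0):
    """Mass-weighted average temperature [K] from Equation (3),
    valid for the v_rel range stated in the text."""
    return 1.5e5 * t_day**-1 * np.arctan(np.sqrt(R_star_rsun / 10.0))

for t in (5, 10, 30):
    print(f"t = {t:2d} d: rho ~ {rho_bar(t):.1e} g/cm^3, T ~ {T_bar(t):.1e} K")
```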
### Stellar core
The cores move almost synchronously with the bulk of the gas. The orbits of the cores are barely affected by the collision: they remain unbound after the collision and move away from each other at a speed almost the same as the incoming speed. The distances from the collision point in our fiducial model at five different times are marked with circles in Figure 5.
The mass bound to the cores is larger for smaller \(v_{\rm rel}\) and larger \(b\). But it is overall insignificant. For \(b\leq 0.2\)\(R_{\star}\) and \(v_{\rm rel}\geq 5000\) km/s, the bound mass is less than \(6\times 10^{-6}\) M\({}_{\odot}\). It is \(\simeq 2\times 10^{-3}\) M\({}_{\odot}\) for the model with \(v_{\rm rel}=2500\) km/s and for that with \(b=0.4\)\(R_{\star}\), and \(\simeq 3\times 10^{-2}\) M\({}_{\odot}\) for the model with \(b=0.8\)\(R_{\star}\).
### Conversion factor
In this section, we investigate how much heat energy is created in collisions, which is closely related to the amount of energy that can be radiated away and potentially observed. We first define the conversion factor \(\eta_{\rm rad}\) as the ratio of the total radiation energy to the initial kinetic energy,
\[\eta_{\rm rad}(t)=\frac{\int aT(t)^{4}dV}{\int\rho(t=0)v(t=0)^{2}dV}, \tag{8}\]
where \(a\) is the radiation constant and \(dV\) is the volume element of each cell. Using \(\eta_{\rm rad}\) one can estimate the total radiation energy as \(\simeq 0.25\eta_{\rm rad}\ M_{\star}v_{\rm rel}^{2}\) for equal-mass collisions. To distinguish gas that initially belonged to the stars from the background gas, we employ a selection condition using a passive scalar. The passive scalar is an artificial scalar quantity initially assigned to each cell which then evolves via advection without affecting the evolution of hydrodynamics quantities. The initial values of the passive scalar of the cells in the stars are one and that of the background cells is zero.
Figure 6: Average density \(\overline{\rho}\) (_top-left_), mass-weighted average temperature \(\overline{T}\) (_top-right_), peak expansion speed \(v_{\rm peak}^{r}\) (_bottom-left_), and surface average of the optical depth \(\tau\) to the center (_bottom-right_) of the cloud in all models, as a function of time since collision. The average density and temperature are estimated using the cells within a radius containing 75% of the cloud mass.
Figure 7: Same as Figure 4, but for off-axis collisions (\(b=0.2,0.4,\) and \(0.8~{}R_{\star}\)) between two giants with \(~{}R_{\star}=10~{}\rm R_{\odot}\) at \(t\simeq 5\) days since collision.
So depending on the mass exchange (or mixing) between the cells, the passive scalar varies between zero (vacuum cells) and one (cells originally in the stars). We perform the integration over cells with the passive scalar \(\geq 0.1\). The value of \(\eta_{\rm rad}\) is largely unaffected by the choice of the threshold of the passive scalar, provided that it is greater than 0.
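As a back-of-the-envelope use of this factor, the estimate \(\simeq 0.25\,\eta_{\rm rad}M_{\star}v_{\rm rel}^{2}\) above can be evaluated directly; the sketch below assumes the fiducial parameters and representative values of \(\eta_{\rm rad}\) near the peak and a few days after the collision (cf. Figure 8 and Table 2).

```python
# Order-of-magnitude radiated energy, E_rad ~ 0.25 * eta_rad * M_star * v_rel^2 (equal-mass collision).
M_SUN = 1.989e33            # g
M_star = 1.0 * M_SUN        # g
v_rel = 1e4 * 1e5           # 10^4 km/s in cm/s

# initial kinetic energy in the centre-of-mass frame for two equal-mass stars
E_kin = 0.25 * M_star * v_rel**2
for eta_rad in (0.68, 1e-2, 1e-3):   # near peak (fiducial model) vs. a few days after collision
    print(f"eta_rad = {eta_rad:g}: E_rad ~ {eta_rad * E_kin:.1e} erg")
```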
We show \(\eta_{\rm rad}\) for all our models in Figure 8 before the radiation energy in the optically thin gas becomes dominant. It is generally found that \(\eta_{\rm rad}\) dramatically increases at collision to \(\eta_{\rm rad}\simeq 0.1-0.8\), meaning a significant fraction of the initial kinetic energy is converted into heat energy. The maximum conversion factors are summarized in Table 2. Then as the cloud expands and cools, \(\eta_{\rm rad}\) decreases down to \(\lesssim 10^{-2}\). We see three clear post-peak trends of \(\eta_{\rm rad}\). First, \(\eta_{\rm rad}\) is larger when larger stars collide. Additionally, \(\eta_{\rm rad}\) is approximately \(\propto R_{\star}\) at any given time: \((1-2)\times 10^{-3}\) for \(R_{\star}=10\) R\({}_{\odot}\), \(\simeq(3-4)\times 10^{-3}\) for \(R_{\star}=20\) R\({}_{\odot}\), \(\simeq 10^{-2}\) for \(R_{\star}=50\) R\({}_{\odot}\), and \(\simeq 2\times 10^{-2}\) for \(R_{\star}=100\) R\({}_{\odot}\) at \(t\simeq 3\) days. We attribute this positive correlation between \(\eta_{\rm rad}\) and \(R_{\star}\) to the fact that for the same relative velocity, larger (cool) stars collide at higher \(\mathcal{M}\), resulting in stronger shocks over a wider contact surface (\(\propto R_{\star}\)). Second, \(\eta_{\rm rad}\) is almost the same when \(b\lesssim 0.2~{}R_{\star}\), while \(\eta_{\rm rad}\) begins to decrease with \(b\) when \(b\gtrsim 0.2~{}R_{\star}\). This trend is somewhat expected given that as \(b\) increases, the mass of gas that is shocked at collision decreases. Lastly, \(\eta_{\rm rad}\) decreases with \(v_{\rm rel}\) because the collisions occur at lower \(\mathcal{M}\) for a given sound speed (i.e., the same star). \(\eta_{\rm rad}\) at \(v_{\rm rel}=5000\) km/s is almost the same as that at \(v_{\rm rel}=10^{4}\) km/s, but \(\eta_{\rm rad}\) at \(v_{\rm rel}=2500\) km/s is lower by a factor of \(\simeq 2\) than that for our fiducial case. The overall levels of the conversion factors that we obtain are comparable to what Amaro Seoane (2023b) imposed in order for their analytical model to match the observed object ZTF19acboexm (see their Figure 9).
We can also define the conversion factor for the ram pressure of gas moving at supersonic speeds,
\[\eta_{\rm ram}(t)=\frac{\int\rho(t)v(t)^{2}dV}{\int\rho(t=0)v(t=0)^{2}dV}, \tag{9}\]
where the integration in the denominator is carried out over cells for which the passive scalar \(>0.1\), and that in the numerator the integration is carried out only over cells with supersonic speeds, \(\mathcal{M}\geq 1\). As illustrated in the \(3^{\rm rd}\) column panels of Figure 4 and 7, almost all the gas is supersonically expanding. As a result, \(1-\eta_{\rm ram}\simeq\eta_{\rm rad}\).
### Observables
We estimate the luminosity \(L\), blackbody radius \(R_{\rm BB}\), and temperature \(T_{\rm BB}\), using the radiation energy and the local cooling time \(t_{\rm cool}\). We first construct a spherical grid with an extremely small opening polar angle (\(\theta\simeq 10^{-10}\) radians) to avoid the singularity at the poles, radially extending out to near the outer boundary of the domain. The grid in the radial direction is logarithmically divided, i.e., constant \(\Delta r/r\) where \(\Delta r\) is the cell size at \(r\), while those in the \(\theta\) and \(\phi\) directions are linearly divided, i.e., constant \(\Delta\theta\) and \(\Delta\phi\). The numbers of grid cells in \(r\), \(\theta\), and \(\phi\) are (800, 600, 600), which we confirmed to give converging estimates for the observables. We then identify the photosphere at which the optical depth \(\tau\simeq 1\). \(\tau\) is integrated along each \(r\)-path with the opacity found using an OPAL opacity table for Solar metallicity (Iglesias and Rogers, 1996). The photospheric area is,
\[A_{\rm BB}=\int_{0}^{2\pi}\int_{0}^{\pi}r(\tau=1)^{2}\sin\theta\,d\theta\,d\phi, \tag{10}\]
which gives the effective size of the emitting region or blackbody radius \(R_{\rm BB}=(A_{\rm BB}/4\pi)^{1/2}\).
We attempt to bracket the range of realistic radiated luminosity from the collision event by employing two different methods, each of which places different weights on the contribution from the gas cloud layers (the inner regions or outer regions near the photosphere) within the identified photosphere. Our estimates should be accurate at an order-of-magnitude level. However, for more accurate modeling of light curves, we will carry out detailed non-equilibrium radiation transport calculations in future follow-up work dedicated to estimating light curves and spectra.
In both methods, the total luminosity for each radial path is estimated by summing the contributions from the cells with the local cooling time \(t_{\rm cool}\) shorter than the evolution time \(t\) within the photosphere. Here, \(t_{\rm cool}\) is defined as \(h_{\rho}\tau(1+u_{\rm gas}/u_{\rm rad})/c\) where \(h_{\rho}\) is the density moment scale height inside the photosphere and \(u_{\rm rad}\) (\(u_{\rm gas}\)) the radiation (gas thermal) energy. However, the difference between the two methods is the assumption for how most of the radiation energy is radiated away. In one method, we assume that the total radiation energy within the photosphere is radiated away over a time comparable to the cooling time at the base of the cloud. Under this assumption, the inner regions tend to dominate the luminosity. We first integrate the total radiation energy along the radial path and divide it by the cooling time at the base of the cloud \(t_{\rm cool,max}\), i.e., the longest cooling time which is no longer than \(t\), or
\[L_{1}=\int_{0}^{2\pi}\int_{0}^{\pi}\left[\int_{r(t_{\rm cool}=t)}^{r(\tau=1)}aT^{4}r^{2}\sin\theta\,dr\right]t_{\rm cool,max}(\theta,\phi)^{-1}d\theta\,d\phi. \tag{11}\]
In the second, we assume that the radiation energy of each cell is radiated away over the local cooling time. So the total luminosity is estimated,
\[L_{2}=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{r(t_{\rm cool}=t)}^{r(\tau=1)}aT^{4}t_{\rm cool}(r,\theta,\phi)^{-1}r^{2}\sin\theta\,dr\,d\theta\,d\phi. \tag{12}\]
In this method, the outer regions near the photosphere dominate the luminosity. As stressed before, the evolution of the hydrodynamics quantities for optically thin gas (i.e., outer region near the photosphere) in our simulations is intrinsically less accurate than those for
Figure 8: The ratio of the radiation energy to the initial kinetic energy \(\eta_{\rm rad}\) as a function of time, measured since collision, for all our models.
optically thick gas. Hence, \(L_{1}\) should be considered more consistent with our hydrodynamical scheme. We find that the shapes of the \(L_{1}\) and \(L_{2}\) lightcurves are very similar. However, \(L_{1}\) is consistently smaller than \(L_{2}\) by a factor of \(\approx 10\). For this reason, we present \(L_{1}\) and the resulting blackbody temperature \(T_{\rm BB,1}=(L_{1}/\sigma A_{\rm BB})^{1/4}\) in this section and those from Equation 12 in Appendix A.
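The bookkeeping of Equations 10-12 can be illustrated along a single radial ray. In the sketch below the profile arrays, the constant opacity, the assumption of spherical symmetry, and the omission of the inner cut at \(r(t_{\rm cool}=t)\) are simplifying assumptions (the actual post-processing uses OPAL opacities and the full 3D spherical grid), so this is a schematic of the procedure rather than the pipeline itself.

```python
# Schematic photosphere search and cooling-time-limited luminosity along one radial ray.
import numpy as np

A_RAD = 7.5657e-15           # radiation constant [erg cm^-3 K^-4]
C = 2.998e10                 # speed of light [cm/s]

# hypothetical outward-sorted radial profile of the expanding cloud
r = np.logspace(13, 15.5, 400)                    # cm
rho = 1e-8 * (r / r[0])**-8                       # steep outer density fall-off (cf. Section 3.2.1)
T = 2e5 * (r / r[0])**-1                          # K
kappa = np.full_like(r, 0.34)                     # constant opacity [cm^2/g], placeholder for OPAL

# optical depth integrated inward from the outer edge
dr = np.gradient(r)
tau = np.cumsum((kappa * rho * dr)[::-1])[::-1]
i_ph = np.argmax(tau <= 1.0)                      # innermost cell with tau <= 1
R_ph = r[i_ph]                                    # photosphere radius along this ray

# local cooling time and an L2-style sum (Equation 12) interior to the photosphere
h_rho = rho / np.abs(np.gradient(rho, r))         # density scale height
t_cool = h_rho * tau / C                          # radiation-dominated limit (u_gas << u_rad)
u_rad = A_RAD * T**4
shell_vol = 4.0 * np.pi * r**2 * dr               # full solid angle, assuming spherical symmetry
L2 = np.sum((u_rad * shell_vol / t_cool)[:i_ph])
print(f"photosphere at {R_ph:.2e} cm, L2 ~ {L2:.2e} erg/s")
```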
Figure 9 shows \(L_{1}\) (_top_), \(T_{\rm BB,1}\) (_middle_), and \(R_{\rm BB}\) (_bottom_) as a function of time measured since collision for all our models. Note that the luminosity and the blackbody temperature differ depending on the assumption of radiation (Equations 11 and 12), but \(R_{\rm BB}\) is independent of the assumption. The luminosity increases dramatically to its peak at collision. The peak luminosity is \(L_{1}\gtrsim 10^{41}-10^{43}\) erg/s (\(L_{2}\gtrsim 10^{42}-10^{44}\)), which is higher for larger \(R_{\star}\), smaller \(b\), and higher \(v_{\rm rel}\), following the same trend as \(\eta_{\rm rad}\). The temperature at peak is \(T_{\rm BB,1}\simeq 10^{5}\) K. Because \(T_{\rm BB}\propto L^{1/4}\), \(T_{\rm BB,2}\) is greater than \(T_{\rm BB,1}\) by less than a factor of 2. We summarize \(L\) and \(T_{\rm BB}\) at peak for all our models in Table 2. Subsequently, both \(L\) and \(T_{\rm BB}\), independent of the assumption for the diffusion time (so both \(L_{1}\) and \(L_{2}\)), decrease following a power-law \(\propto t^{-\xi}\) with \(\xi\) slightly differing at early and late times. \(L\) at \(t\lesssim 5\) days reveals a decaying curve with \(\xi\simeq 0.7-0.8\), followed by a slower decay with \(\xi\simeq 0.4\) at \(t\gtrsim 5\) days. \(L\) therefore decreases by a factor of 10 over the first 5 days. The decay in \(L\) for the next 30 days is relatively small, by only a factor of a few. The change in \(\xi\) for \(T_{\rm BB}\) is very mild: \(\xi\simeq 0.6\) at \(t\lesssim 5\) days and \(\simeq 0.5\) at \(t\gtrsim 5\) days. \(T_{\rm BB}\) decreases from \(\simeq(1-2)\times 10^{5}\) K at collision to \(10^{4}\) K at \(5-15\) days, and \((4-6)\times 10^{3}\) K at 30 days. This means the collision will be bright in the extreme UV at collision, shifting to the optical on a time scale of a month. Lastly, \(R_{\rm BB}\) increases to \(\simeq 10^{15}\) cm in 30 days, approximately following power-law growth of \(\propto t^{0.8}\).
The light curves from our simulations reveal some differences from that analytically predicted by Amaro Seoane (2023b). Assuming a constant \(\eta\) comparable to the minimum \(\eta_{\rm rad}\) shown in Figure 8, their analytic model predicts a peak luminosity consistent with the numerically integrated peak luminosity shown in Figure 9. However, the luminosity from their analytic model peaks at a few days after collision and subsequently decays faster. We attribute these discrepancies to the difference in the way of calculating the luminosity. In their analytic model, the luminosity was estimated under the assumption that \(\eta\) does not change over time and the total radiation energy within the gas cloud is radiated away instantaneously on a time scale comparable to the longest possible photon cooling time at any given time (e.g., based on the optical depth to the center). On the other hand, in this work, we take into account the time-dependent contributions (e.g., adiabatic loss of energy due to expansion) of the cloud.
The observables estimated in this section are driven by stellar collisions. But given the fact that these collisions occur near an SMBH, the expanding gas cloud and the nearby BH would very likely interact, generating a possibly even brighter flare, which we discuss in § 4.2.
## 4 Discussion
### Interaction of gas cloud with interstellar medium
In this work, we simulated BDCs of giants surrounded by a medium with a constant density of \(10^{-18}\)g/cm\({}^{3}\) and temperature of \(10^{4}\) K. As the cloud expands, it collides inelastically with the background medium, which results in the continuous decrease in the kinetic energy of the expansion front. In addition, the collision between the outer edge of the cloud and the background medium can create shocks, converting the kinetic energy into heat energy. The net effect
Figure 9: Bolometric luminosity \(L\) (_top_), blackbody temperature \(T_{\rm BB}\) (_middle_), and blackbody radius \(R_{\rm BB}\) (_bottom_), estimated for stellar collisions using Equations 10 and 11. The dotted grey horizontal lines in the _bottom_ panel indicate the distances of the collision from the black hole with \(M_{\rm BH}=10^{6}\) M\({}_{\odot}\) (\(\simeq 10^{14}\) cm) and \(10^{7}\) M\({}_{\odot}\) (\(\simeq 10^{15}\) cm). The magenta guide lines show the power-law that describes the quantity shown in the last two panels.
is the deceleration of the gas cloud, deviating from an homologous behavior, which is also found from our simulations where the velocity of the outer edge decreases following \(t^{-1/3}\). This impact of the surrounding medium would set in sooner if the colliding stars were initially embedded in a denser medium. For example, the rising slope of \(\eta_{\rm rad}\) would be less steep for the case with lower-density background gas. Given the supersonic motion of the cloud, how the cloud expands would not be significantly affected by the temperature of the background medium for a given background density. However, the evolution of \(\eta_{\rm rad}\) would change depending on the background temperature. In fact, we performed extra simulations with different background temperatures (\(100-5000\) K), showing that while the expansion properties of the cloud (e.g., \(\overline{\rho}\), \(\overline{T}\), and \(v^{r}_{\rm peak}\)) are almost independent of the background temperature, \(\eta_{\rm rad}\) tends to be lower at the local minimum and increases more slowly afterward for a lower background temperature.
Although the deviation from an homologous expansion was only found near the outer edge for the duration of our simulations, as an order-of-magnitude estimate, the motion of the entire gas cloud would deviate completely from homologous expansion when the swept-up mass is comparable to the mass of the cloud,
\[\begin{split} t_{\rm non-homologous}&\simeq\left[\frac{3M_{\rm gas}}{4\pi\rho_{\rm ISM}}\right]^{1/3}\frac{1}{v^{r}},\\ &\simeq 1100\ {\rm days}\left(\frac{M_{\rm gas}}{2\ {\rm M}_{\odot}}\right)^{1/3}\left(\frac{\rho_{\rm ISM}}{10^{-18}\ {\rm g/cm^{3}}}\right)^{-1/3}\left(\frac{v^{r}}{10^{4}{\rm km/s}}\right)^{-1},\end{split} \tag{13}\]
where \(v^{r}\) is the expansion speed and \(\rho_{\rm ISM}\) the density of the background medium. At the same time, this also means that the scaling relations for the homologous expansion found from our simulations would apply to the evolution of the homologously expanding part of the collision product, independent of the presence of the background medium.
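The 1100-day figure can be checked directly from the swept-up-mass condition; the few lines below assume the fiducial numbers quoted in Equation 13.

```python
# Check of Equation (13): time for the swept-up ISM mass to equal the cloud mass.
import numpy as np

M_SUN = 1.989e33
M_gas = 2.0 * M_SUN          # g, total mass of the two collided stars
rho_ism = 1e-18              # g/cm^3
v_r = 1e4 * 1e5              # cm/s

# (4/3) * pi * rho_ism * (v_r * t)^3 = M_gas  =>  t = (3 M_gas / (4 pi rho_ism))^(1/3) / v_r
t = (3.0 * M_gas / (4.0 * np.pi * rho_ism))**(1.0 / 3.0) / v_r
print(f"t_non-homologous ~ {t / 86400.0:.0f} days")
```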
### Interaction of gas cloud with supermassive black hole
In addition to the burst caused by the stellar collision (see § 3.5), there would be a subsequent burst due to accretion onto the nearby SMBH. As a result, the overall shape of the light curve would be a first peak created by the stellar collision with \(L\gtrsim 10^{42}\) erg/s that decays, followed by a sharp rise to the Eddington luminosity due to accretion onto the BH, possibly remaining at that level for up to years until the captured gas is accreted onto the BH. We will examine the observables from the BH-cloud interaction by considering two cases: 1) non-decelerating expansion (§ 4.2.1) and 2) decelerating expansion (§ 4.2.2). We then discuss the astrophysical implications for BHs in § 4.2.3.
#### 4.2.1 Case 1. Non-decelerating expansion
We first assume that the entire gas cloud expands homologously and the expansion speed of the outer edge is \(v^{r}\simeq\psi v_{\rm rel}\) with \(\psi\simeq 3-6\) (see Figure 6). The gas cloud starts to interact with the BH when the size of the expanding gas cloud becomes comparable to the distance to the BH, \(R_{\rm BH}\), for a given \(v_{\rm rel}\).
\[R_{\rm BH}\simeq\frac{GM_{\bullet}}{v^{2}_{\rm rel}}=10^{15}\ {\rm cm}\left(\frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}} \right)^{-2}. \tag{14}\]
The time difference between the first collision-driven burst and the subsequent accretion-driven burst would be set by the time \(\tau_{\rm BH}\) at which the outer edge of the cloud reaches the BH, \(R_{\rm BH}-R_{\rm Sch}\simeq R_{\rm BH}\simeq R_{\rm peak}\), where \(R_{\rm Sch}\) is the Schwarzschild radius,
\[\tau_{\rm BH}\simeq\frac{R_{\rm BH}}{v^{r}}\simeq 3\ {\rm days}\left(\frac{M_{ \bullet}}{10^{7}\ {\rm M}_{\odot}}\right)\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}} \right)^{-3}. \tag{15}\]
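Both scales are easy to verify numerically; the sketch below assumes \(M_{\bullet}=10^{7}\) M\({}_{\odot}\), \(v_{\rm rel}=10^{4}\) km/s, and \(\psi\simeq 5\) for the outer-edge expansion speed.

```python
# Check of Equations (14)-(15): distance to the BH and the delay before the cloud reaches it.
G = 6.674e-8                 # cm^3 g^-1 s^-2
M_SUN = 1.989e33             # g
M_bh = 1e7 * M_SUN
v_rel = 1e4 * 1e5            # cm/s
psi = 5.0                    # v_r ~ psi * v_rel for the outer edge (Figure 6)

R_bh = G * M_bh / v_rel**2
tau_bh = R_bh / (psi * v_rel)
print(f"R_BH ~ {R_bh:.1e} cm, tau_BH ~ {tau_bh / 86400.0:.1f} days")
```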
To zeroth order, the part of the cloud that is within the Bondi radius \(R_{\rm Bondi}\simeq 2GM_{\bullet}/(v^{r})^{2}\) from the BH would be gravitationally captured by the BH and subsequently accreted onto the BH. Assuming a Bondi-Hoyle accretion (Bondi & Hoyle, 1944; Bondi, 1952), the luminosity \(L_{\rm Bondi}\) with radiative efficiency \(\epsilon\) can be estimated,
\[L_{\rm Bondi} \simeq\frac{4\pi\epsilon G^{2}M_{\bullet}^{2}\rho c^{2}}{(v^{r})^{3}},\] \[\simeq 3\times 10^{47}\ {\rm erg/s}\left(\frac{\epsilon}{0.1} \right)\left(\frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{-1}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}} \right)^{3}, \tag{16}\]
which is super-Eddington for \(M_{\bullet}<3\times 10^{8}\ {\rm M}_{\odot}(v_{\rm rel}/10^{4}{\rm km\ s^{-1}})^{1.5}\). Note that \(L_{\rm Bondi}\) has no dependence on \(t\) given the scaling relations for \(\rho\) (\(\propto t^{-3}\), Equation 2) and \(v^{r}(r=R_{\rm BH})\) (\(\propto t^{-1}\), Equation 7): \(L_{\rm Bondi}\propto\rho(v^{r})^{-3}\propto t^{0}\). Super-Eddington accretion may be possible if the gas is optically thick and the trapping radius \(R_{\rm tr}=(L_{\rm Bondi}/L_{\rm Edd})(GM_{\bullet}/\epsilon c^{2})\) is smaller than the Bondi radius (Begelman, 1979). The ratio of the two radii is,
\[\frac{R_{\rm tr}}{R_{\rm Bondi}}\simeq 300\left(\frac{t}{1{\rm day}}\right)^{-2} \left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-1}, \tag{17}\]
suggesting super-Eddington accretion would be possible at \(t\ga 20\ {\rm days}(v_{\rm rel}/10^{4}{\rm km\ s^{-1}})^{-0.5}\). Here, we caution that the radius ratio in Equation 17 is estimated under the assumption that the global accretion flow is not affected by any accretion feedback, which is highly uncertain. Assuming a black body, the temperature at the Bondi radius if \(L\simeq L_{\rm Edd}\) is,
\[T_{\rm Bondi}(t)\simeq 10^{5}K\left(\frac{t}{3{\rm day}}\right)^{-1}\left(\frac{M_ {\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{3/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}} \right)^{-1/2}, \tag{18}\]
\begin{table}
\begin{tabular}{c c c c|c c c c c} \hline Model number & \(R_{\star}\) & \(v_{\rm rel}\) & \(b\) & \(\eta_{\rm peak}\) & \(L_{\rm peak,1}\) & \(L_{\rm peak,2}\) & \(T_{\rm BB,peak,1}\) & \(T_{\rm BB,peak,2}\) \\ - & \({\rm R}_{\odot}\) & \(10^{3}\ {\rm km/s}\) & \(R_{\star}\) & - & \(10^{33}\ {\rm erg/s}\) & \(10^{33}\ {\rm erg/s}\) & \(10^{5}\ {\rm K}\) & \(10^{5}\ {\rm K}\) \\ \hline
1 & 10 & 10 & 0.04 & 0.68 & 9.6 & 1.3 & 3.0 & 2.1 \\
2 & 10 & 10 & 0.2 & 0.58 & 9.3 & 1.3 & 2.9 & 2.0 \\
3 & 10 & 10 & 0.4 & 0.40 & 6.5 & 1.0 & 2.7 & 1.9 \\
4 & 10 & 10 & 0.8 & 0.13 & 4.2 & 0.7 & 2.7 & 1.7 \\
5 & 10 & 5 & 0.04 & 0.68 & 2.1 & 0.3 & 2.2 & 1.4 \\
6 & 10 & 2.5 & 0.04 & 0.54 & 0.5 & 0.1 & 1.0 & 1.0 \\
7 & 20 & 10 & 0.04 & 0.75 & 13 & 2.0 & 2.7 & 1.7 \\
8 & 50 & 10 & 0.04 & 0.75 & 26 & 3.4 & 2.2 & 1.4 \\
9 & 100 & 10 & 0.04 & 0.72 & 26 & 3.4 & 1.7 & 1.1 \\ \hline \end{tabular}
\end{table}
Table 2: Peak conversion factor \(\eta\), luminosity at peak \(L_{\rm peak}\) and blackbody temperature at peak \(T_{\rm BB,peak}\) for each model, using Equation 11 (\(L_{\rm peak,1}\) and \(T_{\rm BB,peak,1}\)) and Equation 12 (\(L_{\rm peak,2}\) and \(T_{\rm BB,peak,2}\))
and at the onset of the BH-gas interaction (or \(t\simeq\tau_{\rm BH}\)),
\[T_{\rm Bondi}(t=\tau_{\rm BH})\simeq 10^{5}K\left(\frac{\psi}{5}\right)\left(\frac{M _{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{-1/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right). \tag{19}\]
If \(L\simeq L_{\rm Bondi}\),
\[T_{\rm Bondi} \simeq 2.4\times 10^{4}K\left(\frac{t}{150{\rm day}}\right)^{-1} \left(\frac{\epsilon}{0.1}\right)^{1/4}\] \[\times\left(\frac{M_{\bullet}}{5\times 10^{8}\ {\rm M}_{\odot}} \right)^{1/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-5/4}, \tag{20}\]
and at \(t=\tau_{\rm BH}\),
\[T_{\rm Bondi}(t=\tau_{\rm BH})\simeq 2.4\times 10^{4}K\left(\frac{ \psi}{5}\right)\left(\frac{\epsilon}{0.1}\right)^{1/4}\] \[\times\left(\frac{M_{\bullet}}{5\times 10^{8}\ {\rm M}_{\odot}} \right)^{-3/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-7/4}. \tag{21}\]
Because \(R_{\rm Bondi}\) increases faster than \(R_{\rm peak}\),
\[\frac{R_{\rm Bondi}}{R_{\rm peak}}\propto t, \tag{22}\]
as the most optimistic case, the entire gas cloud could be ultimately captured by the BH in a time \(\tau_{\rm capture}\) at which \(R_{\rm Bondi}\simeq R_{\rm peak}\),
\[\tau_{\rm capture}\simeq 40\ {\rm days}\left(\frac{\psi}{5}\right)\left( \frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-3}. \tag{23}\]
Then the maximum duration of the Eddington luminosity may be set by,
\[\tau_{\rm acc}\lesssim\frac{M_{\rm gas}\epsilon c^{2}}{L_{\rm Edd}}\simeq 9 \ {\rm years}\left(\frac{\epsilon}{0.1}\right)\left(\frac{M_{\rm gas}}{2\ {\rm M}_{\odot}}\right)\left(\frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{-1}, \tag{24}\]
where \(M_{\rm gas}\) is the mass of the gas cloud, i.e., total mass of the two collided stars. Here, we assumed that the entire gas would be accreted onto the BH. However, radiation pressure from super-Eddington accretion would be strong enough to generate outflow. For such a case, only a fraction of the gas cloud would end up accreting and \(\tau_{\rm acc}\) would be shorter than estimated above.
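The order-of-magnitude bookkeeping for this no-decelerating case can be summarized with a short Python sketch; it simply evaluates the scaling relations of Equations 16, 23, and 24, and the Eddington-luminosity coefficient is a standard value we assume here, not a result of our simulations.

```python
# Order-of-magnitude evaluation of Eqs. 16, 23 and 24 (non-decelerating case).
def L_bondi_erg_s(M7=1.0, v4=1.0, eps=0.1):
    """Eq. 16: Bondi luminosity; M7 = M_bh/1e7 Msun, v4 = v_rel/1e4 km/s."""
    return 3e47 * (eps / 0.1) * v4**3 / M7

def L_edd_erg_s(M7=1.0):
    """Eddington luminosity for M_bh = M7 x 1e7 Msun (assumed standard coefficient)."""
    return 1.26e38 * M7 * 1e7

def tau_capture_days(M7=1.0, v4=1.0, psi=5.0):
    """Eq. 23: time for R_Bondi to overtake R_peak (days)."""
    return 40.0 * (psi / 5.0) * M7 / v4**3

def tau_acc_years(M7=1.0, M_gas_Msun=2.0, eps=0.1):
    """Eq. 24: maximum duration of Eddington-limited accretion of the whole cloud."""
    return 9.0 * (eps / 0.1) * (M_gas_Msun / 2.0) / M7

print(L_bondi_erg_s() / L_edd_erg_s())   # ~2e2: strongly super-Eddington at 1e7 Msun
print(tau_capture_days(), tau_acc_years())
```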
#### 4.2.2 Case 2. decelerating expansion
Now we examine the observables from interactions between the decelerating, expanding cloud with \(v_{\rm peak}^{\rm r}\propto t^{-1/3}\) and the SMBH, using Equations 2-7. For this case, \(\tau_{\rm BH}\) has a different dependence on \(M_{\bullet}\) and \(v_{\rm rel}\),
\[\tau_{\rm BH}\simeq 3\ {\rm days}\left(\frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{3/2}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-3}\left(\frac{b/R_{\star}+5}{5}\right)^{6}. \tag{25}\]
We show in Figure 10 the range of \(\tau_{\rm BH}\) for three different collision velocities \(v_{\rm rel}\) as a function of \(M_{\bullet}\) assuming a non-decelerating expansion speed (thick diagonal bars, \(\psi=3-6\)) and a decelerating expansion speed (solid lines). The interaction onset time would be longer generally if the expansion of the cloud slows down. Depending on \(M_{\rm BH}\) and \(v_{\rm rel}\), the second burst could happen over a wide range of time. For example, if a collision with \(v_{\rm rel}\gtrsim 2500\) km/s occurs in the Galactic center (with \(M_{\bullet}\simeq 4\times 10^{6}\ {\rm M}_{\odot}\) GRAVITY Collaboration et al., 2019), the second accretion-driven burst would occur after the collision in less than a day to \(6-7\) months depending on the location of the collision from the BH. For very massive black holes (\(M_{\rm BH}>10^{8}\ {\rm M}_{\odot}\)), \(\tau_{\rm BH}\) can be more than tens of years.
The Bondi luminosity is still independent of \(t\) and has the same \(M_{\bullet}\)- and \(v_{\rm rel}\)-dependence as the case with the no-decelerating expansion, but it is roughly a factor of 3 greater at given \(M_{\rm BH}\) and \(v_{\rm rel}\),
\[L_{\rm Bondi}\simeq 10^{48}\ {\rm erg/s}\left(\frac{\epsilon}{0.1}\right)\left( \frac{M_{\bullet}}{10^{7}\ {\rm M}_{\odot}}\right)^{-1}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}} \right)^{3}, \tag{26}\]
which is further illustrated in Figure 11. While the expression for the
Figure 11: Bondi luminosity due to free-fall accretion of the collision product onto a black hole for different collision velocities \(v_{\rm rel}\), as a function of black hole mass. The solid line is for the decelerating peak expansion speed (_Case 2. decelerating expansion_, § 4.2.2) and dashed lines for the non-decelerating expansion speed (_Case 1. no-decelerating expansion_, § 4.2.1). The grey dashed diagonal line indicates the Eddington luminosity.
Figure 10: Time \(\tau_{\rm BH}\) to the accretion-driven burst since the peak collision-driven luminosity for different collision velocities \(v_{\rm rel}\), as a function of black hole mass. The lines illustrate _Case 2. decelerating expansion_ (§ 4.2.2), where the entire cloud expands homologously up to 0.5 days since collision (dashed), then the outer edge starts to decay like \(t^{-1/3}\) (solid) due to interactions with a background medium, using Equation 7. The less steep diagonal bars demarcate the range of \(\tau_{\rm BH}\) for the case where the gas cloud continuously expands homologously with the outer edge moving at \((3-6)\times v_{\rm rel}\) (_Case 1. no-decelerating expansion_, § 4.2.1), corresponding to the peak expansion speed upon collision in our simulations (see the _bottom-right_ panel of Figure 6).
blackbody temperature at the Bondi radius has the same dependence on \(M_{\bullet}\) and \(v_{\rm rel}\) as Equations 18 and 20, because of the different expression for \(\tau_{\rm BH}\), \(T_{\rm Bondi}(t=\tau_{\rm BH})\) is written differently,
\[T_{\rm Bondi}(t=\tau_{\rm BH})\simeq\] \[\left\{\begin{array}{ll}8\times 10^{4}K\left(\frac{M_{\bullet}}{10^{7}\,{\rm M}_{\odot}}\right)^{-3/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{2.1}\left(\frac{b/R_{\star}+5}{5}\right)^{-6},\\ &\hskip 142.26378pt\mbox{for $L=L_{\rm Edd}$},\\ 6\times 10^{3}K\left(\frac{\epsilon}{0.1}\right)^{1/4}\left(\frac{M_{\bullet}}{10^{7}\,{\rm M}_{\odot}}\right)^{-5/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{2.8}\left(\frac{b/R_{\star}+5}{5}\right)^{-6},\\ &\hskip 142.26378pt\mbox{for $L=L_{\rm Bondi}$}.\end{array}\right. \tag{27}\]
We compare \(T_{\rm Bondi}\) at the onset of the accretion-driven burst (so \(T_{\rm Bondi}\) at \(t=\tau_{\rm BH}\)) in Figure 12 between the non-decelerating expansion case (thick bars) and the decelerating expansion case (lines). For low-mass black holes, \(T_{\rm Bondi}\) is quite similar, e.g., \(10^{5}\) K for \(M_{\bullet}=10^{5}-10^{6}\) M\({}_{\odot}\). However, because of a steeper decline for the decelerating expansion case (\(T_{\rm Bondi}\propto M_{\bullet}^{-3/4}-M_{\bullet}^{-5/4}\), Equation 27) than for the no-decelerating expansion case (\(T_{\rm Bondi}\propto M_{\bullet}^{-1/4}-M_{\bullet}^{-3/4}\), Equations 19 and 21), \(T_{\rm Bondi}\) for the decelerating expansion case is generally lower for high-mass BHs: for \(M_{\bullet}=10^{9}\) M\({}_{\odot}\), \(T_{\rm Bondi}\simeq 10-10^{3}\) K for the decelerating expansion case whereas \(T_{\rm Bondi}\simeq 10^{3}-10^{4}\) K for the no-decelerating expansion case.
For the decelerating expansion case, the Bondi radius increases faster,
\[\frac{R_{\rm Bondi}}{R_{\rm peak}}\propto t^{4/3}, \tag{28}\]
which leads to a smaller \(\tau_{\rm capture}\),
\[\tau_{\rm capture}\simeq 11\ {\rm days}\left(\frac{M_{\bullet}}{10^{7}\,{\rm M}_{\odot}}\right)^{3/4}\left(\frac{v_{\rm rel}}{10^{4}{\rm km/s}}\right)^{-2.5}\left(\frac{b/R_{\star}+5}{5}\right)^{-3}. \tag{29}\]
The duration of the accretion process would be the same as Equation 24.
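For convenience, the corresponding timescales in the decelerating case can be evaluated with the same kind of sketch, now using the scalings of Equations 25 and 29; the snippet is again only an illustrative helper, and the example parameter values are assumptions rather than simulation outputs.

```python
# Evaluation of Eqs. 25 and 29 for the decelerating-expansion case.
def tau_bh_dec_days(M7=1.0, v4=1.0, b_over_R=0.04):
    """Eq. 25: onset time of the BH-cloud interaction (days)."""
    return 3.0 * M7**1.5 / v4**3 * ((b_over_R + 5.0) / 5.0)**6

def tau_capture_dec_days(M7=1.0, v4=1.0, b_over_R=0.04):
    """Eq. 29: time at which the Bondi radius engulfs the whole cloud (days)."""
    return 11.0 * M7**0.75 / v4**2.5 * ((b_over_R + 5.0) / 5.0)**(-3)

# Fiducial case of Eqs. 25 and 29, and a Galactic-centre-like example
# (M_bh ~ 4e6 Msun, v_rel ~ 2500 km/s).
print(tau_bh_dec_days(), tau_capture_dec_days())
print(tau_bh_dec_days(M7=0.4, v4=0.25))
```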
#### 4.2.3 Astrophysical implication for black holes
The possibility of the accretion of at least some fraction of the expanding cloud onto the SMBH in proximity can have significant implications for the growth of BHs in the cosmic landscape. While several mechanisms for massive BH formation have been proposed, the precise mechanism for growing BH seeds at extremely high redshifts remains uncertain (see for reviews Colpi & Dotti, 2011; Inayoshi et al., 2020). The proposed mechanisms include 1) rapid growth of the remnants of the Population III stars via super-Eddington accretion (e.g., Volonteri & Rees, 2005; Haiman & Loeb, 2001; Ryu et al., 2016; Lupi et al., 2016; Sassano et al., 2023), 2) the direct collapse of supermassive self-gravitating objects (e.g., Omukai & Nishi, 1998; Yoshida et al., 2008; Zwick et al., 2023), and 3) growth of BHs in a runaway process (e.g., Devecchi et al., 2012; Stone et al., 2017; Tagawa et al., 2020; Rizzuto et al., 2023). In principle, as long as a BH is more massive than the colliding stars, the velocity of stars around the BH can be large enough that stellar collisions can be completely disruptive. Hence, the accretion of gas produced in stellar collisions onto a nearby BH can provide another avenue for the growth of stellar-mass BHs to massive BHs, in particular seed BHs at high redshift.
However, disruptive collisions are not the only growth mechanism for BHs in stellar-dense environments. We show in Figure 13 the regions around BHs in which several events possibly contributing to their growth, i.e., disruptive collisions, tidal disruption events,
Figure 12: Temperature from the Eddington-limited Bondi luminosity at the onset of the accretion of the collision product onto a black hole for different collision velocities \(v_{\rm rel}\), as a function of black hole mass. As before in Figure 10, the lines show the case where the cloud undergoes a non-decelerating expansion up to 0.5 days, followed by a deceleration of the outer edge like \(t^{-1/3}\) due to interactions with a background medium, using Equations 2-7. The diagonal bars indicate the range of \(T_{\rm Bondi}\) when the entire gas cloud expands without being decelerated with the peak expansion speed \((3-6)\,v_{\rm rel}\). The power-laws are analytically derived in Equations 18 (\(\propto M_{\bullet}^{-1/4}\)) and 27 (\(\propto M_{\bullet}^{-3/4}\) and \(\propto M_{\bullet}^{-5/4}\)).
Figure 13: Parameter space for disruptive events of a giant with \(M_{\star}=1\) M\({}_{\odot}\) and \(R_{\star}=10\) R\({}_{\odot}\) in terms of the distance from the BH for varying BH masses. The region dubbed “Black hole” is defined by the Schwarzschild radius, \(r_{\rm Sch}=2GM_{\bullet}/c^{2}\). If the pericenter distance of a star is smaller than a few times \(r_{\rm Sch}\), the star would be directly captured by the black hole. If the separation is smaller than the stellar radius, they would collide (“black hole-star collision”). When the pericenter distance is smaller than the tidal radius, \(r\leq r_{\rm t}=(M_{\bullet}/M_{\star})^{1/3}R_{\star}\), stars are tidally destroyed by the BH (“tidal disruption”). Finally, disruptive collisions happen between the distance at which the Keplerian velocity is greater than the stellar escape velocity, \(r\leq r_{\rm collision}=(M_{\bullet}/M_{\star})R_{\star}\), and the tidal radius \(r\leq r_{\rm t}\). The white diagonal lines in the region for disruptive collisions correspond to the collision velocity for given BH mass and radius.
BH-star collisions, and direct captures by BHs, can occur. When the distance from the BH is less than a few times greater than the Schwarzschild radius \(r_{\rm Sch}=2G\,M_{\bullet}/c^{2}\) (dubbed the "direct capture" radius), the star would directly fall into the BH (e.g., \(r<2r_{\rm Sch}\) for parabolic orbits). If the closest approach distance between the BH and a star is smaller than the stellar radius, \(r\lesssim~{}R_{\star}\), they collide, during which the BH would gravitationally capture a fraction of the star and accrete it. When a star orbits at a distance greater than both the stellar radius and the direct capture radius, and smaller than the so-called tidal radius, \(r_{\rm t}=(M_{\bullet}/M_{\star})^{1/3}R_{\star}\), the BH's very strong tidal forces disrupt the star, creating debris, some of which would end up accreting onto the BH. This event is called a tidal disruption event (Hills, 1988; Rees, 1988). Finally, the region for disruptive collisions between giants may be characterized by two distances: the distance within which the Keplerian velocity around the BH exceeds the stellar escape speed, \(r_{\rm collision}=(M_{\bullet}/M_{\star})R_{\star}\), and the tidal radius.
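To give a feel for these regimes, a brief Python sketch comparing the characteristic radii of Figure 13 for a giant with \(M_{\star}=1\) M\({}_{\odot}\) and \(R_{\star}=10\) R\({}_{\odot}\) is shown below; the constants are standard values and the example BH masses are arbitrary choices for illustration.

```python
# Characteristic radii entering Figure 13 (cgs units; example BH masses are arbitrary).
G, C  = 6.674e-8, 2.998e10     # gravitational constant, speed of light
M_SUN = 1.989e33               # g
R_SUN = 6.957e10               # cm

def radii_cm(M_bh_Msun, M_star_Msun=1.0, R_star_Rsun=10.0):
    r_sch  = 2.0 * G * M_bh_Msun * M_SUN / C**2                           # Schwarzschild radius
    r_tid  = (M_bh_Msun / M_star_Msun)**(1.0 / 3.0) * R_star_Rsun * R_SUN  # tidal radius
    r_coll = (M_bh_Msun / M_star_Msun) * R_star_Rsun * R_SUN              # v_Kep > v_esc,star inside
    return r_sch, r_tid, r_coll

for M_bh in (1e5, 1e7, 1e9):
    r_sch, r_tid, r_coll = radii_cm(M_bh)
    print(f"M_bh = {M_bh:.0e} Msun: r_Sch = {r_sch:.1e}, r_t = {r_tid:.1e}, "
          f"r_collision = {r_coll:.1e} cm")
```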
As shown in Figure 13, all four star-destroying events can contribute to the growth of stellar-mass and intermediate-mass BHs. However, for SMBHs with \(r_{\rm Sch}>~{}R_{\star}\), only three events, namely, disruptive collisions, tidal disruptions, and direct captures, can feed the BHs. For very massive BHs (e.g., \(M_{\bullet}>10^{9}\) M\({}_{\odot}\)), disruptive collisions would be the dominant and likely only observable transient among those considered here that lead to the mass growth of the BHs (Amaro Seoane, 2023).
This has an interesting implication for the detection of dormant BHs. TDEs have been considered a unique signpost for the existence of dormant SMBHs. However, because there is a maximum BH mass capable of disrupting stars, i.e., the \(M_{\bullet}\) at which \(r_{\rm t}\) equals the direct capture radius, TDEs cannot be used to detect very massive quiescent black holes. However, disruptive stellar collisions can occur near BHs at all mass scales, which would make these events _a promising tool to probe the existence of very massive dormant BHs_ which cannot be probed by other transients. In particular, if the luminosity due to the interaction of the collision product with the BH is Eddington-limited, an inference of the BH mass would potentially be possible from the observed radiated light curve.
Which type of event dominates in different mass ranges would depend on the stellar density, the accretion efficiency, and the occurrence rates, which is beyond the scope of our paper. We will examine this aspect in more detail in our future work.
### Particle acceleration
In this work we have conducted numerical hydrodynamical simulations that confirm, following a stellar collision event, the formation of strong shocks. These shocks arise due to the high velocity of the outflow and its impact on the surrounding ISM in the galactic nucleus environment. These shock waves subsequently compress and heat the surrounding ISM gas.
The shocks formed in these stellar collisions provide an environment highly conducive to efficient particle acceleration. As particles interact with the turbulent magnetic fields expected close to the shock front, they can gain a significant fraction of the free energy available from the differential flow speeds (in the shock's rest frame, the upstream flow moves towards the shock with velocity \(V\) and the downstream moves away from the shock at velocity \(V/4\)). This process of diffusive particle acceleration at shocks, an example of first-order Fermi acceleration, is expected to result in the generation of a power-law spectrum of non-thermal particles up to very high energies (Blandford & Ostriker, 1978; Bell, 1978).
A fraction of the energy in the accelerated particle population produced by stellar collisions will subsequently be radiated via non-thermal emission through various energy loss processes (see Matthews et al., 2020; Orlando et al., 2021, for review in the context of active galactic nuclei jets and supernovae, respectively). For instance, the accelerated electrons will produce synchrotron radiation as they spiral around the magnetic fields also generated during the collision. This emission is expected to be detectable in the radio, and potentially the X-ray, bands. In addition, the interaction between accelerated protons and the surrounding gas can generate gamma-ray emission through processes like inelastic proton-proton collisions.
The non-thermal radiation emitted by the accelerated particles produced in stellar collisions offers valuable diagnostics into the physical processes at play during violent stellar collision events. By analyzing the observed non-thermal radiation, we can gain a clearer understanding of the shock-front environment. Ultimately, these insights will elucidate the dynamics of the collision itself. Our numerical hydrodynamics simulations, coupled with theoretical estimates for the production of non-thermal particles not included in our numerical description, can provide insights into particle acceleration in stellar collisions. This will be addressed in a separate work.
## 5 Conclusion and summary
In this work, we investigate the hydrodynamics of black hole-driven disruptive collisions (BDCs) between giants in galactic nuclei and their observational signatures using two state-of-the-art codes, the 3D moving-mesh hydrodynamics code AREPO and the 1D stellar evolution code MESA. The initial conditions of our simulations involved two identical 1 M\({}_{\odot}\) giants with different radii, initial relative speeds, and impact parameters. This work complements the analytical calculations presented by Amaro Seoane (2023), and the two approaches are generally consistent with each other. We improve the estimates of the events' observables by accurately taking into account the realistic stellar internal structure and non-linear hydrodynamics effects.
When two stars collide with exceedingly large kinetic energy, very strong shocks are created along the contact surface. The two stars are fully destroyed and merge into a homologously, quasi-spherically, and supersonically expanding gas cloud. The maximum expansion speed of the cloud is larger than the initial relative velocity of the stars by a factor of \(3-6\). The expansion speed at a given mass coordinate stays the same, but the outer edge of the cloud slows down because of the interaction with the background medium. As it expands, the overall level of its density and temperature drops following power-laws \(\propto t^{-3}\) and \(\propto t^{-1}\), respectively, becoming optically thin within a few hundred days. At any given time of evolution up to 30 days, the density and temperature of the inner regions of the cloud remain relatively constant, rapidly decaying towards the outer edge, following power-laws: \(\rho(r)\propto r^{-8}-r^{-12}\) and \(T(r)\propto r^{-1}-r^{-2}\). These quantities exhibit weak dependencies on the stellar radius within \(10-100\) R\({}_{\odot}\) and the impact parameter within \(b\lesssim 0.4\) \(R_{\star}\). But the dependence on the collision velocity is relatively strong. We provide fitting formulae for the average cloud density, temperature, maximum expansion speed, and optical depth (Equations 2-7), which would be useful for analytic estimates for these BDCs.
One of the key findings of our study is the numerical estimate of the amount of radiation energy converted from the initial kinetic energy, which plays a crucial role in determining the observable properties of the collisions. The overall trend of the conversion efficiency, defined as the ratio of the converted radiation energy to the initial kinetic energy, is such that it peaks at \(\gtrsim 0.1\) at collision, decays to \(10^{-4}-10^{-2}\) within 10 days, and then gradually increases.
The efficiency reaches \(10^{-2}-10^{-1}\) within one month of the collision. But its magnitude depends on various factors, including the stellar radius, impact parameter, and collision velocity. More specifically, a collision between larger stars colliding at a higher speed with a smaller impact parameter tends to result in greater conversion efficiency.
We estimate the luminosity, the blackbody radius, and the blackbody temperature, using the converted radiation energy and local cooling time within the gas cloud. The peak luminosity can reach values exceeding \(10^{42}\) erg/s and exhibits a similar dependence to that of the conversion efficiency. Over time, the luminosity decays following a power-law of \(t^{-0.8}\) at early times and \(t^{-0.4}\) after 10 days since collision. The blackbody radius increases almost linearly with time (\(\propto t^{0.8}\)), while the temperature decreases, following a power-law of \(t^{-0.5}-t^{-0.6}\). The collision events would initially produce bursts of extreme ultraviolet (\(\approx 10\)eV) gradually shifting to optical (\(\simeq 0.1\)eV), with temporal evolution spanning from days to weeks. These events can be observed by ongoing (e.g., ZTF Bellm et al. 20194, ASAS-SN Kochanek et al. 20175) and future (e.g., LSST Ivezic et al. 20196 and ULTRASAT Shvartzvald et al. 20237) surveys. More detailed radiation transport calculations will be carried out in our follow-up project, with which the detection rate for each survey will be estimated.
Footnote 4: [https://www.ztf.caltech.edu](https://www.ztf.caltech.edu)
Footnote 5: [https://www.astronomy.ohio-state.edu/asassn](https://www.astronomy.ohio-state.edu/asassn)
Footnote 6: [https://www.lsst.org](https://www.lsst.org)
Footnote 7: [https://www.weizmann.ac.il/ultrasat](https://www.weizmann.ac.il/ultrasat)
In addition to the burst resulting from the stellar collision itself, a subsequent burst occurs due to the accretion of the gas cloud onto the supermassive black hole in the galactic center, within \(5\,(M_{\bullet}/10^{7}\ {\rm M}_{\odot})\) days of the collision for \(v_{\rm rel}=10^{4}\) km/s. Assuming Bondi accretion, the accretion luminosity can easily exceed the Eddington limit as well as the luminosity from the stellar collision. Because the Bondi radius expands faster than the gas cloud, the entire cloud would be gravitationally captured in the black hole's potential in \(11\,(M_{\bullet}/10^{7}\ {\rm M}_{\odot})^{3/4}\) days and subsequently accrete onto the black hole. It would take \(\lesssim 9\,(M_{\bullet}/10^{7}\ {\rm M}_{\odot})^{-1}\) years if the entire cloud were accreted. Therefore, the overall luminosity curve would include a peak from the collision event, followed by a rise to the Eddington luminosity. This heightened luminosity can be sustained for up to 10 years.
Although the estimates of the time scales and luminosity due to gas-black hole interactions are still at the order-of-magnitude level, this aspect has very important implications. The possibility of gas accretion onto a nearby black hole of any mass shortly after the collision suggests that such collisions can provide another mechanism for black hole growth. Tidal disruption events have been proposed as a tool to detect dormant black holes, mostly up to \(10^{8}\ {\rm M}_{\odot}\). However, because disruptive stellar collisions can occur near very massive dormant ones (\(>10^{9}\ {\rm M}_{\odot}\)), such collisions can be a potentially promising tool to probe the existence of very massive dormant black holes.
Finally we demonstrate the conversion of kinetic energy into radiation energy, providing insights into the efficiency of particle acceleration in these collisions. The resulting bursts of ultraviolet and optical emission indicate the generation of high-energy particles, highlighting the importance of particle acceleration processes in understanding the observational signatures of such events.
While this study, to our knowledge, presents the first detailed hydrodynamics calculations of BDCs between giants, there are a few caveats in our modelling that will be improved in our future work. First, the assumption of local thermodynamic equilibrium is only valid for optically thick gas. This means the evolution of the collision product at early times is accurate, but as the gas cloud becomes optically thin, our treatment of radiation pressure becomes inaccurate. As remarked in § 3.5, this would affect the shape of the lightcurves at late times. We will perform detailed non-equilibrium radiation transport calculations for the late-time evolution in our follow-up project using our hydrodynamics calculations at early times when our assumption of local thermodynamic equilibrium is valid. This will significantly improve the light curve modelling. Second, there are several physical effects that we have not yet considered, such as magnetic fields, recombination, and the existence of non-thermal particles. Using the machinery that we built for this work, we will explore their impacts in a series of studies dedicated to investigating the impact of each physical process.
BDCs will offer insights into many astrophysical aspects that cannot be provided by other transients, such as the stellar dynamics and potential particle acceleration in galactic nuclei and globular clusters, black hole growth, and the detection of dormant black holes.
## Acknowledgements
TR is grateful to Luc Dessart and Re'em Sari for constructive comments on the manuscript, Hans-Thomas Janka for fruitful discussions on the similarities and dissimilarities of these events with core-collapse supernovae, and Ruggero Valli for providing a MESA input file for creating the giants used for the simulations. This research project was conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b166ea10. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683. In addition, some of the simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. PAS acknowledges the funds from the "European Union NextGenerationEU/PRTR", Programa de Planes Complementarios I+D+I (ref. ASFAF/2022/014).
## Data availability
Any data used in this analysis are available on reasonable request from the first author.
|
2306.08322 | Cryptography approach for Secure Outsourced Data Storage in Cloud
Environment | A large amount of data and applications are migrated by researchers,
stakeholders, academia, and business organizations to the cloud environment due
to its large variety of services, which involve the least maintenance cost,
maximum flexibility, and on-demand service for storage, computation, and data
distribution intentions. Despite the various characteristics the cloud
environment supports, it also faces many challenges. However, data users may
not completely trust a cloud environment that is engaged by a third party.
Every cloud user always has a prime concern, i.e., security. Numerous methods
have been designed to solve the issue of data security during data storage,
calculation, and sharing across stakeholders and users. Nevertheless, there is
a lack of existing methods that tackle the issue of the security of data when
it is stored in a cloud environment. This article presents a precise security
method that has handled the security of data while it is being shared and
stored in the cloud. These methods have been utilized to lessen security
assaults and prevent unauthorized parties from accessing the actual data. The
article is concluded with some limitations and recommendations for the future
in terms of secure data retention and distribution. | Rishabh Gupta, Deepika Saxena, Ashutosh Kumar Singh | 2023-06-14T07:43:24Z | http://arxiv.org/abs/2306.08322v1 | # Cryptography approach for Secure Outsourced Data Storage in Cloud Environment
###### Abstract
A large amount of data and applications are migrated by researchers, stakeholders, academia, and business organizations to the cloud environment due to its large variety of services, which involve the least maintenance cost, maximum flexibility, and on-demand service for storage, computation, and data distribution intentions. Despite the various characteristics the cloud environment supports, it also faces many challenges. However, data users may not completely trust a cloud environment that is engaged by a third party. Every cloud user always has a prime concern, i.e., security. Numerous methods have been designed to solve the issue of data security during data storage, calculation, and sharing across stakeholders and users. Nevertheless, there is a lack of existing methods that tackle the issue of the security of data when it is stored in a cloud environment. This article presents a precise security method that has handled the security of data while it is being shared and stored in the cloud. These methods have been utilized to lessen security assaults and prevent unauthorized parties from accessing the actual data. The article is concluded with some limitations and recommendations for the future in terms of secure data retention and distribution.
Cloud Computing Data Security Cloud Storage Privacy Preservation Cryptography
## 1 Introduction
With its fast data processing and storage capacity, cloud computing is essential for all applications that involve high processing costs, such as machine learning for classification [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Cloud computing offers open, on-demand, scalable, and easy-to-use computing services [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. The cloud server requires a huge quantity of data for computation and sharing among users [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]. These data are obtained from different data owners. Every data owner has its own data containing sensitive information such as personal images, social media data, and medical records [55, 56, 57, 58, 59, 60, 61, 62]. Such sensitive data is transmitted to the cloud service provider for storage and processing purposes [63, 64, 65, 66, 67, 68]. The data owners lose control over their data and do not know who is accessing the outsourced data [69, 70]. Moreover, once the data is outsourced, other organizations or any adversary can access it, since the cloud service provider, a third party, holds all the data on its servers [71, 72]. The data owner may not trust the cloud server [73, 74]. Cloud computing suffers from various security problems, which are widely discussed topics [75]. In fact, cloud computing offers many tools but also has critical security concerns. Therefore, to preserve data privacy from others, owners first convert their data before passing
data to another entity. The data can be converted using existing approaches, such as differential privacy [76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87]. To protect the actual information of the owners, the existing approach produces noise using a probability-based distribution function. The generated noise is added to the actual information to obtain noise-added data, which is stored on the cloud server and shared across all the entities. However, this approach does not trade off well between accuracy and privacy. To resolve the above challenge, a cryptography approach for secure outsourced data storage in the cloud environment is proposed for secure computation and distribution among entities. The proposed scheme adopts a cryptography approach to encrypt the owners' data. Data is secured by encryption and decryption processes, so that an adversary is unable to understand the actual data of each owner. In this proposed model, it is considered that the data owners do not communicate with each other. The cloud server is honest-but-curious (non-colluding), following the protocol strictly. The data in encrypted form is forwarded to the cloud server, which stores it. The cloud server performs computation on the received data and shares it among users whenever they send a request for the owners' data. Users decrypt the data obtained from the cloud service provider and acquire the actual information. Therefore, the proposed model stores and shares the data in such a way that it doesn't reveal the data owners' data.
## 2 Related Work
To assess the suitability of fog computing, a three-tier architecture is presented in [88], which transmits the data from terminal nodes to the cloud through the fog nodes. On the basis of this model, the authors described the mathematical disparity between the traditional cloud computing paradigm and the fog computing paradigm for different renewable and non-renewable energy resources and related costs. A case study was conducted, and the results revealed that fog computing outperforms cloud computing in the context of the Internet of Things (IoT), with a large number of latency-sensitive applications. The proposed architecture reduced latency, energy consumption, and carbon dioxide emissions but suffered from limited data sharing.
In the current storage schema, the data of the users are stored entirely on cloud servers. By doing so, users lose their right to data protection and face the possibility of privacy leakage. To preserve the privacy of data, a fog computing-driven three-layer storage framework was devised by Wang et al. [89]. The suggested system has the potential to make the most of cloud storage while still preserving data privacy. A distribution method based on risk mitigation is used to spread the components among various cloud storage platforms. In addition, the Hash-Solomon Code algorithm was also designed to divide data into different sections. Each component can be put on the cloud, fog, and local machines and reconstructed from plaintext. The privacy protection of sensitive data is tackled by fog computing-based data collection methods with exceptional performance. However, user revocation was not considered.
The Server-aided Network Topology (SNT) system and the Fully-connected Network Topology (FNT) system, both of which are based on connections to SNT and FNT servers, were introduced by Phong and Phuong [90] for stochastic gradient descent (SGD) protection. The SGD or its derivatives can be used by several machine learning trainers in these systems over the combined dataset without sharing the local dataset of each trainer. The clients communicate the locally derived weights for the model to the server, which then aggregates the incoming weights and transmits them back to the clients. Applying privacy-preserving weight transfer to network training in the encrypted domain can offer robustness against model extraction attempts. Such an approach's benefit is that it withstands updated loss and doesn't need frequent synchronization. The experiments were conducted using various datasets, and results showed that their system outperformed the existing system in terms of learning accuracy. These systems are also effective in terms of calculation & communication. The developed systems, which utilized weight parameters rather than gradient parameters, had accuracy close to SGD.
A privacy-preserving price e-negotiation (3PEN) protocol is introduced in [91] that protects the prices of the seller and buyer. A secure multi-party computation (SMC) technique was employed for the input data encryption. An oracle and a qubit comparator were applied to acquire the final state depicting the two parties' price results. It presents the results of the two parties' price comparisons, and a count is made of the number of products that satisfy the trading requirements (i.e., the number of times the buyer's offer exceeds the seller's asking price by a predetermined amount). This prevents any confidential information about Alice's prices from being discovered. The suggested protocol offers higher security than the traditional channel and significantly lowers the likelihood of an external eavesdropper attack, at the cost of increased computation.
Due to resource limitations, classifier owners employ outsourced classification services to move their classifiers to remote servers, where users request classification services. To protect the classifier, Chai et al. [92] devised an outsourced classification protection scheme with low computational and communication overhead. The authors utilized a substitutive OU cryptosystem to mitigate the bandwidth consumption. The proposed scheme resists the substitution-then-comparison (STC) attack, ensuring that users can accurately receive the classification outcome without
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Model/Scheme/Framework & Workflow & Implementation & Outcomes & Drawbacks \& Future Scope \\ \hline \hline A fog computing-based network model for data service [88] & \(\bullet\) The network model of fog computing was constructed to exchange traffic patterns & \(\bullet\) The experiments were performed with 2 to 10 terminal nodes & \(\bullet\) Reduced service latency by 50\% due to fog computing & \(\bullet\) This work can be extended by considering heterogeneous devices \\ \hline \hline A cloud storage scheme for data privacy [89] & \(\bullet\) The data was encoded using a hash transformation and divided into small parts & \(\bullet\) The experiments were performed based on the ‘one more block’ principle & \(\bullet\) It reduced the storage pressure on the lower servers & \(\bullet\) A fraction of data gets exposed to each outsourcing cloud server \\ \hline \hline A secure network-weight-sharing system based on a multilayered neural network [90] & \(\bullet\) The security of the input data was ensured via symmetric encryption & \(\bullet\) Breast Cancer, Skin/non-Skin, and MNIST datasets were adopted to evaluate this framework & \(\bullet\) Maintains the accuracy of deep learning at its highest level & \(\bullet\) The privacy of the output is not considered \\ \hline \hline A secure multi-party computation-based e-negotiation protocol for price protection & \(\bullet\) The two parties’ prices were compared using a qubit comparator & \(\bullet\) Not Available & \(\bullet\) The proposed model is more secure according to the privacy analysis & \(\bullet\) Only one-to-one communication (between seller and buyer) \\ \hline \hline An OU cryptosystem-based privacy protocol for data classification [92] & \(\bullet\) The data was encrypted using the OU encryption algorithm & \(\bullet\) The MIRACL library was used for simulations & \(\bullet\) Increased efficiency due to the OU cryptosystem & \(\bullet\) A large number of calculations makes the system complex \\ \hline \end{tabular}
\end{table}
Table 1: A capsulization of cryptography-based models
disclosing the classifier's privacy, though with limited data sharing. Table 1 presents a capsule summary of the relevant cryptography-based models with their key details.
## 3 System Model
The system model consists of three entities, namely, Data Owners (\(DO\)), Users (\(U\)), and the Cloud Service Provider (\(CSP\)).
Fig. 1 presents the building block of the cryptography technique, which consists of symmetric and asymmetric approaches. In symmetric approach, each data owner \(DO_{1}\), \(DO_{2}\), \(\ldots\), \(DO_{n}\) and user \(U_{1}\), \(U_{2}\), \(\ldots\), \(U_{m}\) has secret keys \(S_{\mathcal{K}1}\), \(S_{\mathcal{K}2}\), \(\ldots\), \(S_{\mathcal{K}n}\). \(DO_{1}\), \(D_{2}\), \(\ldots\), \(DO_{n}\) encrypts data \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\) with their secret keys \(S_{\mathcal{K}1}\), \(S_{\mathcal{K}2}\), \(\ldots\), \(S_{\mathcal{K}n}\), respectively, and obtained encrypted data \(D_{1}^{E}\), \(D_{2}^{E}\), \(\ldots\), \(D_{n}^{E}\). The produced data are passed to the users \(U_{1}\), \(U_{2}\), \(\ldots\)\(U_{m}\) through \(CSP\) for utilization purposes. \(U_{1}\), \(U_{2}\), \(\ldots\), \(U_{m}\) decrypt the obtained encrypted data \(D_{1}^{E}\), \(D_{2}^{E}\), \(\ldots\), \(D_{n}^{E}\) with their secret keys \(S_{\mathcal{K}1}\), \(S_{\mathcal{K}2}\), \(\ldots\), \(S_{\mathcal{K}n}\) and acquire the plain documents \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\). Similarly, in asymmetric approach, each data owner \(DO_{1}\), \(DO_{2}\), \(\ldots\), \(DO_{n}\) has data \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\), and public keys \(P_{\mathcal{B}1}\), \(P_{\mathcal{B}2}\), \(\ldots\), \(P_{\mathcal{B}n}\). \(DO_{1}\), \(DO_{2}\), \(\ldots\), \(DO_{n}\) encrypts data \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\) with their \(P_{\mathcal{B}1}\), \(P_{\mathcal{B}2}\), \(\ldots\), \(P_{\mathcal{B}n}\), respectively, and acquired encrypted data \(D_{1}^{E}\), \(D_{2}^{E}\), \(\ldots\), \(D_{n}^{E}\). The acquired data are transferred to the users \(U_{1}\), \(U_{2}\), \(\ldots\), \(U_{m}\) through \(CSP\) for various purposes. \(U_{1}\), \(U_{2}\), \(\ldots\), \(U_{m}\) decrypt the acquired encrypted data \(D_{1}^{E}\), \(D_{2}^{E}\), \(\ldots\), \(D_{n}^{E}\) with their private keys \(P_{\mathcal{V}1}\), \(P_{\mathcal{V}2}\), \(\ldots\), \(P_{\mathcal{V}m}\) and obtain the plain informations \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\).
## 4 Data Encryption and Decryption
Let \(D_{1}\), \(D_{2}\), \(\ldots\), \(D_{n}\in\mathbb{D}\) be the data protected by the encryption and decryption processes of the symmetric and asymmetric cryptography techniques using sets of public keys (\(P_{\mathcal{B}}\)), secret keys (\(S_{\mathcal{K}}\)), and private keys (\(P_{\mathcal{V}}\)). The symmetric cryptography technique maps \(\Omega_{E}:D_{i}\times S_{\mathcal{K}}\to D_{i}^{E}\) and \(\Lambda_{D}:D_{i}^{E}\times S_{\mathcal{K}}\to D_{i}\), such that \(\Lambda_{D}(\Omega_{E}(\mathcal{D}_{i},\mathcal{S}_{k}),\mathcal{S}_{k})=\mathcal{D}_{i}\), where \(\Omega_{E}\) and \(\Lambda_{D}\) are the encryption and decryption operations. Likewise, the asymmetric cryptography technique maps \(\Omega_{E}:D_{i}\times P_{\mathcal{B}}\to D_{i}^{E}\) and \(\Lambda_{D}:D_{i}^{E}\times P_{\mathcal{V}}\to D_{i}\), such that \(\Lambda_{D}(\Omega_{E}(\mathcal{D}_{i},\mathcal{P}_{b}),\mathcal{P}_{v})=\mathcal{D}_{i}\), \(\forall\,\mathcal{D}_{i}\in D_{i}\), \(\mathcal{S}_{k}\in S_{\mathcal{K}}\), \(\mathcal{P}_{b}\in P_{\mathcal{B}}\), \(\mathcal{P}_{v}\in P_{\mathcal{V}}\), where \(\mathcal{D}_{i}^{E}\in D_{i}^{E}\).
The symmetric cryptography technique (\(\Omega_{E}\), \(\Lambda_{D}\), \(D_{i}\), \(D_{i}^{E}\), \(S_{\mathcal{K}}\)) comprises three operations: Key Generation, Encryption, and Decryption, which are defined as:
1. **Key Generation** (\(K_{G}^{SY}\) (\(\mathcal{C}\mathcal{G}\))): This function generates the secret key (\(\mathcal{S}_{k}\)) using Eq. (1), which is utilized to encrypt and decrypt the data. \[\mathcal{S}_{k}=K_{G}^{SY}(\mathcal{C}\mathcal{G})\forall\mathcal{S}_{k}\in S _{\mathcal{K}}\] (1)
Figure 1: Schematic representation of cryptography-driven model
2. **Encryption** (\(\Omega_{E}\)): This function (\(\Omega_{E}:\mathcal{D}_{i}\times\mathcal{S}_{k}\rightarrow\mathcal{D}_{i}^{E}\)) performs the encryption task on the actual data \(\mathcal{D}_{i}\) using the secret key (\(\mathcal{S}_{k}\)) and procures the encrypted data \(\mathcal{D}_{i}^{E}\), as given in Eq. (2). \[\mathcal{D}_{i}^{E}=\Omega_{E}(\mathcal{D}_{i},\mathcal{S}_{k})\forall\quad \mathcal{D}_{i}\in D_{i}\land\mathcal{S}_{k}\in S_{\mathcal{K}}\land\mathcal{D }_{i}^{E}\in D_{i}^{E}\] (2)
3. **Decryption** (\(\Lambda_{D}\)): This function (\(\Lambda_{D}:\mathcal{D}_{i}^{E}\times\mathcal{S}_{k}\rightarrow\mathcal{D}_{i}\)) takes the encrypted data \(\mathcal{D}_{i}^{E}\) and \(\mathcal{S}_{k}\) as input and provides the actual data \(\mathcal{D}_{i}\) as an output using Eq. (3). \[\mathcal{D}_{i}=\Lambda_{D}(\mathcal{D}_{i}^{E},\mathcal{S}_{k})\forall \mathcal{D}_{i}^{E}\in D_{i}^{E}\land\mathcal{S}_{k}\in S_{\mathcal{K}}\land \mathcal{D}_{i}\in D_{i}\] (3)
The asymmetric cryptography technique (\(\Omega_{E}\), \(\Lambda_{D}\), \(D_{i}\), \(D_{i}^{E}\), \(P_{B}\), \(P_{V}\)) contains three operations that are described as:
1. **Key Generation** (\(K_{G}^{AS}\) (\(\mathcal{C}\mathcal{G}\))): This operation generates the keys \(\mathcal{P}_{b}\), and \(\mathcal{P}_{v}\) for encryption and decryption, respectively, using Eq. (4). \[\mathcal{P}_{b},\mathcal{P}_{V}=K_{G}^{AS}(\mathcal{C}\mathcal{G})\forall \mathcal{P}_{b}\in P_{\mathcal{B}}\land\mathcal{P}_{v}\in P_{\mathcal{V}}\] (4)
2. **Encryption** (\(\Omega_{E}\)): This function (\(\Omega_{E}:\mathcal{D}_{i}\times\mathcal{P}_{b}\rightarrow\mathcal{D}_{i}^{E}\)) encrypt the actual information \(\mathcal{D}_{i}\) with \(\mathcal{P}_{b}\) keys and provides the encrypted data \(\mathcal{D}_{i}^{E}\) by applying Eq. (5). \[\mathcal{D}_{i}^{E}=\Omega_{E}(\mathcal{D}_{i},\mathcal{P}_{b})\forall\quad \mathcal{D}_{i}\in D_{i}\land\mathcal{P}_{b}\in P_{\mathcal{B}}\land\mathcal{D }_{i}^{E}\in D_{i}^{E}\] (5)
3. **Decryption** (\(\Lambda_{D}\)): This function (\(\Lambda_{D}:\mathcal{D}_{i}^{E}\times\mathcal{P}_{v}\rightarrow\mathcal{D}_{i}\)) takes the encrypted data \(\mathcal{D}_{i}^{E}\) and \(\mathcal{P}_{v}\) as input and provides the actual data \(\mathcal{D}_{i}\) as an output using Eq. (6). \[\mathcal{D}_{i}=\Lambda_{D}(\mathcal{D}_{i}^{E},\mathcal{P}_{v})\forall \mathcal{D}_{i}^{E}\in D_{i}^{E}\land\mathcal{P}_{v}\in P_{\mathcal{V}}\land \mathcal{D}_{i}\in D_{i}\] (6)
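As an illustration of the abstract operations \(K_{G}\), \(\Omega_{E}\), and \(\Lambda_{D}\) above, the following Python sketch instantiates one symmetric and one asymmetric round trip. It assumes the third-party `cryptography` package is available and is meant only as a schematic example, not as the implementation evaluated in this article.

```python
# Schematic round trips for the symmetric and asymmetric techniques described above.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

data = b"owner D_i: sensitive record"

# --- Symmetric technique: K_G^SY, Omega_E, Lambda_D with a shared secret key S_k ---
secret_key = Fernet.generate_key()          # Eq. (1): key generation
fernet = Fernet(secret_key)
encrypted = fernet.encrypt(data)            # Eq. (2): D_i^E = Omega_E(D_i, S_k)
assert fernet.decrypt(encrypted) == data    # Eq. (3): D_i  = Lambda_D(D_i^E, S_k)

# --- Asymmetric technique: K_G^AS, Omega_E, Lambda_D with a (P_b, P_v) key pair ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Eq. (4)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(data, oaep)             # Eq. (5): D_i^E = Omega_E(D_i, P_b)
assert private_key.decrypt(ciphertext, oaep) == data    # Eq. (6): D_i  = Lambda_D(D_i^E, P_v)

print("both round trips recovered the owner's data")
```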
## 5 Conclusion
Data security is an especially challenging issue in the cloud computing environment for storage and information sharing. This work focuses on the security of data that has been outsourced by various data owners. It provides a concise overview of the methods used for outsourced data security. The system model, data encryption, and decryption processes are discussed, followed by a comparison of various existing works. In light of the above evaluation and the emerging obstacles, a comprehensive data security method that can secure the data and effectively use it for machine learning is required. This work can be extended by designing a more efficient privacy-preserving mechanism to secure the data of various owners.
|
2304.06576 | Constraints on new physics around the MeV scale with cosmological
observations | We investigate the joint effect of the cosmological phase transitions,
thermal light dark matter, and the lepton asymmetry on the big bang
nucleosynthesis and cosmic microwave background. We find that all of them can
modify the predictions of the effective number of neutrino species and
primordial nucleosynthesis. In turn, we observe that: 1) the cosmological
observations can exclude slow and strong phase transitions with strength even
smaller than $\mathcal{O}(10^{-3}-10^{-2})$; 2) a much larger portion of dark
matter mass region is excluded when the phase transition temperature is closer
to 1 MeV; and 3) the magnitude of the non-vanishing neutrino lepton asymmetry
is limited to be around $\mathcal{O}(10^{-2}-10^{-1})$ depending on the phase
transition strength. These phase transitions can produce stochastic
gravitational wave background to be probed by pulsar timing array experiments. | Shihao Deng, Ligong Bian | 2023-04-13T14:37:59Z | http://arxiv.org/abs/2304.06576v3 | # Constraining low-scale dark phase transitions with cosmological observations
###### Abstract
We investigate the effects of the low-scale cosmological first-order phase transitions on the neutrino decoupling and constrain the phase transition parameters with the cosmological observations of big bang nucleosynthesis and cosmic microwave background. We consider the phase transitions that occur at the MeV scale whose stochastic gravitational wave background can be probed by pulsar timing array experiments. We find that the phase transition can modify the predictions of the effective number of neutrino species and the primordial nucleosynthesis. In turn, we observe that the cosmological observations can exclude slow and strong phase transitions around MeV scales.
## I Introduction
The cosmological first-order phase transitions (PTs) are generally predicted by many well-motivated new physics models [1; 2; 3; 4]. The first-order PTs are expected to produce stochastic gravitational wave background (SGWB) [5; 6], explain the source of primordial magnetic fields [7; 8] and the origin of the baryon asymmetry of the Universe [9]. The SGWB produced by the first-order PTs is one of the main scientific goals of many gravitational wave detectors, such as LIGO [10; 11], LISA [12], \(Taiji\)[13], NANOGrav [14], PPTA [15], and SKA [16]. Since both the electroweak PT and the QCD PT in the Standard Model of particle physics are _cross-overs_[17; 18], observing the SGWB produced by first-order PTs would help to probe the parameters of new physics beyond the Standard Model (BSM) [5; 6; 19; 20; 21].
The low-scale first-order PTs can occur in QCD when lepton asymmetry shows up [22; 23; 24; 25]1, and in thermal dark sectors [28]. PTs in the dark sector are of great interest since they have the chance to modify the dark matter predictions through changing particle masses [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], interactions [40], or the dark matter production dynamics in the early universe [41; 42; 43; 44]. MeV-scale dark matter is also highly connected with the neutrino decoupling process, and therefore the cosmic microwave background (CMB) and Big Bang nucleosynthesis (BBN) [45; 46; 47; 48; 49; 50]. Therefore, vacuum energy released from the first-order PTs in the dark sector might yield photon reheating and/or neutrino reheating, and change the effective number of neutrino species [51].
Footnote 1: For the constraints on the lepton asymmetry from BBN and CMB, we refer to Refs. [26; 27].
In this work, we consider the effects of MeV-scale first-order PTs on the CMB and BBN when an MeV-scale thermal dark sector is taken into account. We include the detailed dynamics of the PTs in the evolution of the time-temperature relation around the MeV scale in the early universe to study the neutrino decoupling and BBN processes. We also observe that the appearance of the first-order PTs would change the time-temperature relation in the early universe, and therefore affect the light-element abundances during the BBN process. We then place constraints on the PT parameters with the corresponding cosmological observations of the CMB and BBN.
## II Thermal dynamics with PTs
Before PTs, we assume that all particle species are in thermal equilibrium, and are described by a thermal equilibrium distribution function, characterized by a temperature \(T_{\rm i}\). First-order PTs proceed through true-vacuum bubble nucleation and percolation with the nucleation rate [52; 53]: \(\Gamma(t)=\Gamma_{0}e^{\beta t}\,\), where \(\beta\) characterizes the PT rate, i.e., the true-vacuum bubble nucleation rate, and the pre-factor can be estimated as \(\Gamma_{0}^{1/4}=\left(4\pi^{3}g_{\star}/45\right)^{1/2}\left(T_{\rm p}^{2}/m_{\rm Pl}\right)e^{-\beta/8H_{\star}}\) in the radiation-dominated universe, with \(T_{\rm p}\) being the PT temperature and \(H_{\star}\) being the Hubble parameter at the PT temperature; the Planck mass is \(m_{\rm Pl}=1.22\times 10^{19}\,\)GeV [54]. At the PT time, the PT's inverse duration is \(\beta/H_{\star}\equiv\beta/H(t_{\rm p})\) and the PT's strength is \(\alpha\equiv\Delta V/\rho_{\rm r}(t_{\rm p})\), where \(\Delta V\) denotes the energy density difference between the false and true vacua. We consider the PT to occur when the averaged probability of the false vacuum reaches \(F(t_{\rm p})=0.7\). The \(F(t)\) can be calculated through [55]: \(F(t)=\exp\left[-(4\pi/3)\int_{t_{\rm i}}^{t}\,\,{\rm d}t^{\prime}\Gamma\left(t^{\prime}\right)a^{3}\left(t^{\prime}\right)r^{3}\left(t,t^{\prime}\right)\right]\), where \(t_{\rm i}\) is the time when the PT starts and \(r(t,t^{\prime})\equiv\int_{t^{\prime}}^{t}a^{-1}(\tau)d\tau\) is the comoving radius of true vacuum bubbles. Before PTs, all the fields settle in the false vacuum with \(F(t<t_{i})=1\).
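For orientation, the percolation dynamics encoded in \(F(t)\) can be evaluated numerically; the short Python sketch below does so for a radiation-dominated background with bubble walls expanding at the speed of light, using dimensionless toy parameters chosen purely for illustration (they are not the benchmark values studied in this paper).

```python
import numpy as np

def false_vacuum_fraction(t_grid, t_i=1.0, Gamma0=1e-4, beta=2.0, n_steps=400):
    """F(t) = exp[-(4 pi/3) \int dt' Gamma(t') a(t')^3 r(t,t')^3], with a(t) = (t/t_i)^(1/2)."""
    F = np.ones_like(t_grid)
    for k, t in enumerate(t_grid):
        tp = np.linspace(t_i, t, n_steps)                    # nucleation times t'
        r = 2.0 * np.sqrt(t_i) * (np.sqrt(t) - np.sqrt(tp))  # comoving radius, wall speed v_w = 1
        integrand = Gamma0 * np.exp(beta * tp) * (tp / t_i)**1.5 * r**3
        F[k] = np.exp(-4.0 * np.pi / 3.0 * np.trapz(integrand, tp))
    return F

t = np.linspace(1.0, 8.0, 40)
print(false_vacuum_fraction(t))   # falls from 1 towards 0 as the bubbles percolate
```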
If the false vacuum decays later in the expanding Universe, the PT strength becomes larger since \(\rho_{\rm r}\) decreases as \(a^{-4}\) while \(\Delta V\) remains almost constant. As the PT proceeds, \(F(t)\) decreases, and the false vacuum energy transfers into the background plasma and yields photon reheating, causing an increase in the photon temperature and a subsequent decrease in \(N_{\rm eff}=3\times\left(11/4\right)^{4/3}\left(T_{\nu}/T_{\gamma}\right)^{4}\). Around \(0.8\,\)MeV, the neutrinos decouple, so the injection of the PT's energy around this time will have a significant effect on \(N_{\rm eff}\). As shown in Fig. 1,
the value of \(F(t)\) changes much faster for a larger value of \(\beta/H_{*}\) at a fixed PT temperature. For \(T_{\rm p}=1\,\)MeV, \(F(t)\) starts to fall just around 1 s, which has a significant effect on neutrino decoupling. As \(T_{\rm p}\) increases, \(F(t)\) starts to decrease earlier, and thus the effect on neutrino decoupling becomes weaker, so the decrease of \(N_{\rm eff}\) relative to the Standard Model value is smaller.
Considering the case of photon reheating driven by the first-order PTs, we study the early thermodynamics of the universe with MeV-scale thermally electrophilic dark sectors. More explicitly, we extend the corresponding temperature evolution equations given in Ref [48] to include the PTs dynamics,
\[\begin{split}\frac{dT_{\gamma}}{dt}=&-\Bigg{(}4H \rho_{\gamma}+3H\left(\rho_{e}+p_{e}\right)+3H\left(\rho_{\chi}+p_{\chi}\right) \\ &+3H\,T_{\gamma}\frac{dP_{\rm int}}{dT_{\gamma}}+\frac{\delta\rho _{\nu_{e}}}{\delta t}+2\frac{\delta\rho_{\nu_{\mu}}}{\delta t}+\frac{d\rho_{ \rm vac}}{dt}\Bigg{)}/f(T_{\gamma})\,,\\ \frac{dT_{\nu}}{dt}=&-(12H\rho_{\nu}-\frac{\delta \rho_{\nu_{e}}}{\delta t}-2\frac{\delta\rho_{\nu_{\mu}}}{\delta t})/(3\,\frac {\partial\rho_{\nu}}{\partial T_{\nu}})\,.\end{split} \tag{1}\]
Here, \(f(T_{\gamma})=\frac{\partial\rho_{\gamma}}{\partial T_{\gamma}}+\frac{\partial \rho_{e}}{\partial T_{\gamma}}+\frac{\partial\rho_{\chi}}{\partial T_{\gamma}} +T_{\gamma}\frac{d^{2}P_{\rm int}}{dT_{\gamma}^{2}}\), and the energy exchange rates \(\delta\rho_{\nu_{e}}/\delta t\) and \(\delta\rho_{\nu_{\mu}}/\delta t\) are:
\[\begin{split}\frac{\delta\rho_{\nu_{e}}}{\delta t}& =\frac{G_{F}^{2}}{\pi^{5}}\left[\left(1+4s_{W}^{2}+8s_{W}^{4} \right)F(T_{\gamma},T_{\nu_{e}})+2F(T_{\nu_{\mu}},T_{\nu_{e}})\right]\,,\\ \frac{\delta\rho_{\nu_{\mu}}}{\delta t}&=\frac{G_{F} ^{2}}{\pi^{5}}\left[\left(1-4s_{W}^{2}+8s_{W}^{4}\right)F(T_{\gamma},T_{\nu_{ \mu}})-F(T_{\nu_{\mu}},T_{\nu_{\mu}})\right]\,,\end{split}\]
with \(F(T_{1},T_{2})=32\,(T_{1}^{9}-T_{2}^{9})+56\,T_{1}^{4}\,T_{2}^{4}\,(T_{1}-T_{2})\,,\) and where \(G_{F}=1.1664\times 10^{-5}\,\)GeV\({}^{-2}\) is the Fermi constant, and \(s_{W}^{2}=0.223\) accounts for the Weinberg angle [56]. Finite temperature corrections are accounted for by \(P_{\rm int}\) and its derivatives [48]. In the above equations, \(\rho_{i}\) and \(p_{i}\) correspond to the energy density and pressure of a given particle species, respectively, and \(H=\sqrt{(8\pi/3)\left(\sum_{i}\rho_{i}+\rho_{\rm vac}\right)/m_{\rm Pl}^{2}}\) is the Hubble parameter. Here, \(\rho_{\rm vac}=F(t)\Delta V\) is the energy density of the false vacuum.
We solve the time evolution equations for \(T_{\gamma}\) and \(T_{\nu}\) starting from \(T_{\gamma}=T_{\nu}=30\,\)MeV, when the neutrino and electron are in thermal equilibrium. According to \(t=1/(2H)\), the starting time for the evolution is \(t_{0}\sim 7\times 10^{-4}\,\)s, and we evolve the system until \(t_{\rm final}=5\times 10^{4}\,\)s where the electrons and positrons have already annihilated away. By solving this set of differential equations, we can find all the key background evolution quantities as a function of time, such as Hubble rate, temperature, etc. Technically, we modify the publicly available versions of NUDEC_BSM [48; 50] to take into account the PT dynamics, and use it to compute the background thermodynamics and \(N_{\rm eff}\), which is crucial for CMB observations.
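Once the asymptotic photon and neutrino temperatures are extracted from this integration, \(N_{\rm eff}\) follows directly from the expression quoted above; the one-function Python sketch below is simply a post-processing step, and the input temperature ratio is whatever the solver returns.

```python
def n_eff(T_nu_over_T_gamma):
    """N_eff = 3 (11/4)^(4/3) (T_nu/T_gamma)^4, evaluated at late times."""
    return 3.0 * (11.0 / 4.0)**(4.0 / 3.0) * T_nu_over_T_gamma**4

# Instantaneous-decoupling limit T_nu/T_gamma = (4/11)^(1/3) gives exactly 3;
# extra photon heating from the PT lowers the ratio and hence N_eff.
print(n_eff((4.0 / 11.0)**(1.0 / 3.0)))
```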
The observed DM abundance in the Universe can be explained by a WIMP particle that annihilates to light species at a rate of \(\langle\sigma v\rangle\simeq 3\times 10^{-26}\,\)cm\({}^{3}\)/s [57]. Such a particle decouples from the plasma while non-relativistic, at temperatures of \(T\sim m/20\), so for WIMPs with \(m\lesssim 20\,\)MeV neutrino decoupling and \(N_{\rm eff}\) would be affected. If the DM mass is greater than \(20\,\)MeV, however, the effect on neutrino decoupling is negligible. For a vector boson with a mass of \(30\,\)MeV, \(N_{\rm eff}\) is calculated to be \(3.04335\), close to the Standard Model prediction. To capture the dynamics and effects of the PTs, we consider a dark sector with its mass fixed at \(30\,\)MeV. Therefore, our results do not depend significantly on the type of dark matter.
Fig. 2 illustrates the impact of the two PT parameters \(\alpha\) and \(\beta/H_{*}\) on \(N_{\rm eff}\) for a PT temperature \(T_{\rm p}=1\,\)MeV in the scenario of a vector boson with mass \(m=30\) MeV. For a fixed PT duration, increasing the PT strength \(\alpha\) decreases \(N_{\rm eff}\).
As stated before, the effect of \(T_{\rm p}\) is significant: it determines the time at which \(F(t)\) takes effect in the neutrino decoupling process and it affects the PT strength \(\alpha\). In the top two plots of Fig. 3, the effect of \(T_{\rm p}\) and \(\beta/H_{*}\) on the effective number of neutrino species \(N_{\rm eff}\) is shown for different \(\alpha\). Here, \(N_{\rm eff}=2.92\) is the central value obtained from the combination of _Planck_ TT, TE, EE+lowE [58]. Roughly speaking, the effect of the PT on \(N_{\rm eff}\) is significant when the PT temperature is relatively low and the PT is slow (small \(\beta/H_{*}\)), and it becomes negligible when the PT temperature is \(T_{\rm p}\gtrsim 4\) MeV. In the bottom two plots of Fig. 3, we demonstrate the joint effect of \(\alpha\) and \(\beta/H_{*}\) on \(N_{\rm eff}\) for different \(T_{\rm p}\). The impact of the PT is more pronounced for a slower PT (smaller \(\beta/H_{*}\)) and a stronger PT (larger \(\alpha\)), and it is greater for \(T_{\rm p}=1\) MeV than for \(T_{\rm p}=2\) MeV.
## III Primordial nucleosynthesis and PTs
Conventionally, the neutron fraction \(X_{n}\equiv n_{n}/n_{b}\), which represents the ratio of neutron number density to baryon number density, is a crucial intermediate quantity in BBN. When the temperature is high enough, neutrons and protons transform into each other through \(n\leftrightarrow p\) reactions and these weak interactions are in equilibrium. Neglecting the chemical potential of electrons and neutrinos, the equilibrium abundance of neutrons is \(X_{n}^{\rm eq}=e^{-Q/T}/(1+e^{-Q/T})\), where \(Q\equiv m_{n}-m_{p}=1.293\) MeV. The neutrons follow a thermal equilibrium distribution until neutrinos decouple at around \(T_{\rm FO}\sim 0.8\) MeV(\(t_{\rm FO}\sim 1\)s).
After the freeze-out, \(X_{n}\) slowly decreases due to occasional weak reactions and is eventually dominated by free neutron decay. During this time, neutrons decay into protons, and the remaining fraction of neutrons is given by \(X_{n}(t>t_{\rm FO})\approx X_{n}(t_{\rm FO})e^{-t/\tau_{n}}\). As the temperature of the Universe falls to the nucleosynthesis temperature \(T_{\rm nuc}\approx 0.078\) MeV (at time \(t_{\rm nuc}\)), the production of helium begins and the fraction \(Y_{\rm P}\) starts to increase. The final value of \(Y_{\rm P}\) is determined by the abundance of neutrons at the onset of nucleosynthesis, \(X_{n}(t_{\rm nuc})\), and is approximately \(Y_{\rm P}\approx 2X_{n}(t_{\rm nuc})\).
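As a back-of-envelope illustration (not part of the original analysis), the chain of estimates above can be evaluated directly, assuming a sharp freeze-out at \(T_{\rm FO}=0.8\,\)MeV, an illustrative nucleosynthesis time \(t_{\rm nuc}\approx 180\,\)s and the measured neutron lifetime \(\tau_{n}\approx 879.5\,\)s; the sharp freeze-out approximation overestimates \(Y_{\rm P}\) somewhat compared with a full network calculation:

```python
# Crude estimate of Y_P in the instantaneous freeze-out approximation.
# t_nuc ~ 180 s is an assumed, conventional value used only for illustration.
import numpy as np

Q = 1.293          # neutron-proton mass difference in MeV
T_FO = 0.8         # freeze-out temperature in MeV
tau_n = 879.5      # neutron lifetime in s
t_nuc = 180.0      # assumed time of helium synthesis in s

X_n_eq = np.exp(-Q / T_FO) / (1.0 + np.exp(-Q / T_FO))   # ~0.17
X_n_nuc = X_n_eq * np.exp(-t_nuc / tau_n)                # ~0.14
Y_P = 2.0 * X_n_nuc                                      # ~0.27 (overestimate)

print(f"X_n(T_FO) = {X_n_eq:.3f}, X_n(t_nuc) = {X_n_nuc:.3f}, Y_P ~ {Y_P:.3f}")
```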
First-order PTs can have an impact on both the CMB and BBN if the PT temperature is low, around the MeV scale. When a low-scale first-order PT happens, the photon reheating affects the time-temperature relation. For a given photon temperature, the total energy density and the Hubble rate decrease, and so does the magnitude of \(dT/dt\). This means that the onset of nucleosynthesis occurs later, i.e., \(t_{\rm nuc}\) is delayed, and more neutrons decay, resulting in a smaller \(Y_{\rm P}\).
On the other hand, in the case of a first-order PT with photon reheating, the neutrino density is lower than in the absence of a PT, resulting in a decrease in the weak reaction rate \(\Gamma_{np}\) at a given \(T=T_{\gamma}\), which can lead to an earlier freeze-out with a larger neutron fraction \(X_{n}(T_{\rm FO})\approx X_{n}^{\rm eq}(T_{\rm FO})\). Though the reduced neutrino density also decreases the Hubble rate, which partially offsets the effect on \(\Gamma_{np}\) [59], the reduction of the weak rate \(\Gamma_{np}\) is more significant due to its stronger dependence on temperature [60]. Considering these two points together, the freeze-out temperature \(T_{\rm FO}\) is higher than in the absence of PTs, i.e., freeze-out occurs earlier, which tends to yield a larger final \(Y_{\rm P}\). In summary, a larger \(X_{n}(t_{\rm FO})\) favors a larger \(Y_{\rm P}\), while a larger \(t_{\rm nuc}\) favors a smaller \(Y_{\rm P}\).
Deuterium is formed directly from neutrons and protons, and its abundance follows the equilibrium value as long as plenty of free neutrons are available. Thus, the freeze-out has almost no impact on the abundance of deuterium. Since the deuterium binding energy is rather small, the peak of its abundance ratio occurs around \(T_{\rm nuc}\). The primary factor that affects the abundance of deuterium is therefore the time \(t_{\rm nuc}\): the reheating of photons triggered by the first-order PT modifies the time-temperature relation, causing a later \(t_{\rm nuc}\), which leads to a smaller D/H\(|_{\rm P}\). For the calculation of primordial nucleosynthesis, we pass the necessary thermodynamic parameters, including \(T_{\gamma}\), \(T_{\nu}\), the scale factor \(a\), and the Hubble parameter \(H\), obtained with the modified NUDEC_BSM, on to the BBN code PRIMAT [61]. These parameters are constructed as functions of time by interpolation and replace the original thermodynamics of PRIMAT. The time evolution of the nuclear abundances is then calculated by recomputing the weak interaction and nuclear reaction rates. We verified the correctness of our modified version by generating the curves of the primordial helium and deuterium abundances as functions of the dark matter mass, which agree with the results presented in Fig. 1 of Ref. [47] when the PT dynamics are not considered.
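The interpolation hand-off described above might be structured along the following lines; this is only a sketch, the arrays below are placeholders rather than the output of any particular code, and the internal interface of PRIMAT is not reproduced here:

```python
# Sketch of turning time-sampled background quantities into smooth functions
# of time that can replace the default thermodynamics of a BBN network code.
# The arrays t, Tg, Tnu, a, H are hypothetical placeholders.
import numpy as np
from scipy.interpolate import interp1d

t = np.logspace(-3, 4, 500)            # time grid in seconds (placeholder)
Tg = 30.0 * (t / t[0])**(-0.5)         # placeholder photon temperature [MeV]
Tnu = Tg.copy()                        # placeholder neutrino temperature [MeV]
a = (t / t[0])**0.5                    # placeholder scale factor
H = 1.0 / (2.0 * t)                    # radiation-era Hubble rate, H = 1/(2t)

background = {
    name: interp1d(t, vals, kind="cubic", fill_value="extrapolate")
    for name, vals in {"Tg": Tg, "Tnu": Tnu, "a": a, "H": H}.items()
}

print(background["Tg"](1.0), background["H"](1.0))   # values at t = 1 s
```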
Figure 3: The top-left and top-right panels show the effect of \(T_{\rm p}\) and \(\beta/H_{*}\) on \(N_{\rm eff}\) for \(\alpha=0.1\) (top-left) and \(\alpha=0.5\) (top-right). The bottom-left and bottom-right panels show the effect of \(\alpha\) and \(\beta/H_{*}\) on \(N_{\rm eff}\) for different \(T_{\rm p}\), where we set \(T_{\rm p}=1\) MeV (bottom-left) and \(T_{\rm p}=2\) MeV (bottom-right).
In Fig. 4, we show the effect of a first-order PT on BBN with the PT parameters taken as \(T_{\rm P}=1\,\)MeV, \(\alpha=0.01\) and \(\beta/H_{*}=10\). Here, the \(Y_{\rm P}\) and D/H\(|_{\rm P}\) predictions are calculated with \(\Omega_{b}h^{2}=0.021875\) and \(\tau_{n}=879.5\,\)s [47]. Compared to the scenario without a PT, we find that both \(Y_{\rm P}\) and D/H\(|_{\rm P}\) decrease slightly. Though the effect of the PT on D/H\(|_{\rm P}\) is consistent with that found in Ref. [51], the effect on \(Y_{\rm P}\) is different, which is related to the two competing factors mentioned before, \(X_{n}(t_{\rm FO})\) and \(t_{\rm nuc}\). In our case, the effect of \(t_{\rm nuc}\) is slightly stronger than that of \(X_{n}(t_{\rm FO})\) and leads to a decrease in \(Y_{\rm P}\); in the case of Ref. [51], by contrast, the effect of \(X_{n}(t_{\rm FO})\) is stronger than that of \(t_{\rm nuc}\) and leads to a larger \(Y_{\rm P}\). We further note that the two plots confirm that the effect of the dark matter mass is negligible, since all the predictions of \(Y_{\rm P}\) and D/H\(|_{\rm P}\) merge at large \(m_{\chi}\gtrsim 10\) MeV, both with and without PTs.
## IV Constraints on PT parameters
In this section, we study constraints on low-scale first-order PTs from BBN and CMB observations. For the BBN analysis, we consider the observed primordial abundances of helium and deuterium (\(Y_{\rm P},{\rm D}/{\rm H}|_{\rm P}\)). To obtain the current constraints on low-scale PTs from BBN observables, we take the effective BBN \(\chi^{2}\) to be [47],
\[\begin{split}\chi^{2}_{\rm BBN}=&\frac{\left[Y_{\rm P }(\Omega_{b}h^{2},\alpha,\beta/H_{*})-Y_{\rm P}^{\rm obs}\right]^{2}}{\sigma(Y_ {\rm P}^{\rm th})^{2}+\sigma(Y_{\rm P}^{\rm obs})^{2}}\\ &+\frac{\left[{\rm D}/{\rm H}|_{\rm P}(\Omega_{b}h^{2},\alpha, \beta/{\rm H}_{*})-{\rm D}/{\rm H}|_{\rm P}^{\rm obs}\right]^{2}}{\sigma({\rm D }/{\rm H}|_{\rm P}^{\rm th})^{2}+\sigma({\rm D}/{\rm H}|_{\rm P}^{\rm obs})^{2 }}\,.\end{split} \tag{2}\]
Here, the central values are \(Y_{\rm P}^{\rm obs}=0.245\), \({\rm D}/{\rm H}|_{\rm P}^{\rm obs}=2.547\times 10^{-5}\), the current observational errors are \(\sigma(Y_{\rm P}^{\rm obs})=0.003\), \(\sigma({\rm D}/{\rm H}|_{\rm P}^{\rm obs})=0.025\times 10^{-5}\) [62], and the theoretical errors are taken from Ref. [63]: \(\sigma(Y_{\rm P}^{\rm th})=0.00014\), \(\sigma({\rm D}/{\rm H}|_{\rm P}^{\rm th})=0.037\times 10^{-5}\). CMB observations precisely measure the values of \((\Omega_{b}h^{2},N_{\rm eff},Y_{\rm P})\). To obtain the constraints on the PT parameters from these measurements, we take the Gaussian likelihood as [47]:
\[\chi^{2}_{\rm CMB}=(\Theta-\Theta_{\rm obs})^{\rm T}\,\Sigma^{-1}_{\rm CMB}\,( \Theta-\Theta_{\rm obs})\,, \tag{3}\]
with \(\Theta\equiv(\Omega_{b}h^{2},N_{\rm eff},Y_{\rm P})\) and
\[\Sigma_{\rm CMB}=\begin{bmatrix}\sigma_{1}^{2}&\sigma_{1}\sigma_{2}\rho_{12}& \sigma_{1}\sigma_{3}\rho_{13}\\ \sigma_{1}\sigma_{2}\rho_{12}&\sigma_{2}^{2}&\sigma_{2}\sigma_{3}\rho_{23}\\ \sigma_{1}\sigma_{3}\rho_{13}&\sigma_{2}\sigma_{3}\rho_{23}&\sigma_{3}^{2} \end{bmatrix}\,\,. \tag{4}\]
We take the Planck+BAO+\(H_{0}\) dataset with the experimental value of \(\Theta\) being \(\Theta_{\rm obs}=(0.02345,3.36,0.249)\); the parameters of the covariance matrix are \((\rho_{12},\rho_{13},\rho_{23})=(0.011,0.50,-0.64)\) and \((\sigma_{1},\sigma_{2},\sigma_{3})=(0.00025,0.25,0.020)\). The local measurement of \(H_{0}\) from the SH0ES collaboration [64] uplifts the reconstructed value of the effective neutrino number by some amount; for the neutrino interpretation of the Hubble tension we refer to Ref. [65].
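The likelihoods (2)-(4) are simple enough to transcribe directly; the following sketch (not part of the original analysis) uses exactly the central values and errors quoted above and leaves the model predictions, which would come from the modified NUDEC_BSM + PRIMAT pipeline, as free arguments:

```python
# Direct transcription of chi^2_BBN (2) and chi^2_CMB (3)-(4).
import numpy as np

# BBN data: observed central values, observational and theoretical errors
YP_obs, sig_YP_obs, sig_YP_th = 0.245, 0.003, 0.00014
DH_obs, sig_DH_obs, sig_DH_th = 2.547e-5, 0.025e-5, 0.037e-5

def chi2_BBN(YP_model, DH_model):
    return ((YP_model - YP_obs)**2 / (sig_YP_th**2 + sig_YP_obs**2)
            + (DH_model - DH_obs)**2 / (sig_DH_th**2 + sig_DH_obs**2))

# CMB (Planck+BAO+H0) data: Theta = (Omega_b h^2, N_eff, Y_P)
Theta_obs = np.array([0.02345, 3.36, 0.249])
sig = np.array([0.00025, 0.25, 0.020])
rho12, rho13, rho23 = 0.011, 0.50, -0.64
corr = np.array([[1.0, rho12, rho13],
                 [rho12, 1.0, rho23],
                 [rho13, rho23, 1.0]])
Sigma_CMB = corr * np.outer(sig, sig)          # Sigma_ij = sigma_i sigma_j rho_ij

def chi2_CMB(Theta_model):
    d = np.asarray(Theta_model) - Theta_obs
    return d @ np.linalg.solve(Sigma_CMB, d)

# Example evaluation with purely illustrative model predictions
print(chi2_BBN(0.247, 2.51e-5), chi2_CMB([0.02237, 3.04, 0.247]))
```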
Fig. 5 displays the exclusion limits at the 95% confidence level (CL) for \(\alpha\) and \(\beta/H_{*}\) after marginalizing over \(\Omega_{b}h^{2}\). By setting \(\Omega_{b}h^{2}=0.021875\), we obtain the minimum \(\chi^{2}\) value, denoted as \(\chi^{2}_{\rm min}\), and the corresponding
Figure 4: Impacts of light BSM particles on primordial nucleosynthesis as a function of their mass \(m_{\chi}\). The _left (right) panel_ corresponds to \(Y_{\rm P}({\rm D}/{\rm H}|_{\rm P})\). The dashed line corresponds to the case where no PT is considered, the solid line corresponds to the scenarios with a first-order PT where \(T_{\rm P}=1\,\)MeV,\(\alpha=0.01\) and \(\beta/H_{*}=10\).
Figure 5: The 95% CL constraints on the PT parameters \(\alpha\) and \(\beta/H_{*}\) from CMB and BBN datasets at \(T_{\rm p}=1\,\)MeV (left) and \(T_{\rm p}=2\,\)MeV(right). The regions to the right of yellow and light yellow regions are excluded by the current BBN and CMB constraints respectively.
95% CL limits are defined by \(\Delta\chi^{2}=\chi^{2}-\chi^{2}_{\rm min}=5.99\). We find that: 1) strong PTs with relatively large \(\alpha\) and slow PTs with small \(\beta/H_{*}\) are excluded; 2) BBN and CMB observations yield weaker constraints on PTs occurring at higher temperature (larger \(T_{\rm P}\)); and 3) the constraint from BBN is stronger than that from the CMB in both panels, because \({\rm D/H}|_{\rm P}\) provides stronger constraints than \(N_{\rm eff}\). For PTs with \(\beta/H_{*}=50\), the BBN dataset constrains the PT strength to be \(\alpha\lesssim 0.03\) at \(T_{\rm P}=1\,\)MeV and \(\alpha\lesssim 0.05\) at \(T_{\rm P}=2\,\)MeV.
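The threshold \(\Delta\chi^{2}=5.99\) is the 95% quantile of a \(\chi^{2}\) distribution with two degrees of freedom, corresponding to the two jointly constrained PT parameters \(\alpha\) and \(\beta/H_{*}\); for instance:

```python
# 95% CL threshold for two jointly estimated parameters.
from scipy.stats import chi2
print(chi2.ppf(0.95, df=2))   # ~5.991, i.e. the Delta chi^2 = 5.99 used above
```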
## V Conclusion and discussion
This study shows that, in comparison with the Standard Model case, the thermal dynamics of first-order PTs lead to changes in the BBN and CMB predictions. A strong and slow PT yields a large deviation of the effective number of neutrino species from the Standard Model prediction, while the effect of the PT on \(N_{\rm eff}\) becomes negligible when the PT temperature \(T_{\rm P}\gtrsim 4\) MeV. The constraints on the PT parameter space from BBN observations are much stronger than those from the CMB, and PT strengths \(\alpha\gtrsim\mathcal{O}(0.01-0.1)\) together with inverse durations \(\beta/H_{*}\lesssim\mathcal{O}(1-10^{2})\) for \(T_{\rm P}\sim 1-2\) MeV are excluded by the current cosmological observations at 95% CL.
Observations of the curvature perturbation in the CMB can also yield constraints on low-scale dark sectors [66, 67]. Nanohertz gravitational wave detection by PPTA, NANOGrav, and SKA may be able to probe low-scale PTs. Refs. [14, 15] provide constraints on slow and strong first-order PTs occurring around the QCD scale. Our study is complementary to these works and provides much stronger constraints on strong and slow PTs occurring close to the MeV scale. It was noted that the free streaming of neutrinos damps gravitational waves [68]. In comparison with the scenario of a purely first-order PT without the neutrino decoupling effects, the low-frequency tail of the gravitational wave spectrum from the MeV-scale PTs under study would be modified [69, 70], which could be probed by pulsar timing arrays soon.
###### Acknowledgements.
We are grateful to Miguel Escudero Abenza and James Alvey for helpful discussions on the BBN code PRIMAT and the study of neutrino decoupling with the code NUDEC_BSM. We thank Shun Zhou for the insightful discussion on the relationship between neutrino decoupling and low-scale phase transitions. We thank Zach Weiner for bringing to our attention the relationship between neutrino free-streaming and gravitational waves. This work is supported in part by the National Key Research and Development Program of China Grants No. 2021YFC2203004, and in part by the National Natural Science Foundation of China under grants Nos. 12075041, 12147102, and the Fundamental Research Funds for the Central Universities of China (No. 2021CDJQY-011 and No. 2020CDJQY-Z003), and Chongqing Natural Science Foundation (Grants No.cstc2020jcyj-msxmX0814).
|
2301.05879 | Metamorphism as a covariant transform for the SSR group | Metamorphism is a recently introduced integral transform, which is useful in
solving partial differential equations. Basic properties of metamorphism can be
verified by direct calculations. In this paper we present metamorphism as a
sort of covariant transform and derive its most important features in this way.
Our main result is a characterisation of metamorphism's image space. Reading
this paper does not require advanced knowledge of group representations or
theory of covariant transform. | Taghreed Alqurashi, Vladimir V. Kisil | 2023-01-14T10:14:25Z | http://arxiv.org/abs/2301.05879v3 | # Metamorphism as a covariant transform
###### Abstract.
Metamorphism is a recently introduced integral transform, which is useful in solving partial differential equations. Basic properties of metamorphism can be verified by direct calculations. In this paper we present metamorphism as a sort of covariant transform and derive its most important features in this way. Our main result is a characterisation of metamorphism's image space. Reading this paper does not require advanced knowledge of group representations or theory of covariant transform.
Key words and phrases: metamorphism, covariant transform, integral transform. 2010 Mathematics Subject Classification: Primary 35A22; Secondary 20C35, 22E70, 35C15. On leave from Odessa University.
## 1. Introduction
Metamorphism is an integral transform recently introduced to treat partial differential equations [26]. In particular, metamorphism allows one to reduce the order of a differential equations: e.g. a second order differential equation can be transformed to a first order admitting a straightforward solution and transparent geometrical structure [3, 4]. Basic properties of the metamorphism can be verified by direct calculations--the path which was intentionally chosen to reduce the amount of prerequisites in the introductory paper [26]. Yet, a genuine origin of metamorphism is a covariant transform related to the Schrodinger-Jacobi group [7, 10] as was already presented in the Jupyter notebooks [24] with respective symbolic computations.
This paper systematically utilises the group theory and covariant transform technique to reinstall the metamorphism transform from a scratch. Furthermore, some sister integral transforms are appearing as well. The paper can be seen as a readable narrative to a Jupyter notebook [24], which will be frequently referred here to replace some boring calculations. Our main result is a characterisation of the metamorphism image space in Thm. 5.5.
We made this paper as accessible as possible. Its reading does not require an advanced knowledge of group representations and the theory of covariant transform. We provide most of required information with further references to more detailed presentations if needed.
In Sect. 2 we introduce several groups: the Heisenberg, \(\mathrm{SL}_{2}(\mathbb{R})\), affine, Schrodinger, and finally our main object--the group SSR. Essential relations between those groups are presented as well. We describe some (not all) induced representations of the group SSR in Sect. 3. The corresponding covariant transform and its properties are described in Sect. 4. Finally, we connect a selection of a fiducial vector with the properties of the image space of covariant transform in Sect. 5. In particular, the metamorphism is defined as the covariant transform with a remarkable fiducial vector--the Gaussian. Covariant transforms with some other mentioned fiducial vectors are still awaiting their investigation.
## 2. Heisenberg, \(\mathrm{SL}_{2}(\mathbb{R})\), affine, Schrodinger and SSR groups
We start from a brief account of groups involved in the consideration. An element of the one-dimensional Heisenberg group \(\mathbb{H}\)[19, 10, 25] will be denoted by \((s,x,y)\in\mathbb{R}^{3}\). The group law on \(\mathbb{H}\) is defined as follows:
\[(s,x,y)\cdot(s^{\prime},\,x^{\prime},\,y^{\prime})=(s+s^{\prime}+\tfrac{1}{2} \omega(x,y;x^{\prime},y^{\prime}),\,x+\,x^{\prime},\,y+\,y^{\prime}),\]
where
\[\omega(x,y;x^{\prime},y^{\prime})=xy^{\prime}-x^{\prime}y \tag{1}\]
is the symplectic form [5, SS41] on \(\mathbb{R}^{2}\). The identity element in \(\mathbb{H}\) is \((0,0,0)\), and the inverse of \((s,x,y)\) is \((-s,-x,-y)\).
There is an alternative form of \(\mathbb{H}\) called the polarised Heisenberg group \(\mathbb{H}_{p}\) with the group law [1, 10, SS1.2]
\[(s,\,x,\,y)\cdot(s^{\prime},\,x^{\prime},\,y^{\prime})=(\,s+s^{\prime}+xy^{ \prime},\,x+x^{\prime},\,y+y^{\prime}).\]
and the group isomorphism \(\Theta:\mathbb{H}\to\mathbb{H}_{p}\) given by
\[\Theta:(s,x,y)\to(s+\tfrac{1}{2}xy,\,x,\,y).\]
The special linear group \(\mathrm{SL}_{2}(\mathbb{R})\) is the group of \(2\times 2\) matrices with real entries and the unit determinant [18, 27]. The group law on \(\mathrm{SL}_{2}(\mathbb{R})\) coincides with the matrix multiplication. A matrix \(A\in\mathrm{SL}_{2}(\mathbb{R})\) acts on vectors in \(\mathbb{R}^{2}\) by a symplectomorphism, i.e. an automorphisms of the symplectic form \(\omega\) (1):
\[\omega(A(x,y);A(x^{\prime},y^{\prime}))=\omega(x,y;x^{\prime},y^{\prime}).\]
Therefore, the transformation \(\theta_{A}:\mathbb{H}\to\mathbb{H}\)
\[\theta_{A}:(s,x,y)\to(s,A(x,y))\]
is an automorphism of \(\mathbb{H}\)[10, SS1.2]. The corresponding polarised automorphism \(\theta_{A}^{p}=\Theta\circ\theta_{A}\circ\Theta^{-1}:\mathbb{H}_{p}\to \mathbb{H}_{p}\) is
\[\theta_{A}^{p}(s,x,y)=\left(s+\tfrac{1}{2}(acx^{2}+2bcxy+bdy^{2}),\,ax+by,\,cx+dy\right),\]
where \(A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\).
Upper-triangular matrices in \(\mathrm{SL}_{2}(\mathbb{R})\) with positive diagonal entries form a subgroup \(\mathbb{A}\). We parameterise it by pairs \((b,r)\in\mathbb{R}_{+}^{2}\) with \(b\in\mathbb{R}\) and \(r>0\) as follows:
\[\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\begin{pmatrix}r&0\\ 0&1/r\end{pmatrix}. \tag{2}\]
The subgroup is isomorphic to the affine group of the real line, also known as the \(ax+b\) group [21].
For a group acting by automorphism on another group we can define their semi-direct product. The model case is the affine group itself, where dilations act as automorphisms of shifts. Formally, let \(G\) and \(H\) be two groups and assume \(\theta:H\to\mathrm{Aut}(G)\), where \(\theta_{h}\) is an automorphism of \(G\) corresponding to \(h\in H\). The semi-direct product of \(G\) by \(H\) denoted by \(G\rtimes H\) is the Cartesian product of \(G\times H\) with the group law
\[(g_{1},h_{1})\,\cdot\,(g_{2},h_{2})=(g_{1}\theta_{h_{1}}(g_{2}),h_{1}h_{2}), \tag{3}\]
where \((g_{1},h_{1}),\,(g_{2},h_{2})\in G\times H\).
The semidirect product of the Heisenberg group and \(\mathrm{SL}_{2}(\mathbb{R})\) is called Schrodinger group \(\mathbb{S}\), which is the group of symmetries of the Schrodinger equation [13, 30] and parabolic equations [35] with applications in optics [33, 34]. In the context of number theory it is also known as the Jacobi group [7].
Our main object here is the group \(\mathbb{G}\coloneqq\mathbb{H}\rtimes\mathbb{A}\), which is the semi-direct product of the Heisenberg group \(\mathbb{H}_{p}\) and the affine group \(\mathbb{A}\) (2) acting by symplectic automorphisms of \(\mathbb{H}_{p}\). Thus, \(\mathbb{G}\) is a subgroup of the Schrodinger group. It can also be called the shear-squeeze-rotation (SSR) group [24], after the three types of transformations of Gaussian coherent states. A subgroup of \(\mathbb{G}\) without squeeze (i.e. \(r=1\) in (2)) is called the shear group and it was used in a similar context in [3, 4]. This step-3 nilpotent group is also known as the Engel group [9].
Let \((s,x,y,b,r)\in\mathbb{G}\) where \((s,x,y)\in\mathbb{H}_{p}\) and \((b,r)\in\mathbb{A}\). Explicitly the group law (3) on \(\mathbb{G}\) is [24]
\[(s,x,y,b,r)\cdot(s^{\prime},x^{\prime},y^{\prime},b^{\prime},r^{ \prime}) =(s+s^{\prime}+xr^{-1}y^{\prime}-\tfrac{1}{2}b\left(r^{-1}y^{\prime }\right)^{2},\] \[x+rx^{\prime}-br^{-1}y^{\prime},\,y+r^{-1}y^{\prime},\,b+b^{ \prime}r^{2},\,rr^{\prime}).\]
There is a convenient matrix realisation of \(\mathbb{G}\)[24]
\[(s,x,y,b,r)=\begin{pmatrix}1&-yr&(x+by)/r&2s-yx\\ 0&r&-b/r&x\\ 0&0&1/r&y\\ 0&0&0&1\end{pmatrix}.\]
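As a sanity check (a sketch, not part of the paper), the matrix realisation can be multiplied out symbolically and compared with the group law stated above:

```python
# Symbolic check that the 4x4 matrices reproduce the SSR group law.
import sympy as sp

def M(s, x, y, b, r):
    return sp.Matrix([[1, -y*r, (x + b*y)/r, 2*s - y*x],
                      [0,    r,       -b/r,          x],
                      [0,    0,        1/r,          y],
                      [0,    0,          0,          1]])

s, x, y, b = sp.symbols('s x y b', real=True)
s2, x2, y2, b2 = sp.symbols('s2 x2 y2 b2', real=True)
r, r2 = sp.symbols('r r2', positive=True)

# Group law (s,x,y,b,r) . (s2,x2,y2,b2,r2) as stated in the text
prod = (s + s2 + x*y2/r - sp.Rational(1, 2)*b*(y2/r)**2,
        x + r*x2 - b*y2/r,
        y + y2/r,
        b + b2*r**2,
        r*r2)

diff = sp.simplify(M(s, x, y, b, r) * M(s2, x2, y2, b2, r2) - M(*prod))
print(diff)   # expected: the zero 4x4 matrix
```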
The corresponding solvable Lie algebra \(\mathfrak{g}\) has a basis \(\{S,X,Y,B,R\}\), with the following non-vanishing commutators:
\[[X,Y]=S,\quad[X,R]=-X,\quad[Y,R]=Y,\quad[Y,B]=X,\quad[R,B]=2B. \tag{4}\]
Clearly, the group \(\mathbb{G}\) is a not a commutative Lie group.
## 3. Induced representations the group \(\mathbb{G}\)
In this section we construct several induced representations of the group \(\mathbb{G}\), which are required for our study. First, we recall the general scheme of induced representations. For simplicity, only inductions from characters of subgroups are considered and it is sufficient for our present purposes. For further details and applications of induced representations see [2, 29, 11, 20, 28].
### Induced representation from a subgroup character
Let \(G\) be a group and \(H\) be a subgroup of \(G\). The space \(X=G/H\) of the left cosets \(gH\) of the subgroup \(H\) is given by the equivalence relation: \(g\sim g^{\prime}\) if there exists \(h\in H\) such that \(g=g^{\prime}h\). We define the natural projection \(\mathbf{p}:G\to X\) such that \(\mathbf{p}(g)=gH\).
Let us fix a section \(s:X\to G\) such that \(\mathbf{p}\circ s=I\), where \(I\) is the identity map on \(X\). An associated map \(\mathbf{r}:G\to H\) by
\[\mathbf{r}(g)=s(\mathbf{p}(g))^{-1}\cdot g. \tag{5}\]
provides the unique decomposition of the form [14, SS13.2]
\[g=s(\mathbf{p}(g))\cdot\mathbf{r}(g),\qquad\text{ for any }g\in G.\]
Thus, \(X\) is a left homogeneous space with the \(G\) action as follows:
\[g^{-1}:x\to g^{-1}\cdot x=\mathbf{p}\left(g^{-1}\ast\mathbf{s}(x)\right), \tag{6}\]
where \(\ast\) is the multiplication of \(G\) and \(\cdot\) is the action of \(G\) on \(X\) from the left.
Let \(\chi:H\to\mathbb{T}\) be a character of the subgroup \(H\) and let \(L^{\chi}_{2}(G)\) be a Hilbert space of functions on \(G\) with a \(G\)-invariant inner product and the \(H\)-covariance property [23],
\[F(gh)=\bar{\chi}(h)\,F(g),\qquad\text{ for all }g\in G,\ h\in H. \tag{7}\]
The space \(L^{\chi}_{2}(G)\) is invariant under the left regular representation by \(G\)-shifts
\[\Lambda(g):F(g^{\prime})\to F(g^{-1}g^{\prime}),\quad\text{ where }g,g^{\prime}\in G. \tag{8}\]
The restriction of \(\Lambda\) to the space \(L^{\chi}_{2}(G)\) is called the induced representation from the character \(\chi\).
An equivalent form of the induced representation can be constructed as follows [14, 23]. We define a lifting \(\mathcal{L}^{\chi}:L_{2}(X)\to L^{\chi}_{2}(G)\) as the map
\[[\mathcal{L}^{\chi}f](g)=\overline{\chi}(\mathbf{r}(g))\,f(\mathbf{p}(g)). \tag{9}\]
The pulling \(\mathcal{P}:L_{2}^{\chi}(G)\to L_{2}(X)\) is given by
\[[\mathcal{P}\mathrm{F}](x)=\mathrm{F}(\mathbf{s}(x)). \tag{10}\]
Clearly \(\mathcal{P}\circ\mathcal{L}^{\chi}=\mathrm{I}\) on \(L_{2}(X)\). From (9), (10), the induced representation \(\rho_{\chi}:L_{2}(X)\to L_{2}(X)\) is defined by the formula:
\[\rho_{\chi}(g)=\mathcal{P}\circ\Lambda(g)\circ\mathcal{L}^{\chi},\]
where \(\Lambda(g)\) is the left regular representation (8). The representation \(\rho_{\chi}\) explicitly is
\[[\rho_{\chi}(g)f](x)=\overline{\chi}(\mathbf{r}(g^{-1}\,\mathbf{s}(x)))\,f(g^{-1}\cdot x), \tag{11}\]
where \(g\in G\) and \(x\in X\) and \(g^{-1}\cdot x\) is defined by (6). For a \(G\)-invariant measure \(\mu\) on \(X\) the representation (11) is unitary on the space \(L_{2}(X,\mu)\)
### Derived representations
In this subsection \(G\) is a Lie group with the corresponding Lie algebra \(\mathfrak{g}\). Let \(\rho\) be a representation of \(G\) in a Hilbert space \(\mathcal{H}\), the derived representation of \(X\in\mathfrak{g}\) denoted as \(\mathrm{d}\rho^{X}\) is given by
\[\mathrm{d}\rho^{X}\phi=\left.\frac{\mathrm{d}}{\mathrm{dt}}\rho(\exp(\mathrm{ t}X))\phi\right|_{t=0}, \tag{12}\]
where the vector \(\phi\in\mathcal{H}\) is such that the vector-function \(g\to\rho(g)\phi\) is infinitely-differentiable for any \(g\in G\). These vectors are called smooth and constitute a linear subspace, denoted \(\mathcal{D}^{\infty}\), of \(\mathcal{H}\) which is dense in \(\mathcal{H}\). It is easy to show that \(\mathcal{D}^{\infty}\) is invariant under \(\rho(g)\)[27, SS6.1]. If \(\mathcal{H}\) is \(L_{2}(\mathbb{R}^{n})\) then the space \(\mathrm{D}^{\infty}\) contains the Schwartz space, which is a dense subspace of \(L_{2}(\mathbb{R}^{n})\).
Also, we define the Lie derivative \(\mathcal{L}^{X}\) for \(X\in\mathfrak{g}\) as the derived right regular representation [27, SS6.1], that is
\[[\mathcal{L}^{X}\mathrm{F}](g)=\left.\frac{\mathrm{d}}{\mathrm{dt}}\mathrm{F }(g\,\exp(\mathrm{t}X))\right|_{t=0}, \tag{13}\]
for any differentiable function \(F\) on \(G\).
### Left regular representation of group \(\mathbb{G}\)
The left and right invariant Haar measures of the group \(\mathbb{G}\) are given by
\[\mathrm{d}_{1}(s,x,y,b,r)=\mathrm{d}s\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}b \,\frac{\mathrm{d}r}{r^{3}},\]
\[\mathrm{d}_{r}(s,x,y,b,r)=\mathrm{d}s\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}b \,\frac{\mathrm{d}r}{r}.\]
Thus, the group \(\mathbb{G}\) is non-unimodular with the modular function \(\Delta(s,x,y,b,r)=\frac{1}{r^{2}}\).
We extend the action (8) of \(\mathbb{G}\) on itself by left shifts to the left regular unitary representation on the linear space of functions \(L_{2}(\mathbb{G},d_{1})\):
\[\begin{split}[\Lambda(s,x,y,b,r)\mathrm{F}](s^{\prime},x^{ \prime},y^{\prime},b^{\prime},r^{\prime})=\mathrm{F}(s^{\prime}-s+x(y^{\prime }-y)-\frac{1}{2}b(y^{\prime}-y)^{2},\\ \frac{1}{r}(x^{\prime}-x)+\frac{b}{r}(y^{\prime}-y),\,r(y^{ \prime}-y),\,\frac{1}{r^{2}}(b^{\prime}-b),\,\frac{r^{\prime}}{r}),\end{split} \tag{14}\]
where \((s,x,y,b,r)\), \((s^{\prime},x^{\prime},y^{\prime},b^{\prime},r^{\prime})\in\mathbb{G}\).
This representation is reducible, i.e. there are \(\Lambda\)-invariant proper subspaces in \(L_{2}(\mathbb{G},d_{1})\). In particular, many types of induced representations of \(\mathbb{G}\) are realised as restrictions of the left regular representations (14) to some subspaces with a covariance property (7). We describe here two of them--called the quasi-regular type representation and the Schrodinger type representation--together with equivalent forms on the respective homogeneous spaces.
### Quasi-regular representation of the group \(\mathbb{G}\)
Let
\[Z=\{(s,0,0,0,1),s\in\mathbb{R}\}\]
be the centre of the group \(\mathbb{G}\). The space of left cosets \(X=\mathbb{G}/Z\) can be parametrised by
\[\mathbb{R}^{4}_{+}=\{(x,y,b,r)\in\mathbb{R}^{4}:\ r>0\}.\]
Consider the natural projection and the section maps
\[\begin{split}\mathbf{p}(s,x,y,b,r)&\to(x,y,b,r), \\ \mathbf{s}(x,y,b,r)&\to(0,x,y,b,r).\end{split} \tag{15}\]
We calculate the respective map \(\mathbf{r}\) (5) as follows
\[\mathbf{r}(s,x,y,b,r) =\mathbf{s}(\mathbf{p}(s,x,y,b,r))^{-1}(s,x,y,b,r)\] \[=(s,0,0,0,1).\]
Let \(\chi_{h}:Z\to\mathbb{T}\) be an unitary character of \(Z\):
\[\chi_{h}(s,0,0,0,1)=\mathrm{e}^{2\pi\mathrm{i}h\mathrm{s}},\]
defined by a parameter \(h\in\mathbb{R}\). In quantum mechanical framework \(h\) is naturally associated to the Planck constant [10, 20, 23, 15]. The corresponding induced representation \(\tilde{p}:L_{2}(\mathbb{R}^{4}_{+})\to L_{2}(\mathbb{R}^{4}_{+})\) is [24]
\[\begin{split}[\tilde{p}(s,x,y,b,r)f](x^{\prime},y^{\prime},b^{ \prime},r^{\prime})&=\mathrm{e}^{2\pi\mathrm{i}h\left(s+x(y^{ \prime}-y)-b(y^{\prime}-y)^{2}/2\right)}\\ &\qquad\times f(\frac{1}{r}(x^{\prime}-x)+\frac{b}{r}(y^{\prime} -y),r(y^{\prime}-y),\frac{1}{r^{2}}(b^{\prime}-b),\frac{r^{\prime}}{r}).\end{split} \tag{16}\]
It is called the quasi-regular type representation on \(L_{2}(\mathbb{R}^{4}_{+})\). One can check that \(\tilde{p}\) is unitary and we will discuss its reducibility below.
### Schrodinger type representation of the group \(\mathbb{G}\)
Let
\[H_{1}=\{(s,x,0,b,r),\,s,\,x,b\in\mathbb{R},r\in\mathbb{R}_{+}\}\]
be a subgroup of \(\mathbb{G}\), which is a semidirect product of a maximal abelian subgroup of \(\mathbb{H}\) and the affine group \(\mathbb{A}\). The space of the left cosets \(\mathbb{G}/H_{1}\) is parameterized by \(\mathbb{R}\). We define the natural projection \(\mathbf{p}:\mathbb{G}\to\mathbb{R}\) and a section map \(\mathbf{s}:\mathbb{R}\to\mathbb{G}\) by
\[\mathbf{p}(s,x,y,b,r) =y,\] \[\mathbf{s}(y) =(0,0,y,0,1).\]
The respective map \(\mathbf{r}\) (5) is
\[\mathbf{r}(s,x,y,b,r) =\mathbf{s}(\mathbf{p}(s,x,y,b,r))^{-1}(s,x,y,b,r)\] \[=(s,x,0,b,r).\]
Let \(\chi_{h\lambda}:H_{1}\to\mathbb{T}\) be a character \(H_{1}\)
\[\chi_{h\lambda}(s,x,0,b,r)=\mathrm{e}^{2\pi\mathrm{i}h\mathrm{s}}\ \mathrm{r}^{\lambda+\frac{1}{2}},\]
where \(h\in\mathbb{R}\), \(\lambda\in\mathrm{i}\mathbb{R}\). For simplicity, we will consider here the case of \(\lambda=0\) only. The induced representation on \(L_{2}(\mathbb{R})\) is [24]
\[[\rho(s,x,y,b,r)f](u)=\sqrt{r}\,\mathrm{e}^{2\pi\mathrm{i}h\left(s+x(u-y)-b(u-y )^{2}/2\right)}\ f(r\,(u-y)). \tag{17}\]
_Remark 3.1_.: The structure of this representation can be illuminated through its restrictions to the following subgroups:
* The affine group \(\mathbb{A}\), i.e. the substitution \(s=x=y=0\). The restriction is the co-adjoint representation of the affine group [11, SS6.7.1; 21]: \[[\rho(0,0,0,b,r)f](u)=\sqrt{r}\,\mathrm{e}^{-\pi\mathrm{i}\hbar\,b\,u^{2}}f(r\,u).\] Through the Fourier transform it is unitarily equivalent to the quasi-regular representation of the affine group, which is the keystone of wavelet theory and numerous results in complex and harmonic analysis [21].
* The Heisenberg group, that is \(r=1\) and \(b=0\). The restriction is the celebrated Schrodinger representation [10, 23]: \[[\rho(s,x,y,0,1)f](u)=\operatorname{e}^{2\pi\mathrm{i\hbar}\,(s+x(u-y))}f(u-y),\] which plays the crucial role in quantum theory.
* The third subgroup is the Gabor group with \(b=0\). The representation is \[[\rho(s,x,y,0,r)f](u)=\operatorname{e}^{2\pi\mathrm{i\hbar}\,(s+x(u-y))}r^{\frac{1}{2}}f(r\,(u-y)).\] It is involved in Gabor analysis and the Fourier-Bros-Iagolnitzer (FBI) transform [10, SS3.3].
* Finally, the shear group corresponding to \(r=1\). The restriction is \[[\rho(s,x,y,b,1)f](u)=\operatorname{e}^{2\pi\mathrm{i\hbar}\,(s+x(u-y)-b(u-y)^{2}/2)}f(u-y).\] It was employed in [3, 4] to reduce certain quantum Hamiltonians to first-order differential operators.
In view of the mentioned connections, we call representation (17) as Schrodinger type representation. It is irreducible since its restriction to the Heisenberg group coincides with the irreducible Schrodinger representation [10, 23].
The derived representation (12) of the Schrodinger type representation (17) is
\[\begin{split}\mathrm{d}\rho^{X}&=2\pi\mathrm{i\hbar}\,u\,I,\qquad\qquad\qquad\mathrm{d}\rho^{B}=-\pi\mathrm{i\hbar}\,u^{2}\,I,\\ \mathrm{d}\rho^{Y}&=-\frac{\mathrm{d}}{\mathrm{d}u},\qquad\qquad\qquad\qquad\mathrm{d}\rho^{R}=\frac{1}{2}I+u\,\frac{\mathrm{d}}{\mathrm{d}u},\\ \mathrm{d}\rho^{S}&=2\pi\mathrm{i\hbar}\,I.\end{split} \tag{18}\]
It is easy to check that the above operators (18) represent the commutators (4) of the Lie algebra \(\mathfrak{g}\) of the group \(\mathbb{G}\).
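This check can be done symbolically; a minimal sketch (not from the paper, with \(h\) standing for the parameter \(\hbar\) and \(f\) a generic smooth function):

```python
# Verify that the operators (18) satisfy the commutation relations (4).
import sympy as sp

u, h = sp.symbols('u h', positive=True)
f = sp.Function('f')(u)
I, pi = sp.I, sp.pi

dX = lambda g: 2*pi*I*h*u*g
dB = lambda g: -pi*I*h*u**2*g
dY = lambda g: -sp.diff(g, u)
dR = lambda g: g/2 + u*sp.diff(g, u)
dS = lambda g: 2*pi*I*h*g

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

print(sp.simplify(comm(dX, dY, f) - dS(f)))    # [X, Y] = S   -> 0
print(sp.simplify(comm(dX, dR, f) + dX(f)))    # [X, R] = -X  -> 0
print(sp.simplify(comm(dY, dR, f) - dY(f)))    # [Y, R] = Y   -> 0
print(sp.simplify(comm(dY, dB, f) - dX(f)))    # [Y, B] = X   -> 0
print(sp.simplify(comm(dR, dB, f) - 2*dB(f)))  # [R, B] = 2B  -> 0
```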
## 4. Covariant transform
The covariant transform plays a significant role in various fields of mathematics and its applications [2, 6, 10, 21, 23, 31]. We present here some fundamental properties of the covariant transform which have implications for the metamorphism transform.
### Induced covariant transform
Let \(G\) be a group and let \(\rho\) be a unitary irreducible representation of the group \(G\) in a Hilbert space \(\mathcal{H}\). For a fixed unit vector \(\phi\in\mathcal{H}\), called here a fiducial vector (aka vacuum vector, ground state, mother wavelet, etc.), the covariant transform \(\mathcal{W}_{\phi}:\mathcal{H}\to\mathrm{L}(G)\) is [2, SS8.1; 6; 31]
\[[\mathcal{W}_{\phi}f](g)=\langle f,\rho(g)\phi\rangle,\qquad\text{ where }f\in\mathcal{H}\text{ and }g\in G. \tag{19}\]
The main property of (19) is that \(\mathcal{W}_{\phi}\) intertwines the representation \(\rho\) on \(\mathcal{H}\) and the left regular action \(\Lambda\) (8) on \(G\):
\[\mathcal{W}_{\phi}\circ\rho(g)=\Lambda(g)\circ\mathcal{W}_{\phi},\quad\text{ for all }g\in G. \tag{20}\]
A representation \(\rho\) is square-integrable if for some \(\phi\in\mathcal{H}\) the map \(W_{\phi}:\mathcal{H}\to\mathsf{L}_{2}(G,\mathrm{d}g)\) is unitary for a left Haar measure \(\mathrm{d}g\) on \(G\). Some representations are not square-integrable, but can still be treated by the following modification of the covariant transform due to Perelomov [31]. Let \(H\) be a closed subgroup of the group \(G\) with the corresponding homogeneous space \(X=G/H\). Let, for some character \(\chi\) of \(H\), a fiducial vector \(\phi\in\mathcal{H}\) be a joint eigenvector
\[\rho(h)\,\phi=\chi(h)\phi,\qquad\text{for all }h\in H. \tag{21}\]
Then, the respective covariant transform satisfies the covariant property, cf. (7):
\[[W_{\phi}f](gh)=\overline{\chi}(h)[W_{\phi}f](g).\]
Thus, the image space of \(W_{\phi}\) belongs to the induced representation by the character \(\chi\) of the subgroup \(H\). This prompts us to adapt the covariant transform to the space of functions on the homogeneous space \(X=G/H\). To this end, let us fix a section \(\mathbf{s}:X\to G\) and a fiducial vector \(\phi\in\mathcal{H}\) satisfying (21). The induced covariant transform from the Hilbert space \(\mathcal{H}\) to a space of functions \(\mathsf{L}_{\phi}(X)\) is
\[[W_{\phi}f](x)=\langle f,\rho(s(x))\phi\rangle,\quad\text{ where }x\in X.\]
Then, the induced covariant transform intertwines \(\rho\) and \(\bar{\rho}\)--an induced representation from the character \(\chi\) of the subgroup \(H\), cf. (20):
\[W_{\phi}^{\rho}\circ\rho(g)=\bar{\rho}(g)\circ W_{\phi}^{\rho},\qquad\text{ for all }g\in G. \tag{22}\]
In particular, the image space \(\mathsf{L}_{\phi}(G/H)\) of the induced covariant transform is invariant under \(\bar{\rho}\). Induced covariant transforms for the Heisenberg group [20] and the affine group [21] are the most familiar examples.
### Induced covariant transform of the group \(\mathbb{G}\)
In the same way as above, we can calculate the induced covariant transform of \(\mathbb{G}\). Consider the subgroup \(Z\) of \(\mathbb{G}\), which is \(Z=\{(s,0,0,0,1),s\in\mathbb{R}\}\). For the Schrodinger type representation (17), any function \(\phi\in\mathsf{L}_{2}(\mathbb{R})\) satisfies the eigenvector condition \(\rho(s,0,0,0,1)\phi=\mathrm{e}^{2\pi\mathrm{i}hs}\phi\) with the character \(\chi(s,0,0,0,1)=\mathrm{e}^{-2\pi\mathrm{i}hs}\), cf. (21). Thus, the respective homogeneous space is \(\mathbb{G}/Z\simeq\mathbb{R}_{+}^{4}\) and we take the above section \(\mathbf{s}:\mathbb{G}/Z\to\mathbb{G}:\mathbf{s}(x,y,b,r)=(0,x,y,b,r)\) (15). Then, the induced covariant transform is
\[\begin{split}[W_{\phi}f](x,y,b,r)&=\langle f,\rho (\mathbf{s}(x,y,b,r))\phi\rangle\\ &=\langle f,\rho(0,x,y,b,r)\phi\rangle\\ &=\int_{\mathbb{R}}f(u)\,\overline{\rho(0,x,y,b,r)\,\phi(u)}\, \mathrm{d}u\\ &=\int_{\mathbb{R}}f(u)\,\mathrm{e}^{-2\pi\mathrm{i}h(x(u-y)-b(u- y)^{2}/2)}\,r^{\frac{1}{2}}\,\overline{\phi}(r(u-y))\,\mathrm{d}u\\ &=\sqrt{r}\int_{\mathbb{R}}f(u)\,\mathrm{e}^{-2\pi\mathrm{i}h(x( u-y)-b(u-y)^{2}/2)}\,\overline{\phi}(r(u-y))\,\mathrm{d}u.\end{split} \tag{23}\]
From (22), \(W_{\phi}\) intertwines the Schrodinger type representation (17) with quasi-regular (16).
The last integral in (23) is a composition of five unitary operators \(\mathsf{L}_{2}(\mathbb{R}^{2})\to\mathsf{L}_{2}(\mathbb{R}^{2})\) applied to a function \(F(y,u)=f(y)\overline{\phi}(u)\) in the space \(\mathsf{L}_{2}(\mathbb{R})\otimes\mathsf{L}_{2}(\mathbb{R})\simeq\mathsf{L}_{2 }(\mathbb{R}^{2})\):
1. The unitary operator \(R:\mathsf{L}_{2}(\mathbb{R}^{2})\to\mathsf{L}_{2}(\mathbb{R}^{2})\) based on the dilation \[R:F(y,u)\to\sqrt{r}\,F(y,ru),\qquad\text{where }r>0.\]
2. The change of variables \(T:\mathsf{L}_{2}(\mathbb{R}^{2})\to\mathsf{L}_{2}(\mathbb{R}^{2})\) \[T:F(y,u)\to F(u,u-y).\]
3. The operator of multiplication by the unimodular function \(\psi_{b}(y,u)=\operatorname{e}^{\pi i\hbar b(u-y)^{2}}\), \[M_{b}:F(y,u)\to\operatorname{e}^{\pi i\hbar b(u-y)^{2}}F(y,u),\qquad\text{ where }b\in\mathbb{R}.\]
4. The partial Fourier transform \(u\to x\) in the second variable \[[\mathcal{F}_{2}F](y,x)=\int_{\mathbb{R}}F(y,u)\operatorname{e}^{-2\pi i\hbar x \operatorname{u}}du.\]
5. The multiplication \(M\) by the unimodular function \(\operatorname{e}^{2\pi i\hbar x\operatorname{y}}\).
Thus, we can write \(\mathcal{W}_{\phi}\) as
\[[\mathcal{W}_{\phi}f](x,y,b,r)=[(M\circ\mathcal{F}_{2}\circ M_{b}\circ T\circ R )\ F](x,y), \tag{24}\]
and obtain
**Proposition 4.1**.: _For a fixed \(r_{0}\in\mathbb{R}_{+}\) and \(b_{0}\in\mathbb{R}\), the map \(f\otimes\overline{\phi}\to[\mathcal{W}_{\phi}f](\cdot,\cdot,b_{0},r_{0})\) is a unitary operator from \(L_{2}(\mathbb{R})\otimes L_{2}(\mathbb{R})\) onto \(L_{2}(\mathbb{R}^{2})\)._
Also, the induced covariant transform preserves the Schwartz space, that is, if \(f,\phi\in\mathcal{S}(\mathbb{R})\) then \(\mathcal{W}_{\phi}f(\cdot,\cdot,b_{0},r_{0})\in\mathcal{S}(\mathbb{R}^{2})\). This is because \(\mathcal{S}(\mathbb{R}^{2})\) is invariant under all five components of \(\mathcal{W}_{\phi}f\) in (24) listed above.
Note, that the induced covariant transform (23) does not define a square-integrable function on \(\mathbb{G}/\mathcal{Z}\sim\mathbb{R}_{+}^{4}\). To discuss unitarity we need to introduce a suitable inner product. In general we can start from a probability measure \(\mu\) on \(\mathbb{R}_{+}^{2}\), that is \(\int_{\mathbb{R}_{+}^{2}}d\mu=1\). Then we define the inner product
\[\langle f,g\rangle_{\mu}=\int_{\mathbb{R}_{+}^{4}}f(x,y,b,r)\,\overline{g(x,y, b,r)}\,\frac{\hbar\,dx\,dy\,d\mu(b,r)}{\sqrt{2r}}, \tag{25}\]
for \(f,g\in L_{\phi}(\mathbb{R}_{+}^{4})\). The factor \(\hbar\) in the measure \(\frac{\hbar\,dx\,dy}{\sqrt{2r_{0}}}\) makes it dimensionless, see discussion of this in [3, 15]. Important particular cases of probability measures parametrised by \((b_{0},r_{0})\in\mathbb{R}_{+}^{2}\) are
\[d\mu_{(b_{0},r_{0})}(b,r)=\delta(b-b_{0})\,\delta(r-r_{0})\,db\,dr\,, \tag{26}\]
where \(\delta(t)\) is the Dirac delta. The respective inner products becomes:
\[\langle f,g\rangle_{(b_{0},r_{0})}=\int_{\mathbb{R}^{2}}f(x,y,b_{0},r_{0})\, \overline{g(x,y,b_{0},r_{0})}\,\frac{\hbar\,dx\,dy}{\sqrt{2r_{0}}}\,. \tag{27}\]
From now on we consider \(L_{\phi}(\mathbb{R}_{+}^{4})\) as a Hilbert space with the inner product (25) or specifically (27). The respective norms are denoted by \(\|\cdot\|_{\mu}\) and \(\|\cdot\|_{(b_{0},r_{0})}\).
Using the above inner product, we can derive from Proposition 4.1 the following orthogonality relation:
**Corollary 4.2**.: _Let \(f,g,\phi,\psi\in L_{2}(\mathbb{R})\), then_
\[\langle\mathcal{W}_{\phi}f,\mathcal{W}_{\psi}g\rangle_{\mu}=\langle f,g\rangle \,\overline{\langle\phi,\psi\rangle},\]
_for any probability measure \(\mu\), in particular (26) with fixed \((b_{0},r_{0})\in\mathbb{R}_{+}^{2}\)._
**Corollary 4.3**.: _Let \(\phi\in L_{2}(\mathbb{R})\) have a unit norm. Then, the induced covariant transform \(\mathcal{W}_{\phi}\) is an isometry from \(L_{2}(\mathbb{R})\) to \(L_{\phi}(\mathbb{R}_{+}^{4})\) and its inverse is given by the adjoint operator--contravariant transform:_
\[f(u)=\int_{\mathbb{R}_{+}^{4}}F(x,y,b,r)\,[\rho(\mathbf{s}(x,y,b,r))\phi](u)\, \frac{\hbar\,dx\,dy\,d\mu(b,r)}{\sqrt{2r}}, \tag{28}\]
_where \(F\in L_{\phi}(\mathbb{R}_{+}^{4})\). In particular:_
\[f(u)=\int_{\mathbb{R}^{2}}F(x,y,b_{0},r_{0})\,[\rho(\mathbf{s}(x,y,b_{0},r_{0}) )\phi](u)\,\frac{\hbar\,dx\,dy}{\sqrt{2r_{0}}}. \tag{29}\]
Proof.: For \(f\in L_{2}(\mathbb{R})\), we have
\[\|f\|_{L_{2}(\mathbb{R})}=\|f\otimes\overline{\phi}\|_{L_{2}(\mathbb{R}^{2})}=\|\mathcal{W}_{\phi}f\|_{\mu},\]
which follows from the isometry \(L_{2}(\mathbb{R}^{2})\to L_{2}(\mathbb{R}^{2})\) established in Prop. 4.1. Then the verification of formulae (28)-(29) is a technical exercise.
A reader may note that (29) with \(\phi(u)=\sqrt[4]{2}e^{-\pi\hbar\,u^{2}}\) is essentially the inverse Fock-Segal-Bargmann transform.
## 5. Image spaces of the covariant transforms
Clearly, not every function on \(\mathbb{R}^{4}_{+}\) is a covariant transform (23) of a function from \(L_{2}(\mathbb{R})\). In this section we discuss the image space of the covariant transform.
### Right shifts and covariant transform
Let \(R(g)\) be the right regular representation of the group \(G\), which acts on the functions defined in the group \(G\) as follows:
\[R(g):f(g^{\prime})\to f(g^{\prime}\,g),\qquad\text{where }g\in G.\]
In contrast to the intertwining property of the covariant transform for the left regular representation (20), the right shift satisfies the relation
\[R(g)[\mathcal{W}_{\phi}f](g^{\prime}) =[\mathcal{W}_{\phi}f](g^{\prime}\,g)\] \[=\langle f,\rho(g^{\prime}\,g)\phi\rangle\] \[=\langle f,\rho(g^{\prime})\rho(g)\phi\rangle\] \[=[\mathcal{W}_{\rho(g)\,\phi}f](g^{\prime}).\]
That is, the covariant transform intertwines the right shift with the action of \(\rho\) on the fiducial vector \(\phi\). Therefore, we obtain the following result, which plays an important role in exploring the nature of the image space of the covariant transform.
**Corollary 5.1**.: _[_16_]_ _Let \(G\) be a Lie group with a Lie algebra \(g\) and \(\rho\) be a representation of \(G\) in a Hilbert space \(\mathcal{H}\). Let a fiducial vector \(\phi\) be a null-solution, \(A\phi=0\), for the operator \(A=\sum_{j}a_{j}d\rho^{X_{j}}\), where \(d\rho^{X_{j}}\) are the derived representation of some \(X_{j}\in g\) and \(a_{j}\) are constants. Then, for any \(f\in\mathcal{H}\) the wavelet transform \([\mathcal{W}_{\phi}f](g)=\langle f,\rho(g)\phi\rangle\) satisfies_
\[D(W_{\phi}f)=0,\quad\text{ where }\quad D=\sum_{j}\overline{a}_{j}\mathcal{L}^{ X_{j}}.\]
_Here \(\mathcal{L}^{X_{j}}\) are the left invariant fields (Lie derivatives) (13) on \(G\) corresponding to \(X_{j}\)._
Illustrative examples are the classical spaces of analytical functions: the Fock-Segal-Bargmann space and the Hardy space, see [17, 22] for details.
_Remark 5.2_.: It is straightforward to extend the result of Cor. 5.1 from a linear combination of elements in the Lie algebra \(g\) to an arbitrary polynomial from the enveloping algebra of \(g\) or even to more general functions/distributions, cf. [17, Cor. 5.8].
### Characterisation of the image space for the group \(\mathbb{G}\)
The above Cor. 5.1 can be used to construct covariant transforms with desired properties through purposely selected fiducial vectors. We are illustrating this for the group \(\mathbb{G}\). First,
we need to compute the Lie derivatives (13) reduced to the representation space of the quasi-regular representation (16), see [24]:
\[\begin{split}\mathcal{L}^{X}&=r\,\partial_{x},\qquad\mathcal{L}^{B}=r^{2}\,\partial_{b},\\ \mathcal{L}^{Y}&=\tfrac{1}{r}(-2\pi\mathrm{i}\hbar x\,\mathrm{I}-b\,\partial_{x}+\partial_{y}),\qquad\mathcal{L}^{R}=r\,\partial_{r},\\ \mathcal{L}^{S}&=-2\pi\mathrm{i}\hbar\,\mathrm{I}.\end{split} \tag{30}\]
One can check that those Lie derivatives make a representation of the Lie algebra of the group \(\mathbb{G}\)[24].
Now we are looking for a covariant transform \(\mathcal{W}_{\phi}:\mathsf{L}_{2}(\mathbb{R})\to\mathsf{L}_{2}(\mathbb{R}_{+} ^{4})\) with the image space annihilated by a generic linear combination of Lie derivatives (30). To this end the fiducial vector \(\phi\) shall be a null solution of the following differential operator composed from the derived Schrodinger type representation (18)
\[\begin{split}&\mathrm{d}\rho^{\mathrm{i}E_{s}S+E_{x}X+\mathrm{i}E_{y}Y+\mathrm{i}E_{b}B+E_{r}R}\\ &\qquad=(E_{r}u-\mathrm{i}E_{y})\frac{d}{du}+(\pi\hbar(E_{b}u^{2}+2\mathrm{i}E_{x}u-2E_{s})+\tfrac{1}{2}E_{r})\mathrm{I}.\end{split} \tag{31}\]
where \(E_{s}\), \(E_{x}\), \(E_{y}\), \(E_{b}\) and \(E_{r}\) are arbitrary real coefficients. This equation has two different solutions depending on a value of \(E_{r}\). If \(E_{r}=0\) (which requires \(E_{y}\neq 0\) for non-trivial operator (31)) then a generic solution of (31) is [24]
\[\phi_{0}(u)=C\,\exp\biggl{(}\pi\hbar\left(2\mathrm{i}\frac{E_{s}}{E_{y}}u+\frac{E_{x}}{E_{y}}u^{2}-\mathrm{i}\frac{E_{b}}{3E_{y}}u^{3}\right)\biggr{)}\, \tag{32}\]
where \(E_{x}<0\) is required for square integrability of \(\phi_{0}\) and the constant \(C\) is determined from the normalisation condition \(\|\phi_{0}\|_{2}=1\). We have here a sort of Airy beam [8], which was employed in [4] in the context of the shear group, i.e. in the absence of \(\mathrm{d}\rho^{R}\) in (31).
For \(E_{r}\neq 0\) we find the generic solution in the form [24]:
\[\begin{split}\phi_{1}(u)&=C\,(E_{r}u-\mathrm{i}E_{y})^{-\frac{1}{2}+2\pi\hbar E_{s}/E_{r}-\pi\hbar E_{y}(2E_{x}E_{r}+E_{b}E_{y})/E_{r}^{3}}\\ &\qquad\times\exp\biggl{(}-\pi\hbar\left(\frac{\mathrm{i}(2E_{x}E_{r}+E_{b}E_{y})}{E_{r}^{2}}u+\frac{E_{b}}{2E_{r}}u^{2}\right)\biggr{)}.\end{split} \tag{33}\]
Again, for \(\phi_{1}\in\mathsf{L}_{2}(\mathbb{R})\) we need \(\frac{\hbar E_{b}}{E_{r}}>0\) and a proper normalising constant \(C\).
A detailed study of all arising covariant transforms still awaits further work. Here we concentrate on some special aspects which appear in this extended group setting for the most traditional fiducial vector--the Gaussian. First, we note that it stems from both solutions (32) and (33):
* For \(E_{r}=0\), letting \(E_{s}=E_{b}=0\), \(E_{x}=-1\) and \(E_{y}=1\) with \(C=\sqrt[4]{2}\) in \(\phi_{0}\) (32) produces (34) \[\phi(u)=\sqrt[4]{2}\mathrm{e}^{-\pi\hbar\,u^{2}}\quad\text{ with the identity }\quad d\rho^{-X+iY}\,\phi=0,\] i.e. \(\phi\) is annihilated by the Heisenberg group part of \(\mathbb{G}\).
* For \(E_{r}=1\), substitution of \(E_{s}=\frac{1}{4\pi\hbar}\), \(E_{x}=E_{y}=0\) and \(E_{b}=2\) with \(C=\sqrt[4]{2}\) into the vacuum vector \(\phi_{1}\) (33) again produces (35) \[\phi(u)=\sqrt[4]{2}\mathrm{e}^{-\pi\hbar\,u^{2}}\quad\text{ with the identity }\quad d\rho^{\mathrm{i}/(4\pi\hbar)S+2\mathrm{i}B+R}\,\phi=0,\] i.e. \(\phi\) is also annihilated by the affine group part of \(\mathbb{G}\).
Let us introduce the covariant transform \(\mathcal{W}_{\phi}:\mathsf{L}_{2}(\mathbb{R})\to\mathsf{L}_{2}(\mathbb{R}_{+}^{4})\) (23) with the fiducial vector \(\phi\) (34)-(35):
\[\begin{split}&[\mathcal{W}_{\phi}\mathsf{f}](x,y,\mathsf{b},\mathsf{r})=\sqrt{\mathsf{r}}\int_{\mathbb{R}}f(\mathsf{u})\,\mathrm{e}^{-2\pi\mathrm{i}\hbar(x(u-y)-\mathsf{b}(u-y)^{2}/2)}\,\overline{\phi}(\mathsf{r}(\mathsf{u}-\mathsf{y}))\,\mathrm{d}\mathsf{u}\\ &=\sqrt[4]{2\mathsf{r}^{2}}\int_{\mathbb{R}}f(\mathsf{u})\,\mathrm{e}^{-2\pi\mathrm{i}\hbar(x(u-y)-\mathsf{b}(u-y)^{2}/2)}\,\mathrm{e}^{-\pi\hbar\mathsf{r}^{2}(\mathsf{u}-\mathsf{y})^{2}}\,\mathrm{d}\mathsf{u}\\ &=\sqrt[4]{2\mathsf{r}^{2}}\int_{\mathbb{R}}f(\mathsf{u})\,\exp\left(-\pi\hbar\left((\mathsf{r}^{2}-\mathrm{i}\mathsf{b})(\mathsf{u}-\mathsf{y})^{2}+2\mathrm{i}(\mathsf{u}-\mathsf{y})\mathsf{x}\right)\right)\,\mathrm{d}\mathsf{u}\,.\end{split} \tag{36}\]
It was introduced in [24, 26] and called metamorphism. We also use the notation \(\widetilde{\mathsf{f}}\coloneqq\mathcal{W}_{\phi}\mathsf{f}\) from [26], which can now be explained as the double covariant transform for the Heisenberg and the affine groups simultaneously. The image space \(\mathsf{L}_{\phi}(\mathbb{R}_{+}^{4})\) of the metamorphism is a subspace of the square-integrable functions \(\mathsf{L}_{2}(\mathbb{R}_{+}^{4},\|\cdot\|_{\mu})\), see (25).
_Remark 5.3_.: Another feature of the Gaussian as a fiducial vector is that an extension of the group \(\mathbb{G}\) to the full Schrodinger group does not add a value. Indeed, the Iwasawa decomposition \(\mathrm{SL}_{2}(\mathbb{R})=\mathrm{ANK}\)[18, SS1.1; 27, SSIII.1] represents \(\mathrm{SL}_{2}(\mathbb{R})\) as the product of the affine subgroup \(\mathrm{AN}\) and the compact subgroup \(\mathsf{K}\). Yet, the Gaussian is invariant under the action of the phase-space rotations produced by \(\mathsf{K}\). Thus, we get the same set of coherent states from the actions of the group \(\mathbb{G}\) and the Schrodinger group.
From the annihilation property (34) by the derived representation \(\mathrm{d}\rho^{-X+\mathrm{i}Y}\) and Cor. 5.1 we conclude that \(\mathcal{L}^{-X-\mathrm{i}Y}\,\widetilde{\mathsf{f}}=0\) for any \(\mathsf{f}\). Using (30) we find [24]:
\[\begin{split}\mathcal{C}_{1}&=-\mathcal{L}^{X}- \mathrm{i}\mathcal{L}^{Y}\\ &=\frac{1}{\mathsf{r}}\left((\mathsf{r}^{2}-\mathrm{i}\mathsf{b} )\,\partial_{x}+\mathrm{i}\,\partial_{y}+2\mathrm{x}\hbar\pi\,\mathsf{I} \right).\end{split} \tag{37}\]
The operator \(\mathcal{C}_{1}\) is called the first Cauchy-Riemann type operator. Similarly, from (35) we conclude that \(\mathcal{C}_{2}\widetilde{\mathsf{f}}=0\) for the second Cauchy-Riemann type operator [24]:
\[\begin{split}\mathcal{C}_{2}&=-\frac{\mathrm{i}}{4 \pi\hbar}\mathcal{L}^{\mathrm{S}}-2\mathrm{i}\mathcal{L}^{\mathrm{B}}+ \mathcal{L}^{\mathrm{R}}\\ &=2\mathsf{r}^{2}\,\partial_{\mathsf{b}}+\mathrm{i}\,\mathsf{r}\, \partial_{\mathsf{r}}-\frac{1}{2}\mathrm{i}\,\mathsf{I}\,.\end{split} \tag{38}\]
It is convenient to view operators \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) as the Cauchy-Riemann operators for the following complexified variables:
\[w=\mathsf{b}+\mathrm{i}\mathsf{r}^{2}\quad\text{ and }\quad z=\mathsf{x}+( \mathsf{b}+\mathrm{i}\mathsf{r}^{2})\mathsf{y}=\mathsf{x}+w\mathsf{y}\,. \tag{39}\]
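The annihilation of \(\widetilde{\mathsf{f}}\) by \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) can also be checked numerically. The following sketch (not from the paper) sets \(\hbar=1\), evaluates the last line of (36) by quadrature for an arbitrary test signal, and approximates the partial derivatives in (37)-(38) by central finite differences; the residuals should be tiny compared with \(|\widetilde{\mathsf{f}}|\):

```python
# Numerical check that C1 and C2 annihilate the metamorphism of a test signal.
import numpy as np

h = 1.0  # the parameter \hbar is set to 1 for this experiment

def f(u):
    return u * np.exp(-u**2)          # arbitrary smooth, rapidly decaying signal

ugrid = np.linspace(-12.0, 12.0, 48001)
du = ugrid[1] - ugrid[0]

def meta(x, y, b, r):
    """Metamorphism (36) of f, evaluated by direct quadrature on ugrid."""
    kernel = np.exp(-np.pi * h * ((r**2 - 1j*b) * (ugrid - y)**2
                                  + 2j * (ugrid - y) * x))
    return 2**0.25 * np.sqrt(r) * np.sum(f(ugrid) * kernel) * du

x0, y0, b0, r0 = 0.3, 0.2, 0.5, 1.1
eps = 1e-4

def dmeta(i):
    """Central finite difference of meta with respect to its i-th argument."""
    p = [x0, y0, b0, r0]
    q = list(p)
    p[i] += eps
    q[i] -= eps
    return (meta(*p) - meta(*q)) / (2 * eps)

F = meta(x0, y0, b0, r0)
C1 = ((r0**2 - 1j*b0) * dmeta(0) + 1j * dmeta(1) + 2*np.pi*h*x0*F) / r0
C2 = 2*r0**2 * dmeta(2) + 1j*r0 * dmeta(3) - 0.5j * F

print(abs(F), abs(C1), abs(C2))  # |C1|, |C2| should be negligible next to |F|
```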
_Remark 5.4_.: As was pointed out in [22], the analyticity conditions (37)-(38) are consequences of minimal uncertainty properties of the fiducial vector. The first condition (37) follows from the celebrated Heisenberg-Kennard uncertainty relation [10, 22]
\[\Delta_{\phi}(\mathsf{M})\cdot\Delta_{\phi}(\mathsf{D})\geqslant\frac{\mathsf{ h}}{2}\]
for the coordinate \(\mathsf{M}=\mathrm{d}\rho^{\mathrm{i}X}\) and momentum \(\mathsf{D}=\mathrm{d}\rho^{\mathrm{i}Y}\) observables in the Schrodinger representation (18). The second condition (38) is due to the similar minimal joint uncertainty of the Gaussian state for the Euler operator \(\mathrm{d}\rho^{1/(4\pi\hbar)\mathrm{S}-\mathrm{i}R}=-\mathrm{i}\mathsf{u}\,\frac{\mathrm{d}}{\mathrm{d}\mathsf{u}}\) and the quadratic potential \(\mathrm{d}\rho^{\mathrm{i}\mathrm{B}}=\pi\hbar\,\mathsf{u}^{2}\mathsf{I}\).
Besides the two operators \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) which are based on the special properties (34)-(35) of the Gaussian we can note a couple of polynomial identities in the Schrodinger type representation of the Lie algebra \(\mathfrak{g}\). Indeed, using (18) one can check:
\[\big{(}\mathrm{d}\rho^{\mathsf{X}}\big{)}^{2}+2\,\mathrm{d}\rho^{\mathsf{S}}\, \mathrm{d}\rho^{\mathsf{B}}=0,\quad\text{ and }\quad\mathrm{d}\rho^{\mathsf{X}}\, \mathrm{d}\rho^{\mathsf{Y}}+\mathrm{d}\rho^{\mathsf{Y}}\,\mathrm{d}\rho^{ \mathsf{X}}+2\,\mathrm{d}\rho^{\mathsf{S}}\,\mathrm{d}\rho^{\mathsf{R}}=0. \tag{40}\]
These relations express the affine subalgebra generators \(\mathsf{B}\) and \(\mathsf{R}\) through the Heisenberg ones \(\mathsf{X}\) and \(\mathsf{Y}\). That is related to the so-called quadratic algebra concept [12, SS2.2.4]. Because the operators in (40) annihilate any function, including the fiducial vector of the metamorphism, Rem. 5.2 implies that the image space \(\mathsf{L}_{\phi}(\mathbb{R}^{4}_{+})\) is annihilated by the second-order differential operators [24]:
(41) \[\mathcal{S}_{1}=\big{(}\mathcal{L}^{\mathsf{X}}\big{)}^{2}+2\,\mathcal{L}^{\mathsf{S}}\,\mathcal{L}^{\mathsf{B}}=r^{2}\left(\partial_{\mathrm{xx}}^{2}-4\pi\mathrm{i}\hbar\,\partial_{\mathsf{b}}\right)\,;\] and (42) \[\mathcal{S}_{2}=\mathcal{L}^{\mathsf{X}}\,\mathcal{L}^{\mathsf{Y}}+\mathcal{L}^{\mathsf{Y}}\,\mathcal{L}^{\mathsf{X}}+2\,\mathcal{L}^{\mathsf{S}}\,\mathcal{L}^{\mathsf{R}}=-4\pi\mathrm{i}\hbar\,r\,\partial_{\mathsf{r}}-2b\,\partial_{\mathrm{xx}}^{2}+2\,\partial_{\mathrm{xy}}^{2}-4\pi\mathrm{i}\hbar\,x\,\partial_{x}-2\pi\mathrm{i}\hbar\,\mathrm{I}\,.\]
Although the Gaussian and the metamorphism based on it are genuinely remarkable in many respects, other covariant transforms (23) with fiducial vectors (32) and (33) deserve further attention as well.
## Acknowledgments
The authors are grateful to Prof. Alexey Bolsinov for useful comments and suggestions on this work. The first named author was sponsored by Albaha University (SA).
|
2307.01991 | Geodesic Equations on asymptotically locally Euclidean Kähler
manifolds | We solve the geodesic equation in the space of K\"ahler metrics under the
setting of asymptotically locally Euclidean (ALE) K\"ahler manifolds and we
prove global $\mathcal{C}^{1,1}$ regularity of the solution. Then, we relate
the solution of the geodesic equation to the uniqueness of scalar-flat ALE
metrics. To this end, we study the asymptotic behavior of
$\varepsilon$-geodesics at spatial infinity. Under the assumption that the
Ricci curvature of a reference ALE K\"ahler metric is non-positive, we prove
convexity of the Mabuchi $K$-energy along $\varepsilon$-geodesics. However, we will also
prove that on the line bundle $\mathcal{O}(-k)$ over
$\mathbb{C}\mathbb{P}^{n-1}$ with $n \geq 2$ and $k \neq n$, no ALE K\"ahler
metric can have non-positive (or non-negative) Ricci curvature. | Qi Yao | 2023-07-05T02:47:22Z | http://arxiv.org/abs/2307.01991v2 | # Geodesic equations on asymptotically locally Euclidean Kahler manifolds
###### Abstract.
We solve the geodesic equation in the space of Kahler metrics under the setting of asymptotically locally Euclidean (ALE) Kahler manifolds and we prove global \(\mathcal{C}^{1,1}\) regularity of the solution. Then, we relate the solution of the geodesic equation to the uniqueness of scalar-flat ALE metrics. To this end, we study the asymptotic behavior of \(\varepsilon\)-geodesics at spatial infinity. Under the assumption that the Ricci curvature of a reference ALE Kahler metric is non-positive, we prove convexity of the Mabuchi \(K\)-energy along \(\varepsilon\)-geodesics. However, we will also prove that on the line bundle \(\mathcal{O}(-k)\) over \(\mathbb{CP}^{n-1}\) with \(n\geq 2\) and \(k\neq n\), no ALE Kahler metric can have non-positive (or non-negative) Ricci curvature.
## 1. Introduction
In this paper, we study the geodesic equation in the setting of ALE Kahler manifolds, assuming relatively weak fall-off conditions. Let \((X,J,g)\) be a complete non-compact Kahler manifold of complex dimension \(n\) \((n\geq 2)\). We say \((X,J,g)\) is ALE if there is a compact subset \(K\subseteq X\) such that \(\psi:X\backslash K\to(\mathbb{C}^{n}\backslash B_{R})/\Gamma\) is a diffeomorphism, where \(B_{R}\) is a closed ball in \(\mathbb{C}^{n}\) with radius \(R\) and \(\Gamma\) is a finite subgroup of \(U(n)\) (any ALE Kahler manifold has only one end, according to [19, Propositions 1.5, 3.2]), and the metric \(g\) satisfies the following condition on the end \(X\setminus K\):
* The metric \(g\) is asymptotic to the Euclidean metric \(\delta_{ij}\) at the end with decay rate \(-\tau\) for some \(\tau>n-1\), i.e., for \(i=0,1,\ldots,k\), \[g_{ij}=\delta_{ij}+O(r^{-\tau}),\qquad|\nabla^{i}((\psi^{-1})^{*}g)|_{g_{0}}=O (r^{-\tau-i}).\] (1.1)
The fall-off condition \(\tau>n-1\) is the weakest decay rate to make the ADM mass coordinate-invariant in general, referring to Bartnik [4] and Chrusciel [11].
One of the difficulties to build up a general theory of scalar-flat Kahler metrics in the ALE setting is that the decay rate of such metrics to their asymptotic models is not good enough compared to the Ricci-flat case. For instance, consider the family of scalar-flat Kahler metric constructed on \(\mathcal{O}_{\mathbb{CP}^{1}}(-k)\) by LeBrun [23],
\[g=\frac{ds^{2}}{1+A/s^{2}+B/s^{4}}+s^{2}\Big{[}\sigma_{1}^{2}+\sigma_{2}^{2}+ \Big{(}1+\frac{A}{s^{2}}+\frac{B}{s^{4}}\Big{)}\sigma_{3}^{2}\Big{]},\]
where \(A\), \(B\) are constants, \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\) are the three invariant one-forms on the \(3\)-sphere and \(s\) is a radial function on \(\mathcal{O}(-k)\). It can be checked that \(g-g_{euc}=O(r^{-2})\), where \(r\) denotes geodesic distance from a fixed basepoint (see the short check after this paragraph), indicating that the Kahler potential function should be of log growth. In Arezzo-Pacard [2, Lemma 7.2], an expansion theorem is proved for scalar-flat Kahler metrics in the complement of \(B_{\Gamma}=\{z\in\mathbb{C}^{n}/\Gamma:|z|\leq 1\}\) in \(\mathbb{C}^{n}/\Gamma\), where \(\Gamma\) is a finite subgroup of \(U(n)\), assuming that the \(dd^{c}\)-lemma holds in this situation. In [30], the author proved a \(dd^{c}\) lemma and an expansion theorem in the setting of asymptotically conical (AC) Kahler manifolds. Here, we only need a weaker version of this theorem in the setting of ALE Kahler manifolds.
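As a quick sanity check on this decay rate (this is only an illustration and not part of any argument), one can expand the deviation of LeBrun's metric from its flat model, i.e. the \(A=B=0\) member of the family, in the coframe \(\{ds,s\sigma_{1},s\sigma_{2},s\sigma_{3}\}\), which is orthonormal for the flat model \(ds^{2}+s^{2}(\sigma_{1}^{2}+\sigma_{2}^{2}+\sigma_{3}^{2})\); the radial variable \(s\) is comparable to the geodesic distance \(r\) at infinity. A minimal SymPy sketch of this expansion:

```python
import sympy as sp

s, A, B = sp.symbols('s A B', positive=True)
f = 1 + A/s**2 + B/s**4

# Components of g - g_flat in the coframe {ds, s*sigma_1, s*sigma_2, s*sigma_3},
# which is orthonormal for the flat model ds^2 + s^2*(sigma_1^2 + sigma_2^2 + sigma_3^2).
h_ss = 1/f - 1   # coefficient of ds (x) ds
h_33 = f - 1     # coefficient of (s*sigma_3) (x) (s*sigma_3)

# Both components decay like s^(-2), consistent with |g - g_euc|_{g_euc} = O(r^{-2}).
print(sp.series(h_ss, s, sp.oo, 6))   # -A/s**2 + (A**2 - B)/s**4 + ...
print(sp.series(h_33, s, sp.oo, 6))   #  A/s**2 + B/s**4
```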
**Theorem 1.1**.: _(Yao 2022) Let \((X,J)\) be an ALE Kahler manifold asymptotic to \(\mathbb{C}^{n}/\Gamma\). Let \(\omega_{1}\), \(\omega_{2}\) be Kahler forms in the same Kahler class of \((X,J)\) with the corresponding metrics satisfying (1.1) and such that the scalar curvatures of \(\omega_{1}\) and \(\omega_{2}\) are equal, \(R_{1}\equiv R_{2}\). Then_
\[\omega_{2}=\omega_{1}+dd^{c}\varphi,\quad\text{ with the potential }\varphi\in \mathcal{C}_{2-2\tilde{\tau}}^{\infty} \tag{1.2}\]
_for some \(\tilde{\tau}>n-1\) depending on \((n,\tau)\)._
Let \(\omega\) be the corresponding Kahler form of \(g\). According to Theorem 1.1, given two Kahler forms \(\omega_{1},\omega_{2}\in[\omega]\), if the corresponding ALE Kahler metrics \(g_{1},g_{2}\) satisfy the decay condition (1.1) and that the scalar curvatures of \(g_{1}\) and \(g_{2}\) are identically equal, \(R(g_{1})\equiv R(g_{2})\), then \(\omega_{1}-\omega_{2}=dd^{c}f\) and \(f\) decays at infinity with higher rate \(-\gamma\), with \(\gamma=2\tilde{\tau}-2\), for some \(\tilde{\tau}>n-1\). Hence, for prescribed scalar curvature problem, we consider the following restricted weighted Kahler potential space,
\[\mathcal{H}_{-\gamma}(\omega)=\{\varphi\in\hat{\mathcal{C}}^{\infty}_{-\gamma} :\omega_{\varphi}=\omega+dd^{c}\varphi>0\}\quad(\gamma>2n-4\geq 0),\]
where the class of functions, \(\hat{\mathcal{C}}^{\infty}_{s}\), is defined as follows
\[\mathcal{C}^{\infty}_{s} =\{f\in\mathcal{C}^{\infty}(X):|\nabla^{j}_{g_{0}}f|_{g_{0}}=O(r ^{s-j})\,\,\text{for all}\,\,\,j\geq 0\},\] \[\hat{\mathcal{C}}^{\infty}_{s} =\{\hat{f}\in\mathcal{C}^{\infty}(X):\hat{f}=f+c,\,\,\text{for $f\in \mathcal{C}^{\infty}_{s}$ and $c$ is a constant}\}.\]
Define \(\omega_{0}=\omega+dd^{c}\psi_{0}\), \(\omega_{1}=\omega+dd^{c}\psi_{1}\), for any two boundary data \(\psi_{0,1}\in\mathcal{H}_{-\gamma}(\omega)\). Also introduce the linear reference path \(\psi(t)=(1-t)\psi_{0}+t\psi_{1}\) in \(\mathcal{H}_{-\gamma}(\omega)\). Another path \(\varphi(t)\) in \(\mathcal{H}_{-\gamma}(\omega)\) with the same endpoints \(\psi_{0},\psi_{1}\) is called a geodesic in \(\mathcal{H}_{-\gamma}(\omega)\) if
\[\ddot{\varphi}(t)-\frac{1}{2}|\nabla_{\omega_{\varphi(t)}}\dot{\varphi}(t)|^{ 2}_{\omega_{\varphi(t)}}=0. \tag{1.3}\]
As observed by Donaldson [15] and Semmes [26], the geodesic equation is equivalent to a homogeneous complex Monge-Ampere equation in the product space \(X\times\Sigma\), where \(\Sigma\cong[0,1]\times S^{1}\) can be embedded as an annulus in \(\mathbb{C}\). Notice that any path \(\varphi(t)\) of functions on \(X\) can be viewed as a function \(\Phi\) on \(X\times\Sigma\) via \(\Phi(\cdot,t,e^{is})=\varphi(t)\). Let \(\Omega_{\Phi}=p^{*}\omega+dd^{c}\Phi\), where \(p\) is the projection from \(X\times\Sigma\) to \(X\) and \(dd^{c}\Phi\) is computed on \(X\times\Sigma\). Then the equation (1.3) can be rewritten as follows:
\[\Omega^{n+1}_{\Phi}=0, \tag{1.4}\] \[\Omega_{\Phi}\geq 0,\] (1.5) \[\Phi|_{t=0,1}=\psi_{0,1}. \tag{1.6}\]
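To make this equivalence explicit, we recall the standard pointwise identity behind it, going back to Semmes [26] and Donaldson [15]; the positive dimensional constant \(c_{n}\) below depends on the chosen normalization of \(d^{c}\) and plays no role in what follows. For a path \(\varphi(t)\) with \(\omega_{\varphi(t)}>0\), viewed as an \(s\)-independent function \(\Phi\) on \(X\times\Sigma\),

\[\Omega_{\Phi}^{n+1}=c_{n}\Big{(}\ddot{\varphi}-\frac{1}{2}|\nabla_{\omega_{\varphi(t)}}\dot{\varphi}|^{2}_{\omega_{\varphi(t)}}\Big{)}\,\omega_{\varphi(t)}^{n}\wedge\mathrm{i}\,dw\wedge d\bar{w},\]

so that, for such paths, \(\Omega_{\Phi}^{n+1}=0\) is precisely the geodesic equation (1.3).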
In [15], Donaldson proposed a program to attack the existence and uniqueness problems for canonical metrics by studying the geometric structure of the potential space \(\mathcal{H}\), where the geodesic equation plays a central role. In the case of compact Kahler manifolds, Chen [9] showed that for any \(\psi_{0}\), \(\psi_{1}\in\mathcal{H}\), the geodesic equation has a unique solution up to \(dd^{c}\)-regularity. Blocki [6] and He [18] gave direct calculations proving the gradient and Laplacian estimates. The full \(\mathcal{C}^{1,1}\) estimate was proved by Chu-Tosatti-Weinkove in [12]. In the other direction, Lempert-Vivas [24] and Darvas-Lempert [14] constructed counter-examples showing that \(dd^{c}\Psi\) is not continuous in general; hence the \(\mathcal{C}^{1,1}\) regularity is optimal. In [3], Auvray generalized the \(dd^{c}\)-regularity to singular cases (precisely, cusp singularities along simple normal crossings). The main theorem of Sections 2-5 generalizes the full \(\mathcal{C}^{1,1}\) estimate to ALE Kahler manifolds.
**Theorem A**.: _Let \(X\) be an ALE Kahler manifold and \(\psi_{0},\psi_{1}\in\mathcal{H}_{-\gamma}(\omega)\)\((\gamma>0)\). Then \(\psi_{0}\) and \(\psi_{1}\) can be connected by a \(\mathcal{C}^{1,1}\) geodesic \(\Phi\) solving (1.4), (1.5), (1.6). Moreover, there is a uniform constant \(C\) depending only on \(\|\psi_{0}\|_{\mathcal{C}^{1,1}(X,\omega)}\), \(\|\psi_{1}\|_{\mathcal{C}^{1,1}(X,\omega)}\) and on the geometry of \((X,\omega)\) such that_
\[\sup_{X\times\Sigma}\left(|\Phi|+|\nabla_{\Theta_{\Psi}}\Phi|_{\Theta_{\Psi}}+| \nabla^{2}_{\Theta_{\Psi}}\Phi|_{\Theta_{\Psi}}\right)\leq C. \tag{1.7}\]
_Here, \(\Theta_{\Psi}\) is a Kahler form on \(X\times\Sigma\) given by \(\Theta_{\Psi}=\Theta+dd^{c}\Psi\) with \(\Psi(\cdot,t,e^{is})=\psi(t)=(1-t)\psi_{0}+t\psi_{1}\) the linear path introduced above, and with \(\Theta=p^{*}\omega+Add^{c}t(t-1)\), where \(A>0\) is fixed depending only on \(\|\psi_{0}\|_{\mathcal{C}^{1,1}(X,\omega)}\), \(\|\psi_{1}\|_{\mathcal{C}^{1,1}(X,\omega)}\) such that \(\Theta_{\Psi}>0\)._
Then, we relate the solution of geodesic equation to the uniqueness of scalar-flat ALE Kahler metrics in each Kahler class. The main idea is to follow the framework of Chen [9] in the compact case, under the assumption that the Ricci curvature of the reference metric is non-positive. This was extended to the noncompact case with Poincare cusp ends by Auvray [3]. In the ALE case, it is first necessary to prove sufficient decay at infinity of solutions to the \(\varepsilon\)-geodesic equation.
In Section 6, we discuss the asymptotic behavior of \(\varepsilon\)-geodesics. Given any two functions
\[\psi_{0},\psi_{1}\in\mathcal{H}_{-\gamma}(\omega)=\{\varphi\in\hat{\mathcal{C}}_{ -\gamma}^{\infty}:\omega_{\varphi}=\omega+dd^{c}\varphi>0\}\quad(\gamma>0),\]
we set \(\psi(t)=(1-t)\psi_{0}+t\psi_{1}\) and let \(\Psi\) denote the corresponding function on \(X\times\Sigma\). We fix \(A\) large depending on \(\|\psi_{0}\|_{\mathcal{C}^{1,1}(X,\omega)}\), \(\|\psi_{1}\|_{\mathcal{C}^{1,1}(X,\omega)}\) such that \(\Theta_{\Psi}:=\Theta+dd^{c}\Psi\) is positive on \(X\times\Sigma\), where \(\Theta:=p^{*}\omega+Add^{c}t(t-1)\) with \(p:X\times\Sigma\to X\) the projection. Then, we introduce the following \(\varepsilon\)-geodesic equations
\[(E_{\varepsilon})\qquad\begin{cases}(\Theta+dd^{c}\Phi_{\varepsilon})^{n+1}=\upsilon(\varepsilon)\Theta_{\Psi}^{n+1},&\quad\text{in }X\times\Sigma,\\ \Theta+dd^{c}\Phi_{\varepsilon}>0,&\quad\text{in }X\times\Sigma,\\ \Phi_{\varepsilon}|_{t=0,1}=\psi_{0,1},&\quad\text{on }X\times\partial\Sigma,\end{cases}\]
where \(\upsilon(\varepsilon)\) is a smooth nonnegative function defined on \(X\times\Sigma\times[0,1]\) satisfying the following conditions
\[\begin{split}\upsilon(0)\equiv 0,\quad\upsilon(1)\equiv 1; \\ \upsilon(\varepsilon)>0,&\quad\text{for }\varepsilon\in(0,1];\\ C^{-1}\varepsilon\leq\upsilon(\varepsilon)\leq\min(C\varepsilon,1),&\quad\text{for } \varepsilon\in[0,1];\\ |\nabla^{k}\upsilon(y,\varepsilon)|\leq C\varepsilon r(y)^{-\varsigma-k},& \quad\text{for }(y,\varepsilon)\in X\times\Sigma\times[0,1],\quad k\geq 1\end{split} \tag{1.8}\]
where \(\varsigma\) is a real number with \(\varsigma\geq\gamma\). In particular, we are interested in the following case, obtained by taking
\[\upsilon(\varepsilon)=\varepsilon((1-\chi(\varepsilon))f+\chi(\varepsilon)), \tag{1.9}\]
where \(\chi\) is a smooth increasing function in \([0,1]\) equal to \(0\) (resp. \(1\)) in a neighborhood of \(0\) (resp. \(1\)) and \(f\) is defined as follows
\[f=A^{-1}\frac{\Theta^{n+1}}{\Theta_{\psi}^{n+1}}\in\mathcal{C}^{\infty}(X \times\Sigma), \tag{1.10}\]
and in this case \(|\nabla^{k}f|\leq Cr^{-\gamma-2-k}\), so that \(\varsigma=\gamma+2\). For \(\varepsilon\) small enough (so that \(\chi(\varepsilon)=0\) and \(\upsilon(\varepsilon)=\varepsilon f\)), \((E_{\varepsilon})\) can be written as
\[\Big{(}\ddot{\varphi}-\frac{1}{2}|\nabla_{\omega_{\varphi}}\dot{\varphi}|_{ \omega_{\varphi}}^{2}\Big{)}\omega_{\varphi}^{n}=\varepsilon\omega^{n}.\]
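To see where this fiberwise form comes from (again only up to positive dimensional constants depending on the normalization of \(d^{c}\)), note that \((p^{*}\omega)^{n+1}=0\) and \(\big{(}dd^{c}t(t-1)\big{)}^{2}=0\), so that

\[\Theta^{n+1}=\big{(}p^{*}\omega+A\,dd^{c}t(t-1)\big{)}^{n+1}=(n+1)A\,p^{*}\omega^{n}\wedge dd^{c}\big{(}t(t-1)\big{)}.\]

Hence, for small \(\varepsilon\), the right-hand side of \((E_{\varepsilon})\) equals \(\upsilon(\varepsilon)\Theta_{\Psi}^{n+1}=\varepsilon A^{-1}\Theta^{n+1}=\varepsilon(n+1)\,p^{*}\omega^{n}\wedge dd^{c}(t(t-1))\); comparing this with the fiberwise expansion of \((\Theta+dd^{c}\Phi_{\varepsilon})^{n+1}\) gives the displayed equation. In particular, the normalizing factor \(A^{-1}\) in (1.10) is exactly what cancels the constant \(A\) coming from \(\Theta\).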
Due to the positivity of the right hand side of \((E_{\varepsilon})\), it is well known that for every \(\varepsilon\in(0,1]\) there exists a solution \(\Phi_{\varepsilon}\in\bigcap_{k,\alpha}\mathcal{C}^{k,\alpha}\). We now prove:
**Theorem B**.: _Let \(\Phi_{\varepsilon}\) be the \(\varepsilon\)-geodesic constructed above. Then, there exists a constant \(C(k,\varepsilon^{-1})\) depending on \(k\geq 1\) and on an upper bound for \(\varepsilon^{-1}\) such that_
\[\Big{(}|\nabla^{k}_{X,\omega}\Phi_{\varepsilon}|_{\omega}+|\nabla^{k}_{X,\omega}\dot{\Phi}_{\varepsilon}|_{\omega}+|\nabla^{k}_{X,\omega}\ddot{\Phi}_{\varepsilon}|_{\omega}\Big{)}\leq C(k,\varepsilon^{-1})r^{-\gamma-k}\quad\text{for all }k\geq 1,\]
_where \(\nabla_{X,\omega}\) denotes the Levi-Civita connection of the ALE Kahler metric \(\omega\) on \(X\), acting as a differential operator in the \(X\) directions on \(X\times\Sigma\). And_
\[|\Phi_{\varepsilon}-c(t)|\leq C(\varepsilon^{-1})r^{-\gamma},\]
_where \(c(t)\) is a function only depending on \(t\). Hence, for any two potentials \(\psi_{0}\), \(\psi_{1}\) in \(\mathcal{H}_{-\gamma}(\omega)\), there exist \(\varepsilon\)-geodesics in \(\mathcal{H}_{-\gamma}(\omega)\) connecting \(\psi_{0}\) and \(\psi_{1}\)._
In section 6, we actually prove a stronger statement. Let \(\varphi_{\varepsilon}=\Phi_{\varepsilon}-\Psi\), then \(\varphi_{\varepsilon}\in\mathcal{H}_{\max\{-2\gamma-2,-\varsigma\}}\) due to the fact that \(\Psi\) was chosen to be linear in \(t\) (see section 6 for details).
Hence, while we still cannot define the Mabuchi \(K\)-energy along geodesics, the Mabuchi \(K\)-energy is now actually well-defined along \(\varepsilon\)-geodesics assuming \(\gamma=2\tilde{\tau}-2>2n-4\).
In Section 7, the second derivative of the Mabuchi \(K\)-energy will be calculated. Throughout section 7, we assume \(\gamma=2\tilde{\tau}-2>2n-4\) (Here it turns out that if \(\psi_{0}\), \(\psi_{1}\) are only in \(\mathcal{H}_{4-2n}(\omega)\), there would be boundary terms at infinity breaking the positivity of the second derivative. This is a new phenomenon compared to Chen [9] and Auvray [3]). However, under the assumption that the Ricci curvature of
some reference ALE Kahler metric, \(\omega\), is non-positive, we can then prove the convexity of Mabuchi \(K\)-energy:
**Theorem C**.: _Assume that \(\omega\) is an ALE Kahler metric on \(X\) such that the Ricci curvature of \(\omega\) is non-positive, \(\mathrm{Ric}(\omega)\leq 0\). Then, along each \(\varepsilon\)-geodesic in \(\mathcal{H}_{-\gamma}(\omega)\) with \(\gamma>2n-4\), \(\varphi(t)\), the Mabuchi \(K\)-energy is convex._
A quick corollary of Theorem C is that, assuming \(\mathrm{Ric}(\omega)\leq 0\), the scalar-flat Kahler metric, if it exists, is unique in \(\mathcal{H}_{-\gamma}(\omega)\). However, if there exists a scalar-flat Kahler metric \(\omega_{0}\) in \(\mathcal{H}_{-\gamma}(\omega)\), the condition \(\mathrm{Ric}(\omega)\leq 0\) in fact implies \(\mathrm{Ric}(\omega)=0\). Hence, the uniqueness of scalar-flat ALE metrics can be reduced to the uniqueness result for Ricci-flat ALE Kahler metrics, which can be found in [20, 28, 13]. The point is that \(\omega_{0}=\omega+O(r^{-\gamma-2})\) implies by definition that the ADM masses of \(\omega\) and \(\omega_{0}\) are equal, \(\mathfrak{m}(\omega)=\mathfrak{m}(\omega_{0})\). According to the mass formula of Hein-LeBrun [19], it follows that \(\int R(\omega)=\int R(\omega_{0})=0\). The assumption that \(\mathrm{Ric}(\omega)\leq 0\) then implies that \(\mathrm{Ric}(\omega)=0\) (see Remark 7.4 for details). In fact, in Section 8, we will prove that many ALE Kahler manifolds do not admit any ALE Kahler metrics with \(\mathrm{Ric}\leq 0\) (or \(\mathrm{Ric}\geq 0\)) at all:
**Theorem D**.: _Let \(\mathcal{O}(-k)\) be the standard negative line bundle over \(\mathbb{CP}^{n-1}\) with \(n\geq 2\), \(k\neq n\), and let \(\omega\) be an ALE Kahler metric on \(\mathcal{O}(-k)\) with decay rate \(-\tau\), \(\tau>0\). Then, the Ricci form of \(\omega\) is of mixed type, i.e., neither \(\mathrm{Ric}(\omega)\geq 0\) nor \(\mathrm{Ric}(\omega)\leq 0\) is true._
In Riemannian geometry, AE metrics of negative Ricci curvature are well known to exist on \(\mathbb{R}^{n}\) by the explicit construction of Lohkamp [25]. Theorem D gives a negative answer to the corresponding question in the setting of ALE Kahler metrics.
An interesting question in this context is to ask whether some version of the Nonexistence Theorem D holds in general ALE Kahler manifolds or even AC Kahler manifolds.
**Question**.: _Is it true in any ALE Kahler manifold that the Ricci curvature form of an ALE Kahler metric can only be identically zero or of mixed type?_
This paper is a part of Ph.D thesis of the author. The author would like to express his gratitude to Professor Hans-Joachim Hein and Professor Bianca Santoro for suggesting the problem, and for constant support, many helpful comments, as well as much enlightening conversation. The author is also thankful to professor Gustav Holzegel for providing financial support during the last semester at University of Munster. The whole project is Funded by the DFG under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Munster: Dynamics-Geometry-Structure, and by the CRC 1442, Geometry: Deformations and Rigidity, of the DFG.
## 2. \(\varepsilon\)-geodesic equations and openness
Recall that the \(\varepsilon\)-geodesic equations can be written as follows,
\[(E_{\varepsilon})\qquad\begin{cases}(\Theta+dd^{c}\widetilde{\Phi})^{n+1}=v( \varepsilon)(\Theta+dd^{c}\Psi)^{n+1},&\quad\text{in }X\times\Sigma,\\ \lambda\Theta<\Theta+dd^{c}\widetilde{\Phi}<\Lambda\Theta,&\quad\text{in }X \times\Sigma,\\ \widetilde{\Phi}|_{t=0,1}=\psi_{0,1},&\quad\text{on }X\times\partial\Sigma, \end{cases}\]
where \(\varepsilon\in(0,1]\) and \(0<\lambda<\Lambda\) are constants depending on \(\varepsilon\). The family of equations \((E_{\varepsilon})\) is called the \(\varepsilon\)-geodesic equations. The idea to solve the equation \((E_{\varepsilon})\) is the following. Firstly, we apply the continuity method to show that there exists a solution of \((E_{\varepsilon})\) in \(\mathcal{C}^{k,\alpha}\). In particular, consider the family of equations \((E_{s})\), \(s\in[\varepsilon,1]\). Obviously, there is a trivial solution at \((E_{1})\). Then, we shall prove the openness and closedness of \((E_{s})\) in certain regularity. In the current section, we deal with the openness of \((E_{s})\).
Assuming that there exists a solution of \((E_{s_{0}})\) in \(\mathcal{C}^{k,\alpha}\) for some \(s_{0}\in[\varepsilon,1]\), we will show in this subsection that \((E_{s})\) can be solved for all \(s\) in a small open neighborhood of \(s_{0}\). For simplicity, we write \(\Theta_{\Psi}=\Theta+dd^{c}\Psi\) as in Theorem A and \(\widetilde{\varphi}=\widetilde{\Phi}-\Psi\). Then, the equation \((E_{\varepsilon})\) can be written
as, \((\Theta_{\Psi}+dd^{c}\widetilde{\varphi})^{n+1}=\varepsilon\Theta_{\Psi}^{n+1}\) in \(X\times\Sigma\), with the boundary condition \(\widetilde{\varphi}=0\) on \(X\times\partial\Sigma\). Then, the Monge-Ampere operator is defined to be
\[\mathcal{M}(\chi)=\frac{(\Theta_{\Psi}+dd^{c}\chi)^{n+1}}{\Theta_{\Psi}^{n+1}}.\]
Let \(\widetilde{\varphi}\) be a solution of \((E_{s_{0}})\) for some \(s_{0}\in[\varepsilon,1]\). By assumption, \(\widetilde{\varphi}\) is \(\Theta_{\Psi}\)-plurisubharmonic satisfying \(c\Theta\leq\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\leq C\Theta\). Then, the linearization of Monge-Ampere operator at \(\widetilde{\varphi}\) is uniformly elliptic, and is given by
\[\mathcal{L}_{\widetilde{\varphi}}(\chi)=\big{(}\Delta_{\widetilde{\varphi}} \chi\big{)}\cdot\frac{(\Theta_{\Psi}+dd^{c}\widetilde{\varphi})^{n+1}}{\Theta _{\Psi}^{n+1}}=s_{0}\Delta_{\widetilde{\varphi}}\chi,\]
where \(\Delta_{\widetilde{\varphi}}\) represents the Laplacian with respect to \(\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\). Let \((\mathcal{C}^{k,\alpha})_{0}\) be the functions in \(\mathcal{C}^{k,\alpha}\) vanishing on the boundary \(X\times\partial\Sigma\). Then, we have the following property of \(\mathcal{L}_{\widetilde{\varphi}}\), from which the desired openness is clear by the implicit function theorem.
**Proposition 2.1**.: _Let \(\widetilde{\varphi}\) be the solution of \((E_{s_{0}})\), then the linearized operator \(\mathcal{L}_{\widetilde{\varphi}}:(\mathcal{C}^{k,\alpha})_{0}\to\mathcal{C}^{ k-2,\alpha}\) is an isomorphism for all integers \(k\geq 2\) and \(\alpha\in(0,1)\)._
Proof.: Let us first prove the surjectivity. Fixing \(f\in\mathcal{C}^{k-2,\alpha}\), the exhaustion argument will be applied to solve the equation \(\mathcal{L}_{\widetilde{\varphi}}u=f\). Take an exhaustive sequence of pre-compact sets, \(\Omega_{k}\subseteq X\times\Sigma\), with smooth boundary. In particular, by taking a sequence of subsets, \(B_{r_{k}}\times\Sigma\) where \(B_{r_{k}}=\{x\in X:r(x)\leq r_{k}\}\), and smoothing the corners, we can obtain the exhaustive sequence \(\{\Omega_{k}\}\). Then, we can solve the following Dirichlet problems,
\[(L_{k})\quad\begin{cases}\mathcal{L}_{\widetilde{\varphi}}u_{k}=f&\text{ in }\Omega_{k},\\ u_{k}=0&\text{ on }\partial\Omega_{k},\end{cases}\]
where \(f\in\mathcal{C}^{k-2,\alpha}\). The existence of the solution of \((L_{k})\) is a classic result of the Dirichlet problem on compact Riemannian manifolds with boundary. The key to complete the proof is to give the uniform estimates of \(u_{k}\). The main idea to show the \(\mathcal{C}^{0}\) uniform estimates is to construct barrier functions. Consider the function \(At(1-t)\). The fact that \(\lambda\Theta\leq\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\leq\Lambda\Theta\) implies \(\Delta_{\widetilde{\varphi}}At(1-t)\leq-\lambda A\). If we suppose that \(\|f\|_{L^{\infty}}\leq C_{0}\) and take \(A=C_{0}/\lambda\), then we have \(\Delta_{\widetilde{\varphi}}At(1-t)\leq f=\Delta_{\widetilde{\varphi}}u_{k}\). Combining with the fact that \(At(1-t)\geq 0\) on the boundary \(\partial\Omega_{k}\), the maximum principle implies that,
\[\|u_{k}\|_{L^{\infty}}\leq\frac{C_{0}}{\lambda}t(1-t)\leq\frac{C_{0}}{4\lambda}. \tag{2.1}\]
The uniform \(\mathcal{C}^{k,\alpha}\) estimate follows directly from the standard Schauder estimates. Precisely, for interior points \(p\in\Omega_{k}\) away from the boundary, we pick a pair of balls centered at \(p\), \(B_{\frac{1}{4}}(p)\subseteq B_{\frac{1}{2}}(p)\subset\Omega_{k}\). Then, the interior Schauder estimate implies that \(\|u_{k}\|_{k,\alpha;B_{\frac{1}{4}}(p)}\leq C(\|u_{k}\|_{L^{\infty}(B_{\frac{1}{2}}(p))}+\|f\|_{k-2,\alpha;B_{\frac{1}{2}}(p)})\). If \(p\in\Omega_{k}\) is close to the boundary, we can apply the boundary Schauder estimate. After straightening the boundary in case the boundary portion of \(\partial\Omega_{k}\) is not flat, we can pick half balls, \(p\in B_{\frac{1}{4}}^{+}(q)\subseteq B_{\frac{1}{2}}^{+}(q)\) for some \(q\in\partial\Omega_{k}\). Together with the interior estimates, we have
\[\|u_{k}\|_{k,\alpha;\Omega_{k}}\leq C(\|u_{k}\|_{L^{\infty}(X\times\Sigma)}+ \|f\|_{k-2,\alpha;X\times\Sigma}), \tag{2.2}\]
where \(C\) depends only on \(n,k,\alpha,\lambda,\Lambda\). After passing to subsequence, we conclude that the limit function, \(u\), satisfies \(\mathcal{L}_{\widetilde{\varphi}}u=f\) in \(X\times\Sigma\) and \(u\equiv 0\) on \(X\times\partial\Sigma\). The uniqueness directly follows from the following maximum principle, Lemma 2.3.
The following lemma comes from Yau's generalized maximum principle, referring to [10, 29]. To describe the model metric on \(X\times\Sigma\), we introduce the asymptotic coordinates of \(X\times\Sigma\). Let \(\{z_{1},\ldots,z_{n}\}\) be asymptotic coordinates of the end of \(X\) and let \(w=t+is\) be the complex coordinate of \(\Sigma\). Real asymptotic coordinates are given by \(\{x_{1},\ldots,x_{2n},x_{2n+1}=t,x_{2n+2}=s\}\), where the complex coordinates
are written as \(z_{i}=x_{2i-1}+ix_{2i}\). The asymptotic coordinate system will be applied to describe the asymptotic behavior of prescribed Kahler metrics on \(X\times\Sigma\).
**Lemma 2.2**.: _Let \((X\times\Sigma,\Theta_{\widetilde{\Phi}})\) be the noncompact Kahler manifold as above with the Kahler metric \(\widetilde{g}\) associated with \(\widetilde{\Phi}\) satisfying, for some uniform constant \(0<\lambda<\Lambda\),_
\[\lambda\delta_{ij}\leq\widetilde{g}_{ij}\leq\Lambda\delta_{ij}\]
_in the asymptotic coordinates of \(X\times\Sigma\). Let \(u\) be a \(\mathcal{C}^{2}_{\text{loc}}\) function bounded from above on \(X\times\Sigma\). Suppose that \(\sup_{X\times\Sigma}u>\sup_{X\times\partial\Sigma}u\), then there exists a sequence \(\{x_{k}\}\) in \(X\times\Sigma^{\circ}\) such that_
\[\lim_{k\to\infty}u(x_{k})=\sup_{X\times\Sigma}u,\quad\lim_{k\to\infty}|du(x_{k })|_{\widetilde{g}}=0,\quad\limsup_{k\to\infty}\Delta_{\widetilde{g}}u(x_{k}) \leq 0. \tag{2.3}\]
Proof.: Let \(r\) be the radial function inherited from the asymptotic chart of \(X\), for instance, \(r=(\sum_{i=1}^{n}|z_{i}|^{2})^{1/2}\). The radial function can be extended to a non-negative smooth function in the whole space \(X\times\Sigma\) satisfying the estimate
\[|\nabla_{\widetilde{g}}r|_{\widetilde{g}}\leq C,\qquad|\Delta_{\widetilde{g}} r|\leq C, \tag{2.4}\]
for some uniform constant \(C\). Consider the function \(u_{\mathbf{e}}=u-\mathbf{e}r\). Since \(u_{\mathbf{e}}\) tends to negative infinity as \(r\) goes to infinity, \(u_{\mathbf{e}}\) achieves its maximum at some point \(x_{\mathbf{e}}\). And \(x_{\mathbf{e}}\) must be an interior point in \(X\times\Sigma\) based on the assumption that \(\sup_{X\times\Sigma}u>\sup_{X\times\partial\Sigma}u\). At \(x_{\mathbf{e}}\), the function \(u_{\mathbf{e}}\) satisfies
\[0=du_{\mathbf{e}}(x_{\mathbf{e}})=du(x_{\mathbf{e}})-\mathbf{e} dr(x_{\mathbf{e}}),\] \[0\geq\Delta_{\widetilde{g}}u_{\mathbf{e}}(x_{\mathbf{e}})=\Delta _{\widetilde{g}}u(x_{\mathbf{e}})-\mathbf{e}\Delta_{\widetilde{g}}r(x_{ \mathbf{e}})\]
and
\[u_{\mathbf{e}}(x_{\mathbf{e}})\geq u(x)-\mathbf{e}r(x),\quad\text{ for all }x\in X\times\Sigma.\]
Choosing \(\{x_{k}\}\) to be points achieving the maximum of \(u_{1/k}\), then combining with (2.4) and letting \(k\) go to infinity, we complete the proof of (2.3).
The following lemma is a strengthened version of the above maximum principle, based on solving the Dirichlet problem in \(X\times\Sigma\).
**Lemma 2.3**.: _Let \((X\times\Sigma,\widetilde{g})\) be the same as in Lemma 2.2. Suppose that \(u\) is a function in \(\mathcal{C}^{2}_{\text{loc}}(X\times\Sigma)\) and bounded from above. Suppose that \(u\) satisfies \(\Delta_{\widetilde{g}}u\geq 0\) in \(X\times\Sigma\) and \(u\leq 0\) on \(X\times\partial\Sigma\). Then \(u\leq 0\) in \(X\times\Sigma\)._
Proof.: Suppose, for contradiction, that \(\sup_{X\times\Sigma}u\geq\delta\) for some \(\delta>0\). According to the surjectivity part of the proof of Proposition 2.1, there exists a function \(v\) satisfying
\[\begin{cases}\Delta_{\widetilde{g}}v=-1,&\text{ in }X\times\Sigma,\\ v=0,&\text{ on }X\times\partial\Sigma,\end{cases}\]
and \(\|v\|_{L^{\infty}}\leq C(n,\lambda,\Lambda)\). Consider the function \(u_{\mathbf{e}}=u-\mathbf{e}v\) for \(\mathbf{e}=\dfrac{\delta}{2C}\). Then \(\sup_{X\times\Sigma}u_{\mathbf{e}}\geq\dfrac{\delta}{2}>0\) and \(\Delta_{\widetilde{g}}u_{\mathbf{e}}\geq\mathbf{e}\). According to Lemma 2.2, there exists a sequence \(\{x_{k}\}\) in \(X\times\Sigma^{\circ}\) such that \(\lim_{k\to\infty}u_{\mathbf{e}}(x_{k})=\sup_{X\times\Sigma}u_{\mathbf{e}}\), \(\lim_{k\to\infty}|du_{\mathbf{e}}(x_{k})|_{\widetilde{g}}=0\), \(\limsup_{k\to\infty}\Delta_{\widetilde{g}}u_{\mathbf{e}}(x_{k})\leq 0\). However, \(\Delta_{\widetilde{g}}u_{\mathbf{e}}\geq\mathbf{e}>0\) everywhere, which contradicts \(\limsup_{k\to\infty}\Delta_{\widetilde{g}}u_{\mathbf{e}}(x_{k})\leq 0\).
## 3. A priori estimate up to \(\mathcal{C}^{0}\)
In Sections 3 to 5, we complete the proof of Theorem A. The key ingredient is to prove uniform a priori estimates up to order \(\mathcal{C}^{1,1}\) for the solution \(\widetilde{\varphi}=\widetilde{\Phi}-\Psi\) of the \(\varepsilon\)-geodesic equation (\(E_{\varepsilon}\)). These estimates will be uniform with respect to \(\varepsilon\in(0,1]\) and with respect to the distance from a fixed point in \(X\). (In Section 6, we will also see that for a fixed \(\varepsilon>0\) it can be proved that \(\widetilde{\varphi}\) decays at spatial infinity. However, we are currently unable to make these decay estimates uniform with respect to \(\varepsilon\).)
These uniform \(\mathcal{C}^{1,1}\) estimates are then used in two ways:
* First, they allow us to solve \((E_{\varepsilon})\) for any fixed \(\varepsilon\in(0,1]\) via the continuity method in \((\mathcal{C}^{k,\alpha})_{0}\) for any \(k\geq 2\). Recall this is done by considering the family of equations \((E_{s})\) with \(s\in[\varepsilon,1]\), where openness in \((\mathcal{C}^{k,\alpha})_{0}\) follows from Proposition 2.1. The uniform \(\mathcal{C}^{1,1}\) estimates that we will prove, together with general regularity theory of the Monge-Ampere equation, then imply closedness. Here, it is not yet important that the \(\mathcal{C}^{1,1}\) estimates are uniform in \(\varepsilon\), and the higher \(\mathcal{C}^{k,\alpha}\) estimates will depend on \(\varepsilon\) because the ellipticity of the equation does. Also note that these higher-order estimates follow from standard local regularity in the interior and from [8, Section 2.1-2.2] near the boundary because we already have a true \(\mathcal{C}^{1,1}\) bound.
* Once \((E_{\varepsilon})\) is actually solved, we can then let \(\varepsilon\) go to zero and use the uniformity of the \(\mathcal{C}^{1,1}\) estimates of the \(\varepsilon\)-geodesic solution \(\widetilde{\varphi}\) to extract a subsequential limit \(\varphi\in\mathcal{C}^{1,1}\) such that \(\Phi=\Psi+\varphi\) solves the geodesic equation (1.4), (1.5), (1.6).
We omit these standard arguments and instead focus on the proof of the uniform \(\mathcal{C}^{1,1}\) a priori estimates of the \(\varepsilon\)-geodesic solution \(\widetilde{\varphi}\). For this we follow the outline of [7] in the compact case. However, we provide all the necessary details that are required in order to generalize this theory to the ALE case. In addition, we also make use of the recent advance [12] in order to obtain a \(\mathcal{C}^{1,1}\) estimate which is uniform in \(\varepsilon\).
In this section, we only deal with the uniform \(\mathcal{C}^{0}\) estimate. We begin with a standard comparison principle [5, Proposition 3.1].
**Lemma 3.1**.: _Let \(D\) be a bounded connected domain in \(\mathbb{C}^{n}\) with smooth boundary and \(u,v\in\mathcal{C}^{2}(D)\), plurisubharmonic functions in \(D\). If \(u=v\) on \(\partial D\) and \(u\geq v\), then we have_
\[\int_{D}(dd^{c}u)^{n}\leq\int_{D}(dd^{c}v)^{n}.\]
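As an illustration of why the inequality goes in this direction, consider the one-dimensional case \(n=1\) (with the standard orientation and normalization of \(d^{c}\)): by Stokes' theorem,

\[\int_{D}dd^{c}u-\int_{D}dd^{c}v=\int_{\partial D}d^{c}(u-v)\leq 0,\]

since \(u-v\geq 0\) in \(D\) and \(u-v=0\) on \(\partial D\), so that the outward normal derivative of \(u-v\) along \(\partial D\) is non-positive and the boundary integral of \(d^{c}(u-v)\) is a positive multiple of the integral of this normal derivative.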
Then we can prove the following maximum principle for Monge-Ampere operators.
**Theorem 3.2**.: _Let \(\Theta\) be a fixed reference Kahler form and \(\Omega\), the pull-back of a semipositive \((1,1)\)-form in \(X\). Assume that \(u,v\in\mathcal{C}^{2}(X\times\Sigma)\) are bounded functions with \(\Omega+dd^{c}v\), \(\Omega+dd^{c}u\geq 0\). If for some positive constants \(\lambda\), \(\Lambda\), we have the following properties:_
\[(\Omega+dd^{c}v)^{n+1}\leq(\Omega+dd^{c}u)^{n+1} \text{in }X\times\Sigma, \tag{3.1}\] \[\lambda\Theta\leq\Omega+dd^{c}u\leq\Lambda\Theta \text{in }X\times\Sigma,\] (3.2) \[u\leq v \text{on }X\times\partial\Sigma, \tag{3.3}\]
_then \(u\leq v\) in \(X\times\Sigma\)._
Proof.: Assume \(u(z_{0})>v(z_{0})\) at some point \(z_{0}\in X\times\Sigma\). Let \(2h=u(z_{0})-v(z_{0})\). Then, we can modify \(u,v\) to be \(\tilde{u},\tilde{v}\) as follows:
\[\begin{split}\tilde{v}&=v+h,\\ \tilde{u}&=u+\frac{h}{2}|\tau|^{2}.\end{split} \tag{3.4}\]
It can be checked that \(\tilde{u}\), \(\tilde{v}\) are bounded functions satisfying that \(\tilde{u}\leq\tilde{v}\) on \(X\times\partial\Sigma\) and \(\tilde{u}(z_{0})\geq\tilde{v}(z_{0})+h\). By Wu-Yau's generalized maximum principle, there exists a sequence \(\{p_{k}\}\) in \(X\times\Sigma\) such that
\[\lim_{k\to\infty}(\tilde{u}-\tilde{v})(p_{k})=\sup_{X\times\Sigma}(\tilde{u}- \tilde{v})\geq h,\quad\limsup_{k\to\infty}dd^{c}(\tilde{u}-\tilde{v})(p_{k}) \leq 0.\]
For a sufficiently small constant \(\delta>0\), there exists a point \(p\in X\times\Sigma\) such that \(dd^{c}\tilde{u}(p)-dd^{c}\tilde{v}(p)\leq\delta\Theta\) and \(\eta_{0}:=\tilde{u}(p)-\tilde{v}(p)\geq\sup_{X\times\Sigma}(\tilde{u}-\tilde{v})-\delta\). Fix a local holomorphic chart around \(p\), \(\{U,z^{i}:i=1,\ldots,n+1\}\) with \(z^{n+1}=\tau\). Without loss of generality, we assume \(U\) contains the unit disk in \(\mathbb{C}^{n+1}\) and, for any local vector field \(V\in T^{1,0}U\),
\[C^{-1}|V|^{2}\leq\Theta(V,\overline{V})\leq C|V|^{2},\]
where the constant \(C\) only depends on the geometry of \(X\) and the reference metric \(\Theta\). Let \(\mathbf{e}=2C\delta\) and \(\eta=\eta_{0}-\frac{C\delta}{2}\). To derive the contradiction, we construct the following local functions in \(U\),
\[\hat{u}=\tilde{u}-\mathbf{e}|z|^{2},\qquad\hat{v}=\tilde{v}+\eta. \tag{3.5}\]
If we denote the unit ball contained in the coordinate chart of \(U\) by \(B_{1}(p)\), we have \(\hat{u}(p)-\hat{v}(p)=\frac{C\delta}{2}>0\) and \(\hat{u}\leq\hat{v}\) on \(\partial B_{1}(p)\). Consider the following subset of \(B_{1}(p)\),
\[D=\{z\in B_{1}(p):\hat{u}(z)>\hat{v}(z)\}.\]
Let \(\rho\) be the local potential of \(\Omega\) in \(U\), \(\Omega=dd^{c}\rho\). According to Lemma 3.1,
\[\int_{D}\big{[}dd^{c}(\rho+\hat{v})\big{]}^{n+1}\geq\int_{D}\big{[}dd^{c}(\rho+\hat{u})\big{]}^{n+1}. \tag{3.6}\]
Taking \(\mathbf{e}\leq\frac{\lambda}{4C}\),
\[dd^{c}(\rho+u-\mathbf{e}|z|^{2})\geq\frac{1}{2}dd^{c}(\rho+u).\]
Together with the construction of \(\hat{u}\) and \(\hat{v}\) in (3.4), (3.5),
\[\int_{D}\big{[}dd^{c}(\rho+v)\big{]}^{n+1} \geq\int_{D}\Big{[}dd^{c}\big{(}\rho+u+\frac{h}{2}|\tau|^{2}- \mathbf{e}|z|^{2}\big{)}\Big{]}^{n+1}\] \[\geq\frac{h\lambda^{n}}{2^{n+1}}\int_{D}\Theta^{n+1}+\int_{D} \big{[}dd^{c}(\rho+u)\big{]}^{n+1}-2\mathbf{e}\Lambda^{n}\int_{D}\Theta^{n+1}. \tag{3.7}\]
By picking \(\mathbf{e}\) smaller, \(\mathbf{e}\leq\frac{h\lambda^{n}}{2^{n+4}\Lambda^{n}}\), and combining with (3.1), we have,
\[\int_{D}\big{[}dd^{c}(\rho+v)\big{]}^{n+1} \geq\int_{D}\big{[}dd^{c}(\rho+u)\big{]}^{n+1}+\frac{h\lambda^{n} }{2^{n+4}}\int_{D}\Theta^{n+1}\] \[\geq\int_{D}\big{[}dd^{c}(\rho+v)\big{]}^{n+1}+\frac{h\lambda^{n} }{2^{n+4}}\int_{D}\Theta^{n+1}. \tag{3.8}\]
Since the second term of (3.8) is strictly positive, which leads to a contradiction, we complete the proof.
Let \(\widetilde{\varphi}=\widetilde{\Phi}-\Psi\) be the solution of \((E_{\varepsilon})\) after subtracting \(\Psi\). According to Theorem 3.2, we have a uniform lower bound \(\widetilde{\varphi}\geq 0\); hence, \(\widetilde{\Phi}\geq\Psi\). The upper bound is easy to construct. Consider the function defined in \(X\times\Sigma\), \(H=2t(1-t)\). By restricting to each section \(\Sigma_{x_{0}}=\{x_{0}\}\times\Sigma\overset{i_{x_{0}}}{\hookrightarrow}X\times\Sigma\), we have
\[i_{x_{0}}^{*}(\Theta_{\Psi}+dd^{c}H)\leq 0<i_{x_{0}}^{*}(\Theta_{\Psi}+dd^{c}\widetilde{\varphi}).\]
Hence, \(\Delta_{\Sigma}H\leq\Delta_{\Sigma}\widetilde{\varphi}\) in \(\Sigma_{x_{0}}\) and \(H=\widetilde{\varphi}=0\) on its boundary \(\partial\Sigma_{x_{0}}\). The maximum principle on compact manifolds with boundary implies that \(\widetilde{\varphi}\leq H\) on each section. Hence, we get the desired uniform \(\mathcal{C}^{0}\) estimate,
\[\Psi\leq\widetilde{\Phi}\leq\Psi+H.\]
## 4. A priori estimate up to \(\mathcal{C}^{1}\)
For the \(\mathcal{C}^{1}\) bound, Blocki gives an explicit estimate in the compact setting in [6]. We generalize this estimate to the noncompact case. The \(\mathcal{C}^{1}\) boundary estimate follows directly from the fact that \(\Psi\leq\widetilde{\Phi}\leq\Psi+H\) in \(X\times\Sigma\) and \(\Psi\), \(\widetilde{\Phi}\), \(\Psi+H\) agree along \(X\times\partial\Sigma\). Let \(\nabla\) be the Levi-Civita connection of \(\Theta_{\Psi}\) on \(X\times\Sigma\). Then we have
\[|\nabla\widetilde{\Phi}|_{\Theta_{\Psi}}\leq\max\{|\nabla\Psi|_{ \Theta_{\Psi}},|\nabla(\Psi+H)|_{\Theta_{\Psi}}\},\quad\text{ on }X\times\partial\Sigma.\]
Hence, \(\sup_{X\times\partial\Sigma}|\nabla\widetilde{\Phi}|_{\Theta_{\Psi}}\leq C\), where \(C\) is a uniform constant.
**Proposition 4.1**.: _Let \(\widetilde{\varphi}=\widetilde{\Phi}-\Psi\in\mathcal{C}^{3}_{loc}(X\times\Sigma)\) be a solution of \((E_{\varepsilon})\) and let \(\nabla\) be the Levi-Civita connection of the Kahler metric \(\Theta_{\Psi}\) on \(X\times\Sigma\). Assume that \(\widetilde{\varphi}\) lies in the space \(\mathcal{C}^{1}(X\times\Sigma,\Theta_{\Psi})\). Then,_
\[\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{\Theta_{\Psi}}\leq C,\]
_where \(C\) is a positive constant depending only on upper bounds for \(|\widetilde{\varphi}|\), on lower bounds for the bisectional curvature of \(\Theta_{\Psi}\), and on \(n\), but not on \(\varepsilon\)._
Proof.: Suppose that \(\inf_{X\times\Sigma}\widetilde{\varphi}=A\) and \(\sup_{X\times\Sigma}\widetilde{\varphi}=B\). Consider the following function,
\[\alpha=\log\beta-\gamma\circ\widetilde{\varphi},\]
where \(\beta=|\nabla\widetilde{\varphi}|^{2}_{\Theta_{\Psi}}\) and \(\gamma:[A,B]\to\mathbb{R}\) is a smooth function to be determined later. According to the assumption that \(\widetilde{\varphi}\) lies in the space \(\mathcal{C}^{1}\), Yau's maximum principle can be applied here. In particular, there exists a sequence in \(\{x_{k}\}\) in \(X\times\Sigma^{\circ}\) such that,
\[\lim_{k\to\infty}\alpha(x_{k})=\sup_{X\times\Sigma}\alpha,\quad \lim_{k\to\infty}|\nabla\alpha(x_{k})|_{\Theta_{\Psi}}=0,\quad\limsup_{k\to \infty}\Delta\alpha(x_{k})\leq 0,\]
where \(\Delta=\Delta_{\Theta_{\Psi}}\). Then, for a sufficiently small \(\mathbf{e}>0\) to be determined later and all \(k\gg 1\), we have
\[\alpha(x_{k})\geq\sup_{X\times\Sigma}\alpha-\mathbf{e},\quad| \nabla\alpha(x_{k})|_{\Theta_{\Psi}}\leq\mathbf{e},\qquad\Delta\alpha(x_{k}) \leq\mathbf{e}. \tag{4.1}\]
Fixing \(O=x_{k}\) satisfying (4.1), we can pick the normal coordinates around \(O\). Let \(g\) and \(\widetilde{g}\) denote the metric tensors corresponding to \(\Theta_{\Psi}\) and \(\Theta_{\widetilde{\Phi}}=\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\). Then there exist local holomorphic coordinates near \(O\) such that,
\[g_{i\overline{j}}(O)=\delta_{ij},\quad g_{i\overline{j},k}(O)=0 \quad\text{ and }\quad\widetilde{g}_{i\overline{j}}(O)\text{ is diagonal}.\]
By taking derivative of \(\alpha\),
\[\alpha_{p}=\frac{\beta_{p}}{\beta}-(\gamma^{\prime}\circ\widetilde{\varphi}) \cdot\widetilde{\varphi}_{p}.\]
Combining with condition (4.1), \(|\alpha_{p}(O)|\leq\mathbf{e}\). Then, at the point \(O\), we have
\[\alpha_{p\overline{p}}\geq\frac{\beta_{p\overline{p}}}{\beta}-[( \gamma^{\prime})^{2}+\gamma^{\prime\prime}]|\widetilde{\varphi}_{p}|^{2}- \gamma^{\prime}\widetilde{\varphi}_{p\overline{p}}-\mathbf{e}|\gamma^{\prime} ||\widetilde{\varphi}_{p}|-\mathbf{e}. \tag{4.2}\]
If we write the local potential of \(\widetilde{g}_{i\overline{j}}\) as \(u\) near \(O\), then the \(\varepsilon\)-geodesic equation is locally given by \(\det(u_{i\overline{j}})=\upsilon(\varepsilon)\det(g_{i\overline{j}})\). The direct derivative of the equation at \(O\) gives,
\[\sum_{p}\frac{u_{p\overline{p}j}}{u_{p\overline{p}}}=\big{(}\log \upsilon(\varepsilon)\big{)}_{j}. \tag{4.3}\]
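This is the usual differentiation of the log-determinant: applying \(\partial_{j}\) to \(\log\det(u_{k\overline{l}})=\log\upsilon(\varepsilon)+\log\det(g_{k\overline{l}})\) gives

\[u^{k\overline{l}}\,u_{k\overline{l}j}=\big{(}\log\upsilon(\varepsilon)\big{)}_{j}+g^{k\overline{l}}\,g_{k\overline{l},j},\]

and at \(O\), where \(g_{k\overline{l}}=\delta_{kl}\), \(g_{k\overline{l},j}(O)=0\) and \((u_{k\overline{l}})\) is diagonal, this reduces to (4.3).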
Also notice that,
\[\beta_{p\overline{p}}\geq-D\beta+2\operatorname{Re}\sum_{j}u_{p \overline{p}j}\widetilde{\varphi}_{\overline{j}}+\sum_{j}|\widetilde{ \varphi}_{jp}|^{2}+\widetilde{\varphi}_{p\overline{p}}^{2},\]
where \(-D\) is the negative lower bound of bisectional curvature of \(\Theta_{\Psi}\). Recall that we have the assumption \(C^{-1}g_{i\overline{j}}\leq u_{i\overline{j}}\leq Cg_{i\overline{j}}\) and \(|\widetilde{\varphi}_{p}|<C\), where \(C\) is the constant from our assumption at the beginning of this section and we will get rid of this constant in the end. Together with (4.2) and (4.3), we have,
\[\begin{split} C\mathbf{e}\geq\sum_{p}\frac{\alpha_{p\overline{p}}}{u_{p\overline{p}}}\geq(\gamma^{\prime}-D)\sum_{p}\frac{1}{u_{p\overline{p}}}&+\frac{1}{\beta}\sum_{j,p}\frac{|\widetilde{\varphi}_{jp}|^{2}}{u_{p\overline{p}}}\\ &-2\operatorname{Re}\frac{1}{\beta}\sum_{j}\big{(}\log\upsilon(\varepsilon)\big{)}_{j}\widetilde{\varphi}_{\overline{j}}\\ &-[(\gamma^{\prime})^{2}+\gamma^{\prime\prime}]\sum_{p}\frac{|\widetilde{\varphi}_{p}|^{2}}{u_{p\overline{p}}}-n\gamma^{\prime}-C(|\gamma^{\prime}|+1)\mathbf{e}.\end{split} \tag{4.4}\]
According to Blocki's key observation in [6], after modified in our case, at the point \(O\), we have
\[\frac{1}{\beta}\sum_{j,p}\frac{|\widetilde{\varphi}_{jp}|^{2}}{u_{p\overline{p}} }\geq(\gamma^{\prime})^{2}\sum_{p}\frac{|\widetilde{\varphi}_{p}|^{2}}{u_{p \overline{p}}}-2\gamma^{\prime}-\frac{2+C\mathbf{e}}{\beta}-C(1+|\gamma^{ \prime}|)\mathbf{e},\]
and assuming that \(\beta\geq 1\), we have
\[\frac{2}{\beta}\operatorname{Re}\sum_{j}\big{(}\log\upsilon\big{)}_{j} \varphi_{\overline{j}}\geq-2\frac{|\nabla\log\upsilon(\varepsilon)|}{\sqrt{ \beta}}\geq-2(n+1)\frac{\big{|}\nabla\big{(}\upsilon(\varepsilon)^{\frac{1}{n+ 1}}\big{)}\big{|}}{\upsilon(\varepsilon)^{\frac{1}{n+1}}}\geq-V\sum_{p}\frac{ 1}{u_{p\overline{p}}}\]
where \(V\) is a uniform constant satisfying
\[V\geq 2(n+1)\big{|}\nabla\big{(}\upsilon(\varepsilon)^{\frac{1}{n+1}}\big{)} \big{|}.\]
Combining with (4.4),
\[C(1+|\gamma^{\prime}|)\mathbf{e}\geq(\gamma^{\prime}-D-V)\sum_{p}\frac{1}{u_{ p\overline{p}}}-\gamma^{\prime\prime}\sum_{p}\frac{|\widetilde{\varphi}_{p}|^{2}} {u_{p\overline{p}}}-(n+2)\gamma^{\prime}-2. \tag{4.5}\]
Now, we choose the function \(\gamma\) and the small number \(\mathbf{e}>0\) in (4.5) as follows. Let \(\gamma=(D+V+3)(t-A)-(B-A)^{-1}(t-A)^{2}\) and \(\mathbf{e}\leq C^{-1}(D+V+3)^{-1}\), then we have
\[\sum_{p}\frac{1}{u_{p\overline{p}}}+\frac{2}{B-A}\sum_{p}\frac{|\widetilde{ \varphi}_{p}|^{2}}{u_{p\overline{p}}}\leq 3+(n+2)(D+V+3).\]
Then, it is straightforward to conclude that \(\beta(O)\leq\max\{[(n+3)(D+V+3)]^{n+1}n(B-A),1\}\). Noting that \(\beta\leq\exp\{\mathbf{e}+\log\beta(O)-\gamma\circ\widetilde{\varphi}(O)+ \gamma\circ\widetilde{\varphi}\}\), hence, \(\beta\) is controlled by some uniform constant only depending on \(\|\widetilde{\varphi}\|_{L^{\infty}}\), \(D\), \(V\) and \(n\).
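For completeness, here is one way to make the step from the last displayed inequality to the bound on \(\beta(O)\) explicit, up to the precise constant and the normalization of \(|\nabla\widetilde{\varphi}|^{2}\). Write \(K=3+(n+2)(D+V+3)\). The displayed inequality gives \(1/u_{p\overline{p}}\leq K\) for each \(p\) and \(\sum_{p}|\widetilde{\varphi}_{p}|^{2}/u_{p\overline{p}}\leq\frac{1}{2}K(B-A)\). Since the coordinates at \(O\) are normal and \(\upsilon(\varepsilon)\leq 1\), we have \(\det(u_{p\overline{p}})=\upsilon(\varepsilon)\det(g_{p\overline{p}})\leq 1\) at \(O\), hence

\[u_{q\overline{q}}=\det(u_{p\overline{p}})\prod_{p\neq q}\frac{1}{u_{p\overline{p}}}\leq K^{n},\qquad\beta(O)\lesssim\sum_{q}|\widetilde{\varphi}_{q}|^{2}\leq\Big{(}\max_{q}u_{q\overline{q}}\Big{)}\sum_{q}\frac{|\widetilde{\varphi}_{q}|^{2}}{u_{q\overline{q}}}\leq\frac{1}{2}K^{n+1}(B-A),\]

which is a bound of the stated form.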
## 5. A priori estimate up to \(\mathcal{C}^{1,1}\)
First we deal with the uniform \(\mathcal{C}^{2}\) boundary estimate on \(X\times\partial\Sigma\). The technique is to construct local barrier functions near boundary, which is completely parallel to [8, 9, 17]. The statement is the following:
**Lemma 5.1**.: _Let the data \((X\times\Sigma,\Theta_{\Psi},\widetilde{\varphi})\) be the same as in Proposition 4.1. Let \(\nabla\) denote the Levi-Civita connection of \(\Theta_{\Psi}\) on \(X\times\Sigma\). Then_
\[\sup_{X\times\partial\Sigma}|\nabla^{2}\widetilde{\varphi}|_{\Theta_{\Psi}} \leq C,\]
_where the constant \(C\) only depends on \(\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{\Theta_{\Psi}}\) and on \((X\times\Sigma,\Theta_{\Psi})\)._
Proof.: Fixing a point \(p\in X\times\partial\Sigma\), we pick local holomorphic coordinates around \(p\) such that the coordinate system is normal in the \(X\) direction, while in the \(\Sigma\) direction we keep the standard coordinate functions of the annulus; the real coordinates are denoted by \(\{x_{1},\ldots,x_{2n},x_{2n+1}=t,x_{2n+2}=s\}\), with corresponding holomorphic coordinates \(z_{i}=x_{2i-1}+ix_{2i}\). Throughout the proof, we assume the metric tensor \(g\) associated with \(\Theta_{\Psi}\) satisfies \(m\delta_{ij}\leq g_{i\overline{j}}\leq M\delta_{ij}\). In general, we need to prove the boundary \(\mathcal{C}^{2}\) estimate at \(p\) in the tangential-tangential, tangential-normal and normal-normal directions respectively. However, the tangential-tangential estimate is trivial in our case, and the normal-normal estimate follows directly from the tangential-normal estimate. Here, we briefly summarize the proof of the tangential-normal estimate by explicitly constructing barrier functions.
Consider a small neighborhood near \(p\), \(B^{\prime}_{\delta}(p)=(X\times\Sigma)\cap B_{\delta}(p)\), where the small constant \(\delta\) will be determined later. Firstly, we construct the following auxiliary function in \(B^{\prime}_{\delta}(p)\),
\[v=\widetilde{\varphi}+Nt(1-t), \tag{5.1}\]
where \(N\) is a large constant to be determined. Then, it can be easily checked that
\[\widetilde{\Delta}v\leq n+1-m\sum_{i}\widetilde{g}^{i\overline{i}}-N \widetilde{g}^{n+1,\overline{n+1}},\]
where \(\widetilde{g}\) again denotes the metric tensor associated with \(\Theta_{\widetilde{\Phi}}=\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\) and \(\widetilde{\Delta}\) denotes the corresponding Laplacian. Notice that
\[-\frac{m}{2}\sum_{i}\widetilde{g}^{i\overline{i}}-N\widetilde{g}^{n+1,\overline{n+1}}\leq-\frac{mN^{\frac{1}{n+1}}}{2(\det\widetilde{g})^{\frac{1}{n+1}}}=-\frac{mN^{\frac{1}{n+1}}}{2\upsilon(\varepsilon)^{\frac{1}{n+1}}}(\det g)^{-\frac{1}{n+1}}.\]
By taking \(N=[(n+1)(2/m)]^{n+1}\max_{B^{\prime}_{\delta}(p)}(\det g)\), we have \(\widetilde{\Delta}v\leq-\frac{m}{2}\sum_{i}\widetilde{g}^{i\overline{i}}\). Noting that \(\widetilde{\varphi}=\widetilde{\Phi}-\Psi\geq 0\), we have \(v\geq 0\) on \(\partial B^{\prime}_{\delta}(p)\). Then, the barrier functions can be constructed as follows:
\[w=Av+B|z|^{2}\pm\frac{\partial}{\partial x_{k}}\widetilde{\varphi},\quad\text { for }1\leq k\leq 2n\text{ or }k=2n+2.\]
By differentiating the Monge-Ampere equation \((E_{\varepsilon})\) in the local coordinates,
\[\pm\widetilde{\Delta}\Big{(}\frac{\partial}{\partial x_{k}}\widetilde{\varphi}\Big{)}=\pm\big{(}\widetilde{g}^{i\overline{j}}\widetilde{g}_{i\overline{j},k}-\widetilde{g}^{i\overline{j}}g_{i\overline{j},k}\big{)}\leq C\Big{(}1+\sum_{i}\widetilde{g}^{i\overline{i}}\Big{)},\]
where \(A\) and \(B\) are large positive constants to be determined. According to the \(\mathcal{C}^{1}\) estimate of \(\widetilde{\varphi}\), we assume that \(|\partial_{k}\widetilde{\varphi}|\leq C\). By picking a very large constant \(B\) such that, on \(\partial B^{\prime}_{\delta}(p)\), \(B|z|^{2}\pm\partial_{k}\widetilde{\varphi}\geq 0\), we have \(w\geq 0\) on \(\partial B^{\prime}_{\delta}(p)\). Then, we choose a large constant \(A\) such that \(\widetilde{\Delta}w\leq 0\) in \(B^{\prime}_{\delta}(p)\). Then, by maximum principle, \(w\geq 0\) in \(B^{\prime}_{\delta}(p)\). Together with the fact that \(w(p)=0\), we have \(\partial_{t}w\geq 0\) at \(p\), which implies the tangential-normal estimate on the boundary.
Lemma 5.1 together with Yau's standard calculation on Laplacian estimate implies the following interior Laplacian estimate, referring to [31].
**Lemma 5.2**.: _Let \(\widetilde{\varphi}\) be the solution of \((E_{\varepsilon})\) and \(\Delta\), \(\widetilde{\Delta}\), the Laplacian operators of \(g=\Theta_{\Psi}\) and \(\widetilde{g}=\Theta_{\widetilde{\Phi}}=\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\) respectively. Then, for any constant \(C\),_
\[\begin{split}\widetilde{\Delta}\big{(}e^{-C\widetilde{\varphi}}(n+1+\Delta\widetilde{\varphi})\big{)}&\geq e^{-C\widetilde{\varphi}}\big{(}\Delta\log\upsilon(\varepsilon)-(n+1)^{2}\inf_{i\neq l}(R_{i\overline{i}l\overline{l}})\big{)}\\ &\quad-Ce^{-C\widetilde{\varphi}}(n+1)(n+1+\Delta\widetilde{\varphi})\\ &\quad+\big{(}C+\inf_{i\neq l}(R_{i\overline{i}l\overline{l}})\big{)}e^{-C\widetilde{\varphi}}(n+1+\Delta\widetilde{\varphi})^{1+\frac{1}{n}}\upsilon(\varepsilon)^{-1},\end{split}\]
_where \(R\) denotes the curvature tensor of \(g\). From this, we can deduce the estimate_
\[\sup_{X\times\Sigma}|\Delta\widetilde{\varphi}|\leq C(1+\sup_{X\times\partial \Sigma}|\Delta\widetilde{\varphi}|),\]
_where \(C\) only depends on \(\sup_{X\times\Sigma}\widetilde{\varphi}\) and on a negative lower bound of \(\inf_{i\neq l}(R_{i\overline{i}l\overline{l}})\)._
Lemma 5.2, together with Lemma 5.1, implies that there exists a uniform constant \(C\) only depending on \(\sup_{X\times\Sigma}\Delta\widetilde{\varphi}\) such that \(\varepsilon C^{-1}g_{i\overline{j}}\leq\widetilde{g}_{i\overline{j}}\leq Cg_{i\overline{j}}\). This is already enough to apply the standard local regularity theory of the Monge-Ampere equation to prove \(\mathcal{C}^{k,\alpha}\) estimates for any \(k\geq 2\) that depend on a positive lower bound for \(\varepsilon\). In this way the equation \((E_{\varepsilon})\) can be solved using the continuity path \((E_{s})\), \(s\in[\varepsilon,1]\). However, in order to construct an honest geodesic by letting \(\varepsilon\to 0\), we require a full \(\mathcal{C}^{1,1}\) estimate which is uniform in \(\varepsilon\). In [12], \(\mathcal{C}^{1,1}\) regularity is proved in the compact case. The method can also be applied in the ALE Kahler setting.
**Proposition 5.3**.: _Let the data \((X\times\Sigma,\Theta_{\Psi},\widetilde{\varphi})\) be the same as in Proposition 4.1. If \(\widetilde{\varphi}\) lies in the space \(\mathcal{C}^{2}(X\times\Sigma,\Theta_{\Psi})\), then there exists a constant \(C\) such that_
\[|\nabla^{2}\widetilde{\varphi}|_{\Theta_{\Psi}}\leq C,\]
_where \(\nabla\) again denotes the Levi-Civita connection of the metric \(\Theta_{\Psi}\) and \(C\) depends only on \((X\times\Sigma,\Theta_{\Psi})\) and on \(\sup_{X\times\Sigma}|\widetilde{\varphi}|\), \(\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{\Theta_{\Psi}}\), \(\sup_{X\times\Sigma}|\Delta\widetilde{\varphi}|\), \(\sup_{X\times\partial\Sigma}|\nabla^{2}\widetilde{\varphi}|_{\Theta_{\Psi}}\)._
Proof.: We again write \(g\) for the metric tensor associated with \(\Theta_{\Psi}\). Let \(\lambda_{1}(\nabla^{2}\widetilde{\varphi})\) be the largest eigenvalue of the real Hessian \(\nabla^{2}\widetilde{\varphi}\). By observing that there exists a uniform constant \(C\) such that \(\lambda_{1}(\nabla^{2}\widetilde{\varphi})\leq|\nabla^{2}\widetilde{\varphi}|_ {g}\leq C\lambda_{1}(\nabla^{2}\widetilde{\varphi})+C\), it suffices to prove that \(\lambda_{1}(\nabla^{2}\widetilde{\varphi})\) has a uniform upper bound. Consider the following quantity,
\[Q=\log\lambda_{1}(\nabla^{2}\widetilde{\varphi})+h(|\nabla\widetilde{\varphi}| _{g}^{2})-A\widetilde{\varphi},\]
where \(h\) is defined to be \(h(s)=-\frac{1}{2}\log\big{(}1+\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}| _{g}^{2}-s\big{)}\) and \(A\) is a uniform large positive constant to be determined later. We can further modify this quantity to \(Q_{\mathbf{e}}=Q-\mathbf{e}r\), where \(\mathbf{e}\) is a small positive constant to be determined later. According to the assumption that \(|\nabla^{2}\widetilde{\varphi}|\) is bounded and hence so is \(Q\), the modified quantity \(Q_{\mathbf{e}}\) attains its maximum at some point \(x_{\mathbf{e}}\in X\times\Sigma\). The same argument as in Lemma 2.2 implies that \(\lim_{\mathbf{e}\to 0}Q(x_{\mathbf{e}})=\sup_{X\times\Sigma}Q\). In the following, we assume \(\mathbf{e}\) is small enough such that \(|Q(x_{\mathbf{e}})-\sup_{X\times\Sigma}Q|<1\) and always write \(p=x_{\mathbf{e}}\). Since \(Q_{\mathbf{e}}\) might not be smooth at \(p\) if the eigenspace of \(\lambda_{1}(\nabla^{2}\widetilde{\varphi})(p)\) has dimension greater than one, a perturbation argument used in [12] can be applied to the quantity \(Q_{\mathbf{e}}\) here.
Fix normal coordinates \((z_{1},\ldots,z_{n+1})\) with respect to \(g\) at \(p\) such that \((\widetilde{\varphi}_{i\overline{j}})\) is diagonal at \(p\). Define the corresponding real coordinates \((x_{1},\ldots,x_{2n})\) by \(z_{i}=x_{2i-1}+ix_{2i}\). Let \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{2n}\) be the eigenvalues of \(\nabla^{2}\widetilde{\varphi}\) at \(p\) and \(V_{1},\ldots,V_{2n}\), the corresponding unit eigenvectors at \(p\). The eigenvectors can be extended to vector fields with constant coefficients in a small neighborhood of \(p\), also denoted by \(V_{1},\ldots,V_{2n}\), and can be represented by \(V_{\alpha}=V_{\alpha}^{\beta}\partial_{x_{\beta}}\) in the local coordinates. The perturbation argument is to perturb \(\nabla^{2}\widetilde{\varphi}\) locally around \(p\) and to ensure that \(\lambda_{1}>\lambda_{2}\) near \(p\). Precisely, consider the following locally defined tensor field,
\[P=\sum_{\alpha,\beta}\big{(}\delta_{\alpha\beta}-V_{1}^{\alpha}V_{1}^{\beta} \big{)}dx_{\alpha}\otimes dx_{\beta}.\]
Let \(\lambda_{i}^{\prime}=\lambda_{i}(\nabla^{2}\widetilde{\varphi}-P)\). Then, one can easily check that \(\lambda_{1}^{\prime}(p)=\lambda_{1}(p)\) and \(\lambda_{i}^{\prime}(p)=\lambda_{i}(p)-1\) for \(i\geq 2\). Hence, there exists a neighborhood of \(p\) such that \(\lambda_{1}^{\prime}>\lambda_{2}^{\prime}\geq\ldots\geq\lambda_{2n}^{\prime}\) and \(\lambda_{1}^{\prime}\leq\lambda_{1}\). Consider the following perturbed quantities,
\[\hat{Q}=\log\lambda_{1}^{\prime}+h(|\nabla\widetilde{\varphi}|_{g}^{2})-A \widetilde{\varphi},\quad\hat{Q}_{\mathbf{e}}=\hat{Q}-\mathbf{e}r.\]
Therefore, \(\hat{Q}_{\mathbf{e}}\) is a smooth quantity with a local maximum at \(p\). Then, we have,
\[|d\hat{Q}(p)|_{g}\leq C\mathbf{e},\quad\Delta\hat{Q}(p)\leq C\mathbf{e}.\]
The following inequality follows directly from the calculation in [12, Lemma 2.1]. The only information we need in the calculation is the second derivative of the Monge-Ampere equation at \(p\). We will not repeat the details here. By assuming \(\lambda_{1}^{\prime}\geq 1\) at \(p\), and again writing \(\widetilde{g}\) for the metric tensor associated with \(\Theta_{\widetilde{\Phi}}=\Theta_{\Psi}+dd^{c}\widetilde{\varphi}\), we have
\[\begin{split}\Delta\hat{Q}&\geq 2\sum_{\alpha>1}\sum_{i}\frac{\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{\alpha}V_{1}})|^{2}}{\lambda_{1}(\lambda_{1}-\lambda_{\alpha})}+\sum_{i,j}\frac{\widetilde{g}^{i\overline{i}}\widetilde{g}^{j\overline{j}}|V_{1}\big{(}\widetilde{g}_{i\overline{j}}\big{)}|^{2}}{\lambda_{1}}-\sum_{i}\frac{\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{1}V_{1}})|^{2}}{\lambda_{1}^{2}}\\ &\quad+h^{\prime}\sum_{i,k}\widetilde{g}^{i\overline{i}}\big{(}|\widetilde{\varphi}_{ik}|^{2}+|\widetilde{\varphi}_{i\overline{k}}|^{2}\big{)}+h^{\prime\prime}\sum_{i}\widetilde{g}^{i\overline{i}}\big{|}\partial_{i}|\nabla\widetilde{\varphi}|_{g}^{2}\big{|}^{2}\\ &\quad+(A-B)\sum_{i}\widetilde{g}^{i\overline{i}}-An,\end{split} \tag{5.2}\]
where the constant \(B\) only depends on \((X\times\Sigma,g)\) and \(\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{g}\). To cancel the bad negative term, we deal with the third term in (5.2), \(\lambda_{1}^{-2}\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{1}V_{1}})|^{2}\). To estimate this term, we split it into the
following two parts,
\[\begin{split}I_{1}&=(1-2\delta)\frac{\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{1}V_{1}})|^{2}}{\lambda_{1}^{2}},\\ I_{2}&=2\delta\frac{\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{1}V_{1}})|^{2}}{\lambda_{1}^{2}},\end{split}\]
where \(0<\delta<1/4\) is to be determined later. For \(I_{1}\), referring to [12, Lemma 2.2], by assuming that \(\lambda_{1}^{\prime}\geq D/\delta\), where \(D\) only depends on \((X\times\Sigma,g)\) and \(\sup_{X\times\Sigma}\Delta\widetilde{\varphi}\), we have
\[I_{1}\leq\sum_{i,j}\frac{\widetilde{g}^{i\overline{i}}\widetilde{g}^{j\overline{j}}\big{|}V_{1}\big{(}\widetilde{g}_{i\overline{j}}\big{)}\big{|}^{2}}{\lambda_{1}}+2\sum_{\alpha>1}\sum_{i}\frac{\widetilde{g}^{i\overline{i}}|\partial_{i}(\widetilde{\varphi}_{V_{\alpha}V_{1}})|^{2}}{\lambda_{1}(\lambda_{1}-\lambda_{\alpha})}+\sum_{i}\widetilde{g}^{i\overline{i}}. \tag{5.3}\]
To estimate \(I_{2}\), recall the fact that \(d\hat{Q}_{\mathbf{e}}=0\) and apply the derivative of eigenvalues referring to [12, Lemma 5.2]. Then, we have
\[\begin{split}I_{2}&=2\delta\sum_{i}\widetilde{g}^{i\overline{i}}\big{|}A\widetilde{\varphi}_{i}+h^{\prime}\partial_{i}|\nabla\widetilde{\varphi}|_{g}^{2}-\mathbf{e}r_{i}\big{|}^{2}\\ &\leq 8\delta A^{2}\sum_{i}\widetilde{g}^{i\overline{i}}|\widetilde{\varphi}_{i}|^{2}+2(h^{\prime})^{2}\sum_{i}\widetilde{g}^{i\overline{i}}\big{|}\partial_{i}|\nabla\widetilde{\varphi}|_{g}^{2}\big{|}^{2}+C\mathbf{e}\sum_{i}\widetilde{g}^{i\overline{i}}.\end{split} \tag{5.4}\]
Combining (5.2), (5.3), (5.4) and \(\Delta\hat{Q}\leq C\mathbf{e}\), then, by assuming \(\lambda_{1}^{\prime}\geq D/\delta\), we have
\[\begin{split}C\mathbf{e}&\geq h^{\prime}\sum_{k}\widetilde{g}^{i\overline{i}}\big{(}|\widetilde{\varphi}_{ik}|^{2}+|\widetilde{\varphi}_{i\overline{k}}|^{2}\big{)}+\big{(}h^{\prime\prime}-2(h^{\prime})^{2}\big{)}\sum_{i}\widetilde{g}^{i\overline{i}}\big{|}\partial_{i}|\nabla\widetilde{\varphi}|_{g}^{2}\big{|}^{2}\\ &\quad-8\delta A^{2}\sum_{i}\widetilde{g}^{i\overline{i}}|\widetilde{\varphi}_{i}|^{2}+(A-B-C\mathbf{e})\sum_{i}\widetilde{g}^{i\overline{i}}-An.\end{split}\]
Notice that \(h^{\prime\prime}=2(h^{\prime})^{2}\). Picking \(\mathbf{e}\leq 1/C\), \(A=B+2\) and \(\delta=\big{(}8A^{2}(\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|^{2}+1) \big{)}^{-1}\), then we have
\[h^{\prime}\sum_{k}\widetilde{g}^{i\overline{i}}\big{(}|\widetilde{\varphi}_{ik}|^{2}+|\widetilde{\varphi}_{i\overline{k}}|^{2}\big{)}+\sum_{i}\widetilde{g}^{i\overline{i}}\leq An+1.\]
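The choice of \(\delta\) is made precisely so that the gradient term coming from \(I_{2}\) can be absorbed: since \(8\delta A^{2}=\big{(}\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{g}^{2}+1\big{)}^{-1}\) and \(\sum_{i}|\widetilde{\varphi}_{i}|^{2}\leq|\nabla\widetilde{\varphi}|_{g}^{2}\) at \(p\) in the chosen normal coordinates,
\[8\delta A^{2}\sum_{i}\widetilde{g}^{i\overline{i}}|\widetilde{\varphi}_{i}|^{2}\leq\frac{\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{g}^{2}}{\sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{g}^{2}+1}\sum_{i}\widetilde{g}^{i\overline{i}}\leq\sum_{i}\widetilde{g}^{i\overline{i}}.\]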
Recall \(\widetilde{g}_{i\overline{l}}\leq Cg_{i\overline{l}}\), where \(C\) only depends on \(\sup_{X\times\Sigma}\Delta\widetilde{\varphi}\). Hence, at \(p\), \(\widetilde{g}^{i\overline{i}}\geq C^{-1}\). Then,
\[\lambda_{1}(p)\leq\max\Big{\{}\frac{D}{\delta},\big{\{}(An+1)C-n\big{\}}(1+ \sup_{X\times\Sigma}|\nabla\widetilde{\varphi}|_{g}^{2})\Big{\}}.\]
Together with the fact that \(\sup_{X\times\Sigma}Q\leq Q(p)+1\), we prove that \(\sup_{X\times\Sigma}\lambda_{1}\) is bounded by some uniform constant.
## 6. The asymptotic behavior of \(\varepsilon\)-geodesics
In this section we prove Theorem B on the asymptotic behavior of \(\varepsilon\)-geodesics for a fixed \(\varepsilon>0\). We use the notation introduced before Theorem B and we assume \(\psi_{0},\psi_{1}\in\mathcal{H}_{-\gamma}(\omega)\) (\(\gamma>0\)). Actually, we are mainly interested in the case \(-\gamma=2-2\tilde{\tau}\), due to Theorem 1.1. In the \(\varepsilon\)-geodesic equation \((E_{\varepsilon})\), the derivatives of the function \(\upsilon(\varepsilon)\) decay at infinity with order \(-\varsigma\), that is, \(|\nabla^{k}\upsilon(\varepsilon)|\leq O(r^{-\varsigma-k})\) with \(\varsigma\geq\gamma\) for \(k\geq 1\). Without loss of generality, we assume \(\varsigma\geq\gamma>\tau\); otherwise Theorem B can be proved more easily, without the iteration argument (Step 4).
We also write \(\varphi_{\varepsilon}=\Phi_{\varepsilon}-\Psi\), so that the solution is given by \(\Theta+dd^{c}\Phi_{\varepsilon}=\Theta_{\Psi}+dd^{c}\varphi_{\varepsilon}\) with \(\varphi_{\varepsilon}=0\) on \(X\times\partial\Sigma\). In Aleyasin [1], a rough idea is given to prove the asymptotic behavior of \(\varepsilon\)-geodesics by constructing barrier functions in the (strictly easier) special case where the asymptotic coordinates are \(J\)-holomorphic and the decay rate of the ALE Kahler metric to the Euclidean metric is high enough. However, even in this special case, the details are actually more involved than what is suggested in [1]. Here we give a complete proof in the general setting.
_Step 1: Differentiating the Monge-Ampere equation._ The Monge-Ampere equation can be written explicitly in the asymptotic coordinates of \(X\times\Sigma\). As the complex structure \(J\) of \(X\) does not coincide with the Euclidean complex structure \(J_{0}\) of the asymptotic coordinates in general, we will use real coordinates for clarity. By passing to universal covering of the end, we are able to work with the global coordinates. Precisely, let \(\{z_{1},\ldots,z_{n}\}\) be the asymptotic complex coordinates of \(\mathbb{C}^{n}\backslash B_{R}\) and \(w=t+is\) the complex coordinate of \(\Sigma\). The corresponding real coordinates are \(\{x_{1},\ldots,x_{2n},\,x_{2n+1}=t,x_{2n+2}=s\}\), where \(z_{k}=x_{2k-1}+ix_{2k}\) for \(k=1,\ldots,n\). From now on:
* Latin indices \(i,j,\ldots\) will denote the real coordinates from \(1\) to \(2n+2\).
* Greek indices \(\alpha,\beta,\ldots\) will denote the real coordinates from \(1\) to \(2n\).
* The bold Greek indices \(\boldsymbol{\mu},\boldsymbol{\nu}\) will denote the real coordinates from \(2n+1\) to \(2n+2\).
In these coordinates, we write the Riemannian metric tensors corresponding to \(\Theta_{\Psi}\) and \(\Theta_{\Psi}+dd^{c}\varphi_{\varepsilon}\) as \(g_{ij}\) and \((g_{\varphi_{\varepsilon}})_{ij}\), respectively.
Throughout this section, we work in the asymptotic chart of \(X\). This allows us to use the Euclidean metric on \((\mathbb{R}^{2n}\setminus B_{R})\times\Sigma\) as a reference metric to measure derivatives. This is helpful because it enables us to write down equations with a good structure. Let \(|\cdot|_{0}\) denote the Euclidean length, \(\nabla_{0}\) the Euclidean Levi-Civita connection and \(\nabla_{0,X}\) (\(\nabla_{0,\Sigma}\)) the component of \(\nabla_{0}\) acting only in the space (time) directions on \((\mathbb{R}^{2n}\setminus B_{R})\times\Sigma\).
Then, the equation \((E_{\varepsilon})\) can be written as
\[\sqrt{\det\left((g_{\varphi_{\varepsilon}})_{ij}\right)}=\upsilon(\varepsilon )\sqrt{\det(g_{ij})}. \tag{6.1}\]
Recall that \(\upsilon\) satisfies conditions in (1.8). By differentiating the log of both sides by \(D_{\alpha}=\partial/\partial_{x_{\alpha}}\), we have
\[g_{\varphi_{\varepsilon}}^{ij}D_{\alpha}(g_{\varphi_{\varepsilon}})_{ij}=g^{ ij}D_{\alpha}g_{ij}+D_{\alpha}\log\upsilon(\varepsilon). \tag{6.2}\]
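Here we use the standard identity for differentiating a determinant: for a smooth family of invertible symmetric matrices \(A\),
\[D_{\alpha}\log\det A=\operatorname{tr}\big{(}A^{-1}D_{\alpha}A\big{)}=A^{ij}D_{\alpha}A_{ij},\]
applied to both sides of (6.1).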
The first goal is to rewrite the equation (6.2) to be an elliptic equation in terms of \(D_{\alpha}\varphi_{\varepsilon}\). Let \(e_{1},\ldots,e_{2n+2}\) represent the real coordinate vector fields of \(x_{1},\ldots,x_{2n+2}\). Notice that \((g_{\varphi_{\varepsilon}})_{ij}=g_{ij}+dd^{c}\varphi_{\varepsilon}(e_{i},Je _{j})\). We compute \(D_{\alpha}\) of the second term:
\[\begin{split} D_{\alpha}[dd^{c}\varphi_{\varepsilon}(e_{i},Je_{ j})]=&-d\circ J\circ d(D_{\alpha}\varphi_{\varepsilon})(e_{i},Je_{j})-d \circ(D_{\alpha}J)\circ d\varphi_{\varepsilon}(e_{i},Je_{j})\\ &-d\circ J\circ d\varphi_{\varepsilon}(e_{i},(D_{\alpha}J)e_{j}).\end{split} \tag{6.3}\]
Observe that \(D_{\alpha}J\) is completely horizontal because \(J\) preserves the product structure of the tangent bundle \(T((\mathbb{R}^{2n}\setminus B_{R})\times\Sigma)\) and \(J|_{T\Sigma}\) is constant. Thus,
\[D_{\alpha}J=(D_{\alpha}J)_{\xi}^{\beta}(e_{\xi}^{*}\otimes e_{\beta}),\quad(D _{\alpha}J)_{\xi}^{\boldsymbol{\mu}}=0,\quad(D_{\alpha}J)_{\boldsymbol{\nu}} ^{\beta}=0,\quad(D_{\alpha}J)_{\boldsymbol{\nu}}^{\boldsymbol{\mu}}=0, \tag{6.4}\]
where the coefficients \((D_{\alpha}J)_{\xi}^{\beta}\) depend only on \(x_{1},\ldots,x_{2n}\) and not on \(x_{2n+1},x_{2n+2}\). In the same way, we can also see that
\[|\nabla_{0,X}^{m}(D_{\alpha}J)|_{0}=O(r^{-\tau-1-m})\,\,\,(\text{all}\,\,m \geq 0),\,\,\nabla_{0,\Sigma}^{m}(D_{\alpha}J)=0\,\,\,(\text{all}\,\,m\geq 1). \tag{6.5}\]
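In block form with respect to the splitting \(T\big{(}(\mathbb{R}^{2n}\setminus B_{R})\times\Sigma\big{)}=T(\mathbb{R}^{2n}\setminus B_{R})\oplus T\Sigma\), the observation above reads
\[J=\begin{pmatrix}J_{X}&0\\ 0&J_{\Sigma}\end{pmatrix},\qquad D_{\alpha}J=\begin{pmatrix}D_{\alpha}J_{X}&0\\ 0&0\end{pmatrix},\]
since \(J\) has no mixed blocks and \(J_{\Sigma}\) is constant; this is precisely the content of (6.4).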
Moreover, it is obvious that
\[\Delta_{g_{\varphi_{\varepsilon}}}(D_{\alpha}\varphi_{\varepsilon})=\text{tr }_{g_{\varphi_{\varepsilon}}}(dd^{c}(D_{\alpha}\varphi_{\varepsilon})(\cdot,J \cdot))=g_{\varphi_{\varepsilon}}^{ij}dd^{c}(D_{\alpha}\varphi_{\varepsilon})(e _{i},Je_{j}). \tag{6.6}\]
Then, (6.3)-(6.4) imply that
\[g_{\varphi_{\varepsilon}}^{ij}D_{\alpha}[dd^{c}\varphi_{\varepsilon}(e_{i},Je_{j})]=\Delta_{g_{\varphi_{\varepsilon}}}(D_{\alpha}\varphi_{\varepsilon})+\mathbf{O}(r^{-\tau-1})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0}\nabla_{0,X}\varphi_{\varepsilon}+\mathbf{O}(r^{-\tau-2})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0,X}\varphi_{\varepsilon}, \tag{6.7}\]
where the terms written with \(\mathbf{O}(\cdot)\) denote contractions coming from \(D_{\alpha}J\) and its first derivatives via (6.3)-(6.5), whose coefficients satisfy the indicated decay together with all their derivatives. Abbreviating the estimate
\[|\nabla_{0,X}^{m}\nabla_{0,\Sigma}^{k}D_{\alpha}g_{ij}|_{0}=O(r^{-\tau-1-m})\;(\text{all }m,k\geq 0)\]
by \(D_{\alpha}g_{ij}=\widehat{\mathbf{O}}(r^{-\tau-1})\) and
\[|\nabla_{0,X}^{m}\nabla_{0,\Sigma}^{k}D_{\alpha}\log\upsilon(\varepsilon)|_{0}=O(r^{-\varsigma-1-m})\;(\text{all }m,k\geq 0)\]
by \(D_{\alpha}\log\upsilon(\varepsilon)=\widehat{\mathbf{O}}(r^{-\varsigma-1})\), the equation (6.2) can be rewritten as
\[\Delta_{g_{\varphi_{\varepsilon}}}(D_{\alpha}\varphi_{\varepsilon})=(g^{ij}-g_{\varphi_{\varepsilon}}^{ij})D_{\alpha}g_{ij}+D_{\alpha}\log\upsilon(\varepsilon)+\mathbf{O}(r^{-\tau-1})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0}\nabla_{0,X}\varphi_{\varepsilon}+\mathbf{O}(r^{-\tau-2})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0,X}\varphi_{\varepsilon}. \tag{6.8}\]
_Step 2: Barrier estimate of the first derivatives._ We now derive a weighted bound on \(h=D_{\alpha}\varphi_{\varepsilon}\) by a barrier argument: one constructs a barrier function \(u_{1}\), vanishing on \(X\times\partial\Sigma\) and comparable to \(t(t-1)r^{-\tau-1}\) for large \(r\) (compare \(u_{2}\) in (6.20) below), whose Laplacian dominates the right-hand side of (6.8),
and comparing with the inequality (6.11), we have \(\Delta_{g_{\varphi_{\varepsilon}}}u_{1}\geq\Delta_{g_{\varphi_{\varepsilon}}}h\). Together with the fact that \(u_{1}=h=0\) on \(X\times\partial\Sigma\), Lemma 2.2 implies that \(h\geq u_{1}\) in \(X\times\Sigma\). The same method shows the upper bound \(h\leq-u_{1}\), which, together with the lower bound, implies that for each spatial index \(1\leq\alpha\leq 2n\),
\[|D_{\alpha}\varphi_{\varepsilon}|\leq C\big{(}\|\varphi_{\varepsilon}\|_{C^{2 }(X\times\Sigma,\Theta_{\Psi})},\Lambda,g,J,\varepsilon^{-1}\big{)}t(1-t)r^{- \tau-1}\,\,\,\text{on}\,\,\,\{r\geq 2R_{0}\}\times\Sigma. \tag{6.15}\]
_Step 3: Barrier estimate of the second derivatives._ We now deal with the asymptotic behavior of the second derivatives.
For a preliminary estimate, we go back to the full formula (6.8) for \(\Delta_{g_{\varphi_{\varepsilon}}}(D_{\alpha}\varphi_{\varepsilon})\). For every \(\mathbf{a}\in(0,1)\), the Euclidean \(\mathcal{C}^{0,\mathbf{a}}\) norm of the right-hand side on a restricted unit ball \(\hat{B}_{1}(p)=B_{1}(p)\cap((\mathbb{R}^{2n}\setminus B_{R})\times\Sigma)\) with \(r(p)=r\geq 2R\) is still bounded by \(C(\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma,\Theta_{\Psi})},\Lambda,g, J,\varepsilon^{-1})r^{-\tau-1}\) thanks to the Evans-Krylov estimates applied to \(\varphi_{\varepsilon}\) in the interior and the estimates of [8, Sections 2.1-2.2] at the boundary. (The precise dependence of this constant on the ellipticity, and hence on \(\varepsilon^{-1}\), is not clear but also not needed.) Likewise, the \(\mathcal{C}^{0,\mathbf{a}}\) norm of the coefficient tensor of the PDE, \(g_{\varphi_{\varepsilon}}^{-1}\), is bounded by \(C(\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma,\Theta_{\Psi})},\Lambda,g,J, \varepsilon^{-1})\). Applying the classic interior and boundary Schauder estimates to (6.8), we thus obtain from (6.15) that
\[\|D_{\alpha}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p ))}\leq C\big{(}\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma,\Theta_{\Psi} )},\Lambda,g,J,\varepsilon^{-1}\big{)}r^{-\tau-1}. \tag{6.16}\]
These estimates will now be used to start a bootstrap to obtain some decay for \(D_{\beta}D_{\alpha}\varphi_{\varepsilon}\) using the same barrier method as in Step 2. Differentiate the equation (6.8) again by \(D_{\beta}=\partial/\partial x_{\beta}\) for \(1\leq\beta\leq 2n\). This yields
\[\begin{split}\Delta_{g_{\varphi_{\varepsilon}}}(D_{\beta}D_{\alpha}\varphi_{\varepsilon})&=(g^{ij}-g_{\varphi_{\varepsilon}}^{ij})D_{\beta}D_{\alpha}g_{ij}+D_{\alpha}g_{ij}\big{(}g_{\varphi_{\varepsilon}}^{ik}g_{\varphi_{\varepsilon}}^{jl}D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}-g^{ik}g^{jl}D_{\beta}g_{kl}\big{)}+D_{\beta}D_{\alpha}\log\upsilon(\varepsilon)\\ &\quad+\mathbf{O}(r^{-\tau-1})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0}\nabla_{0,X}D_{\beta}\varphi_{\varepsilon}+\mathbf{O}(r^{-\tau-1})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot D_{\beta}g_{\varphi_{\varepsilon}}\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0}\nabla_{0,X}\varphi_{\varepsilon}\\ &\quad+\mathbf{O}(r^{-\tau-2})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\big{(}\nabla_{0,X}D_{\beta}\varphi_{\varepsilon}+\nabla_{0}\nabla_{0,X}\varphi_{\varepsilon}\big{)}+\mathbf{O}(r^{-\tau-3})\cdot g_{\varphi_{\varepsilon}}^{-1}\cdot\nabla_{0,X}\varphi_{\varepsilon}.\end{split} \tag{6.17}\]
As before, we have that \(\Lambda^{-1}g^{-1}\leq g_{\varphi_{\varepsilon}}^{-1}\leq\varepsilon^{-1} \Lambda g^{-1}\), and we also have
\[|D_{\beta}g_{\varphi_{\varepsilon}}|_{0}\leq|D_{\beta}g|_{0}+|\nabla_{0,X} \nabla_{0}^{2}\varphi_{\varepsilon}|_{0}=O(r^{-\tau-1}) \tag{6.18}\]
thanks to the preliminary estimate (6.16). Similarly, all derivatives of \(\varphi_{\varepsilon}\) on the right-hand side of (6.17) are at worst of order 3, with at least one purely spatial derivative, and hence can be bounded by \(O(r^{-\tau-1})\) thanks to (6.16). In this way, we obtain that
\[\Delta_{g_{\varphi_{\varepsilon}}}(D_{\beta}D_{\alpha}\varphi_{\varepsilon})=\mathbf{O}(r^{-2\tau-2})+\widehat{\mathbf{O}}(r^{-\tau-2}). \tag{6.19}\]
The majority of terms on the right-hand side actually decay faster than \(O(r^{-2\tau-2})\), and the only term that might decay more slowly is \(\widehat{\mathbf{O}}(r^{-\tau-2})\). So far, we can only bound this by \(O(r^{-\tau-2})\). However, by applying the same method as in the weighted estimate of the first derivative in Step 2, we can then construct the following barrier function for \(D_{\beta}D_{\alpha}\varphi_{\varepsilon}\):
\[u_{2}=E^{\prime}\Big{\{}\big{(}1-\chi_{\frac{R_{0}}{2}}\big{)}\Big{(}\frac{R_{ 0}}{2}\Big{)}^{-\tau-2}t(t-1)+\chi_{\frac{R_{0}}{2}}t(t-1)r^{-\tau-2}\Big{\}}, \tag{6.20}\]
where \(R_{0}\) is the same constant as in (6.12) and \(E^{\prime}\) is another uniform constant depending on \(R_{0}\), \(\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma,\Theta_{\Psi})}\), \(\Lambda\), \(g\), \(J\) and on the constant of (6.16). Hence, we get the weighted estimate for \(D_{\beta}D_{\alpha}\varphi_{\varepsilon}\):
\[|D_{\beta}D_{\alpha}\varphi_{\varepsilon}|\leq C\big{(}\|\varphi_{\varepsilon} \|_{C^{2}(X\times\Sigma,\Theta_{\Psi})},\Lambda,g,J,\varepsilon^{-1}\big{)}r^ {-\tau-2}. \tag{6.21}\]
According to the full formula (6.17) for \(\Delta_{g_{\varphi_{\varepsilon}}}(D_{\beta}D_{\alpha}\varphi_{\varepsilon})\) and (6.16), in the restricted unit ball \(\hat{B}_{1}(p)\), the \(\mathcal{C}^{0,\mathbf{a}}\) norm of all terms on the right hand side of (6.17) are bounded by \(C(\|\varphi_{\varepsilon}\|_{C^{2}},\Lambda,g,J,\varepsilon^{-1})r^{-\tau-2}\). Applying the classic interior and boundary Schauder estimates to (6.17), we thus obtain from (6.15) that
\[\|D_{\alpha}D_{\beta}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat {B}_{1}(p))}\leq C\big{(}\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma, \Theta_{\Psi})},\Lambda,g,J,\varepsilon^{-1}\big{)}r^{-\tau-2}. \tag{6.22}\]
_Step 4: Iterative improvement of the barrier estimates._
In this step, we improve the decay order of the estimates we obtain in (6.16) and (6.22) by an iteration argument. Recall that from Steps 2-3 we have the following weighted estimates to start the iteration process (see (6.16) and (6.22)):
\[\begin{split}&\|\nabla_{0,X}\varphi_{\varepsilon}\|_{\mathcal{C}^ {0,\mathbf{a}}(\hat{B}_{1}(p))}+\|\nabla_{0}\nabla_{0,X}\varphi_{\varepsilon} \|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\| \nabla_{0}^{2}\nabla_{0,X}\varphi_{\varepsilon}\|_{\mathcal{C}^{0,\mathbf{a}} (\hat{B}_{1}(p))}=O(r^{-\tau-1}),\\ &\|\nabla_{0,X}^{2}\varphi_{\varepsilon}\|_{\mathcal{C}^{0, \mathbf{a}}(\hat{B}_{1}(p))}+\|\nabla_{0}\nabla_{0,X}^{2}\varphi_{\varepsilon} \|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}=O(r^{-\tau-2}).\end{split} \tag{6.23}\]
To complete the iteration argument, we need to improve the decay of the term \(g_{\varphi_{\varepsilon}}^{-1}-g^{-1}\). More precisely, this term occurs in the combination \((g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\alpha}g_{ij}\) in the first derivative estimate (Step 2), and in the combinations \((g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\beta}D_{\alpha}g_{ij}\) and \([g_{\varphi_{\varepsilon}}^{ik}D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}g_{\varphi_{\varepsilon}}^{jl}-g^{ik}D_{\beta}g_{kl}g^{jl}]D_{\alpha}g_{ij}\) in the second derivative estimate (Step 3); to get the optimal decay rate of \(D_{\alpha}D_{\beta}\varphi_{\varepsilon}\), we need to analyze the latter term as well. We will now analyze these combinations more carefully. All constants in this step may depend on \(\|\varphi_{\varepsilon}\|_{C^{2}(X\times\Sigma,\Theta_{\Psi})},\Lambda,g,J,\varepsilon^{-1}\). Let \(\varphi\) be a continuous function defined in \((X\backslash B_{R})\times\Sigma\) with at most polynomial growth at infinity. For simplicity, we write \((\varphi)^{\sharp}\) for the decay rate of \(\varphi\), and \((D_{X}\varphi)^{\sharp}\), \((D_{X}^{2}\varphi)^{\sharp}\) for the decay rates of \(\|\nabla_{0,X}\varphi\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}+\|\nabla_{0}\nabla_{0,X}\varphi\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}+\|\nabla_{0}^{2}\nabla_{0,X}\varphi\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}\) and \(\|\nabla_{0,X}^{2}\varphi\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}+\|\nabla_{0}\nabla_{0,X}^{2}\varphi\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}\), respectively.
The metric tensor \((g_{\varphi_{\varepsilon}})_{ij}\) and its inverse can be written as \((2n+2)\times(2n+2)\)-matrices
\[\widetilde{P}=\begin{pmatrix}P&\eta^{t}\\ \eta&\mathfrak{p}\end{pmatrix},\quad(\widetilde{P})^{-1}=\begin{pmatrix}Q& \xi^{t}\\ \xi&\mathfrak{q}\end{pmatrix},\]
where \(P\), \(Q\) are \(2n\times 2n\)-matrices, \(\mathfrak{p}\), \(\mathfrak{q}\) are \(2\times 2\)-matrices and \(\eta\), \(\xi\) are \(2\times 2n\)-matrices. By direct calculation, we have
\[Q=P^{-1}-P^{-1}\eta^{t}\xi,\quad\xi=-\mathfrak{p}^{-1}\eta Q,\quad\mathfrak{q}= (I_{2}-\xi\eta^{t})\mathfrak{p}^{-1}. \tag{6.24}\]
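The identities (6.24) can be checked directly from the block equations of \(\widetilde{P}\,\widetilde{P}^{-1}=\widetilde{P}^{-1}\widetilde{P}=I_{2n+2}\):
\[PQ+\eta^{t}\xi=I_{2n},\qquad\eta Q+\mathfrak{p}\xi=0,\qquad\xi\eta^{t}+\mathfrak{q}\mathfrak{p}=I_{2};\]
solving the first equation for \(Q\), the second for \(\xi\) and the third for \(\mathfrak{q}\) gives (6.24).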
The fact that \(\Lambda^{-1}\varepsilon I_{2n+2}\leq\widetilde{P}\leq\Lambda I_{2n+2}\) implies that \(|\xi|\leq C|\eta|\). The weighted estimate (6.23), together with the fall-off condition of the metric \(g\), implies that \(|\eta|=O(r^{(D_{X}\varphi_{\varepsilon})^{\sharp}})\). Then, from (6.24), we have
\[Q=P^{-1}+O(|\eta|^{2}),\quad\xi=O(|\eta|),\quad\mathfrak{q}=\mathfrak{p}^{-1}+O (|\eta|^{2}). \tag{6.25}\]
Similarly, let \(\widetilde{P}^{\prime}\) denote the matrix of \(g\) in asymptotic coordinates. If we write
\[\widetilde{P}^{\prime}=\begin{pmatrix}P^{\prime}&(\eta^{\prime})^{t}\\ \eta^{\prime}&\mathfrak{p}^{\prime}\end{pmatrix},\quad(\widetilde{P}^{\prime})^ {-1}=\begin{pmatrix}Q^{\prime}&(\xi^{\prime})^{t}\\ \xi^{\prime}&\mathfrak{q}^{\prime}\end{pmatrix},\]
then we have
\[Q^{\prime}=(P^{\prime})^{-1}+O(|\eta^{\prime}|^{2}),\quad\xi^{\prime}=O(|\eta^{ \prime}|),\quad\mathfrak{q}^{\prime}=(\mathfrak{p}^{\prime})^{-1}+O(|\eta^{ \prime}|^{2}), \tag{6.26}\]
where \(|\eta^{\prime}|=O(r^{-\gamma-1})\). According to the estimate (6.23), \(|P-P^{\prime}|=O(r^{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp}})\) and hence \(|P^{-1}-(P^{\prime})^{-1}|=O(r^{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp}})\) as well because \(P,P^{\prime}\) are uniformly bounded. Moreover, \(\mathfrak{p},\mathfrak{q},\mathfrak{p}^{\prime},\mathfrak{q}^{\prime}\) are all uniformly equivalent to \(I_{2}\) but there is no reason for \(\mathfrak{p}-\mathfrak{p}^{\prime}\) to decay. Then (6.25) and (6.26) imply
that
\[|Q-Q^{\prime}| =O(r^{\max\{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp},2(D_{X}\varphi _{\varepsilon})^{\sharp},-2\gamma-2\}}), \tag{6.27}\] \[|\xi-\xi^{\prime}| =O(r^{\max\{(D_{X}\varphi_{\varepsilon})^{\sharp},-\gamma-1\}}),\] \[|\mathfrak{p}-\mathfrak{p}^{\prime}| =O(1).\]
Then, by calculating blockwise and using that \(D_{\alpha}g_{\boldsymbol{\mu\nu}}=0\), we have
\[(g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\alpha}g_{ij} =|Q-Q^{\prime}|O(r^{-\tau-1})+|\xi-\xi^{\prime}|O(r^{-\gamma-2}), \tag{6.28}\] \[(g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\beta}D_{\alpha}g_{ij} =|Q-Q^{\prime}|O(r^{-\tau-2})+|\xi-\xi^{\prime}|O(r^{-\gamma-3}).\]
By inserting (6.27), (6.28) into (6.8), we have
\[|D_{\alpha}\varphi_{\varepsilon}|=O(r^{\max\{(D_{X}^{2}\varphi_{\varepsilon} )^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2\gamma-3,- \varsigma-1\}}). \tag{6.29}\]
For the last but one term of (6.17),
\[D_{\alpha}g_{ij}\big{(}g_{\varphi_{\varepsilon}}^{ik}g_{ \varphi_{\varepsilon}}^{jl}D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}-g^{ik} g^{jl}D_{\beta}g_{kl}\big{)} =D_{\alpha}g_{ij}g_{\varphi_{\varepsilon}}^{ik}g_{\varphi_{ \varepsilon}}^{jl}(D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}-D_{\beta}g_{kl}) \tag{6.30}\] \[+D_{\alpha}g_{ij}D_{\beta}g_{kl}g_{\varphi_{\varepsilon}}^{jl}(g_ {\varphi_{\varepsilon}}^{ik}-g^{ik})\] \[+D_{\alpha}g_{ij}D_{\beta}g_{kl}g_{\varphi_{\varepsilon}}^{ik}(g_ {\varphi_{\varepsilon}}^{jl}-g^{jl}).\]
For the first term of right hand side of (6.30), by using \(|D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}-D_{\beta}g_{kl}|\leq|\nabla_{0}^{2 }\nabla_{0,X}\varphi_{\varepsilon}|\), we obtain that the decay rate of the first term is given by \((D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1\). For the second and third terms, we need to analyze \((g_{\varphi_{\varepsilon}}^{ik}-g^{ik})\). Similar to (6.28), we have
\[D_{\alpha}g_{ij}D_{\beta}g_{kl}g^{ik}(g_{\varphi_{\varepsilon}}^{jl}-g^{jl}) =|Q-Q^{\prime}|O(r^{-2\tau-2})+|\xi-\xi^{\prime}|O(r^{-\tau-\gamma-3})+O(r^{-2 \gamma-4}). \tag{6.31}\]
By inserting (6.27) into (6.31), we have
\[D_{\alpha}g_{ij}\big{(}g_{\varphi_{\varepsilon}}^{ik}g_{\varphi_{\varepsilon }}^{jl}D_{\beta}(g_{\varphi_{\varepsilon}})_{kl}-g^{ik}g^{jl}D_{\beta}g_{kl} \big{)}=O(r^{\max\{(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,(D_{X}^{2} \varphi_{\varepsilon})^{\sharp}-2\tau-2,-2\gamma-4\}}). \tag{6.32}\]
Then, inserting (6.27), (6.28) into (6.17), we have
\[|D_{\alpha}D_{\beta}\varphi_{\varepsilon}|=O(r^{\max\{(D_{X}^{2}\varphi_{ \varepsilon})^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2 \gamma-4,-\varsigma-2\}}).\]
We can go one step further by applying Schauder estimates to (6.8) and (6.17) and to obtain \(\mathcal{C}^{2,\mathfrak{a}}\) estimates for \(D_{\alpha}\varphi_{\varepsilon}\) and \(D_{\alpha}D_{\beta}\varphi_{\varepsilon}\) in \(\hat{B}_{1}(p)\). Indeed, those terms on the right-hand side of the PDEs (6.8), (6.17) that were known to decay pointwise with rate \(\max\{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp},(D_{X}\varphi_{\varepsilon} )^{\sharp}\}-\tau-1\) already after Step 3 are actually also decaying at rate \(\max\{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp},(D_{X}\varphi_{\varepsilon} )^{\sharp}\}-\tau-1\) in \(\mathcal{C}^{0,\mathfrak{a}}(\hat{B}_{1}(p))\) norm. This is clear from (6.23). So we just need to find the decay rates of the most difficult terms, \((g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\alpha}g_{ij}\) in (6.8) and \((g_{\varphi_{\varepsilon}}^{ij}-g^{ij})D_{\beta}D_{\alpha}g_{ij}\), \(D_{\alpha}g_{ij}(g_{\varphi_{\varepsilon}}^{ik}g_{\varphi_{\varepsilon}}^{jl}D _{\beta}(g_{\varphi_{\varepsilon}})_{kl}-g^{ik}g^{jl}D_{\beta}g_{kl})\) in (6.17) in \(\mathcal{C}^{0,\mathfrak{a}}(\hat{B}_{1}(p))\) norm as well. For this we need to go back and also estimate the \(\mathcal{C}^{0,\mathfrak{a}}\)-norm of \(Q-Q^{\prime}\) and \(\xi-\xi^{\prime}\) in \(\hat{B}_{1}(p)\), as follows. By using (6.23), we have that
\[[P^{-1}-(P^{\prime})^{-1}]_{\mathcal{C}^{0,\mathfrak{a}}(\hat{B}_{1}(p))}=O(r^ {(D_{X}^{2}\varphi_{\varepsilon})^{\sharp}}),\quad[\xi]_{\mathcal{C}^{0, \mathfrak{a}}(\hat{B}_{1}(p))}=O(r^{(D_{X}\varphi_{\varepsilon})^{\sharp}}).\]
Then, based on (6.25), we have that
\[[Q-Q^{\prime}]_{\mathcal{C}^{0,\mathfrak{a}}(\hat{B}_{1}(p))} =O(r^{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp},2(D_{X}\varphi_{ \varepsilon})^{\sharp},-2\gamma-2}), \tag{6.33}\] \[[\xi-\xi^{\prime}]_{\mathcal{C}^{0,\mathfrak{a}}(\hat{B}_{1}(p))} =O(r^{\max\{(D_{X}\varphi_{\varepsilon})^{\sharp},-\gamma-1\}}).\]
Then we can proceed as in (6.28) and (6.32), obtaining that the decay rates of \([\Delta_{\varphi_{\varepsilon}}D_{\alpha}\varphi_{\varepsilon}]_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}\) and \([\Delta_{\varphi_{\varepsilon}}D_{\alpha}D_{\beta}\varphi_{\varepsilon}]_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}_{1}(p))}\) are \(\max\{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2\gamma-3,-\varsigma-1\}\) and \(\max\{(D_{X}^{2}\varphi_{\varepsilon})^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2\gamma-4,-\varsigma-2\}\) respectively. According to the classic interior and boundary Schauder estimates, we improve (6.29) to the \(\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p))\) norm,
\[\begin{split}\|D_{\alpha}\varphi_{\varepsilon}\|_{\mathcal{C}^{2, \mathbf{a}}(\hat{B}_{1}(p))}&=O(r^{\max\{(D_{X}^{2}\varphi_{ \varepsilon})^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2 \gamma-3,-\varsigma-1\}}),\\ \|D_{\alpha}D_{\beta}\varphi_{\varepsilon}\|_{\mathcal{C}^{2, \mathbf{a}}(\hat{B}_{1}(p))}&=O(r^{\max\{(D_{X}^{2}\varphi_{ \varepsilon})^{\sharp}-\tau-1,(D_{X}\varphi_{\varepsilon})^{\sharp}-\tau-1,-2 \gamma-4,-\varsigma-2\}}).\end{split} \tag{6.34}\]
Inserting (6.23) into (6.34), and using (6.34) again to improve (6.23), we finally obtain the following estimates:
\[\|D_{\alpha}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p ))}=O(r^{\max\{-2\gamma-3,-\varsigma-1\}}),\quad\|D_{\alpha}D_{\beta}\varphi_ {\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p))}=O(r^{\max\{-2 \gamma-4,-\varsigma-2\}}). \tag{6.35}\]
Note that according to (6.35), because \(\Psi\) was chosen to be linear in \(t\), the decay rate of \(\varphi_{\varepsilon}\) is faster than the decay rate of the boundary data \(\psi_{0},\psi_{1}\).
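Concretely, the first pass of the iteration reads as follows: plugging the initial rates \((D_{X}\varphi_{\varepsilon})^{\sharp}=-\tau-1\) and \((D_{X}^{2}\varphi_{\varepsilon})^{\sharp}=-\tau-2\) from (6.23) into (6.34) gives
\[\|D_{\alpha}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p))}=O(r^{\max\{-2\tau-2,\,-2\gamma-3,\,-\varsigma-1\}}),\qquad\|D_{\alpha}D_{\beta}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p))}=O(r^{\max\{-2\tau-2,\,-2\gamma-4,\,-\varsigma-2\}}),\]
and each further pass lowers the entries of the form \(-2\tau-2\) by \(\tau+1\) while the remaining entries stay fixed, so after finitely many passes one arrives at (6.35).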
_Step 5: Proof of Theorem B._
In Step 4, we have obtained the optimal decay rates in the cases of \(k=1,2\) (even though it is not required in the proof of Theorem B). In this step, we give optimal estimates for \(k\geq 3\) and complete the proof of Theorem B.
For the higher order derivatives, by differentiating the Monge-Ampere equation (6.2) \(m\) times, similar to (6.8) and (6.17) and writing \(D_{K}=D_{\kappa_{1}}\cdots D_{\kappa_{m}}\) (\(1\leq\kappa_{i}\leq 2n\), for \(i=1,\ldots,m\)), instead of giving a full formula as (6.8) and (6.17), we write a simplified formula of \(\Delta_{\varphi_{\varepsilon}}D_{K}\varphi_{\varepsilon}\):
\[\begin{split}|\Delta_{\varphi_{\varepsilon}}D_{K}\varphi_{ \varepsilon}|&\leq\sum_{i=1}^{m}O(r^{-\tau-2-m+i})|\nabla_{0,X}^{ i}\varphi_{\varepsilon}|+\sum_{i=1}^{m}O(r^{-\tau-1-m+i})|\nabla_{0}\nabla_{0,X}^ {i}\varphi_{\varepsilon}|\\ &+\sum_{i=1}^{m-1}O(r^{-\tau-m+i})|\nabla_{0}^{2}\nabla_{0,X}^{ i}\varphi_{\varepsilon}|+\sum_{i=1}^{m}|\nabla_{0,X}^{i}g_{jl}||\nabla_{0,X}^{m-i}(g_ {\varphi_{\varepsilon}}^{jl}-g^{jl})|\\ &+O(r^{-\varsigma-m}).\end{split} \tag{6.36}\]
Applying induction on \(m\), according to iteration process (step 4), we can assume for \(k\leq m-1\)
\[||\nabla_{0,X}^{k}\varphi_{\varepsilon}||_{\hat{B}_{1}(p)}=O(r^{\max\{-2 \gamma-2,-\varsigma\}-k}). \tag{6.37}\]
To find the optimal decay rates, the most difficult term is \(\sum_{i=1}^{m}|\nabla_{0,X}^{i}g_{jl}||\nabla_{0,X}^{m-i}(g_{\varphi_{ \varepsilon}}^{jl}-g^{jl})|\). Notice that by (6.31) and (6.34), we have
\[\big{|}D_{K_{1}}g_{jl}D_{K_{2}}g_{ik}(g_{\varphi_{\varepsilon}}^{ij}-g^{ij}) \big{|}=O(r^{-2\gamma-2-k_{1}-k_{2}}),\]
where \(K_{1}\), \(K_{2}\) are \(k_{1}\)-, \(k_{2}\)-multi-indices respectively. Then, we apply induction on \(k\) to find decay rate of \(|D_{K_{1}}g_{jl}D_{K_{2}}g_{ik}D_{K}(g_{\varphi_{\varepsilon}}^{ij}-g^{ij})|\), where \(K\) is a \(k\)-multi-index. Applying one derivative to \((g_{\varphi_{\varepsilon}}^{ij}-g^{ij})\), by (6.30), we can prove that
\[|D_{K_{1}}g_{jl}D_{K_{2}}g_{ik}D_{K}(g_{\varphi_{\varepsilon}}^{ij}-g^{ij})|=O( r^{-2\gamma-2-k_{1}-k_{2}-k}). \tag{6.38}\]
Then, by (6.30) and (6.38), we have
\[\begin{split}|\nabla_{0,X}^{i}g_{jl}||\nabla_{0,X}^{m-i}(g_{\varphi_{\varepsilon}}^{jl}-g^{jl})|&\leq|\nabla_{0,X}^{i}g_{jl}|\Big{\{}\big{|}\nabla_{0,X}^{m-i-1}\big{[}g_{\varphi_{\varepsilon}}^{jk}g_{\varphi_{\varepsilon}}^{sl}(\nabla_{0,X}(g_{\varphi_{\varepsilon}})_{ks}-\nabla_{0,X}g_{ks})\big{]}\big{|}\\ &+2\big{|}\nabla_{0,X}^{m-i-1}\big{[}g_{\varphi_{\varepsilon}}^{jk}(g_{\varphi_{\varepsilon}}^{ls}-g^{ls})\nabla_{0,X}g_{ks}\big{]}\big{|}\Big{\}}\\ &=O(r^{-2\gamma-2-m}).\end{split}\]
Combining with (6.37), we have that the right-hand side of (6.36) is \(O(r^{\max\{-2\gamma-2,-\varsigma\}-m})\). Using the construction of barrier functions in Steps 2-3, we obtain that \(|D_{K}\varphi_{\varepsilon}|\leq Cr^{\max\{-2\gamma-2,-\varsigma\}-m}\). To apply Schauder
estimates to the \(m\)-th derivative of Monge-Ampere equation, we also need to know the decay rate of \([\Delta_{\varphi_{\varepsilon}}D_{K}\varphi_{\varepsilon}]_{\mathcal{C}^{0, \mathbf{a}}(\hat{B}(p))}\):
\[[\Delta_{\varphi_{\varepsilon}}D_{K}\varphi_{\varepsilon}]_{ \mathcal{C}^{0,\mathbf{a}}(\hat{B}(p))} \leq\sum_{i=1}^{m}O(r^{-\tau-2-m+i})\|\nabla^{i}_{0,X}\varphi_{ \varepsilon}\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}(p))}\] \[+\sum_{i=1}^{m}O(r^{-\tau-1-m+i})\|\nabla_{0}\nabla^{i}_{0,X} \varphi_{\varepsilon}\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}(p))}\] \[+\sum_{i=1}^{m-1}O(r^{-\tau-m+i})\|\nabla^{2}_{0}\nabla^{i}_{0,X} \varphi_{\varepsilon}\|_{\mathcal{C}^{0,\mathbf{a}}(\hat{B}(p))}\] \[+\sum_{i=1}^{m}\big{\|}|\nabla^{i}_{0,X}g_{jl}||\nabla^{m-i}_{0, X}(g^{jl}_{\varphi_{\varepsilon}}-g^{jl})|\big{\|}_{\mathcal{C}^{0,\mathbf{a}}( \hat{B}(p))}+O(r^{-\varsigma-m})\] \[=O(r^{\max\{-2\gamma-2,-\varsigma\}-m})\]
Hence, we have \(\|D_{K}\varphi_{\varepsilon}\|_{\mathcal{C}^{2,\mathbf{a}}(\hat{B}_{1}(p))} \leq Cr^{\max\{-2\gamma-2,-\varsigma\}-m}\), for \(m\geq 1\). To prove that \(\varphi_{\varepsilon}\) is in \(\mathcal{H}_{-\gamma}\), by integrating \((\varphi_{\varepsilon})_{r}=O(r^{\max\{-2\gamma-2,-\varsigma\}-1})\) in the radial direction from infinity to \(r=R\), we obtain a function \(\hat{\varphi}_{\varepsilon}\) defined in \(X\setminus B_{R}\) with decay rate \(\max\{-2\gamma-2,-\varsigma\}\). Then,
\[\varphi_{\varepsilon}-\hat{\varphi}_{\varepsilon}=c(\theta,t), \tag{6.39}\]
where \(c(\theta,t)\) is a function in \(X\setminus B_{R}\) independent of the radius \(r\), and \(\theta\) is viewed as a variable on the link. It suffices to prove that \(c(\theta,t)\) is independent of \(\theta\). By taking the derivative of (6.39), we have \(|\nabla_{0,X}c(\theta,t)|=O(r^{\max\{-2\gamma-2,-\varsigma\}-1})\). If \(c(\theta,t)\) were not constant with respect to \(\theta\), then \(|\nabla_{0,X}c(\theta,t)|\sim r^{-1}\), which contradicts the fact that \(\max\{-2\gamma-2,-\varsigma\}<-1\). Hence we have proved that \(\varphi_{\varepsilon}=c(t)+O(r^{\max\{-2\gamma-2,-\varsigma\}})\). We conclude that, for \(\Phi_{\varepsilon}=\varphi_{\varepsilon}+\Psi\),
\[\sup_{(\mathbb{R}^{2n}\setminus B_{R})\times\Sigma}\Big{(}|\nabla^{k}_{0,X} \Phi_{\varepsilon}|+|\nabla^{k}_{0,X}\dot{\Phi}_{\varepsilon}|+|\nabla^{k}_{0,X}\ddot{\Phi}_{\varepsilon}|\Big{)}\leq C(k,\varepsilon^{-1})r^{-\gamma-k} \quad\text{for all }k\geq 1.\]
In conclusion, we have proved Theorem B.
## 7. Convexity of the Mabuchi \(K\)-energy
According to Theorem 1.1 (assuming \(\tau=\tilde{\tau}\)), we can restrict ourselves to the space
\[\mathcal{H}_{-2\tau+2}=\{\varphi\in\mathcal{C}^{\infty}_{-2\tau+2}:\omega_{ \varphi}=\omega+dd^{c}\varphi>0\},\quad\tau>n-1,\]
and the function \(\upsilon(\varepsilon)\) is constructed by (1.9) and (1.10). In the previous section, we proved that for any two given boundary data \(\psi_{0},\psi_{1}\in\mathcal{H}_{-2\tau+2}\), there exists a solution of the \(\varepsilon\)-geodesic equation \((E_{\varepsilon})\) in the same space \(\mathcal{H}_{-2\tau+2}\).
The derivative of the Mabuchi \(K\)-energy can be defined as follows: for \(\psi\in T_{\varphi}\mathcal{H}_{-2\tau+2}\),
\[\delta_{\psi}\mathcal{K}(\varphi)=-\int_{X}\psi R(\omega_{\varphi})\frac{\omega _{\varphi}^{n}}{n!}.\]
The integral converges because \(-2-2\tau<-2n\), equivalently, \(\tau>n-1\). In the following proposition, the second derivative of Mabuchi \(K\)-energy will be calculated in \(M=X_{R}=\{x\in X:r(x)\leq R\}\) containing boundary terms, and it will be clear that these boundary terms go to zero as \(R\to\infty\). Precisely, we consider Mabuchi \(K\)-energy restricted in \(M\),
\[\delta_{\psi}\mathcal{K}_{M}(\varphi)=-\int_{M}\psi R(\omega_{\varphi})\omega_{ \varphi}^{n}. \tag{7.1}\]
The calculation of the second variation of \(\mathcal{K}_{M}\) is due to my advisor Bianca Santoro in one of her unpublished notes, several years before I started this project. The limiting case \(R\to\infty\) was previously stated by Aleyasin [1] without details concerning the vanishing of boundary terms.
To simplify the notation, in the following proposition, we write \(R_{\varphi}=R(\omega_{\varphi})\), \(\mathrm{Ric}_{\varphi}=\mathrm{Ric}(\omega_{\varphi})\), \(\Delta=g^{i\overline{k}}_{\varphi}\partial_{i}\partial_{\overline{k}}\), \(|\cdot|=|\cdot|_{\omega_{\varphi}}\), \(\nabla=\nabla_{\omega_{\varphi}}\) and \(\mathcal{D}f=\nabla_{i}\nabla_{k}fdz^{i}dz^{k}\), where \(\nabla_{i}\nabla_{k}f=f_{,ik}\) is a covariant derivative of \(f\) with respect to \(\omega_{\varphi}\). Recall that \(\mathcal{D}\) is called the Lichnerowicz operator, and \(\mathcal{D}f=0\) if and only if \(\mathrm{grad}^{1,0}f\) is a holomorphic type \((1,0)\) vector field.
**Proposition 7.1** (Santoro).: _Along a path of potentials \(\varphi(t)\in\mathcal{H}_{-2r+2}\),_
\[\begin{split}\frac{d^{2}\mathcal{K}_{M}}{dt^{2}}&=-\int_{M}[\ddot{\varphi}-\frac{1}{2}|\nabla\dot{\varphi}|^{2}]R_{\varphi}\omega_{\varphi}^{n}+\int_{M}|\mathcal{D}\dot{\varphi}|^{2}\omega_{\varphi}^{n}\\ &-\frac{n(n-1)}{2}\int_{\partial M}\dot{\varphi}d^{c}\dot{\varphi}\wedge\mathrm{Ric}_{\varphi}\wedge\omega_{\varphi}^{n-2}+ni\int_{\partial M}\dot{\varphi}g^{k\overline{l}}_{\varphi}(\mathrm{Ric}_{\varphi})_{i\overline{l}}\dot{\varphi}_{k}dz^{i}\wedge\omega_{\varphi}^{n-1}\\ &-ni\int_{\partial M}\dot{\varphi}g^{i\overline{j}}_{\varphi}\dot{\varphi}_{,ik\overline{j}}dz^{k}\wedge\omega_{\varphi}^{n-1}-ni\int_{\partial M}g^{k\overline{l}}_{\varphi}\dot{\varphi}_{\overline{l}}\dot{\varphi}_{,ki}dz^{i}\wedge\omega_{\varphi}^{n-1}.\end{split} \tag{7.2}\]
_Furthermore, by taking \(R\to\infty\) in (7.2), we have_
\[\frac{d^{2}\mathcal{K}}{dt^{2}}=-\int_{X}\big{[}\ddot{\varphi}-\frac{1}{2}|\nabla\dot{\varphi}|^{2}\big{]}R_{\varphi}\frac{\omega_{\varphi}^{n}}{n!}+\int_{X}|\mathcal{D}\dot{\varphi}|^{2}\frac{\omega_{\varphi}^{n}}{n!}. \tag{7.3}\]
Proof.: By taking the second derivative of Mabuchi \(K\)-energy in \(M\), we have
\[\begin{split}\frac{d^{2}\mathcal{K}_{M}}{dt^{2}}=&\frac{d}{dt}\left[-\int_{M}R_{\varphi}\dot{\varphi}\,\omega_{\varphi}^{n}\right]\\ =&-\int_{M}\ddot{\varphi}\,R_{\varphi}\,\omega_{\varphi}^{n}-n\int_{M}\dot{\varphi}\frac{d}{dt}(\mathrm{Ric}_{\varphi})\wedge\omega_{\varphi}^{n-1}\\ &-n(n-1)\int_{M}\dot{\varphi}\,\mathrm{Ric}_{\varphi}\wedge\omega_{\varphi}^{n-2}\wedge(i\partial\overline{\partial}\dot{\varphi}).\end{split} \tag{7.4}\]
The second term of (7.4) needs one integration by parts, and we get
\[\begin{split}-n\int_{M}\dot{\varphi}\frac{d}{dt}(\mathrm{Ric}_{ \varphi})\wedge\omega_{\varphi}^{n-1}&=-n\int_{M}\dot{\varphi} \left[-i\partial\overline{\partial}\left(\frac{d}{dt}(\log\omega_{\varphi}^{n} )\right)\wedge\omega_{\varphi}^{n-1}\right]\\ &=n\int_{M}\dot{\varphi}\left[i\partial\overline{\partial}\left( \frac{n\omega_{\varphi}^{n-1}\wedge i\partial\overline{\partial}\dot{\varphi}} {\omega_{\varphi}^{n}}\right)\wedge\omega_{\varphi}^{n-1}\right]\\ &=\int_{M}\dot{\varphi}(\Delta^{2}\dot{\varphi})\omega_{\varphi} ^{n}.\end{split}\]
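The last equality uses twice the pointwise identity (valid for any smooth function \(v\), with \(\Delta=g^{i\overline{k}}_{\varphi}\partial_{i}\partial_{\overline{k}}\) as above):
\[n\,\omega_{\varphi}^{n-1}\wedge i\partial\overline{\partial}v=(\Delta v)\,\omega_{\varphi}^{n},\]
first with \(v=\dot{\varphi}\), to identify \(\frac{d}{dt}\log\omega_{\varphi}^{n}=\Delta\dot{\varphi}\), and then with \(v=\Delta\dot{\varphi}\).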
Now we turn to the term \(\int_{M}\dot{\varphi}\mathrm{Ric}_{\varphi}\wedge\omega_{\varphi}^{n-2}\wedge i\partial\overline{\partial}\dot{\varphi}\). For simplicity, writing \(u=\dot{\varphi}\),
\[\begin{split}&\int_{M}\dot{\varphi}\mathrm{Ric}_{\varphi} \wedge\omega_{\varphi}^{n-2}\wedge i\partial\overline{\partial}\dot{\varphi}\\ &=-i\int_{M}\partial u\wedge\overline{\partial}u\wedge\mathrm{ Ric}_{\varphi}\wedge\omega_{\varphi}^{n-2}+\frac{1}{2}\int_{\partial M}ud^{c}u \wedge\mathrm{Ric}_{\varphi}\wedge\omega_{\varphi}^{n-2}\\ &=-i\int_{M}\partial u\wedge\overline{\partial}u\wedge\mathring{ \mathrm{Ric}}_{\varphi}\wedge\omega_{\varphi}^{n-2}-\frac{i}{n}\int_{M} \partial u\wedge\overline{\partial}u\wedge R_{\varphi}\omega_{\varphi}^{n-1} \\ &+\frac{1}{2}\int_{\partial M}ud^{c}u\wedge\mathrm{Ric}_{\varphi} \wedge\omega_{\varphi}^{n-2}\\ &=-i\int_{M}\partial u\wedge\overline{\partial}u\wedge\mathring{ \mathrm{Ric}}_{\varphi}\wedge\omega_{\varphi}^{n-2}-\frac{1}{2n^{2}}\int_{M}| \nabla u|^{2}R_{\varphi}\omega_{\varphi}^{n}\\ &+\frac{1}{2}\int_{\partial M}ud^{c}u\wedge\mathrm{Ric}_{\varphi} \wedge\omega_{\varphi}^{n-2},\end{split}\]
where \(\mathring{\mathrm{Ric}}\) is the traceless part of Ricci. If \(\psi\) is any primitive \((1,1)\)-form, then
\[*\psi=\frac{-1}{(n-2)!}\psi\wedge\omega^{n-2},\ \ \mathrm{and\ hence}\ \ n(n-1)\mathring{\mathrm{Ric}}\wedge\omega^{n-2}_{\varphi}=-n!(* \mathring{\mathrm{Ric}}).\]
Hence,
\[\begin{split}& n(n-1)\int_{M}i\partial u\wedge\overline{\partial}u \wedge\mathring{\mathrm{Ric}}_{\varphi}\wedge\omega^{n-2}_{\varphi}\\ &=-\int_{M}n!(*\mathring{\mathrm{Ric}}_{\varphi})\wedge(i \partial u\wedge\overline{\partial}u)\\ &=-\int_{M}\langle\mathring{\mathrm{Ric}}_{\varphi},i\partial u \wedge\overline{\partial}u\rangle\,\omega^{n}_{\varphi}\\ &=-\int_{M}\langle\mathrm{Ric}_{\varphi},i\partial u\wedge \overline{\partial}u\rangle\,\omega^{n}_{\varphi}+\int_{M}\langle\frac{1}{n}R _{\varphi}\,\omega_{\varphi},i\partial u\wedge\overline{\partial}u\rangle\, \omega^{n}_{\varphi}.\end{split}\]
Note that
\[\begin{split}\int_{M}\langle\frac{1}{n}R_{\varphi}\,\omega_{ \varphi},i\partial u\wedge\overline{\partial}u\rangle\,\omega^{n}_{\varphi}& =\int_{M}\langle(n-1)!R_{\varphi}\,\omega_{\varphi},i\partial u \wedge\overline{\partial}u\rangle\,\frac{\omega^{n}_{\varphi}}{n!}\\ &=\int_{M}(i\partial u\wedge\overline{\partial}u)\wedge*[(n-1)!R_ {\varphi}\omega_{\varphi}]\\ &=\int_{M}R_{\varphi}(i\partial u\wedge\overline{\partial}u) \wedge\omega^{n-1}_{\varphi}\\ &=\frac{1}{2n}\int_{M}|\nabla u|^{2}R_{\varphi}\,\omega^{n}_{ \varphi}.\end{split}\]
Thus, we get that
\[\begin{split}\frac{d^{2}\mathcal{K}_{M}}{dt^{2}}=&- \int_{M}[\ddot{\varphi}-\frac{1}{2}|\nabla\dot{\varphi}|^{2}]R_{\varphi}\, \omega^{n}_{\varphi}-\int_{M}\langle\mathrm{Ric}_{\varphi},i\partial u\wedge \overline{\partial}u\rangle\,\omega^{n}_{\varphi}\\ &+\int_{M}u(\Delta^{2}u)\,\omega^{n}_{\varphi}-\frac{n(n-1)}{2} \int_{\partial M}ud^{c}u\wedge\mathrm{Ric}_{\varphi}\wedge\omega^{n-2}_{\varphi }.\end{split} \tag{7.5}\]
**Lemma 7.2**.: _Let \(f\) be a smooth function defined on \(M\). Then we have that_
\[\Delta^{2}f=\mathcal{D}^{*}\mathcal{D}f-g^{k\overline{l}}_{\varphi}g^{m\overline{j}}_{\varphi}(\mathrm{Ric}_{\varphi})_{k\overline{j}}f_{m\overline{l}}-g^{k\overline{l}}_{\varphi}g^{m\overline{j}}_{\varphi}\big{(}\nabla_{\overline{l}}(\mathrm{Ric}_{\varphi})_{k\overline{j}}\big{)}f_{m}.\]
_Hence,_
\[\begin{split}\int_{M}u(\Delta^{2}u)\omega^{n}_{\varphi}&-\int_{M}\langle\mathrm{Ric}_{\varphi},i\partial u\wedge\overline{\partial}u\rangle\omega^{n}_{\varphi}\\ &=\int_{M}|\mathcal{D}u|^{2}\omega^{n}_{\varphi}+ni\int_{\partial M}ug^{k\overline{l}}_{\varphi}(\mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k}dz^{i}\wedge\omega^{n-1}_{\varphi}\\ &-ni\int_{\partial M}ug^{i\overline{j}}_{\varphi}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}udz^{k}\wedge\omega^{n-1}_{\varphi}-ni\int_{\partial M}g^{k\overline{l}}_{\varphi}u_{\overline{l}}u_{,ki}dz^{i}\wedge\omega^{n-1}_{\varphi}.\end{split} \tag{7.6}\]
Proof.: Notice that
\[\nabla_{j}\nabla_{\overline{k}}\nabla_{i}f=\nabla_{\overline{k}}\nabla_{j} \nabla_{i}f-R^{\ m}_{i\ j\overline{k}}f_{m}\]
Then, we have
\[\begin{split}\Delta^{2}f&=g^{i\overline{j}}_{\varphi}g^{k\overline{l}}_{\varphi}\nabla_{\overline{l}}\nabla_{k}\nabla_{\overline{j}}\nabla_{i}f\\ &=g^{i\overline{j}}_{\varphi}g^{k\overline{l}}_{\varphi}\nabla_{\overline{l}}\big{(}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}f-R^{\ m}_{i\ k\overline{j}}f_{m}\big{)}\\ &=\mathcal{D}^{*}\mathcal{D}f-g^{k\overline{l}}_{\varphi}g^{m\overline{j}}_{\varphi}(\mathrm{Ric}_{\varphi})_{k\overline{j}}f_{m\overline{l}}-g^{k\overline{l}}_{\varphi}g^{m\overline{j}}_{\varphi}\big{(}\nabla_{\overline{l}}(\mathrm{Ric}_{\varphi})_{k\overline{j}}\big{)}f_{m}.\end{split}\]
Here \(\mathcal{D}^{*}\mathcal{D}=g^{i\overline{j}}g^{k\overline{l}}\nabla_{\overline{l}} \nabla_{\overline{j}}\nabla_{k}\nabla_{i}\). Then, we have
\[\int_{M}u\mathcal{D}^{*}\mathcal{D}u\,\omega_{\varphi}^{n}=\int_{M}g_{\varphi}^{k\overline{l}}\big{(}ug_{\varphi}^{i\overline{j}}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}u\big{)}_{\overline{l}}\omega_{\varphi}^{n}-\int_{M}\big{(}g_{\varphi}^{k\overline{l}}u_{\overline{l}}g_{\varphi}^{i\overline{j}}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}u\big{)}\omega_{\varphi}^{n}.\]
The Stokes' theorem can be applied to the first term in the above formula by observing that if we write \(\mathfrak{h}=ih_{k}dz^{k}=i(ug_{\varphi}^{i\overline{l}}\nabla_{\overline{j}} \nabla_{k}\nabla_{i}u)dz^{k}\), then \(g_{\varphi}^{k\overline{l}}(h_{k})_{\overline{l}}\omega_{\varphi}^{n}=n \overline{\partial}\mathfrak{h}\wedge\omega_{\varphi}^{n-1}\). Hence,
\[\int_{M}g_{\varphi}^{k\overline{l}}\big{(}ug_{\varphi}^{i\overline{j}}\nabla_ {\overline{j}}\nabla_{k}\nabla_{i}u\big{)}_{\overline{l}}\omega_{\varphi}^{n }=\int_{\partial M}n\mathfrak{h}\wedge\omega_{\varphi}^{n-1}.\]
Similarly,
\[\begin{split}-\int_{M}\big{(}g_{\varphi}^{k\overline{l}}u_{\overline{l}}g_{\varphi}^{i\overline{j}}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}u\big{)}\omega_{\varphi}^{n}&=-\int_{M}g_{\varphi}^{i\overline{j}}\big{(}g_{\varphi}^{k\overline{l}}u_{\overline{l}}\nabla_{k}\nabla_{i}u\big{)}_{\overline{j}}\omega_{\varphi}^{n}+\int_{M}|\mathcal{D}u|^{2}\omega_{\varphi}^{n}\\ &=\int_{M}|\mathcal{D}u|^{2}\omega_{\varphi}^{n}-ni\int_{\partial M}g_{\varphi}^{k\overline{l}}u_{\overline{l}}u_{,ki}dz^{i}\wedge\omega_{\varphi}^{n-1}.\end{split}\]
We have
\[\begin{split}\int_{M}u\Delta^{2}u&=\int_{M}|\mathcal{D}u|^{2}-\int_{M}ug_{\varphi}^{k\overline{l}}g_{\varphi}^{m\overline{j}}(\mathrm{Ric}_{\varphi})_{k\overline{j}}u_{m\overline{l}}-\int_{M}ug_{\varphi}^{k\overline{l}}g_{\varphi}^{m\overline{j}}\big{(}\nabla_{\overline{l}}(\mathrm{Ric}_{\varphi})_{k\overline{j}}\big{)}u_{m}\\ &\quad-ni\int_{\partial M}ug_{\varphi}^{i\overline{j}}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}udz^{k}\wedge\omega_{\varphi}^{n-1}-ni\int_{\partial M}g_{\varphi}^{k\overline{l}}u_{\overline{l}}u_{,ki}dz^{i}\wedge\omega_{\varphi}^{n-1}.\end{split}\]
Notice that
\[-\int_{M}\langle\mathrm{Ric}_{\varphi},i\partial u\wedge\overline{\partial}u \rangle\omega_{\varphi}^{n}=-\int_{M}g_{\varphi}^{i\overline{j}}g_{\varphi}^{ k\overline{l}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k}u_{\overline{j}} \omega_{\varphi}^{n},\]
and integrating by parts,
\[-\int_{M}g_{\varphi}^{i\overline{j}}g_{\varphi}^{k\overline{l}}( \mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k}u_{\overline{j}}\omega_{\varphi}^ {n} =\int_{M}g_{\varphi}^{i\overline{j}}\big{(}g_{\varphi}^{k\overline {l}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k}u\big{)}_{\overline{j}} \omega_{\varphi}^{n}+\int_{M}ug_{\varphi}^{i\overline{j}}g_{\varphi}^{k \overline{l}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k\overline{j}}\omega_{ \varphi}^{n}\] \[+\int_{M}ug_{\varphi}^{i\overline{j}}g_{\varphi}^{k\overline{l}} \big{(}\nabla_{\overline{j}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}\big{)}u_ {k}\omega_{\varphi}^{n}\] \[=ni\int_{\partial M}ug_{\varphi}^{k\overline{l}}(\mathrm{Ric}_{ \varphi})_{i\overline{l}}u_{k}dz^{i}\wedge\omega_{\varphi}^{n-1}+\int_{M}ug_{ \varphi}^{i\overline{j}}g_{\varphi}^{k\overline{l}}(\mathrm{Ric}_{\varphi})_{i \overline{l}}u_{k\overline{j}}\omega_{\varphi}^{n}\] \[+\int_{M}ug_{\varphi}^{i\overline{j}}g_{\varphi}^{k\overline{l}} \big{(}\nabla_{\overline{j}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}\big{)}u_ {k}\omega_{\varphi}^{n}.\]
Hence, we proved that
\[\begin{split}\int_{M}u(\Delta^{2}u)\omega_{\varphi}^{n}-\int_{M}\langle\mathrm{Ric}_{\varphi},i\partial u\wedge\overline{\partial}u\rangle\omega_{\varphi}^{n}&=\int_{M}|\mathcal{D}u|^{2}\omega_{\varphi}^{n}+ni\int_{\partial M}ug_{\varphi}^{k\overline{l}}(\mathrm{Ric}_{\varphi})_{i\overline{l}}u_{k}dz^{i}\wedge\omega_{\varphi}^{n-1}\\ &\quad-ni\int_{\partial M}ug_{\varphi}^{i\overline{j}}\nabla_{\overline{j}}\nabla_{k}\nabla_{i}udz^{k}\wedge\omega_{\varphi}^{n-1}-ni\int_{\partial M}g_{\varphi}^{k\overline{l}}u_{\overline{l}}u_{,ki}dz^{i}\wedge\omega_{\varphi}^{n-1},\end{split}\]
which completes the proof of the lemma.
The integration formula (7.6) in this lemma, together with (7.5), completes the proof of (7.2). It suffices to show that all boundary terms in this formula vanish as \(R\to\infty\). According to Theorem B, we can check that the decay rates of the integrands integrated on \(\partial M\) are at most \(-2\tau-1<-2n+1\). This completes the proof.
**Theorem 7.3**.: _Assume that \(\omega\) is an ALE Kahler metric on \(X\) such that the Ricci curvature of \(\omega\) is non-positive, \(\mathrm{Ric}(\omega)\leq 0\). Then, along each \(\varepsilon\)-geodesic in \(\mathcal{H}_{-2\tau+2}(\omega)\), \(\varphi(t)\), the Mabuchi \(K\)-energy is convex._
Proof.: The proof is parallel to Chen [9]. Here, we just do the calculation in the ALE setting. Define \(f=\ddot{\varphi}-\frac{1}{2}|\nabla\dot{\varphi}|^{2}_{\omega_{\varphi}}\). Then the \(\varepsilon\)-geodesic equation can be written as
\[\varepsilon\frac{\omega^{n}}{\omega_{\varphi}^{n}}=f.\]
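Indeed, with the convention \(\operatorname{Ric}(\omega_{\varphi})=-dd^{c}\log\omega_{\varphi}^{n}\) locally, taking \(-dd^{c}\log\) of both sides of this equation gives
\[\operatorname{Ric}(\omega_{\varphi})-\operatorname{Ric}(\omega)=-dd^{c}\log\frac{\omega_{\varphi}^{n}}{\omega^{n}}=-dd^{c}\log\frac{\varepsilon}{f}=dd^{c}\log f,\]
which is the observation invoked below.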
According to (7.3), together with the observation, \(\operatorname{Ric}(\omega_{\varphi})=\operatorname{Ric}(\omega)+dd^{c}\log f\), we have
\[\frac{d^{2}\mathcal{K}}{dt^{2}} =\int_{X}\left|\mathcal{D}\dot{\varphi}(t)\right|^{2}_{\omega_{ \varphi}}\omega_{\varphi}^{n}-\int_{X}fR(\omega_{\varphi})\,\omega_{\varphi}^{n}\] \[=\int_{X}\left|\mathcal{D}\dot{\varphi}(t)\right|^{2}_{\omega_{ \varphi}}\omega_{\varphi}^{n}-\int_{X}f\operatorname{tr}_{\omega_{\varphi}} \operatorname{Ric}(\omega)\,\omega_{\varphi}^{n}-\int_{X}f\Delta_{\omega_{ \varphi}}\log f\,\omega_{\varphi}^{n}\] \[=\int_{X}\left|\mathcal{D}\dot{\varphi}(t)\right|^{2}_{\omega_{ \varphi}}\omega_{\varphi}^{n}-\int_{X}f\operatorname{tr}_{\omega_{\varphi}} \operatorname{Ric}(\omega)\,\omega_{\varphi}^{n}+\int_{X}\frac{|\nabla f|^{2} _{\omega_{\varphi}}}{f}\,\omega_{\varphi}^{n}\geq 0.\]
We have the last equality because \(f|\nabla\log f|_{\omega_{\varphi}}=O(r^{-2\tau-1})\) and \(-2\tau-1<-(2n-1)\), so that the relevant boundary integral vanishes. Hence, we have proved the convexity of the Mabuchi \(K\)-energy.
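For completeness, the integration by parts behind the last equality reads, over \(X_{R}=\{r\leq R\}\) with \(\nu\) the outward unit normal and with the normalization conventions already in force,
\[-\int_{X_{R}}f\,\Delta_{\omega_{\varphi}}\log f\,\omega_{\varphi}^{n}=\int_{X_{R}}\frac{|\nabla f|_{\omega_{\varphi}}^{2}}{f}\,\omega_{\varphi}^{n}-\int_{\partial X_{R}}f\,\partial_{\nu}(\log f)\,d\sigma,\]
and the boundary integral is \(O(R^{2n-1}\cdot R^{-2\tau-1})\to 0\) as \(R\to\infty\).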
**Remark 7.4**.: A quick corollary of Theorem 7.3 is that, assuming \(\operatorname{Ric}(\omega)\leq 0\), the scalar-flat Kahler metric, if it exists, is unique in \(\mathcal{H}_{-2\tau+2}(\omega)\). The proof is also parallel to Chen [9]. However, if there exists a scalar-flat Kahler metric in \(\mathcal{H}_{-2\tau+2}(\omega)\), the condition \(\operatorname{Ric}(\omega)\leq 0\) implies \(\operatorname{Ric}(\omega)=0\). Hence, the uniqueness of scalar-flat ALE Kahler metrics can be reduced to the uniqueness result for Ricci-flat ALE Kahler metrics (which can be found in many references [20, 28, 13]). A short proof is given as follows. Let \(\omega_{0}\) be a scalar-flat Kahler metric in \(\mathcal{H}_{-2\tau+2}(\omega)\). The fact that \(\omega_{0}=\omega+O(r^{-2\tau})\) implies that the ADM masses of \(\omega\) and \(\omega_{0}\) are equal, \(\mathfrak{m}(\omega)=\mathfrak{m}(\omega_{0})\). According to the mass formula of Hein-LeBrun [19],
\[\mathfrak{m}(\omega)=A(n,c_{1}(X),[\omega])+B(n)\int_{X}R(\omega)\frac{\omega^ {n}}{n!},\]
where \(A(n,c_{1}(X),[\omega])\) is a constant determined only by the dimension \(n\), the first Chern class of \(X\) and the cohomology class of \(\omega\), and \(B(n)\) only depends on the dimension \(n\). The fact that \(\mathfrak{m}(\omega)=\mathfrak{m}(\omega_{0})\), together with the mass formula, implies that
\[\int_{X}R(\omega)=\int_{X}R(\omega_{0})=0.\]
The assumption that \(\operatorname{Ric}(\omega)\leq 0\) then implies that \(\operatorname{Ric}(\omega)=0\). Then, by a simple argument, we can prove that all scalar-flat ALE Kahler metrics in \([\omega]\) are actually Ricci-flat. The expansion of scalar-flat Kahler metrics (Theorem 1.1) implies that the Ricci form \(\operatorname{Ric}(\omega_{0})\) decays to zero at infinity with decay rate faster than \(-2n\). The ddbar lemma implies that there exists \(f\in\mathcal{C}^{\infty}_{2-2n}\) such that
\[\operatorname{Ric}(\omega_{0})=dd^{c}f.\]
Taking the trace with respect to \(\omega_{0}\), we have that \(\Delta f=0\). By solving the Laplace equation (for instance, see [30, Proposition 2.3]), there is a unique solution in the space \(\mathcal{C}^{\infty}_{-\delta}\) (for \(-\delta\in(-\infty,0)\backslash D\)), and hence \(f\equiv 0\), which implies that \(\omega_{0}\) is Ricci-flat.
## 8. Nonexistence of non-positive (or non-negative) Ricci curvature
Consider the standard family of negative line bundles, \(\mathcal{O}(-k)\), over \(\mathbb{CP}^{n-1}\) together with their natural projections \(\pi:\mathcal{O}(-k)\to\mathbb{CP}^{n-1}\). The total spaces of \(\mathcal{O}(-k)\) are fundamental examples of ALE Kahler manifolds, obtained by viewing \(\mathcal{O}(-k)\) as a resolution space of \(\mathbb{C}^{n}/\mathbb{Z}_{k}\). Let \(\omega\) be any ALE Kahler metric on \(\mathcal{O}(-k)\) asymptotic to the Euclidean metric with decay rate \(-\tau\) (\(\tau>0\)). In the following, we shall prove that in the case \(k\neq n\) the Ricci curvature of \(\omega\) cannot have a sign, i.e., it is neither everywhere non-negative nor everywhere non-positive. When \(k=n\), there always exists a Ricci-flat ALE Kahler metric in each compactly supported ALE Kahler class, see [21, 22, 27].
**Theorem 8.1**.: _Let \(\mathcal{O}(-k)\) be the standard negative line bundle over \(\mathbb{CP}^{n-1}\) with \(n\geq 2\) and \(k\neq n\). Let \(\omega\) be an ALE Kahler metric on \(\mathcal{O}(-k)\) with decay rate \(-\tau\)\((\tau>0)\). Then, the Ricci form of \(\omega\), \(\rho\), is of mixed type, i.e., neither \(\rho\geq 0\) nor \(\rho\leq 0\) is true._
Proof.: Notice that for each integer \(k\geq 1\), there is a compactification of \(\mathcal{O}(-k)\) by adding a divisor at infinity, \(D_{\infty}\cong\mathbb{CP}^{n-1}\). We denote the compactified manifold as \(M_{k}\) and the natural embedding \(j:\mathcal{O}(-k)\to M_{k}\) is holomorphic. \(M_{k}\) is a \(\mathbb{CP}^{1}\)-bundle over \(\mathbb{CP}^{n-1}\). Denote \(D_{0}\) as the divisor corresponding to the base manifold, \(\mathbb{CP}^{n-1}\subset\mathcal{O}(-k)\hookrightarrow M_{k}\). Then, the normal line bundles of \(D_{0}\) and \(D_{\infty}\) are given by
\[N_{D_{0}/M_{k}}=\mathcal{O}(-k),\quad N_{D_{\infty}/M_{k}}=\mathcal{O}(k). \tag{8.1}\]
The following facts on the geometry of \(M_{k}\) can be checked by viewing \(M_{k}\) as a smooth toric variety. \(M_{k}\) can be described by \(2n\) coordinate charts with coordinates \(\{U_{i};\ u_{i}^{1},\ldots,\)\(u_{i}^{n-1},u_{i}\}\), \(\{V_{i};\ v_{i}^{1},\ldots,v_{i}^{n-1},v_{i}\}\)\((0\leq i\leq n-1)\), where the coordinates are related by
\[(u_{i}^{1},\ldots,u_{i}^{n-1},u_{i})=\left(\frac{1}{u_{0}^{i}},\frac{u_{0}^{1}}{u_{0}^{i}},\ldots,\widehat{\frac{u_{0}^{i}}{u_{0}^{i}}},\ldots,\frac{u_{0}^{n-1}}{u_{0}^{i}},(u_{0}^{i})^{k}u_{0}\right),\quad 1\leq i\leq n-1,\] \[(v_{i}^{1},\ldots,v_{i}^{n-1},v_{i})=\left(u_{i}^{1},\ldots,u_{i}^{n-1},\frac{1}{u_{i}}\right),\quad 0\leq i\leq n-1.\]
The divisor classes of \(M_{k}\) are generated by the class of \(D_{0}\cong\mathbb{CP}^{n-1}\), the zero section of \(\mathcal{O}(-k)\subset M_{k}\), and the class of \(D_{f}\), the total space of the restriction of the \(\mathbb{CP}^{1}\)-bundle \(M_{k}\to D_{0}\) to a linear subspace of \(D_{0}\). Restricting \(D_{0}\) and \(D_{f}\) to \(U_{0}\), we can write
\[D_{0}=\overline{(u_{0}=0)},\qquad D_{f}=\overline{(u_{0}^{1}=0)}.\]
The divisor at infinity, \(D_{\infty}\), can be represented by \((u_{0}=\infty)=(v_{0}=0)\) and \(D_{\infty}\) can be represented in terms of \(D_{0}\) and \(D_{f}\) as follows,
\[D_{\infty}=D_{0}+kD_{f} \tag{8.2}\]
By viewing \(D_{0}\), \(D_{f}\) and \(D_{\infty}\) as smooth complex hypersurfaces of \(M_{k}\), the Poincare duals of \(D_{0}\), \(D_{f}\) and \(D_{\infty}\) have natural explicit representatives denoted by \(\rho_{0}\), \(\rho_{f}\), \(\rho_{\infty}\) respectively. For instance, in \(U_{0}\),
\[\rho_{0}|_{U_{0}} =\frac{1}{n\pi}i\partial\overline{\partial}\log\frac{(1+\sum_{j }|u_{0}^{j}|^{2})^{k}|u_{0}|^{2}+1}{(1+\sum_{j}|u_{0}^{j}|^{2})^{k}|u_{0}|^{2}}, \tag{8.3}\] \[\rho_{f}|_{U_{0}} =\frac{1}{n\pi}i\partial\overline{\partial}\log\big{(}1+\sum_{j }|u_{0}^{j}|^{2}\big{)},\] (8.4) \[\rho_{\infty}|_{U_{0}} =\frac{1}{n\pi}i\partial\overline{\partial}\log\big{[}\big{(}1+ \sum_{j}|u_{0}^{j}|^{2}\big{)}^{k}|u_{0}|^{2}+1\big{]}. \tag{8.5}\]
_Step 1: Extension of the ALE Ricci form to \(M_{k}\)._ Recall that the diffeomorphism \(\Phi:(\mathbb{C}^{n})^{*}/\mathbb{Z}_{k}\to\mathcal{O}(-k)\setminus D_{0}\) gives a holomorphic asymptotic chart of \(\mathcal{O}(-k)\). The diffeomorphism \(\Phi\) can be explicitly written as
\[\Phi:(\mathbb{C}^{n})^{*}\to\mathcal{O}(-k)\setminus D_{0},\quad\Phi(z_{1}, \ldots,z_{n})\big{|}_{U_{0}}=\Big{(}\frac{z_{2}}{z_{1}},\ldots,\frac{z_{n}}{z_{ 1}},z_{1}^{k}\Big{)}.\]
In the coordinate chart \(\{U_{0};\ u_{0}^{1},\ldots,u_{0}^{n-1},u_{0}\}\), we have \(r^{2k}=(1+\sum_{j}|u_{0}^{j}|^{2})^{k}|u_{0}|^{2}\). By the asymptotic condition of \(\omega\), in an asymptotic chart of \(\mathcal{O}(-k)\), \(\log(\omega^{n}/\omega_{0}^{n})\) can be viewed as a function of decay order \(O(r^{-\tau})\), where \(\omega_{0}\) is the standard Euclidean metric on the asymptotic chart. Thus, the Ricci form satisfies
\[\rho=-i\partial\overline{\partial}\log\frac{\omega^{n}}{\omega_{0}^{n}}=O(r^{- \tau-2}).\]
The adjunction formula tells us that as line bundles over \(\mathcal{O}(-k)\),
\[K_{\mathcal{O}(-k)}=\frac{n-k}{k}[D_{0}].\]
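This identity follows from the standard formula for the canonical bundle of the total space of a line bundle: with \(\pi:\mathcal{O}(-k)\to\mathbb{CP}^{n-1}\) the projection,
\[K_{\mathcal{O}(-k)}=\pi^{*}\big{(}K_{\mathbb{CP}^{n-1}}\otimes\mathcal{O}(k)\big{)}=\pi^{*}\mathcal{O}(k-n),\qquad[D_{0}]=\pi^{*}\mathcal{O}(-k),\]
so that \(K_{\mathcal{O}(-k)}=\frac{n-k}{k}[D_{0}]\) as \(\mathbb{Q}\)-divisor classes.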
Since \(\rho\) is the curvature form of a Hermitian metric on \(K_{\mathcal{O}(-k)}^{-1}\) and \(\rho_{0}\) is the curvature form of a Hermitian metric on \([D_{0}]\), it follows that
\[\rho+\frac{n-k}{k}\rho_{0}\,\,\text{is globally}\,\,i\partial\overline{ \partial}\text{-exact}.\]
By restricting \(\rho_{0}\) in (8.3) to the asymptotic chart of \(\mathcal{O}(-k)\), we have
\[\rho_{0}=-i\partial\overline{\partial}\log(1+r^{-2k}).\]
Hence, by Theorem 1.1, \(\rho\) can be written as
\[\rho=-\frac{n-k}{k}\rho_{0}+i\partial\overline{\partial}f\quad\text{ for }f\in\mathcal{C}_{-\tau^{\prime}}^{\infty}(\mathcal{O}(-k)),\quad\tau^{\prime}=\min\{2k,\tau\}>0.\]
Since \(\rho\) cannot be extended smoothly to \(M_{k}\), we define a smooth cut-off function \(\chi\),
\[\chi(t)=\begin{cases}1,&\quad 0\leq t\leq 1,\\ 0,&\quad t\geq 2,\\ \text{smooth},&\quad 1<t<2,\end{cases}\]
and we define \(\chi_{R}(t)=\chi(t/R)\). Applying the cutoff function, we can extend \(\rho\) to be
\[\rho_{R}=\begin{cases}-\frac{n-k}{k}\rho_{0}+i\partial\overline{ \partial}\big{(}\chi_{R}f\big{)},&\quad\text{in }M_{k}\setminus D_{\infty},\\ -\frac{n-k}{k}\rho_{0},&\quad\text{on }D_{\infty}.\end{cases}\]
_Step 2: Integral argument for \(n=2\)._ Recall that the intersection numbers between \(D_{\infty}\), \(D_{0}\) and \(D_{f}\) are given by
\[(D_{0})\cdot(D_{0})=-k,\quad(D_{0})\cdot(D_{f})=1,\quad(D_{f}) \cdot(D_{f})=0,\quad(D_{0})\cdot(D_{\infty})=0. \tag{8.6}\]
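Note that the last relation in (8.6) is consistent with (8.2):
\[(D_{0})\cdot(D_{\infty})=(D_{0})\cdot(D_{0}+kD_{f})=-k+k=0.\]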
In particular, if we integrate \(\rho\) over \(D_{0}\), then
\[\int_{D_{0}}\rho=\int_{D_{0}}\rho_{R}=\int_{M_{k}}\rho_{R}\wedge \rho_{0}=-\frac{2-k}{k}\int_{M_{k}}\rho_{0}^{2}=2-k. \tag{8.7}\]
On the other hand, we have \(\rho_{R}\to\rho\) pointwise and \(\rho_{R}=O(r^{-\tau^{\prime}-2})\) uniformly as \(R\to\infty\). Hence, by the dominated convergence theorem,
\[\int_{\{u_{0}^{1}=0\}}\rho=\lim_{R\to\infty}\int_{\{u_{0}^{1}=0\} }\rho_{R}=\int_{M_{k}}\rho_{R}\wedge\rho_{f}=-\frac{2-k}{k}\int_{M_{k}}\rho_{ 0}\wedge\rho_{f}=\frac{k-2}{k}. \tag{8.8}\]
Now assume that \(\rho\) is seminegative (or semipositive). Then the left-hand sides of both (8.7) and (8.8) are non-positive (or non-negative). However, the right-hand sides have opposite signs because \(k\neq 2\). This is a contradiction.
_Step 3: Integral argument for \(n\geq 3\)._ In higher dimensions, the difficulty is to calculate the intersection numbers of divisors. However, in the case of \(M_{k}\), we can apply the formula for intersection numbers on toric varieties [16, Chapter VII.6], or, more explicitly, integrate the formulas (8.3)-(8.5) for the Poincare duals. Notice that
\[\int_{D_{0}}\rho_{0}^{n-1}=(-k)^{n-1}. \tag{8.9}\]
Then, we have
\[\int_{D_{0}}\rho^{n-1}=\int_{D_{0}}\rho_{R}^{n-1}=\int_{M_{k}} \rho_{R}^{n-1}\wedge\rho_{0}=\Big{(}-\frac{n-k}{k}\Big{)}^{n-1}(-k)^{n-1}.\]
On the other hand, we have \(\rho_{R}\to\rho\) pointwise and \(\rho_{R}=O(r^{-\tau^{\prime}-2})\) uniformly as \(R\to\infty\). Hence, by the dominated convergence theorem,
\[\int_{\{u_{0}^{1}=0\}}\rho^{n-1} =\lim_{R\to\infty}\int_{\{u_{0}^{1}=0\}}\rho_{R}^{n-1}=\lim_{R\to \infty}\int_{D_{f}}\rho_{R}^{n-1}=\lim_{R\to\infty}\int_{M_{k}}\rho_{R}^{n-1} \wedge\rho_{f}\] \[=\left(-\frac{n-k}{k}\right)^{n-1}\int_{M_{k}}\rho_{0}^{n-1} \wedge\rho_{f}=\left(-\frac{n-k}{k}\right)^{n-1}(-k)^{n-2},\]
where the last equality can be observed from (8.2) and (8.9):
\[\int_{M_{k}}\rho_{0}^{n-1}\wedge\rho_{f}=\int_{M_{k}}\rho_{0}^{n-1}\wedge \frac{1}{k}(\rho_{\infty}-\rho_{0})=0-\frac{1}{k}\int_{M_{k}}\rho_{0}^{n}=(-k) ^{n-2}\]
because \(\rho_{0}|_{D_{\infty}}=0\). By the same argument as in dimension \(2\), we complete the proof.
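Explicitly, the last two displayed values simplify to
\[\int_{D_{0}}\rho^{n-1}=(n-k)^{n-1},\qquad\int_{\{u_{0}^{1}=0\}}\rho^{n-1}=-\frac{(n-k)^{n-1}}{k},\]
which differ by the negative factor \(-1/k\) and therefore have opposite signs as soon as \(k\neq n\), so the assumed semi-definiteness of \(\rho\) again yields a contradiction.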
|
2304.09645 | A motivic circle method | The circle method has been successfully used over the last century to study
rational points on hypersurfaces. More recently, a version of the method over
function fields, combined with spreading out techniques, has led to a range of
results about moduli spaces of rational curves on hypersurfaces. In this paper
a version of the circle method is implemented in the setting of the
Grothendieck ring of varieties. This allows us to approximate the classes of
these moduli spaces directly, without relying on point counting, and leads to a
deeper understanding of their geometry. | Margaret Bilu, Tim Browning | 2023-04-19T13:36:58Z | http://arxiv.org/abs/2304.09645v1 | # A motivic circle method
###### Abstract.
The circle method has been successfully used over the last century to study rational points on hypersurfaces. More recently, a version of the method over function fields, combined with spreading out techniques, has led to a range of results about moduli spaces of rational curves on hypersurfaces. In this paper a version of the circle method is implemented in the setting of the Grothendieck ring of varieties. This allows us to approximate the classes of these moduli spaces directly, without relying on point counting, and leads to a deeper understanding of their geometry.
2010 Mathematics Subject Classification: 14H10 (11D72, 11P55, 14E18)
###### Contents
* 1 Introduction
* 2 The Grothendieck ring of varieties with exponentials
* 3 The motivic circle method
* 4 The weight function
* 5 Weyl differencing and a general exponential sum bound
* 6 The motivic major arcs
* 7 The motivic minor arcs
* 8 The space of morphisms
* 9 The variety of lines
* A Geometry of numbers over function fields
## 1. Introduction
Let \(k\) be a field and let \(f\in k[x_{1},\ldots,x_{n}]\) be a non-singular homogeneous polynomial of degree \(d\geq 2\), defining a hypersurface \(X\subset\mathbf{A}^{n}\). For global fields, the density of \(k\)-points on \(X\) has been the object of intense study over the years. When \(k=\mathbf{Q}\), the Hardy-Littlewood circle method can be used to cast light on the limit
\[\lim_{B\to\infty}B^{-(n-d)}\#\{\mathbf{x}\in\mathbf{Z}^{n}:|\mathbf{x}|\leq B,\ f(\mathbf{x})=0\},\]
where \(\mathbf{x}=(x_{1},\ldots,x_{n})\) and \(|\mathbf{x}|=\max_{1\leq i\leq n}|x_{i}|.\) Thus, it follows from work of Birch [1] that this limit exists as a product of local densities, provided that \(n>2^{d}(d-1)\). Lee [13] has worked out the analogous statement when \(k=\mathbf{F}_{q}(t)\), for a finite field \(\mathbf{F}_{q}\) of characteristic \(>d\). Thus, under the same assumption \(n>2^{d}(d-1)\), a similar statement is proved about the existence and nature of the limit
\[\lim_{e\to\infty}q^{-e(n-d)}\#\{\mathbf{g}\in\mathbf{F}_{q}[t]^{n}:\deg(g_{1} ),\ldots,\deg(g_{n})\leq e,\ f(\mathbf{g})=0\},\]
where \(\mathbf{g}=(g_{1},\ldots,g_{n})\). This paper is concerned with the field \(k=\mathbf{C}(t)\) and the geometry of the affine variety
\[M_{e}=\{\mathbf{g}\in\mathbf{C}[t]^{n}:\deg(g_{1}),\ldots,\deg(g_{n})\leq e,\ f(\mathbf{g})=0\},\]
whose expected dimension is \(\mu(e):=n(e+1)-(de+1)\), since \(M_{e}\) is cut out by \(de+1\) equations inside an affine space of dimension \(n(e+1)\).
Note that \(\Lambda_{N}(f,\infty)\) can naturally be identified with \(\Lambda_{N}(f,0)\). Bearing this notation in mind, we shall prove the following result.
**Theorem 1.1**.: _Let \(f\in\mathbf{C}[x_{1},\dots,x_{n}]\) be a non-singular homogeneous polynomial of degree \(d\geq 3\), defining a hypersurface \(X\subset\mathbf{A}^{n}\). Assume \(n>2^{d}(d-1)\) and let \(e\geq 1\). Then_
\[[M_{e}]=\mathbf{L}^{\mu(e)}\left(\mathfrak{S}(f)\cdot\lim_{N\to\infty}\mathbf{ L}^{-N(n-1)}[\Lambda_{N}(f,\infty)]+R_{e}\right)\]
_in \(\widehat{\mathscr{M}_{\mathbf{C}}}\), where_
\[\mathfrak{S}(f)=\prod_{x\in\mathbf{A}^{1}}\lim_{N\to\infty}\mathbf{L}^{-N(n-1 )}[\Lambda_{N}(f,x)]\]
_is a motivic Euler product and \(R_{e}\) is an error term satisfying_
\[w(R_{e})\leq 4-\frac{n-2^{d}(d-1)}{2^{d-2}}\left(1+\left\lfloor\frac{e+1}{2d -2}\right\rfloor\right).\]
The term \(\mathfrak{S}(f)\) is the _motivic singular series_ and will emerge as an infinite convergent sum, whose precise definition is given in (6.11). In Sections 6.3.3 and 6.3.4 we will express \(\mathfrak{S}(f)\) as a motivic Euler product, using the construction of motivic Euler products found in [2, Section 3]. The relevant background facts about motivic Euler products will be recalled in Section 2.5.
_Remark 1.2_.: In this paper the focus is on hypersurfaces of degree \(d\geq 3\). However, on combining Propositions 3.7 and 3.8 in (3.1), it is also possible to deduce a statement for \(d=2\), with the only difference being a weaker error term if \(e\leq n/2-7\).
The proof of Theorem 1.1 relies on the development of a motivic version of the Hardy-Littlewood circle method. The arguments parallel some of the sheaf-theoretic arguments in [1], but are much more attuned to the steps taken in the classical circle method over \(\mathbf{Q}\) in [10], and its incarnation over \(\mathbf{F}_{q}(t)\) in [11]. The motivic circle method is developed over the course of several sections. In Section 3 we lay the foundations of the method and in Section 5 we work out a Weyl differencing argument to bound the weight of a general class of motivic exponential sums. In Sections 6 and 7 we analyse the contribution from the major and minor arcs, respectively. Our treatment of the minor arcs relies on tools from the geometry of numbers over the function field \(\mathbf{C}(t)\), in the spirit of work by Mahler [15]. We have collected together the necessary facts in Appendix A.
One notable feature of our work is that we work over \(\mathbf{C}\) throughout and never rely on the kind of counting arguments over finite fields that arise from spreading out. In doing so, we believe that Theorem 1.1 has the potential to unlock even more geometric information about the parameter space \(M_{e}\), and its cousins. We explain some applications in the remainder of this introduction.
### The Hodge-Deligne polynomial
As explained in [14, Prop. 3.2.14 of Chapter 2], the Hodge-Deligne polynomial gives a unique motivic measure \(\operatorname{HD}:K_{0}(\operatorname{Var}_{\mathbf{C}})\to\mathbf{Z}[u,v]\), via
\[\operatorname{HD}(X)=\sum_{p,q\geq 0}(-1)^{p+q}h^{p,q}(X)u^{p}v^{q},\]
where \(h^{p,q}(X)\) are the virtual Hodge numbers of \(X\). One has \(\operatorname{HD}(\mathbf{L})=uv\). Moreover, \(\operatorname{HD}\) induces a motivic measure
\[\widehat{\mathscr{M}_{\mathbf{C}}}\to\mathbf{Z}[u,v][[(uv)^{-1}]].\]
Applying this motivic measure to both sides of the equality in Theorem 1.1, it is possible to express a positive proportion of the coefficients of \(\operatorname{HD}(M_{e})\) in terms of the Hodge-Deligne polynomial of the motivic Euler product \(\mathfrak{S}(f)\). Following the categorification of inclusion-exclusion, recently worked out by Das and Howe [1], it would be interesting
to discern whether this could give access to homological stability results for the parameter space \(M_{e}\).
While the use of spreading-out and counting procedures usually only gives access to dimension and irreducibility results, which corresponds to the dominant term of the Hodge-Deligne polynomial, our motivic approach computes a positive proportion of the coefficients of this polynomial. We shall take this point of view further in Corollary 1.6 in the special case \(e=1\).
### The space of morphisms
Let \(f\in\mathbf{C}[x_{1},\dots,x_{n}]\) be a non-singular homogeneous polynomial of degree \(d\), and let \(\tilde{X}\subset\mathbf{P}^{n-1}\) be the hypersurface it defines. We can also study the space of morphisms of degree \(e\) from \(\mathbf{P}^{1}\) to \(\tilde{X}\), given by
\[\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})=\left\{\mathbf{g}\in( \mathbf{C}[t]^{n}-\{0\})/\mathbf{C}^{\times}:\begin{array}{l}\max\deg g_{i}=e,\ \gcd(g_{1},\dots,g_{n})=1,\\ f(\mathbf{g})=0\end{array}\right\}.\]
The restriction of \((\mathbf{C}[t]^{n}-\{0\})/\mathbf{C}^{\times}\) to polynomials of degree \(\leq e\) is isomorphic to \(\mathbf{P}^{n(e+1)-1}\), and so the expected dimension of \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})\) is \(n(e+1)-1-(de+1)=\mu(e)-1\), because it is cut out by \(de+1\) equations.
Our next result is deduced from Theorem 1.1 using inclusion-exclusion and supplies a similar result for the space of morphisms \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})\).
**Theorem 1.3**.: _Let \(f\in\mathbf{C}[x_{1},\dots,x_{n}]\) be a non-singular homogeneous polynomial of degree \(d\geq 3\), defining a hypersurface \(\tilde{X}\subset\mathbf{P}^{n-1}\). Assume \(n>2^{d}(d-1)\) and let \(e\geq 1\). Then_
\[[\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]=\frac{\mathbf{L}^{\mu(e)-1 }}{1-\mathbf{L}^{-1}}\left(\prod_{v\in\mathbf{P}^{1}}c_{v}+S_{e}\right)\]
_in \(\widehat{\mathscr{M}_{\mathbf{C}}}\), where_
\[c_{v}=(1-\mathbf{L}^{-1})\frac{[\tilde{X}]}{\mathbf{L}^{n-2}}\]
_and \(S_{e}\) is an error term satisfying_
\[w(S_{e})\leq 4-\left(\frac{n-2^{d}(d-1)}{2^{d-1}(d-1)}\right)(e+1).\]
### Irreducibility and dimension of the space of morphisms
We shall describe the properties of the weight function \(w:\mathscr{E}xp\mathscr{M}_{\mathbf{C}}\to\mathbf{Z}\) in Section 4. One of its key properties is the identity \(w([V])=2\dim V\), for any \(\mathbf{C}\)-variety \(V\), which is proved in Lemma 4.6. In fact, by Lemma 4.8, we have
\[w([V]-[W])\leq 2r-1,\]
whenever \(V,W\) are irreducible \(\mathbf{C}\)-varieties of equal dimension \(r\). Rather than apply the statement of Theorem 1.3, we shall instead invoke its proof and the expression (8.2) in particular. Lemma 6.16 and Corollary 6.15 imply that \(\Lambda_{N}(f,\infty)\subset\mathbf{A}^{Nn}\) is irreducible and of dimension \(N(n-1)\). Hence \(\sigma_{\infty}(f)=1+E_{1}\) in (8.2), with \(w(E_{1})\leq-1\). Moreover, it follows from Remark 6.6 that \(\mathfrak{S}(f)=1+E_{2}\) in \(\mathscr{E}xp\mathscr{M}_{\mathbf{C}}\), where \(w(E_{2})<0\), and
\[\frac{(1-\mathbf{L}^{-(n-d)})(1-\mathbf{L}^{-(n-d)+1})}{1-\mathbf{L}^{-1}}=1+E _{3},\]
where \(w(E_{3})\leq-2\). Hence the following result is an easy consequence of (8.2) in the proof of Theorem 1.3.
**Corollary 1.4**.: _Let \(f\in\mathbf{C}[x_{1},\dots,x_{n}]\) be a non-singular homogeneous polynomial of degree \(d\geq 3\), defining a hypersurface \(\tilde{X}\subset\mathbf{P}^{n-1}\). Assume \(n>2^{d}(d-1)\) and let \(e\geq 1\). Then \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})\) is an irreducible variety of dimension \(\mu(e)-1\)._
This refines [1, Thm. 1.1], which achieves the same conclusion under the more stringent assumption \(n>2^{d}\left(d-\frac{1}{2}\right)\).
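As a quick consistency check in the smallest case \(e=1\), suppose that \(d=3\). By work of Altman and Kleiman recalled below, the Fano variety of lines \(F_{1}(\tilde{X})\) is then smooth of dimension \(2n-8\), and \(\operatorname{Mor}_{1}(\mathbf{P}^{1},\tilde{X})\) fibres over \(F_{1}(\tilde{X})\) with \(3\)-dimensional fibres, given by the choices of parametrisation \(\mathbf{P}^{1}\xrightarrow{\sim}\ell\) of a line \(\ell\subset\tilde{X}\); hence \(\dim\operatorname{Mor}_{1}(\mathbf{P}^{1},\tilde{X})=(2n-8)+3=2n-5=\mu(1)-1\), in agreement with Corollary 1.4.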
### Weak approximation over \(\mathbf{C}(t)\)
Given a smooth hypersurface \(\tilde{X}\subset\mathbf{P}^{n-1}\) and points \(p_{1},\dots,p_{m}\in\mathbf{P}^{1}\) and \(x_{1},\dots,x_{m}\in\tilde{X}\), a natural generalisation involves studying the space \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X};p_{1},\dots,p_{m};x_{1},\dots,x_{m})\) of degree \(e\) morphisms \(g:\mathbf{P}^{1}\to\tilde{X}\), such that \(g(p_{i})=x_{i}\) for \(1\leq i\leq m\). Assuming that \(n>2^{d}(d-1)\) and \(e\) is large enough in terms of \(m\), the methods of this paper are robust enough to study the class of this variety in \(\widehat{\mathscr{M}}_{\mathbf{C}}\), which would yield an analogue of Corollary 1.4 for \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X};p_{1},\dots,p_{m};x_{1},\dots,x_{m})\). This would address a question raised by Will Sawin in his lecture at the Banff workshop "Geometry via Arithmetic" in July 2021, who highlighted that methods from algebraic geometry break down when \(m\) is allowed to be arbitrary, even for generic hypersurfaces.
The study of \(\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X};p_{1},\dots,p_{m};x_{1},\dots,x_{m})\) can also be interpreted as a weak approximation problem over \(\mathbf{C}(t)\), a topic that was studied by Hassett and Tschinkel [11] for rationally connected varieties. Whereas Hassett and Tschinkel are only able to treat weak approximation at the places of good reduction, our approach would allow one to work with arbitrary places.
### Motivic Manin-Peyre
Our work allows us to address a question of Peyre [10, Question 5.4] about the convergence of
\[[\operatorname{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]\mathbf{L}^{-e(n-d)},\]
for a smooth hypersurface \(\tilde{X}\subset\mathbf{P}^{n-1}\) of degree \(d\) that is defined over \(\mathbf{C}\). If \(n>2^{d}(d-1)\), then Theorem 1.3 shows that this sequence converges in the weight topology to
\[\frac{\mathbf{L}^{n-2}}{1-\mathbf{L}^{-1}}\prod_{v\in\mathbf{P}^{1}}c_{v},\]
where
\[c_{v}=(1-\mathbf{L}^{-1})\frac{[\tilde{X}]}{\mathbf{L}^{n-2}}.\]
This is consistent with a motivic analogue of the Manin-Peyre conjecture described by Faisant [14, p. 3], who suggests that the limit can be interpreted as an adelic volume
\[\frac{\mathbf{L}^{\dim(\tilde{X})}}{(1-\mathbf{L}^{-1})^{\operatorname{rkPic} (\tilde{X})}}\prod_{v\in\mathbf{P}^{1}}\mathfrak{c}_{v},\]
where, at all but finitely many places, \(\mathfrak{c}_{v}=(1-\mathbf{L}^{-1})^{\operatorname{rkPic}(\tilde{X})}[\tilde{X}]\mathbf{L}^{-\dim\tilde{X}}\). Indeed, we have \(\dim(\tilde{X})=n-2\) and \(\operatorname{rkPic}(\tilde{X})=1\), by the Lefschetz hyperplane theorem.
### The variety of lines on hypersurfaces
Our method is robust enough to give non-trivial results already in the case \(e=1\) of lines. Let \(\tilde{X}\subset\mathbf{P}^{n-1}\) as above be a smooth hypersurface of degree \(d\), and let \(F_{1}(\tilde{X})\) be the Fano variety of lines associated to \(\tilde{X}\). We shall prove the following result in Section 9.
**Theorem 1.5**.: _Let \(\tilde{X}\subset\mathbf{P}^{n-1}\) be a smooth hypersurface of degree \(d\geq 3\) with \(n>2^{d}(d-1)\). Then_
\[[F_{1}(\tilde{X})]=\frac{\mathbf{L}^{-d}[\tilde{X}]^{2}-\mathbf{L}^{n-d-2}[ \tilde{X}]}{1+\mathbf{L}^{-1}}+\widehat{R}_{n,d}\]
_in \(\widehat{\mathscr{M}}_{\mathbf{C}}\), where_
\[w(\widehat{R}_{n,d})\leq 4n-2d-6-\frac{n-2^{d}(d-1)}{2^{d-2}}.\]
As a consequence, in Section 9 we may compute a positive proportion of the coefficients of the Hodge-Deligne polynomial of \(F_{1}(\tilde{X})\).
**Corollary 1.6**.: _Let \(\tilde{X}\subset\mathbf{P}^{n-1}\) be a smooth hypersurface of degree \(d\geq 3\), with \(n>2^{d}(d-1)\). Let \(p,q\in\mathbf{Z}\) such that \(p,q\leq 2n-5-d\) and_
\[p+q>4n-2d-6-\frac{n-2^{d}(d-1)}{2^{d-2}}.\]
_Then_
\[h^{p,q}(F_{1}(\tilde{X}))=\begin{cases}\left\lfloor\frac{2n-d-5-p}{2}\right \rfloor+1&\text{ if }p=q,\\ 0&\text{ otherwise.}\end{cases}\]
Let us now discuss in a bit more detail the case \(d=3\), that is, when \(\tilde{X}\) is a smooth cubic hypersurface. In this case, \(F_{1}(\tilde{X})\) is known to be a smooth variety of dimension \(2n-8\) as soon as \(n\geq 4\), by work of Altman and Kleiman [1]. Galkin and Shinder [11] have established a relationship between the classes \([\tilde{X}]\) and \([F_{1}(\tilde{X})]\) in \(K_{0}(\mathrm{Var}_{\mathbf{C}})\): it follows from [11, Thm. 5.1] that
\[\mathbf{L}^{2}[F_{1}(\tilde{X})]=[\mathrm{Sym}^{2}\tilde{X}]-(1+\mathbf{L}^{ n-2})[\tilde{X}]. \tag{1.4}\]
In fact, in the cubic case, Corollary 1.6 can be deduced from [11, Thm. 6.1]. (See also Eq. (4.21) in the book by Huybrechts [12]).
On taking \(d=3\) in Theorem 1.5, we obtain
\[\mathbf{L}^{2}[F_{1}(\tilde{X})]=\frac{\mathbf{L}^{-1}[\tilde{X}]^{2}- \mathbf{L}^{n-3}[\tilde{X}]}{1+\mathbf{L}^{-1}}+\widehat{R}_{n},\]
if \(n\geq 17\), where \(w(\widehat{R}_{n})\leq\frac{7}{2}n.\) Note that \(w(\mathbf{L}^{2}[F_{1}(\tilde{X})])=4+2(2n-8)=4n-12\), and so we do indeed get non-trivial information. Comparing it with (1.4), and assuming \(n\geq 17\), this results in the expression
\[[\mathrm{Sym}^{2}\tilde{X}]=\frac{\mathbf{L}^{-1}[\tilde{X}]^{2}-\mathbf{L}^ {n-3}[\tilde{X}]}{1+\mathbf{L}^{-1}}+\mathbf{L}^{n-2}[\tilde{X}]+\widehat{R^ {\prime}}_{n}, \tag{1.5}\]
where \(w(\widehat{R^{\prime}}_{n})\leq\frac{7}{2}n\). In general there seems no reason to expect a relation between the class of \(\mathrm{Sym}^{2}\tilde{X}\) and the classes \([\tilde{X}]^{2}\) and \([\tilde{X}]\) in the Grothendieck group, but (1.5) gives a good approximation of the class of \(\mathrm{Sym}^{2}\tilde{X}\) in the weight topology. In Remark 9.2 we use work of Burillo [13] to calculate part of the Hodge-Deligne polynomial of \(\mathrm{Sym}^{2}\tilde{X}\) and check that Corollary 1.6 is consistent with (1.4) when \(n\geq 17\).
It would be interesting to see whether our work could shed light on recent questions of Popov [14] around the class in \(K_{0}(\mathrm{Var}_{\mathbf{C}})\) of the moduli space that parameterises twisted cubics in smooth cubic hypersurfaces \(\tilde{X}\subset\mathbf{P}^{n-1}\).
### Acknowledgements
The authors would like to thank Yohan Brunebarbe, Antoine Chambert-Loir, Lois Faisant and Will Sawin for useful discussions. M.B. received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant agreement No. 893012. T.B. was supported by FWF grant P 36278 and by a grant from the Institute for Advanced Study School of Mathematics.
## 2. The Grothendieck ring of varieties with exponentials
In this section we define the Grothendieck ring of varieties with exponentials, relative to an arbitrary noetherian scheme \(S\). The _Grothendieck group of varieties with exponentials_\(K_{0}(\mathrm{ExpVar}_{S})\) is defined by generators and relations. Generators are pairs \((X,f)\), where
\(X\) is a variety over \(S\) and \(f:X\to\mathbf{A}^{1}\) is a morphism. We call such a pair a _variety with exponential_. Relations are the following:
\[(X,f)-(Y,f\circ u)\]
whenever \(X\), \(Y\) are \(S\)-varieties, \(f\colon X\to\mathbf{A}^{1}\) a morphism, and \(u\colon Y\to X\) is an \(S\)-isomorphism;
\[(X,f)-(Y,f|_{Y})-(U,f|_{U})\]
whenever \(X\) is an \(S\)-variety, \(f\colon X\to\mathbf{A}^{1}\) a morphism, \(Y\) a closed subscheme of \(X\) and \(U=X\setminus Y\) its open complement; and
\[(X\times_{\mathbf{Z}}\mathbf{A}^{1},\mathrm{pr}_{2})\]
where \(X\) is an \(S\)-variety and \(\mathrm{pr}_{2}\) is the second projection. We will write \([X,f]\) (or \([X,f]_{S}\) if we want to keep track of the base scheme \(S\)) for the class in \(K_{0}(\mathrm{ExpVar}_{S})\) of a pair \((X,f)\). The product \([X,f][Y,g]=[X\times_{S}Y,f\circ\mathrm{pr}_{1}+g\circ\mathrm{pr}_{2}]\) endows \(K_{0}(\mathrm{ExpVar}_{S})\) with a ring structure. We denote by \(\mathbf{L}\), or \(\mathbf{L}_{S}\), the class of \([\mathbf{A}^{1}_{S},0]\) in \(K_{0}(\mathrm{ExpVar}_{S})\). As for the usual Grothendieck ring \(K_{0}(\mathrm{Var}_{S})\), we may invert \(\mathbf{L}\), which gives us a ring denoted by \(\mathscr{E}xp\mathscr{M}_{S}\). As proved in [1, Lemma 1.1.3], the natural morphism \(K_{0}(\mathrm{Var}_{S})\to K_{0}(\mathrm{ExpVar}_{S})\) given by sending \([X]\) to \([X,0]\) is injective, and the same is true after inverting \(\mathbf{L}\).
Any morphism \(u:T\to S\) of noetherian schemes naturally induces a group homomorphism
\[u_{!}:K_{0}(\mathrm{ExpVar}_{T})\to K_{0}(\mathrm{ExpVar}_{S})\]
and a ring homomorphism
\[u^{*}:K_{0}(\mathrm{ExpVar}_{S})\to K_{0}(\mathrm{ExpVar}_{T}).\]
The latter endows \(K_{0}(\mathrm{ExpVar}_{T})\) with a \(K_{0}(\mathrm{ExpVar}_{S})\)-module structure: in particular, this allows us to write expressions involving elements of both rings, which will implicitly be viewed as living in \(K_{0}(\mathrm{ExpVar}_{T})\).
### Functional interpretation
An element \(\varphi\in K_{0}(\mathrm{ExpVar}_{S})\) may be interpreted as a motivic function with source \(S\). More precisely, for every point \(s\in S\) with residue field \(k(s)\), denote by \(\varphi(s)\) the element \(s^{*}\varphi\in K_{0}(\mathrm{ExpVar}_{k(s)})\). Then the following lemma says that a motivic function is determined by its values:
**Lemma 2.1**.: _Let \(\varphi\in K_{0}(\mathrm{ExpVar}_{S})\). Assume that \(\varphi(s)=0\) for all \(s\in S\). Then \(\varphi=0\)._
Proof.: See [1, Lemma 1.1.8].
_Remark 2.2_.: Let \(k\) be a field. Using the relations, we see that \([\mathbf{A}^{1},\lambda\mathrm{id}]=0\) in \(K_{0}(\mathrm{ExpVar}_{k})\) for any non-zero \(\lambda\in k\). More generally, for any variety \(X\) over \(k\) with a morphism \(u:X\to\mathbf{G}_{m}\), we have
\[[X\times\mathbf{A}^{1},(x,t)\mapsto u(x)t]=0\]
in \(K_{0}(\mathrm{ExpVar}_{k})\). Indeed, using Lemma 2.1 we see that this holds already in \(K_{0}(\mathrm{ExpVar}_{X})\).
**Lemma 2.3**.: _Let \(V\) be a finite-dimensional \(k\)-vector space and \(f:V\to k\) a linear form. Then_
\[[V,f]=\begin{cases}\mathbf{L}^{\dim V}&\text{if }f=0,\\ 0&\text{otherwise.}\end{cases}\]
Proof.: [1, Lemma 1.1.11].
### Interpretation as exponential sums
Let \(\psi:\mathbf{F}_{q}\to\mathbf{C}^{*}\) be a non-trivial additive character. Then there is a ring homomorphism
\[K_{0}(\operatorname{ExpVar}_{\mathbf{F}_{q}})\to\mathbf{C},\]
given by
\[[X,f]\mapsto\sum_{x\in X(\mathbf{F}_{q})}\psi(f(x)).\]
In general, even when the ground field \(k\) is not finite, we will think of the elements of \(\operatorname{ExpVar}_{k}\) and \(\mathscr{E}xp\mathscr{M}_{k}\) as exponential sums.
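For instance, under this homomorphism the class \(\mathbf{L}=[\mathbf{A}^{1},0]\) maps to \(q\), and Lemma 2.3 becomes the familiar orthogonality relation for additive characters: for a linear form \(\ell\) on \(\mathbf{F}_{q}^{N}\),
\[\sum_{x\in\mathbf{F}_{q}^{N}}\psi(\ell(x))=\begin{cases}q^{N}&\text{ if }\ell=0,\\ 0&\text{ otherwise.}\end{cases}\]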
### A cohomological realisation
Let \(k\) be a finite field, let \(\ell\neq\operatorname{char}k\) be a prime, and let \(k^{s}\) be a separable closure of \(k\). We denote \(G_{k}=\operatorname{Gal}(k^{s}/k)\). Consider the category \(\operatorname{Rep}_{G_{k}}\mathbf{Q}_{\ell}\) of continuous \(\mathbf{Q}_{\ell}\)-representations of \(G_{k}\) and the corresponding Grothendieck ring \(K_{0}(\operatorname{Rep}_{G_{k}}\mathbf{Q}_{\ell})\). We fix a non-trivial additive character \(\psi:k\to\mathbf{C}^{*}\), and denote by \(\mathcal{L}_{\psi}\) the corresponding Artin-Schreier sheaf on \(\mathbf{A}_{k}^{1}\). There is a ring homomorphism
\[K_{0}(\operatorname{ExpVar}_{k})\to K_{0}(\operatorname{Rep}_{G_{k}}\mathbf{Q }_{\ell}),\]
given by
\[[X,f]\mapsto\sum_{i\geq 0}(-1)^{i}[H^{i}_{\operatorname{\acute{e}t},c}(X \times_{k}k^{s},f^{*}\mathcal{L}_{\psi})].\]
This motivic measure provides a dictionary between some computations in this paper and those carried out in [1].
### Motivic functions and integrals
In this section, we follow the ideas of [1, 1.2.1], except that we adapt the notation and normalisations to make them more convenient for our setting.
The motivic functions we consider are motivic analogues of compactly supported and locally constant functions on the locally compact field \(k((t^{-1}))\), where \(k\) is a finite field. For such a function \(\varphi\), there exist integers \(M\geq N\) such that \(\varphi\) is zero outside \(t^{M}k[[t^{-1}]]\), and such that \(\varphi\) is invariant modulo \(t^{N}k[[t^{-1}]]\), so that \(\varphi\) may be seen as a function on the quotient \(t^{M}k[[t^{-1}]]/t^{N}k[[t^{-1}]]\). We then say that \(\varphi\) is of level \((M,N)\). The latter can be endowed with the structure of an affine space of dimension \(M-N\) over the field \(k\), through the identification
\[\begin{array}{ccc}t^{M}k[[t^{-1}]]/t^{N}k[[t^{-1}]]&\to&\mathbf{A}_{k}^{M-N}(k)\\ a_{M}t^{M}+a_{M-1}t^{M-1}+\cdots+a_{N+1}t^{N+1}\bmod t^{N}k[[t^{-1}]]&\mapsto&(a_{M},\ldots,a_{N+1})\end{array}.\]
_Notation 2.4_.: For any field \(k\) and for \(M\geq N\) integers, we denote by \(\mathbf{A}_{k}^{(M,N)}\) the affine space \(\mathbf{A}_{k}^{M-N}\), interpreted as the domain of definition for functions which are zero outside \(t^{M}k[[t^{-1}]]\) and invariant modulo \(t^{N}k[[t^{-1}]]\) as above. More generally, for \(n\geq 1\) an integer, we denote by \(\mathbf{A}_{k}^{n(M,N)}\) the affine space \((\mathbf{A}_{k}^{(M,N)})^{n}\). If \(S\) is a variety over \(k\), we denote by \(\mathbf{A}_{S}^{n(M,N)}\) the extension \(\mathbf{A}_{k}^{n(M,N)}\times_{k}S\).
Writing \(K_{\infty}=k((t^{-1}))\), we denote by \(\mathscr{F}_{S}(K_{\infty}^{n},M,N)\) the ring \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}_{S}^{n(M,N)}}\) and interpret it as the ring of \(S\)-families of motivic functions of level \((M,N)\) in \(n\) variables. In the same way that is explained in [1, 1.2.3], as \(M\) and \(N\) vary, the rings \(\mathscr{F}_{S}(K_{\infty}^{n},M,N)\) fit into a directed system with direct limit denoted \(\mathscr{F}_{S}(K_{\infty}^{n})\), the total ring of \(S\)-families of motivic functions in \(n\) variables.
If we have an element \(\varphi\in\mathscr{F}_{S}(K_{\infty}^{n})\) we define its integral in the following way: pick a pair \((M,N)\) such that \(\varphi\in\mathscr{F}_{S}(K_{\infty}^{n},M,N)\), and we put
\[\int\varphi:=\mathbf{L}^{(N+1)n}[\varphi]_{S},\]
where \([\varphi]_{S}\) is the image of \(\varphi\) in \(\mathscr{E}xp\mathscr{M}_{S}\) via the forgetful morphism
\[\mathscr{E}xp\mathscr{M}_{\mathbf{A}_{S}^{n(M,N)}}\to\mathscr{E}xp\mathscr{M}_{S}.\]
We may also write \(\int\varphi=\int_{\mathbf{x}}\varphi(\mathbf{x})\mathrm{d}\mathbf{x}\), or simply \(\int\varphi=\int_{\mathbf{x}}\varphi(\mathbf{x})\), if we want to stress with respect to which variables the integral is taken. Analogously to [1, 1.2.3], this definition does not depend on the choice of \((M,N)\) (since the integral remains the same if we pick a larger \(M\) or a smaller \(N\)), and defines an \(\mathscr{E}xp\mathscr{M}_{S}\)-linear map
\[\int:\mathscr{F}_{S}(K_{\infty})\to\mathscr{E}xp\mathscr{M}_{S}.\]
**Example 2.5**.: Write \(\mathbf{T}=t^{-1}k[[t^{-1}]]\), the analogue of the fundamental compact interval \([0,1]\) in the function field circle method literature. Let \(\varphi=\mathbf{1}_{\mathbf{T}^{n}}\) be the characteristic function of \(\mathbf{T}^{n}\). Picking \(M=N=-1\), we get \(\int\varphi=1\). In other words, with this normalisation, the motivic volume of \(\mathbf{T}^{n}\) is \(1\).
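In the same spirit, if \(\varphi\) is the characteristic function of \(t^{M}k[[t^{-1}]]\) in one variable, then for any level \((M,N)\) with \(N\leq M\) the associated motivic function on \(\mathbf{A}_{k}^{(M,N)}\) is identically \(1\), so that \([\varphi]=\mathbf{L}^{M-N}\) and \(\int\varphi=\mathbf{L}^{N+1}\cdot\mathbf{L}^{M-N}=\mathbf{L}^{M+1}\), independently of the chosen level. This is the motivic analogue of the Haar measure \(q^{M+1}\) of \(t^{M}\mathbf{F}_{q}[[t^{-1}]]\) in the function field setting.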
_Remark 2.6_ (Change of variables).: Let \(\varphi\in\mathscr{E}xp\mathscr{M}_{\mathbf{A}_{S}^{n(M,N)}}\) and assume that we want to perform a change of variables \(\mathbf{x}\mapsto t^{p}\mathbf{x}=:\mathbf{y}\) for some integer \(p\in\mathbf{Z}\). This change of variable induces an isomorphism
\[\mathbf{A}_{S}^{n(M,N)}\to\mathbf{A}_{S}^{n(M+p,N+p)},\]
which gives us an isomorphism between the corresponding Grothendieck rings, sending \(\varphi\) to the function \(\psi:\mathbf{y}\mapsto\varphi(t^{-p}\mathbf{y})\). Moreover, this induces an isomorphism of the underlying \(S\)-varieties with exponentials, which implies the equality \([\varphi]_{S}=[\psi]_{S}\). We may conclude that we have the change of variable formula
\[\int_{\mathbf{x}}\varphi(\mathbf{x})\mathrm{d}\mathbf{x}=\mathbf{ L}^{n(N+1)}[\varphi]_{S}=\mathbf{L}^{n(N+1)}[\psi]_{S} =\mathbf{L}^{-pn}\int_{\mathbf{y}}\psi(\mathbf{y})\mathrm{d}\mathbf{y}\] \[=\mathbf{L}^{-pn}\int_{\mathbf{y}}\varphi(t^{-p}\mathbf{y}) \mathrm{d}\mathbf{y}.\]
### Motivic Euler products
We will use the motivic Euler products of [1, Section 3], more specifically, the variant where the coefficients are varieties with exponentials. We recall its definition here. Let \(X\) be a variety over a field \(k\), and let
\[(A_{i})_{i\geq 1}=(X_{i},f_{i}:X_{i}\to\mathbf{A}^{1})_{i\geq 1}\]
be a family of quasi-projective varieties with exponentials over \(X\). Let \(n\geq 1\) be an integer, and let \(\omega=(n_{i})_{i\geq 1}\) be a partition of \(n\), with \(n_{i}\) being the number of occurrences of \(i\), so that \(n=\sum_{i\geq 1}in_{i}\). We define the variety with exponential \(\mathrm{Conf}^{\omega}(X_{i},f_{i})_{i\geq 1}\) in the following way: consider the product
\[\prod_{i\geq 1}X_{i}^{n_{i}}\]
together with the morphism to \(\prod_{i\geq 1}X^{n_{i}}\) induced by the structural morphisms \(X_{i}\to X\). We denote by \(\left(\prod_{i\geq 1}X_{i}^{n_{i}}\right)_{*,X}\) the open subset lying above the complement of the big diagonal in \(\prod_{i\geq 1}X^{n_{i}}\) (that is, points with no two coordinates being equal). Then the variety \(\mathrm{Conf}^{\omega}(X_{i})_{i\geq 1}\) underlying \(\mathrm{Conf}^{\omega}(X_{i},f_{i})_{i\geq 1}\) is defined to be the quotient of \(\left(\prod_{i\geq 1}X_{i}^{n_{i}}\right)_{*,X}\) by the natural permutation action of the product of symmetric groups \(\prod_{i\geq 1}\mathfrak{S}_{n_{i}}\). The corresponding morphism \(f^{\omega}:\mathrm{Conf}^{\omega}(X_{i})_{i\geq 1}\to\mathbf{A}^{1}\) is induced in the obvious way by the natural \(\prod_{i\geq 1}\mathfrak{S}_{n_{i}}\)-invariant morphism \(\prod_{i\geq 1}f_{i}^{n_{i}}:\prod_{i\geq 1}X_{i}^{n_{i}}\to\mathbf{A}^{1}\), given by
\[(x_{i,1},\ldots,x_{i,n_{i}})_{i\geq 1}\mapsto\sum_{i\geq 1}\left(f_{i}(x_{i,1})+\cdots+f_{i}(x_{i,n_{i}})\right).\]
We define the motivic Euler product by
\[\prod_{x\in X}\left(1+\sum_{i\geq 1}A_{i,x}T^{i}\right):=1+\sum_{n\geq 1}\left( \sum_{\omega}[\operatorname{Conf}^{\omega}(X_{i},f_{i})_{i\geq 1}]\right)T^{n} \in K_{0}(\operatorname{ExpVar}_{k})[[T]],\]
where the inner sum is over partitions \(\omega\) of \(n\). Note that the left-hand side is only a _notation_ for the series on the right-hand side. In [10], it is shown that this notion of motivic Euler product satisfies many good properties, in particular in terms of multiplicativity.
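As a simple illustration, take \(A_{1}=(X,0)\) and \(A_{i}=0\) (i.e. \(X_{i}=\varnothing\)) for \(i\geq 2\). The only partition of \(n\) contributing to the coefficient of \(T^{n}\) is then \(n_{1}=n\), and the definition gives
\[\prod_{x\in X}(1+T)=1+\sum_{n\geq 1}[\operatorname{Conf}^{n}(X)]T^{n},\]
where \(\operatorname{Conf}^{n}(X)\) denotes the configuration space of \(n\) unordered distinct points of \(X\), that is, the quotient by \(\mathfrak{S}_{n}\) of the complement of the big diagonal in \(X^{n}\).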
## 3. The motivic circle method
We start by recalling the main ideas behind the function field version of the circle method. Let \(k=\mathbf{F}_{q}\) be a finite field. The field \(k\) is endowed with an additive character \(\psi:k\to\mathbf{C}^{*}\) defined by
\[x\mapsto\exp\left(\frac{2i\pi}{p}\mathrm{Tr}_{\mathbf{F}_{q}/\mathbf{F}_{p}}( x)\right).\]
The role of the compact unit interval \([0,1]\) is played by the set
\[\mathbf{T}=t^{-1}\mathbf{F}_{q}[[t^{-1}]]=\{\alpha\in\mathbf{F}_{q}((t^{-1})),\ \mathrm{ord}(\alpha)\leq-1\}.\]
The field \(k((t^{-1}))\) is locally compact, and we may normalise its Haar measure so that \(\int_{\mathbf{T}}\mathrm{d}\alpha=1\). The igniting spark of the circle method over \(\mathbf{F}_{q}(t)\) is the identity
\[\int_{\mathbf{T}}\psi(\mathrm{res}(x\alpha))\mathrm{d}\alpha=\begin{cases}1& \text{ if }x=0,\\ 0&\text{ otherwise,}\end{cases}\]
where, for a Laurent series \(f\in k((t^{-1}))\), we denote by \(\mathrm{res}(f)\) the coefficient of \(t^{-1}\) in \(f\).
If we want to count solutions of some polynomial equation over \(k\), we may use this identity to rewrite the quantity we are interested in as the integral
\[\int_{\mathbf{T}}S(\alpha)\mathrm{d}\alpha\]
of an appropriate exponential sum \(S(\alpha)\). The method then proceeds, as in the classical case, with cutting up \(\mathbf{T}\) into major and minor arcs. The contribution of the major arcs is rewritten as a product of local factors, but one requires a suitably strong upper bound for the contribution from the minor arcs.
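For the counting problem described in the introduction, for example, one takes
\[S(\alpha)=\sum_{\substack{\mathbf{g}\in\mathbf{F}_{q}[t]^{n}\\ \deg(g_{i})\leq e}}\psi(\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))),\]
so that \(\int_{\mathbf{T}}S(\alpha)\mathrm{d}\alpha\) is exactly the number of \(\mathbf{g}\) with \(f(\mathbf{g})=0\); Lemma 3.6 below is the motivic analogue of this identity.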
The main obstruction in passing from \(k=\mathbf{F}_{q}\) to \(k=\mathbf{C}\) is that we lose local compactness, and there is no theory of integration in the classical sense that we could use to reproduce the above steps. This is why we need to introduce ideas from _motivic integration_: exponential sums will be replaced by elements in the Grothendieck rings of varieties with exponentials, bearing in mind Section 2.2. Integrals will be replaced by the operation defined in Section 2.4. We will use the cut-and-paste relations in the Grothendieck ring of varieties to decompose our expression into major and minor arcs. The decomposition into local factors of the major arc contribution will be done via the notion of motivic Euler products, while the minor arcs will be bounded in a topology defined using Hodge theory. Once this translation into the motivic setting has been properly made, the motivic circle method will bear many similarities with the function field circle method, as implemented by Lee [14].
_Notation 3.1_.: For any integer \(m\), let \(\operatorname{Poly}_{\leq m}\) be the space of polynomials of degree \(\leq m\) with coefficients in \(\mathbf{C}\). We let \(\operatorname{MPoly}_{m}\) denote the space of monic polynomials of degree exactly \(m\) with coefficients in \(\mathbf{C}\). Both spaces are simply affine spaces over \(\mathbf{C}\); the first one has dimension \(m+1\) and the second one has dimension \(m\). We let \(\mathbf{T}\) be the space of elements \(\alpha\in\mathbf{C}((t^{-1}))\) such that \(\mathrm{ord}(\alpha)\leq-1\). For an element \(f\in\mathbf{C}((t^{-1}))\), we denote by \(\{f\}\in\mathbf{T}\) its fractional part.
### Approximation by rational functions
The classical circle method relies greatly on approximation of real numbers by rationals. In the same way, its function field version depends on approximation of power series in \(\mathbf{T}\) by rational functions.
When we want to approximate \(\alpha\in\mathbf{T}\) by a rational function \(\frac{h_{1}}{h_{2}}\), there are two parameters:
* the bound on the degree of \(h_{2}\);
* the precision of the approximation, as quantified by a bound on the order of \(\alpha h_{2}-h_{1}\).
Thus, we define
\[\mathbf{T}_{m,s}=\{\alpha\in\mathbf{T},\ \exists h_{2}\in\operatorname{Poly}_{ \leq m},h_{1}\in\operatorname{Poly}_{<\deg h_{2}},\ \ \operatorname{ord}(\alpha h_{2}-h_{1})<-s\},\]
for \(m\geq 0\) and \(s\geq 1\).
We can reformulate the definition of \(\mathbf{T}_{m,s}\) purely in terms of linear algebra, as follows. We have \(\alpha\in\mathbf{T}_{m,s}\) if and only if there exists a polynomial \(c_{0}+c_{1}t+\cdots+c_{m}t^{m}\) such that the coefficients of \(t^{-1},\ldots,t^{-s}\) in the series
\[\left(\sum_{i\geq 1}b_{i}t^{-i}\right)(c_{0}+c_{1}t+\cdots+c_{m}t^{m})\]
are zero. This gives us a linear system of \(s\) equations in \(c_{0},\ldots,c_{m}\):
\[b_{1}c_{0}+b_{2}c_{1}+\cdots+b_{m+1}c_{m} =0,\] \[b_{2}c_{0}+b_{3}c_{1}+\cdots+b_{m+2}c_{m} =0,\] \[\vdots\] \[b_{s}c_{0}+b_{s+1}c_{1}+\cdots+b_{m+s}c_{m} =0,\]
which translates into
\[\left(\begin{array}{c}c_{0}\\ \vdots\\ c_{m}\end{array}\right)\in\operatorname{Ker}\left(\begin{array}{cccc}b_{1}&b _{2}&\ldots&b_{m+1}\\ \vdots&&\vdots\\ b_{s}&b_{s+1}&\ldots&b_{m+s}\end{array}\right).\]
In other words, denoting by \(M\) the above \(s\times(m+1)\) matrix, we have that \(\alpha\in\mathbf{T}_{m,s}\) if and only if \(M\) has non-trivial kernel. Our first observation is that this is always satisfied if \(m+1>s\), that is, if \(m\geq s\). In other words, whenever \(m\geq s\) we have \(\mathbf{T}_{m,s}=\mathbf{T}\). This gives us the functional version of Dirichlet approximation.
**Lemma 3.2** (Dirichlet approximation).: _Let \(\alpha\in\mathbf{T}\). Then for any \(m\geq 1\) there exist polynomials \(h_{1},h_{2}\) such that \(\deg h_{1}<\deg h_{2}\leq m\) and \(\operatorname{ord}(\alpha h_{2}-h_{1})<-m\)._
In general, the larger \(s\) becomes, the more equations we impose on the coefficients \(c_{0},\ldots,c_{m}\) of \(h_{2}\), and we get a more and more restrictive set.
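For instance, in the first case where the condition is non-trivial, namely \(m=1\) and \(s=2\), the matrix is
\[M=\begin{pmatrix}b_{1}&b_{2}\\ b_{2}&b_{3}\end{pmatrix},\]
so that, in terms of the first three coefficients of \(\alpha\), membership of \(\mathbf{T}_{1,2}\) amounts to the vanishing of the Hankel determinant \(b_{1}b_{3}-b_{2}^{2}\).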
In the motivic setting, there is an extra parameter, namely the number of coefficients of \(\alpha\) we actually take into account in our analysis. More precisely, we will be looking at expressions of the form \(\operatorname{res}(\alpha f(g_{1},\ldots,g_{n}))\) where \(f\) is a polynomial of degree \(d\) and \(g_{1},\ldots,g_{n}\) have degrees at most \(e\), and thus only the first \(de+1\) coefficients of \(\alpha\) will matter. If we want to make do with only considering these coefficients, i.e. if we don't want the matrix \(M\) to contain any additional coefficients of \(\alpha\), we would need to impose the condition \(s+m\leq de+1.\) Motivated by the extra constraint, we define
\[A_{m}^{de+1}=\{(b_{1},\ldots,b_{de+1})\in\mathbf{A}^{de+1}:\sum_{i=1}^{de+1}b_ {i}t^{-i}\in\mathbf{T}_{m,de-m+1}\}.\]
_Remark 3.3_.: Note in particular that when \(m\geq\frac{de+1}{2}\), then \(de+1-m\leq\frac{de+1}{2}\leq m\). Thus \(A_{m}^{de+1}=\mathbf{A}^{de+1}\) in this case.
_Remark 3.4_.: Note that \(A_{m}^{de+1}\) is the space of \((b_{1},\dots,b_{de+1})\in\mathbf{A}^{de+1}\) such that there exist coprime polynomials \(h_{1},h_{2}\) with \(h_{2}\) monic, \(\deg h_{1}<\deg h_{2}\leq m\) and \(\operatorname{ord}\left(\alpha-\frac{h_{1}}{h_{2}}\right)\leq-de-2+m-\deg h_{2}\).
_Remark 3.5_.: The sets \(A_{m}^{de+1}\) can be stratified into sets \(A_{m,m^{\prime}}^{de+1}\subset A_{m}^{de+1}\) of those \(\alpha\) such that \(h_{2}(t)\) is of degree exactly \(m^{\prime}\leq m\). Since the pair \((h_{1},h_{2})\) depends on at most \(2m^{\prime}\) parameters and, once it is fixed, the approximation condition in Remark 3.4 leaves only \(m-m^{\prime}\) of the coefficients \(b_{1},\ldots,b_{de+1}\) free, we may deduce
\[\dim A_{m,m^{\prime}}^{de+1}\leq 2m^{\prime}+(m-m^{\prime})\leq m+m^{\prime} \leq 2m,\]
from which we in particular have that \(\dim A_{m}^{de+1}\leq 2m\).
### Activation of the circle method
**Lemma 3.6**.: _We have_
\[[M_{e}]=\mathbf{L}^{-de-1}[(\operatorname{Poly}_{\leq e})^{n}\times\mathbf{A} ^{de+1},\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))]\]
_in the Grothendieck ring \(\mathscr{E}xp\mathscr{M}_{\operatorname{Poly}_{\leq e}^{n}}\)._
Proof.: Write
\[f(g_{1},\dots,g_{n})=c_{0}(\mathbf{g})+c_{1}(\mathbf{g})t+\dots+c_{de}( \mathbf{g})t^{de},\]
where \(c_{0},\dots,c_{de}\) are regular functions in the coefficients of \(g_{1},\dots,g_{n}\), and
\[\alpha=b_{1}t^{-1}+\dots+b_{de+1}t^{-de-1}.\]
Then
\[\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))=b_{1}c_{0}+\dots+b_{de+1}c_{de}\]
is a linear form in the coefficients of \(\alpha\).
Consider now a point \(\mathbf{g}\in(\operatorname{Poly}_{\leq e})^{n}\). Then on evaluating the right-hand side at \(\mathbf{g}\), we get
\[\mathbf{L}^{-de-1}[\mathbf{A}_{k(\mathbf{g})}^{de+1},b_{1}c_{0}(\mathbf{g})+ \dots+b_{de+1}c_{de}(\mathbf{g})].\]
By Lemma 2.3, this is non-zero if and only if all of the coefficients \(c_{0}(\mathbf{g}),\dots,c_{de}(\mathbf{g})\) are zero. This occurs if and only if \(\mathbf{g}\in M_{e}\), and in this case the expression is equal to \(1\). We conclude using Lemma 2.1 with
\[\varphi=[M_{e}]-\mathbf{L}^{-de-1}[(\operatorname{Poly}_{\leq e})^{n}\times \mathbf{A}^{de+1},\operatorname{res}(\alpha f(\mathbf{g}))]\]
and \(S=(\operatorname{Poly}_{\leq e})^{n}\).
Adopting the notation in Section 2.4, we may also write
\[[M_{e}]=\int_{\alpha}[(\operatorname{Poly}_{\leq e})^{n}\times\mathbf{A}^{(-1,-de-2)},\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))]\mathrm{d}\alpha\]
in \(\mathscr{E}xp\mathscr{M}_{\operatorname{Poly}_{\leq e}^{n}}\), in order to emphasise the similarity with the classical circle method. In fact, our plan is to write
\[[M_{e}]=N_{\operatorname{major}}+N_{\operatorname{minor}}, \tag{3.1}\]
where \(N_{\operatorname{major}}\) (resp. \(N_{\operatorname{minor}}\)) is the contribution of the major (resp. minor) arcs. We want to compute \(N_{\operatorname{major}}\) as precisely as possible, and find a weight bound on \(N_{\operatorname{minor}}\) of the form \(w(N_{\operatorname{minor}})<2\mu(e)\). Let
\[\tilde{\nu}:=\frac{n-2^{d}(d-1)}{2^{d-2}}>0. \tag{3.2}\]
Although we delay defining the precise major and minor arcs that we work with until the relevant sections, we proceed by recording the two main steps in the motivic circle method, whose proofs will occupy the bulk of this paper.
**Proposition 3.7**.: _Let \(\tilde{\nu}>0\) be given by (3.2) and recall the varieties defined in (1.2) and (1.3). Then_
\[N_{\rm major}={\bf L}^{\mu(e)}\left(\mathfrak{S}(f)\cdot\lim_{N\to\infty}{\bf L} ^{-(n-1)N}[\Lambda_{N}(f,\infty)]+R_{e}\right),\]
_where_
\[w(R_{e})\leq\begin{cases}4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2} \right\rfloor\right)&\text{ if }d\geq 3\text{,}\\ \max\left(-\frac{(e+1)(n-2)}{2},\ -\frac{n(e+2)}{2}+2e+8\right)&\text{ if }d=2. \end{cases}\]
_and_
\[\mathfrak{S}(f)=\prod_{x\in{\bf A}^{1}}\lim_{N\to\infty}{\bf L}^{-N(n-1)}[ \Lambda_{N}(f,x)].\]
**Proposition 3.8**.: _Let \(\tilde{\nu}>0\) be given by (3.2). Then_
\[w(N_{\rm minor})\leq 2\mu(e)+4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2} \right\rfloor\right).\]
Note that when \(e=1\), corresponding to the case of lines, we will have the bound \(w(N_{\rm minor})\leq 2\mu(1)-\frac{n-2^{d}d}{2^{d-2}}\), which is interesting as soon as \(n>2^{d}d\), if \(d\geq 3\).
Proof of Theorem 1.1.: This follows on combining Propositions 3.7 and 3.8 in (3.1).
## 4. The weight function
Our weight function will be constructed with the aim of being able to prove an analogue of the Weyl differencing argument [1, Proposition 5.5] in our setting, which we do in Proposition 5.3. Unfortunately, the weight function from [1] is not in an appropriate form to carry this out. In order to mimic the steps in the proof of [1, Proposition 5.5], which were done using the properties of cohomology, we need a function that in some sense measures the weights of the cohomology of the fibres, while the weight function from [1] used the global notion of weight of a mixed Hodge module. Thus, while over \({\bf C}\) it coincides with the weight function from [1], in the relative setting we define it rather using the weights of the underlying variations of Hodge structures (which introduces a shift by the dimension of the support).
Note that while most of the intermediary estimates in the implementation of the motivic circle method happen in relative Grothendieck rings and thus use our new weight, the final results are stated in the Grothendieck ring over \({\bf C}\), where our new weight coincides with the weight from [1].
In Section 4.1, we recall some definitions and properties concerning mixed Hodge modules and the corresponding Grothendieck rings. In Section 4.2 we define the weight function at the level of Grothendieck rings of mixed Hodge modules, and prove its fundamental properties. Then we pass to Grothendieck rings of varieties in Section 4.3 using the same procedure as in [1]. Finally, in Section 4.4 we explain what we mean by convergence of power series in the topology defined by the weight.
### Mixed Hodge modules
#### 4.1.1. The category of mixed Hodge modules
If \(S\) is a variety over \({\bf C}\), we denote by \({\rm MHM}_{S}\) the abelian category of mixed Hodge modules on \(S\) and by \(D^{b}({\rm MHM}_{S})\) its bounded derived category. A morphism \(f:S\to T\) between complex varieties induces functors \(f_{!}:D^{b}({\rm MHM}_{S})\to D^{b}({\rm MHM}_{T})\) and \(f^{*}:D^{b}({\rm MHM}_{T})\to D^{b}({\rm MHM}_{S})\).
In the case where \(S\) is a point, the category \({\rm MHM}_{\rm pt}\) is exactly the category of polarisable mixed Hodge structures. For any integer \(d\in{\bf Z}\), we denote by \({\bf Q}_{\rm pt}^{\rm Hdg}(d)\in{\rm MHM}_{\rm pt}\) the Hodge structure of type \((-d,-d)\) with underlying vector space \({\bf Q}\). For \(d=0\), it will be denoted simply by \({\bf Q}_{\rm pt}^{\rm Hdg}\).
Every mixed Hodge module \(M\) on a variety \(S\) has a well-defined notion of support, which will be denoted by \(\operatorname{supp}(M)\).
#### 4.1.2. The complex \(\mathbf{Q}_{X}^{\operatorname{Hdg}}\)
For any complex variety \(X\), we denote by \(a_{X}:X\to\operatorname{Spec}\)\(\mathbf{C}\) its structural morphism, and by \(\mathbf{Q}_{X}^{\operatorname{Hdg}}\) the complex of mixed Hodge modules given by \(a_{X}^{\star}\mathbf{Q}_{\operatorname{pt}}^{\operatorname{Hdg}}\). In the case when \(X\) is smooth and connected, the complex of mixed Hodge modules \(\mathbf{Q}_{X}^{\operatorname{Hdg}}\) is concentrated in degree \(\dim X\), and \(\mathcal{H}^{\dim X}\mathbf{Q}_{X}^{\operatorname{Hdg}}\) is pure of weight \(\dim X\), given by the pure Hodge module associated to the constant rank one variation of Hodge structures of weight \(0\) on \(X\).
#### 4.1.3. Mixed Hodge modules with monodromy
We denote by \(\operatorname{MHM}_{X}^{\operatorname{mon}}\) the category of mixed Hodge modules \(M\) on a complex variety \(X\) endowed with commuting actions of a finite order operator \(T_{s}:M\to M\) and a locally nilpotent operator \(N:M\to M(-1)\). The category \(\operatorname{MHM}_{X}\) can be identified with a full subcategory of \(\operatorname{MHM}_{X}^{\operatorname{mon}}\) via the functor
\[\operatorname{MHM}_{X}\to\operatorname{MHM}_{X}^{\operatorname{mon}},\]
sending a mixed Hodge module \(M\) to itself with \(T_{s}=\operatorname{id}\) and \(N=0\). We refer to [23, Section 4.1.6] for a definition of the twisted external tensor product \(\overset{r}{\boxtimes}\).
#### 4.1.4. The weight filtration on Hodge modules
Each \(M\in\operatorname{MHM}_{S}\) has a finite increasing weight filtration \(W_{\bullet}M\), the graded parts of which will be denoted \(\operatorname{Gr}_{\bullet}^{W}\). For a bounded complex of mixed Hodge modules \(M^{\bullet}\), we say \(M^{\bullet}\) has weight \(\leq n\) if \(\operatorname{Gr}_{i}^{W}\mathcal{H}^{j}(M^{\bullet})=0\) for all integers \(i\) and \(j\) such that \(i>j+n\).
For varieties \(X\) and \(Y\) over \(\mathbf{C}\) we say that a functor \(F:D^{b}(\operatorname{MHM}_{X})\to D^{b}(\operatorname{MHM}_{Y})\) does not increase weights if for every \(n\in\mathbf{Z}\) and every \(M^{\bullet}\in D^{b}(\operatorname{MHM}_{X})\) with weight \(\leq n\), the complex \(F(M^{\bullet})\) is also of weight \(\leq n\). In particular, for any morphism of complex varieties \(f\), the functors \(f_{!}\) and \(f^{*}\) do not increase weights (see [10, (4.5.2)]).
#### 4.1.5. Grothendieck ring of mixed Hodge modules
We refer to [23, Section 4.1.7] for the definition of the Grothendieck rings \(K_{0}(\operatorname{MHM}_{X})\) and \(K_{0}(\operatorname{MHM}_{X}^{\operatorname{mon}})\). For every morphism \(f:X\to Y\) of complex varieties, the functors \(f_{!},f^{*}\) between the corresponding derived categories of mixed Hodge modules induce a group morphism
\[f_{!}:K_{0}(\operatorname{MHM}_{X}^{\operatorname{mon}})\to K_{0}( \operatorname{MHM}_{Y}^{\operatorname{mon}})\]
and a ring morphism
\[f^{*}:K_{0}(\operatorname{MHM}_{Y}^{\operatorname{mon}})\to K_{0}( \operatorname{MHM}_{X}^{\operatorname{mon}}).\]
_Remark 4.1_.: Let \(\operatorname{HM}_{X}\) be the subcategory of \(\operatorname{MHM}_{X}\) of Hodge modules of pure weight. By decomposing into graded pieces, one can see that the natural inclusion \(K_{0}(\operatorname{HM}_{X})\to K_{0}(\operatorname{MHM}_{X})\) induces an isomorphism.
#### 4.1.6. Hodge modules and variations of Hodge structures
Saito's structure theorem (see e.g. [10, Theorem 3.21]) gives a correspondence between pure polarisable Hodge modules on a variety \(Z\) with strict support in \(Z\) and direct systems of polarisable variations of Hodge structures (with quasi-unipotent local monodromies) over smooth open subsets of \(Z\), with a shift in weight (by the dimension of the support). Moreover, this category of Hodge modules is semi-simple, with simple objects coming, through this correspondence, from variations of Hodge structures for which the monodromy representation is irreducible [11, Theorem 14.37]. We will use these facts to give a particularly simple presentation of Grothendieck rings of mixed Hodge modules.
We denote by \(\operatorname{VHS}_{X}\) the disjoint union, over all irreducible closed subvarieties \(Z\subset X\), of the direct limit over smooth open dense \(U\subset Z\) of the sets of isomorphism classes of
polarisable variations of pure Hodge structures (satisfying the above condition of having quasi-unipotent local monodromies with irreducible action) on \(U\).
There is a well-defined morphism
\[\mathbf{Z}^{(\mathrm{VHS}_{X})}\to K_{0}(\mathrm{MHM}_{X})\]
sending a variation of Hodge structures over an open dense subset \(U\subset Z\) of an irreducible closed subvariety \(Z\subset X\) to the corresponding Hodge module with strict support in \(Z\).
**Lemma 4.2**.: _The above morphism is an isomorphism, i.e. \(K_{0}(\mathrm{MHM}_{X})\) is freely generated by the elements of \(\mathrm{VHS}_{X}\)._
Proof.: By Remark 4.1, we may replace \(K_{0}(\mathrm{MHM}_{X})\) by \(K_{0}(\mathrm{HM}_{X})\). We construct the morphism in the other direction by using decomposition into simple objects and the above correspondence.
A similar property holds for \(K_{0}(\mathrm{MHM}_{X}^{\mathrm{mon}})\), considering the set \(\mathrm{VHS}_{X}^{\mathrm{mon}}\) where the additional datum of the operators \(T_{s}\) and \(N\) is added.
### Definition of the weight function
We may define a filtration \(W_{\leq\bullet}\) on the group \(K_{0}(\mathrm{MHM}_{X}^{\mathrm{mon}})\) in the following way: \(W_{\leq n}K_{0}(\mathrm{MHM}_{X}^{\mathrm{mon}})\) is the subgroup generated by
* classes of pure Hodge modules \((M,\mathrm{id},N)\) (i.e. with \(T_{s}=\mathrm{id}\)), with irreducible support, of weight \(m\) such that \(m-\dim\operatorname{supp}(M)\leq n\), and
* classes of pure Hodge modules \((M,T_{s},N)\), with irreducible support, of weight \(m\) such that \(m-\dim\operatorname{supp}(M)\leq n-1\).
This dichotomy depending on the form of the semisimple monodromy is of the same type as in the definition of the weight filtration in [2, Section 4.5.1], which was motivated in particular by the definition of the twisted exterior product.
Note that by 4.1.6, this corresponds to filtering \(K_{0}(\mathrm{MHM}_{X}^{\mathrm{mon}})\) by the weights of the underlying variations of Hodge structures, slightly twisted depending on whether the semi-simple part \(T_{s}\) of the monodromy is trivial or not. More precisely, we may describe the associated gradeds of this filtration in the following form:
**Lemma 4.3**.: _We have_
\[\mathrm{Gr}_{n}^{W}K_{0}(\mathrm{MHM}_{X}^{\mathrm{mon}})=W_{\leq n}K_{0}( \mathrm{MHM}_{X}^{\mathrm{mon}})/W_{\leq n-1}K_{0}(\mathrm{MHM}_{X}^{\mathrm{ mon}})=\mathbf{Z}^{(VHS_{n,X}^{\mathrm{mon}})}\]
_where \(\mathrm{VHS}_{n,X}^{\mathrm{mon}}\) is the subset of \(\mathrm{VHS}_{X}^{\mathrm{mon}}\) given by variations of Hodge structures which have weight \(n-1\) and correspond to Hodge modules with non-trivial \(T_{s}\), together with those of weight \(n\) corresponding to Hodge modules with trivial \(T_{s}\)._
Proof.: This follows from 4.1.6, using the fact that the shift in weight between Hodge modules and the corresponding variations of Hodge structures is of exactly the dimension of the support.
We then define
\[w_{X}(\mathfrak{a})=\min\{n:\mathfrak{a}\in W_{\leq n}K_{0}(\mathrm{MHM}_{X}^ {\mathrm{mon}})\}.\]
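For instance, over \(X=\operatorname{Spec}\mathbf{C}\), the Tate twist \(\mathbf{Q}_{\operatorname{pt}}^{\operatorname{Hdg}}(-d)\) with trivial monodromy is pure of weight \(2d\) and supported in dimension \(0\), so its class has weight exactly \(2d\), while the same Hodge structure equipped with a non-trivial finite order operator \(T_{s}\) has weight \(2d+1\).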
This weight function satisfies the following properties.
**Proposition 4.4**.: _Let \(S\) be a complex variety._
1. _Let_ \((\mathfrak{a}_{i})_{i\in I}\) _be a finite family of elements of_ \(K_{0}(\mathrm{MHM}_{S}^{\mathrm{mon}})\)_. Then_ \[w_{S}\left(\sum_{i\in I}\mathfrak{a}_{i}\right)\leq\max_{i\in I}w_{S}( \mathfrak{a}_{i}).\]
_._
2. _Let_ \(p:T\to S\) _be a dominant morphism with fibres of dimension_ \(\leq d\)_. Let_ \(\mathfrak{a}\in K_{0}(\operatorname{MHM}_{T}^{\operatorname{mon}})\)_. Then there exists an open subset_ \(S_{0}\subset S\) _such that, denoting by_ \(\mathfrak{b}\) _the pullback of_ \(\mathfrak{a}\) _to_ \(p^{-1}(S_{0})\)_, we have_ \[w_{S_{0}}(p_{!}\mathfrak{b})\leq w_{p^{-1}(S_{0})}(\mathfrak{b})+2d.\] _If_ \(S\) _is a point, we have_ \[w(p_{!}\mathfrak{a})\leq w_{T}(\mathfrak{a})+2d.\]
3. _Let_ \(T\subset S\) _be a locally closed subset and_ \(i:T\to S\) _the corresponding inclusion morphism. For every_ \(\mathfrak{a}\in K_{0}(\operatorname{MHM}_{T}^{\operatorname{mon}})\)_, we have_ \[w_{S}(i_{!}\mathfrak{a})\leq w_{T}(\mathfrak{a}).\]
_Remark 4.5_.: Property (2) differs from the corresponding property of the weight in [1, Lemma 4.5.1.3(d)], where the upper bound holds with \(d\) in place of \(2d\).
Proof of Proposition 4.4.:
1. This follows immediately from the definition.
2. For clarity, we first do the proof without the extra monodromy operators. Let \(V\) be a variation of Hodge structures of weight \(n\) over an open subset \(U\) of a subvariety \(Z\) of \(T\), and let \(M_{V}\) be the corresponding Hodge module over \(Z\). We know that \(p_{!}M_{V}\) is a complex of mixed Hodge modules over \(S\), of amplitude \(\leq d\). Since the functor \(p_{!}\) does not increase weights, this complex moreover has weight \(\leq n+\dim Z\), which means that \[\operatorname{Gr}_{i}^{W}\mathcal{H}^{j}(p_{!}M_{V})=0\quad\text{ if }\quad i>j+n+\dim Z.\] Taking classes in \(K_{0}(\operatorname{MHM}_{S}^{\operatorname{mon}})\), \[[p_{!}M_{V}]=\sum_{i\leq d}(-1)^{i}[\mathcal{H}^{i}p_{!}M_{V}]\] is a sum of Hodge modules of weight \(\leq n+\dim Z+d\). When \(S\) is a point, we can conclude directly here, since these Hodge modules turn out to be simply Hodge structures, of weights bounded by \(n+\dim Z+d\leq n+2d\). We now turn to the case where \(S\) is not a point. Assume first that \(p(U)\) is contained in a closed subvariety of \(S\). Then the result is trivially true by taking \(S_{0}\) to be the complement of that subvariety. Thus, we may now assume that \(p\) remains dominant when restricted to \(U\), and we reduce to proving the result for the induced map \(p^{\prime}:U\to S\). We pick an open subset \(S_{0}\) of \(S\) above which \(p^{\prime}\) is a topological fibration. Then above \(S_{0}\), the Hodge modules \(\mathcal{H}^{i}p_{!}M_{V}\) are variations of Hodge structures, with supports of dimensions \(\geq\dim Z-d\), whence the result. Now, assume there is also monodromy involved. We have shown that up to the above restriction to open subsets, the underlying variations of Hodge structures of \(p_{!}M_{V}\) had weights \(\leq n+2d\) if we start with \(V\) of weight \(n\). If \(M_{V}\) has non-trivial semisimple monodromy, then by definition \(w(M_{V})=n+1\) and in this situation the bound \(w_{S_{0}}(p_{!}^{\prime}M_{V})\leq n+1+2d\) holds. If now \(M_{V}\) has trivial monodromy, then so do all of the Hodge modules involved in \(p_{!}M_{V}\). We then have \(w(M_{V})=n\) and the bound \(w_{S_{0}}(p_{!}^{\prime}M_{V})\leq n+2d\) holds.
3. We write \(\mathfrak{a}=\sum a_{i}M_{i}\), where \(a_{i}\in\mathbf{Z}\) and the \(M_{i}\) are pure Hodge modules (with monodromy) supported inside \(T\), with corresponding variations of Hodge structures \(V_{i}\). Then \(w_{S}(i_{!}\mathfrak{a})\) is at most the weight of \(\sum a_{i}i_{!}M_{i}\), which has the same variations of Hodge structures occurring, hence the result.
### The weight function on Grothendieck rings of varieties
We have defined a weight filtration on the Grothendieck ring of mixed Hodge modules with monodromy \(K_{0}(\operatorname{MHM}_{S}^{\operatorname{mon}})\). We now use the same procedure as in [2] to deduce from it a weight function on the Grothendieck ring of varieties with exponentials \(\mathscr{E}xp\mathscr{M}_{S}\). We only give a brief overview of the construction here, referring to [2] and the references therein for the details.
#### 4.3.1. Motivic vanishing cycles
Let \(k\) be a field of characteristic \(0\) and let \(S\) be a variety over \(k\). We refer to [2, Section 2.3.5] for the definition of the _total vanishing cycles measure_
\[\Phi_{S}:\mathscr{E}xp\mathscr{M}_{S}\to(\mathscr{M}_{S}^{\hat{\mu}},*),\]
which is functorial in \(S\). Here \(\mathscr{M}_{S}^{\hat{\mu}}\) is the Grothendieck ring of varieties over \(S\) with \(\hat{\mu}\)-action (see [2, Section 2.1.5]), and \(*\) denotes Looijenga's convolution product (see [2, Section 2.2.1]). Recall moreover that the restriction of \(\Phi_{S}\) to \(\mathscr{M}_{S}\) coincides with the natural inclusion \(\mathscr{M}_{S}\to\mathscr{M}_{S}^{\hat{\mu}}\).
#### 4.3.2. The Hodge realisation
Let \(S\) be a complex variety. There is a group morphism
\[\begin{array}{cccc}\chi_{S}^{\operatorname{Hdg}}:&\mathscr{M}_{S}^{\hat{\mu}}&\to&K_{0}(\operatorname{MHM}_{S}^{\operatorname{mon}})\\ &[X\xrightarrow{f}S,\sigma]&\mapsto&\sum_{i\in\mathbf{Z}}(-1)^{i}[\mathcal{H}^{i}(f_{!}\mathbf{Q}_{X}^{\operatorname{Hdg}}),T_{s}(\sigma),0]\end{array}\]
called the _Hodge realisation morphism_ (see [2, Section 4.4.1] for a discussion of its properties).
#### 4.3.3. Weight function on Grothendieck rings of varieties
Let \(S\) be a complex variety. The weight filtration on the ring \(\mathscr{E}xp\mathscr{M}_{S}\) is given by
\[W_{\leq n}\mathscr{E}xp\mathscr{M}_{S}:=(\chi_{S}^{\operatorname{Hdg}}\circ \Phi_{S})^{-1}(W_{\leq n}K_{0}(\operatorname{MHM}_{S}^{\operatorname{mon}})),\]
for every \(n\in\mathbf{Z}\). The completion with respect to this filtration is denoted by \(\widehat{\mathscr{E}xp\mathscr{M}_{S}}\). The weight function on \(\mathscr{E}xp\mathscr{M}_{S}\), again denoted \(w_{S}\), is given by the composition
\[\mathscr{E}xp\mathscr{M}_{S}\xrightarrow{\Phi_{S}}\mathscr{M}_{S}^{\hat{\mu} }\xrightarrow{\chi_{S}^{\operatorname{Hdg}}}K_{0}(\operatorname{MHM}_{S}^{ \operatorname{mon}})\xrightarrow{w_{S}}\mathbf{Z}.\]
#### 4.3.4. Weight and dimension
**Lemma 4.6**.: _Let \(S\) be a complex variety and \(X\) a variety over \(S\). One has the equality_
\[w_{S}(X)=2\dim_{S}X.\]
Proof.: Let \(p:X\to S\) be the structural morphism, which we may assume to be dominant by replacing \(S\) with the closure of \(p(X)\). Further, we pick a stratification \((S_{i})_{i}\) of \(S\) such that over every stratum \(p\) is topologically a fibration. For every \(i\), put \(X_{i}=p^{-1}(S_{i})\) and \(p_{i}=p_{|X_{i}}\). Up to refining our stratification, we may assume that over each \(S_{i}\), the Hodge modules involved in the complex \((p_{i})_{!}\mathbf{Q}_{X_{i}}^{\operatorname{Hdg}}\) correspond to variations of Hodge structures defined over \(S_{i}\). Now pick a stratum \(X_{i}\). By proper base change, for every \(s\in S_{i}\), the fibre \(((p_{i})_{!}\mathbf{Q}_{X_{i}}^{\operatorname{Hdg}})_{s}\) is given by a complex with cohomology given by \(H^{*}_{c}(X_{s},\mathbf{Q})\), which involves Hodge structures of weights at most \(2\dim_{S_{i}}X_{i}\), with equality for the Hodge structure in the top degree. From this we may deduce, using Lemma 4.3, that the weight of \(p_{!}\mathbf{Q}_{X}^{\operatorname{Hdg}}\) is \(\max_{i}2\dim_{S_{i}}X_{i}=2\dim_{S}X\).
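For instance, for \(X=\mathbf{A}^{d}\) over \(S=\operatorname{Spec}\mathbf{C}\), the only non-zero compactly supported cohomology group is \(H_{c}^{2d}(\mathbf{A}^{d},\mathbf{Q})\simeq\mathbf{Q}(-d)\), which is pure of weight \(2d\), so that \(w(\mathbf{L}^{d})=w([\mathbf{A}^{d}])=2d\), as the lemma predicts.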
**Lemma 4.7** (Triangular inequality for weights).: _Let \(S\) be a complex variety, \(X\) a variety over \(S\) and \(f:X\to\mathbf{A}^{1}\) a morphism. Then_
\[w_{S}([X,f])\leq w_{S}(X).\]
Proof.: By [11, Proposition 2.3.5.2], we have the triangular inequality for motivic vanishing cycles
\[\dim_{S}(\Phi_{S}([X,f]))\leq\dim_{S}X.\]
On the other hand, following the proof of Lemma 4.6.3.2 in [11], we see that
\[w_{S}(\Phi_{S}([X,f]))\leq 2\dim_{S}\Phi_{S}([X,f]).\]
(Here we consider the weight function on \(\mathscr{M}_{S}^{\hat{\mu}}\), as defined in [11].) We conclude by using Lemma 4.6.
**Lemma 4.8** (Cancellation of maximal weights).: _Let \(X\) and \(Y\) be complex varieties, both irreducible and of dimension \(d\). Then_
\[w([X]-[Y])\leq 2d-1.\]
Proof.: This follows from [11, Lemma 4.6.3.4], given that over a point, our weight and the weight in [11] coincide.
### Convergence of power series and evaluation
Let \(X\) be a variety over \(\mathbf{C}\), and consider a power series
\[F(T)=\sum_{i\geq 0}X_{i}T^{i}\in\mathscr{E}xp\mathscr{M}_{X}[[T]].\]
The radius of convergence of \(F\) is defined by
\[\sigma_{F}=\limsup_{i\to\infty}\frac{w_{X}(X_{i})}{2i}.\]
We say that \(F(T)\) converges for \(|T|<\mathbf{L}^{-r}\) if \(r\geq\sigma_{F}\). If \(F(T)\) converges for \(|T|<\mathbf{L}^{-r}\), then \(F(\mathbf{L}^{-m})\) exists as an element of \(\widehat{\mathscr{E}xp\mathscr{M}_{X}}\) for every \(m>r\).
If \(F(T)=\prod_{x\in X}\left(1+\sum_{i\geq 1}X_{i,x}T^{i}\right)\) is a motivic Euler product which converges for \(|T|<\mathbf{L}^{-r}\), then for any integer \(m>r\) we write \(F(\mathbf{L}^{-m})\) in the form
\[\prod_{x\in X}\left(1+\sum_{i\geq 1}X_{i,x}T^{i}\right)_{|T=\mathbf{L}^{-m}}.\]
_Remark 4.9_.: Special values of motivic Euler products should be handled with extreme care, as motivic Euler products do not behave well with respect to non-monomial substitutions. (See [11, Section 6.5] for a discussion of this fact.) In particular, in principle, when writing a special value of a motivic Euler product, one should always write out the formal product, and then specify at what value of \(T\) it has been evaluated.
With this caveat, to highlight the analogy with number theory, we will allow ourselves the following abuse of notation (which is already employed in [10] and [14]).
_Notation 4.10_.: We will denote by
\[\prod_{x\in X}a_{x}\]
the evaluation at \(T=1\) of the motivic Euler product
\[\prod_{x\in X}\left(1+(a_{x}-1)T\right).\]
## 5. Weyl differencing and a general exponential sum bound
Both the major and minor arc treatments in the circle method rely on a general upper bound for certain exponential sums, which in the \(\mathbf{F}_{q}(t)\)-setting take the form
\[\sum_{(g_{1},\dots,g_{n})\in\operatorname{Poly}_{<E}^{n}}\psi(\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))).\]
The aim of this section is to provide such bounds in the motivic setting. We start by proving a property of the weight in Section 5.1, which allows us to adapt to our setting a geometric version of the classical Weyl differencing argument provided in [1]. This is achieved in Proposition 5.3. The rest of the section is dedicated to proving our general bound. After an application of Proposition 5.3, we rewrite our bound in terms of the multilinear forms associated to the polynomial \(f\), and apply some inequalities coming from the geometry of numbers over \(\mathbf{C}(t)\). An interesting feature of our work is that we never have to rely on any point counting estimates, unlike for the procedure in [1]. Indeed, we circumvent the use of spreading out arguments and rely purely on estimates for the dimensions of various spaces.
### The fundamental property of the weight
**Lemma 5.1**.: _For every variety \(X\) over \(S\) and every morphism \(f:X\to\mathbf{A}^{1}\), we have_
\[\Phi_{S}([X,f])=\Phi_{S}([X,-f]).\]
Proof.: By Lemma 2.1 and [1, Theorem 2.3.5.1] we may reduce to the case where \(S\) is the spectrum of a field \(k\) of characteristic zero. Now, by Bittner's presentation of the Grothendieck ring of varieties, we know that \(\mathscr{E}xp\mathscr{M}_{k}\) (which is a quotient of \(K_{0}(\operatorname{Var}_{\mathbf{A}^{1}_{k}})\)) is generated by classes \([X,f]\) such that \(X\) is smooth and \(f\) is proper (as in [1, Theorem 5.1.3]). For such a class, using [1, Theorem 2.3.5.1] we may write
\[\Phi_{k}([X,f])=\epsilon_{!}(\varphi_{f}^{\mathrm{tot}}),\]
where \(\epsilon:\mathbf{A}^{1}\to k\) is the structural morphism and \(\varphi_{f}^{\mathrm{tot}}\in\mathscr{M}_{\mathbf{A}^{1}_{k}}^{\hat{\mu}}\) are the total motivic vanishing cycles defined in [1, Section 2.3.1]. We now remark that for a morphism \(g:X\to\mathbf{A}^{1}\) defined on a smooth variety \(X\), the motivic vanishing cycles \(\varphi_{g}\) only depend on a log-resolution of the zero locus of \(g\), and therefore we have \(\varphi_{g}=\varphi_{-g}\). Using this, we see that for every \(a\in\mathbf{A}^{1}_{k}\), we have
\[(\varphi_{f}^{\mathrm{tot}})_{a}=\varphi_{f-a}=\varphi_{-f-a}=(\varphi_{-f}^ {\mathrm{tot}})_{-a}\]
in \(\mathscr{M}_{k(a)}^{\hat{\mu}}\). We thus see that, denoting by \(i:\mathbf{A}^{1}\to\mathbf{A}^{1}\) the map \(a\mapsto-a\), we have the identity \(\varphi_{f}^{\mathrm{tot}}=i_{!}\varphi_{-f}^{\mathrm{tot}}\). Pushing to \(\operatorname{Spec}k\) via \(\epsilon\) we get the result.
Throughout this section, let \(S\) be a variety over \(\mathbf{C}\) and let \(G\in\mathcal{O}_{S}[x_{1},\dots,x_{N}]\), for an integer \(N\geq 1\). For every point \(s\in S\), we denote by \(G_{s}\) the pullback of \(G\) via \(s\).
**Lemma 5.2**.: _We have_
\[w_{S}([\mathbf{A}^{N}_{S},G][\mathbf{A}^{N}_{S},-G])=2w_{S}([\mathbf{A}^{N}_{ S},G]).\]
Proof.: We write
\[\chi^{\mathrm{Hdg}}_{S}\circ\Phi_{S}([\mathbf{A}^{N}_{S},G])=\sum_{Z_{i}\subset S }a_{i}H_{i},\]
where each \(H_{i}\) is a simple pure polarisable variation of Hodge structures with monodromy over \(Z_{i}\). Up to stratifying, we may assume that for every \(i,j\), either \(Z_{i}=Z_{j}\) or \(Z_{i}\cap Z_{j}=\varnothing\).
Since \(H_{i}\) is polarisable, denoting by \(n_{i}\) its weight we have the selfduality
\[H_{i}\simeq H_{i}^{\vee}(-n_{i}).\]
From this, using Lemma 5.1 and the fact that \(\chi_{S}^{\mathrm{Hdg}}\) and \(\Phi_{S}\) are ring morphisms, we get
\[\chi_{S}^{\mathrm{Hdg}}\circ\Phi_{S}([\mathbf{A}_{S}^{N},G][\mathbf{ A}_{S}^{N},-G]) =\chi_{S}^{\mathrm{Hdg}}\circ\Phi_{S}([\mathbf{A}_{S}^{N},G])\chi_{ S}^{\mathrm{Hdg}}\circ\Phi_{S}([\mathbf{A}_{S}^{N},-G])\] \[=\left(\sum_{Z_{i}\subset S}a_{i}H_{i}\right)\left(\sum_{Z_{i} \subset S}a_{i}H_{i}^{\vee}(-n_{i})\right)\] \[=\sum_{i,j}a_{i}a_{j}\mathrm{Hom}(H_{i},H_{j})(-n_{j}).\]
Because of the assumption on \(Z_{i},Z_{j}\) for \(i\neq j\), and because of the simplicity of the \(H_{i}\), we get that
\[\chi_{S}^{\mathrm{Hdg}}\circ\Phi_{S}([\mathbf{A}_{S}^{N},G][\mathbf{A}_{S}^{N },-G])=\sum_{i}a_{i}^{2}\mathbf{Q}_{S}^{\mathrm{Hdg}}(-n_{i}),\]
where \(\mathbf{Q}_{S}^{\mathrm{Hdg}}\) denotes the constant rank one variation of Hodge structures of weight \(0\) over \(S\). We get a sum of variations of Hodge structure with positive coefficients: by Lemma 4.3, its weight is therefore \(2\max_{i}n_{i}=2w_{S}([\mathbf{A}_{S}^{N},G])\).
### The inductive argument
We define \(V(G)\) to be the variety over \(S\) given by
\[(\mathbf{y}^{(1)},\dots,\mathbf{y}^{(d-1)},s)\in(\mathbf{A}^{N})_{S}^{d-1}\]
such that
\[\sum_{\epsilon_{1},\dots,\epsilon_{d-1}\in\{0,1\}}(-1)^{\epsilon_{1}+\dots+ \epsilon_{d-1}}G_{s}(\mathbf{x}+\epsilon_{1}\mathbf{y}^{(1)}+\dots+\epsilon_{d -1}\mathbf{y}^{(d-1)})\]
is a constant function of \(\mathbf{x}\).
**Proposition 5.3**.: _Let \(G\in\mathcal{O}_{S}[x_{1},\dots,x_{N}]\) be a polynomial of degree \(\leq d\) in \(N\) variables. The weight function \(w_{S}:\mathscr{E}xp\mathscr{M}_{S}\to\mathbf{Z}\) satisfies_
\[w_{S}([\mathbf{A}_{S}^{N},G])\leq\frac{\dim_{S}V(G)+N(2^{d-1}-(d-1))}{2^{d-2}}.\]
Proof.: We essentially follow the proof of [2, Proposition 5.5], working by induction on \(d\). Assume first that \(d=1\). Then \(V(G)\) is a subvariety of \(S\) given by the \(s\in S\) such that \(G_{s}(\mathbf{x})\) is constant. If \(V(G)\) is empty, then for every \(s\in S\), the polynomial \(G_{s}(\mathbf{x})\) is of degree exactly \(1\), and so \([\mathbf{A}_{\kappa(s)}^{N},G_{s}]=0\) by Lemma 2.3. Lemma 2.1 therefore yields \([\mathbf{A}_{S}^{N},G]=0\), and so both sides of the inequality in the statement are equal to \(-\infty\). On the other hand, if \(V(G)\) is nonempty, then \(\dim_{S}V(G)=0\). By the triangular inequality of Lemma 4.7, we have
\[w_{S}([\mathbf{A}_{S}^{N},G])\leq 2N,\]
which is exactly the inequality that needed to be proved.
We now assume that the result is already known for polynomials of degree \(\leq d-1\). Let \(G\) be a polynomial of degree \(d\geq 2\). We begin with an application of Lemma 5.2, which yields
\[2w_{S}([\mathbf{A}_{S}^{N},G]) \leq w_{S}([\mathbf{A}_{S}^{N},G][\mathbf{A}_{S}^{N},-G])\] \[=w_{S}([\mathbf{A}_{S}^{N}\times_{S}\mathbf{A}_{S}^{N},G(\mathbf{ x})-G(\mathbf{y})])\] \[=w_{S}([\mathbf{A}_{S}^{N}\times_{S}\mathbf{A}_{S}^{N},G(\mathbf{ x})-G(\mathbf{x}+\mathbf{y})]),\]
since \(G(\mathbf{x})-G(\mathbf{y})\) is related to \(G(\mathbf{x})-G(\mathbf{x}+\mathbf{y})\) via an invertible change of variables.
For any \(\mathbf{y}_{0}=(y_{1},\dots,y_{N})\), the difference \(G(\mathbf{x})-G(\mathbf{x}+\mathbf{y}_{0})\) is a polynomial of degree \(d-1\) in \(x_{1},\dots,x_{N}\). The variety \(V(G)\) admits a map \(p:V(G)\to\mathbf{A}_{S}^{N}\) along the coordinates \(\mathbf{y}\), whose fibre over a point \(\mathbf{y}_{0}\in\mathbf{A}_{S}^{N}\) is the variety in the definition of \(V(G(\mathbf{x})-G(\mathbf{x}+\mathbf{y}_{0}))\).
Choose a stratification of \(\mathbf{A}_{S}^{N}\), with strata \(W_{j}\) being varieties such that the fibre dimension of this map is constant on each stratum \(W_{j}\). Thus
\[\dim_{W_{j}}V(G(\mathbf{x})-G(\mathbf{x}+\mathbf{y}_{0}))=\dim_{S}p^{-1}(W_{j}) -\dim_{S}W_{j}\leq\dim_{S}V(G)-\dim_{S}W_{j}, \tag{5.1}\]
for any \(\mathbf{y}_{0}\in W_{j}\). On appealing to part (1) of Proposition 4.4, it follows that
\[2w_{S}([\mathbf{A}_{S}^{N},G])\leq\max_{j}w_{S}([\mathbf{A}_{S}^{N}\times_{S}W _{j},G(\mathbf{x})-G(\mathbf{x}+\mathbf{y})]).\]
Fix a stratum \(W_{j}\) and denote by \(q_{j}:W_{j}\to S\) the restriction to \(W_{j}\) of the projection map \(\mathbf{A}_{S}^{N}\to S\). Up to refining our stratification, we may assume that \(q_{j}:W_{j}\to S_{j}\) (where \(S_{j}=q_{j}(W_{j})\)) has fibres of constant dimension \(\dim_{S}W_{j}=r\in[0,N]\), and that
\[w_{S_{j}}([\mathbf{A}_{S_{j}}^{N}\times_{S_{j}}W_{j},G(\mathbf{x})-G(\mathbf{ x}+\mathbf{y})])\leq w_{W_{j}}([\mathbf{A}_{W_{j}}^{N},G(\mathbf{x})-G( \mathbf{x}+\mathbf{y}_{0})])+2r,\]
by part (2) of Proposition 4.4. It follows from (5.1) and the inductive hypothesis that
\[w_{W_{j}}([\mathbf{A}_{W_{j}}^{N},G(\mathbf{x})-G(\mathbf{x}+\mathbf{y}_{0})] )\leq\frac{\dim_{S}V(G)-r+N(2^{d-2}-(d-2))}{2^{d-3}},\]
for any \(\mathbf{y}_{0}\in W_{j}\). Hence by part (3) of Proposition 4.4,
\[w_{S}([\mathbf{A}_{S}^{N}\times_{S}W_{j},G(\mathbf{x})-G(\mathbf{x}+\mathbf{y} )])\leq\frac{\dim_{S}V(G)+(2^{d-2}-1)r+N(2^{d-2}-(d-2))}{2^{d-3}}.\]
Taking the maximum over the different strata, we finally deduce that
\[2w_{S}([\mathbf{A}_{S}^{N},G]) \leq\max_{0\leq r\leq N}\frac{\dim_{S}V(G)+(2^{d-2}-1)r+N(2^{d-2} -(d-2))}{2^{d-3}}\] \[=\frac{\dim_{S}V(G)+N(2^{d-1}-(d-1))}{2^{d-3}},\]
which suffices to complete the proof.
### A general bound for exponential sums
In this section, we want to prove a general upper bound for an exponential sum of the form
\[[\operatorname{Poly}_{<E}^{n}\times S,\operatorname{res}(\alpha f(g_{1},\dots, g_{n}))],\]
when \(S\) is a variety parameterising values of \(\alpha\). We write our polynomial \(f\) in the form
\[f(x_{1},\dots,x_{n})=\sum_{j_{1},\dots,j_{d}=1}^{n}c_{\mathbf{j}}x_{j_{1}} \dots x_{j_{d}},\]
for symmetric coefficients \(c_{\mathbf{j}}\) (i.e. \(c_{\mathbf{j}}=c_{\sigma(\mathbf{j})}\) for any \(\sigma\in S_{d}\)). Associated to \(f\) are the multilinear forms
\[\Psi_{j}(\mathbf{h}^{(1)},\dots,\mathbf{h}^{(d-1)})=d!\sum_{j_{1},\dots,j_{d-1 }=1}^{n}c_{j_{1},\dots,j_{d-1},j}h_{j_{1}}^{(1)}\dots h_{j_{d-1}}^{(d-1)},\]
for \(1\leq j\leq n\). Recall that for a Laurent series \(h=\sum_{i\leq M}h_{i}t^{i}\in k((t^{-1}))\) we denote by \(\{h\}=\sum_{i\leq-1}h_{i}t^{i}\) its fractional part. The main goal of this section is to prove the following general bound for the weight of the exponential sum.
**Proposition 5.4**.: _We have_
\[w_{S}([\operatorname{Poly}_{<E}^{n}\times S,\operatorname{res}(\alpha f(g_{1}, \dots,g_{n}))])\leq\frac{\max_{\alpha\in S}\dim N(\alpha)+En(2^{d-1}-(d-1))}{ 2^{d-2}},\]
_where_
\[N(\alpha)=\left\{(\mathbf{u}^{(1)},\dots,\mathbf{u}^{(d-1)})\in(\operatorname {Poly}_{<E}^{n})^{d-1}:\begin{array}{l}\operatorname{ord}\{\alpha\Psi_{j}( \mathbf{u}^{(1)},\dots,\mathbf{u}^{(d-1)})\}<-E\\ \forall j\in\{1,\dots,n\}\end{array}\right\}. \tag{5.2}\]
#### 5.3.1. Weyl differencing
Denote by \(G_{\alpha}\) the polynomial
\[G_{\alpha}=\operatorname{res}\left(\alpha f(g_{1},\dots,g_{n})\right),\]
for \(\alpha\in S,\) considered as a function in the coefficients of \(g_{1},\dots,g_{n}\). Writing
\[g_{j}(t)=a_{0,j}+a_{1,j}t+\dots+a_{E-1,j}t^{E-1},\]
we see that \(G_{\alpha}\) is a polynomial of degree \(d\) in the \(N=En\) variables \((a_{i,j})\) for \(0\leq i<E\) and \(1\leq j\leq n\). In this notation we have
\[[\operatorname{Poly}_{<E}^{n}\times S,\operatorname{res}(\alpha f(g_{1},\dots,g_{n}))]=[\mathbf{A}_{S}^{N},G_{\alpha}].\]
Hence, on applying Proposition 5.3, we get
\[w_{S}([\mathbf{A}_{S}^{N},G_{\alpha}])\leq\frac{\dim_{S}V(G_{\alpha})+N(2^{d-1 }-(d-1))}{2^{d-2}}.\]
Thus, to bound our exponential sum, it suffices to obtain a bound on \(\dim V(G_{\alpha})\) for every \(\alpha\in S\).
#### 5.3.2. Reformulation using multilinear forms
The polynomial \(G_{\alpha}\) is the coefficient of \(t^{-1}\) in
\[(b_{1}t^{-1}+b_{2}t^{-2}+\dots)\sum_{j_{1},\dots,j_{d}=1}^{n}c_{\mathbf{j}} \prod_{i=1}^{d}(a_{0,j_{i}}+a_{1,j_{i}}t+\dots+a_{E-1,j_{i}}t^{E-1}),\]
\[=(b_{1}t^{-1}+b_{2}t^{-2}+\dots)\sum_{j_{1},\dots,j_{d}=1}^{n}c_{\mathbf{j}} \sum_{i_{1},\dots,i_{d}=0}^{E-1}a_{i_{1},j_{1}}\dots a_{i_{d},j_{d}}t^{i_{1}+ \dots+i_{d}},\]
which is given by
\[\sum_{j_{1},\dots,j_{d}=1}^{n}\sum_{i_{1},\dots,i_{d}=0}^{E-1}b_{i_{1}+\dots+i _{d}+1}c_{\mathbf{j}}a_{i_{1},j_{1}}\dots a_{i_{d},j_{d}}.\]
Using this expression, \(V(G_{\alpha})\) is the set of \((\mathbf{y}^{(1)},\dots,\mathbf{y}^{(d-1)})\in(\mathbf{A}^{N})^{d-1}\) such that
\[\sum_{\epsilon_{1},\dots,\epsilon_{d-1}\in\{0,1\}}(-1)^{\epsilon_{1}+\dots+ \epsilon_{d-1}}\sum_{\mathbf{i}\mathbf{j}}b_{i_{1}+\dots+i_{d}+1}c_{\mathbf{j }}\prod_{k=1}^{d}(a_{i_{k},j_{k}}+\epsilon_{1}y_{i_{k},j_{k}}^{(1)}+\dots+ \epsilon_{d-1}y_{i_{k},j_{k}}^{(d-1)})\]
is a constant function in the \((a_{i,j})_{0\leq i\leq E-1}\). Since we know that the latter is at most linear in the \((a_{i,j})_{\begin{subarray}{c}0\leq i\leq E-1\\ 1\leq j\leq n\end{subarray}}\), it suffices to ensure that the coefficient of each \(a_{i,j}\) vanishes, which gives us equations
\[\sum_{i_{1},\dots,i_{d-1}=0}^{E-1}\sum_{j_{1},\dots,j_{d-1}=1}^{n}b_{i_{1}+ \dots+i_{d-1}+i+1}c_{j_{1},\dots,j_{d-1},j}y_{i_{1},j_{1}}^{(1)}\dots y_{i_{d- 1},j_{d-1}}^{(d-1)}=0, \tag{5.3}\]
for every \(i\in\{0,\dots,E-1\}\) and every \(j\in\{1,\dots,n\}\). The statement of Proposition 5.4 is now a direct consequence of the following result.
**Lemma 5.5**.: _There is a canonical identification \(V(G_{\alpha})=N(\alpha),\) where \(N(\alpha)\) is given by (5.2)._
Proof.: Writing \(\mathbf{u}^{(i)}=\sum_{k=0}^{E-1}\mathbf{h}^{(i)}_{k}t^{k}\), with \(\mathbf{h}^{(i)}_{k}=(h^{(i)}_{k,1},\ldots,h^{(i)}_{k,n})\in\mathbf{A}^{n}\), we see that
\[\alpha\Psi_{j}(\mathbf{u}^{(1)},\ldots,\mathbf{u}^{(d-1)}) =\left(\sum_{r\geq 1}b_{r}t^{-r}\right)d!\sum_{j_{1},\ldots,j_{d-1}=1}^{n}c_{j_{1},\ldots,j_{d-1},j}u^{(1)}_{j_{1}}\ldots u^{(d-1)}_{j_{d-1}}\] \[=d!\left(\sum_{r\geq 1}b_{r}t^{-r}\right)\sum_{j_{1},\ldots,j_{d-1}=1}^{n}c_{j_{1},\ldots,j_{d-1},j}\sum_{i_{1},\ldots,i_{d-1}=0}^{E-1}h^{(1)}_{i_{1},j_{1}}\ldots h^{(d-1)}_{i_{d-1},j_{d-1}}t^{i_{1}+\cdots+i_{d-1}}\] \[=d!\sum_{r\geq 1}b_{r}\sum_{j_{1},\ldots,j_{d-1}=1}^{n}c_{j_{1},\ldots,j_{d-1},j}\sum_{i_{1},\ldots,i_{d-1}=0}^{E-1}h^{(1)}_{i_{1},j_{1}}\ldots h^{(d-1)}_{i_{d-1},j_{d-1}}t^{i_{1}+\cdots+i_{d-1}-r}.\]
The space \(N(\alpha)\) is defined by the vanishing of the coefficients of \(t^{-1},\ldots,t^{-E}\). For \(i\in\{0,\ldots,E-1\}\), the coefficient of degree \(t^{-i-1}\) is given by
\[d!\sum_{i_{1},\ldots,i_{d-1}=0}^{E-1}\sum_{j_{1},\ldots,j_{d-1}=1}^{n}b_{i_{1 }+\cdots+i_{d-1}+i+1}c_{j_{1},\ldots,j_{d-1},j}h^{(1)}_{i_{1},j_{1}}\ldots h^{ (d-1)}_{i_{d-1},j_{d-1}}.\]
Comparing with equation (5.3), we get the required identification.
Thus, it now remains to find a bound on \(\dim N(\alpha)\). So that it can be applied both on the major and on the minor arcs, this bound will be expressed in terms of parameters associated to a rational approximation of \(\alpha\).
#### 5.3.3. A dimension bound on zero-sets defined by multilinear forms
For given \(E\geq 1\), let \(V_{E}\subset\mathbf{A}^{(d-1)En}\) denote the variety of points
\[(\mathbf{h}^{(1)}_{k},\ldots,\mathbf{h}^{(d-1)}_{k})_{0\leq k<E}\in\mathbf{A }^{(d-1)En},\]
for which
\[\Psi_{j}\left(\mathbf{h}^{(1)}_{0}+t\mathbf{h}^{(1)}_{1}+\cdots+t^{E-1} \mathbf{h}^{(1)}_{E-1},\ldots,\mathbf{h}^{(d-1)}_{0}+t\mathbf{h}^{(d-1)}_{1}+ \cdots+t^{E-1}\mathbf{h}^{(d-1)}_{E-1}\right)\]
vanishes identically in \(t\), for \(1\leq j\leq n\). We prove the following result.
**Lemma 5.6**.: _We have \(\dim V_{E}\leq(d-2)En\)._
Proof.: The condition in the definition of \(V_{E}\) is equivalent to the system of polynomial equations
\[\sum_{\begin{subarray}{c}0\leq i_{1},\ldots,i_{d-1}<E\\ i_{1}+\cdots+i_{d-1}=\ell\end{subarray}}\Psi_{j}(\mathbf{h}^{(1)}_{i_{1}}, \ldots,\mathbf{h}^{(d-1)}_{i_{d-1}})=0,\]
for \(0\leq\ell\leq(d-1)(E-1)\) and \(1\leq j\leq n\). Define the diagonal \(D\subset\mathbf{A}^{(d-1)En}\), via
\[D=\{\mathbf{h}^{(1)}_{k}=\mathbf{h}^{(2)}_{k}=\cdots=\mathbf{h}^{(d-1)}_{k}: 0\leq k<E\}.\]
Then \(D\) has affine dimension \(En\). Moreover, \(D\cap V_{E}\) consists precisely of the points \((\mathbf{h}_{0},\ldots,\mathbf{h}_{E-1})\in\mathbf{A}^{En}\) for which
\[\sum_{\begin{subarray}{c}0\leq i_{1},\ldots,i_{d-1}<E\\ i_{1}+\cdots+i_{d-1}=\ell\end{subarray}}\Psi_{j}(\mathbf{h}_{i_{1}},\ldots, \mathbf{h}_{i_{d-1}})=0,\]
for \(0\leq\ell\leq(d-1)(E-1)\) and \(1\leq j\leq n\). When \(\ell=0\) we deduce that
\[\Psi_{j}(\mathbf{h}_{0},\ldots,\mathbf{h}_{0})=0,\]
for \(1\leq j\leq n\). Since \(f\) is non-singular, the only solution to this system of equations is \(\mathbf{h}_{0}=\mathbf{0}\). When \(\ell=d-1\) we obtain the system of equations
\[\Psi_{j}(\mathbf{h}_{1},\ldots,\mathbf{h}_{1})=0,\]
for \(1\leq j\leq n\), since \(\mathbf{h}_{0}=\mathbf{0}\) and the \(\Psi_{j}\) are multilinear. Thus also \(\mathbf{h}_{1}=\mathbf{0}\). Proceeding in this way, for \(\ell\) running through multiples of \(d-1\), one ultimately concludes that \(\mathbf{h}_{0}=\cdots=\mathbf{h}_{E-1}=\mathbf{0}\), whence \(\dim(D\cap V_{E})=0\). It now follows from the affine dimension theorem that
\[0 =\dim(D\cap V_{E})\] \[\geq\dim V_{E}+\dim D-(d-1)En\] \[\geq\dim V_{E}-(d-2)En,\]
from which the lemma follows.
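The step in this proof in which non-singularity of \(f\) forces \(\mathbf{h}_{0}=\mathbf{0}\) rests on the identity \(\Psi_{j}(\mathbf{h},\dots,\mathbf{h})=(d-1)!\,\partial f/\partial x_{j}(\mathbf{h})\), which follows directly from the symmetry of the coefficients \(c_{\mathbf{j}}\). The following short sketch is an informal illustration of this identity and is not used anywhere in the argument; it relies on the Python library sympy, and the particular cubic chosen is an arbitrary non-singular example of ours.

```python
# Informal illustration (not part of the argument): build the multilinear forms
# Psi_j from the symmetric coefficients of a sample non-singular cubic and check
# that Psi_j(h,...,h) = (d-1)! * (df/dx_j)(h).
import itertools
import math
import sympy as sp

n, d = 3, 3
x = sp.symbols('x1:%d' % (n + 1))
f = x[0]**3 + x[1]**3 + x[2]**3 + x[0]*x[1]*x[2]   # sample non-singular cubic (our choice)

# symmetric coefficients: every ordering of a monomial's index tuple gets an equal share
poly = sp.Poly(f, *x)
c = {}
for monom, coeff in poly.terms():
    idx = tuple(sorted(sum(([i] * e for i, e in enumerate(monom)), [])))
    orderings = math.factorial(d) // math.prod(math.factorial(e) for e in monom)
    for perm in set(itertools.permutations(idx)):
        c[perm] = sp.Rational(coeff, orderings)

def Psi(j, hs):
    """Psi_j(h^(1),...,h^(d-1)) = d! * sum c_{j_1..j_{d-1}, j} h^(1)_{j_1} ... h^(d-1)_{j_{d-1}}."""
    total = 0
    for js in itertools.product(range(n), repeat=d - 1):
        term = c.get(js + (j,), 0)
        for k in range(d - 1):
            term = term * hs[k][js[k]]
        total = total + term
    return math.factorial(d) * total

h = sp.symbols('h1:%d' % (n + 1))
for j in range(n):
    lhs = sp.expand(Psi(j, [h] * (d - 1)))
    rhs = sp.expand(math.factorial(d - 1) * sp.diff(f, x[j]).subs(list(zip(x, h))))
    assert sp.simplify(lhs - rhs) == 0
print("Psi_j(h,...,h) = (d-1)! * (df/dx_j)(h) holds for the sample cubic")
```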
#### 5.3.4. Application of the shrinking lemma
Recall the definition (5.2) of \(N(\alpha)\). We are now ready to prove the following general result, which gives us a means of bounding the remaining term in Proposition 5.4.
**Lemma 5.7** (Application of the shrinking lemma).: _Let \(\alpha=\frac{h_{1}}{h_{2}}+\theta\) be such that \(\operatorname{ord}(\theta)<0\). Let \(\rho=\deg(h_{2})\) and \(\psi=\operatorname{ord}(\theta)\). Assume that \(s\geq 0\) is an integer chosen such that_
1. _We have_ \(-E-1-(d-1)s<-\rho\) _and_ \((d-1)(E-1-s)+\psi<-\rho\)_._
2. _We either have_ \((d-1)(E-1-s)<\rho\) _or_ \(-E-(d-1)s-\psi\leq\rho\)_._
_Then_
\[\dim N(\alpha)\leq(d-2)En+sn.\]
Proof.: In the proof of this result we adopt the notation of (A.3) and Section A.2. To begin with, the variety \(N(\alpha)\) admits a projection \(N(\alpha)\to(\operatorname{Poly}^{n}_{<E})^{d-2}\) along the coordinates \((\mathbf{u}^{(1)},\ldots,\mathbf{u}^{(d-2)})\). The fibre above such a point is equal to the space whose dimension is taken in the definition of \(\nu(\bigwedge_{E,E}(\mathbf{U}),0)\), for an appropriate symmetric \(n\times n\) matrix \(\mathbf{U}\) depending on \((\mathbf{u}^{(1)},\ldots,\mathbf{u}^{(d-2)})\). An application of Lemma A.3 now yields
\[\dim N(\alpha)\leq ns+\dim N^{\prime}_{s}(\alpha),\]
for any integer \(s\geq 0\), where
\[N^{\prime}_{s}(\alpha)=\left\{\underline{\mathbf{u}}\in\left(\operatorname{ Poly}^{n}_{<E}\right)^{d-2}\times\operatorname{Poly}^{n}_{<E-s}:\operatorname{ ord}\left\{\alpha\Psi_{j}(\underline{\mathbf{u}})\right\}<-E-s,\forall j \in\left\{1,\ldots,n\right\}\right\}.\]
Repeating this argument \(d-2\) further times, we finally conclude that
\[\dim N(\alpha)\leq(d-1)ns+\dim N_{s}(\alpha), \tag{5.4}\]
for any integer \(s\geq 0\), where
\[N_{s}(\alpha)=\left\{\underline{\mathbf{u}}\in\left(\operatorname{Poly}^{n}_{ <E-s}\right)^{d-1}:\operatorname{ord}\left\{\alpha\Psi_{j}(\underline{ \mathbf{u}})\right\}<-E-(d-1)s,\forall j\in\left\{1,\ldots,n\right\}\right\}.\]
Our goal is to choose \(s\), depending on \(\rho\) and \(\psi\), such that any \(\underline{\mathbf{u}}\) appearing in the definition of \(N_{s}(\alpha)\) must in fact satisfy \(\Psi_{j}(\underline{\mathbf{u}})=0\) for all \(j\in\left\{1,\ldots,n\right\}\). Let \(\underline{\mathbf{u}}\in N_{s}(\alpha)\) and let \(j\in\left\{1,\ldots,n\right\}\). Put \(m=\Psi_{j}(\underline{\mathbf{u}})\in k[t]\). The conditions on the degrees of the components of \(\underline{\mathbf{u}}\) give us that \(\deg(m)\leq(d-1)(E-s-1).\) Write \(\left\{\alpha m\right\}=\left\{\frac{h_{1}}{h_{2}}m+\theta m\right\}\), and note that we have the upper bound
\[\operatorname{ord}\{\theta m\}\leq\deg(m)+\operatorname{ord}(\theta)\leq(d-1 )(E-s-1)+\psi.\]
We will proceed in two steps. First we will find conditions on \(s\) such that \(h_{2}\) should divide \(m\). For this, it suffices to bound \(\operatorname{ord}\left\{\frac{h_{1}}{h_{2}}m\right\}\) by something smaller than \(-\rho\). We write
\[\operatorname{ord}\left\{\frac{h_{1}}{h_{2}}m\right\}=\operatorname{ord}\{ \alpha m-\theta m\}\leq\max\left(\operatorname{ord}\{\alpha m\},\operatorname{ ord}\{\theta m\}\right).\]
Making use of the upper bounds we have on these quantities, we get
\[\operatorname{ord}\left\{\frac{h_{1}}{h_{2}}m\right\}\leq\max\left(-E-1-(d-1)s,(d- 1)(E-s-1)+\psi\right).\]
Thus, we see that condition (1) in the statement of the lemma forces \(\operatorname{ord}\{\frac{h_{1}}{h_{2}}m\}<-\rho\), and therefore forces \(h_{2}\) to divide \(m\).
The second step consists in finding conditions which would ensure that \(m=0\). Given that \(h_{2}\) divides \(m\), one possibility is just to force \(\deg(m)<\rho\). This is implied by the condition
\[(d-1)(E-1-s)<\rho,\]
which is the first inequality in part (2) of the lemma. Alternatively, note that since \(h_{2}\) divides \(m\), we have
\[\operatorname{ord}\{\theta m\}=\operatorname{ord}\{\alpha m\}<-E-(d-1)s.\]
On the other hand, \(\operatorname{ord}(\theta m)\leq(d-1)(E-s-1)+\psi\), and the latter is negative since condition (1) is satisfied. This means that
\[\operatorname{ord}(\theta m)=\operatorname{ord}\{\theta m\}<-E-(d-1)s\]
and therefore \(\deg(m)<-E-(d-1)s-\psi.\) Thus, if \(-E-(d-1)s-\psi\leq\rho\), which is the second inequality in part (2) of the lemma, then \(m=0\).
We may conclude that if \(s\) is chosen so that conditions (1) and (2) in the lemma are satisfied, then
\[N_{s}(\alpha)=\left\{\underline{\mathbf{u}}\in\left(\operatorname{Poly}_{<E-s }^{n}\right)^{d-1}:\Psi_{j}(\underline{\mathbf{u}})=0,\forall j\in\{1,\ldots, n\}\right\}.\]
Lemma 5.6 gives \(\dim N_{s}(\alpha)\leq(d-2)(E-s)n\), and so the statement follows from (5.4).
_Remark 5.8_.: Assume \(\frac{h_{1}}{h_{2}}=0\), so that \(\rho=0\). Then we may take
\[s=\max\left(0,E-1+\left\lceil\frac{1+\psi}{d-1}\right\rceil,\min\left(E,- \left\lfloor\frac{E+\psi}{d-1}\right\rfloor\right)\right).\]
_Remark 5.9_.: Assume that \(\alpha\in k(t)\), so that \(\theta=0\) and \(\psi=-\infty\). Then we may take
\[s=\max\left(0,\left\lceil\frac{\rho-E}{d-1}\right\rceil,E-1-\left\lfloor\frac {\rho-1}{d-1}\right\rfloor\right).\]
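The choices of \(s\) displayed in Remarks 5.8 and 5.9 can be checked mechanically against conditions (1) and (2) of Lemma 5.7. The sketch below is an informal consistency check of ours and is not part of the argument; \(\psi=-\infty\) is modelled by a very negative integer, which is harmless for the inequalities tested over the small parameter range used here.

```python
# Informal check (not part of the argument): the choices of s in Remarks 5.8 and 5.9
# satisfy conditions (1) and (2) of Lemma 5.7 on a small range of parameters.
from math import ceil, floor

NEG_INF = -10 ** 9          # stands in for psi = -infinity (Remark 5.9)

def conditions_hold(E, d, rho, psi, s):
    """Conditions (1) and (2) of Lemma 5.7 for the given parameters."""
    cond1 = (-E - 1 - (d - 1) * s < -rho) and ((d - 1) * (E - 1 - s) + psi < -rho)
    cond2 = ((d - 1) * (E - 1 - s) < rho) or (-E - (d - 1) * s - psi <= rho)
    return cond1 and cond2

for d in range(2, 6):
    for E in range(1, 12):
        # Remark 5.8: rho = 0 and psi a finite negative integer
        for psi in range(-1, -3 * d * E, -1):
            s = max(0, E - 1 + ceil((1 + psi) / (d - 1)),
                    min(E, -floor((E + psi) / (d - 1))))
            assert conditions_hold(E, d, 0, psi, s), (d, E, psi, s)
        # Remark 5.9: theta = 0, i.e. psi = -infinity, and rho = deg(h2) arbitrary
        for rho in range(0, 3 * d * E):
            s = max(0, ceil((rho - E) / (d - 1)),
                    E - 1 - floor((rho - 1) / (d - 1)))
            assert conditions_hold(E, d, rho, NEG_INF, s), (d, E, rho, s)
print("The choices of s in Remarks 5.8 and 5.9 satisfy Lemma 5.7 (1)-(2) on the tested range")
```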
## 6. The motivic major arcs
### Definition
Recall that the motivic analogue of the space \(\mathbf{T}\) occurring in the function field circle method is the space \(\mathbf{A}^{(-1,-de-2)}\). It is an affine space of dimension \(de+1\), representing elements of \(\mathbf{T}\) modulo \(t^{-de-2}k[[t^{-1}]]\). Fix a positive integer parameter \(\gamma\). For every element \(\beta=(b_{1},\ldots,b_{de+1})\in\mathbf{A}^{(-1,-de-2)}\) and every integer \(m\geq 0\), we define the matrices
\[A_{\beta,m}=\left(\begin{array}{ccc}b_{1}&\ldots&b_{m+1}\\ \vdots&\ddots&\vdots\\ b_{de+1-\gamma-m}&\ldots&b_{de+1-\gamma}\end{array}\right)\]
and
\[A_{\beta,m}^{*}=\left(\begin{array}{ccc}b_{1}&\ldots&b_{m}\\ \vdots&\ddots&\vdots\\ b_{de+1-\gamma-m}&\ldots&b_{de-\gamma}\end{array}\right),\]
the latter being obtained from the former by removing the last column. Note that \(\operatorname{Ker}A_{\beta,m}\) is generated by a vector \((c_{0},\ldots,c_{m})\) satisfying \(c_{m}\neq 0\) if and only if \(A_{\beta,m}^{*}\) has full rank.
We define the subspaces \(M_{m,\gamma}\) of \(\mathbf{A}^{(-1,-de-2)}\) for \(m\geq-1\) inductively in the following way. Let \(M_{-1,\gamma}=\varnothing\), and let
\[M_{m,\gamma}=M_{m-1,\gamma}\cup\left\{(b_{1},\dots,b_{de+1}):\mathrm{rk}A_{ \beta,m}\leq m,\ \mathrm{rk}A_{\beta,m}^{*}=m\right\}.\]
Denote by \(M_{m,\gamma}^{*}\) the second set in this union. By the discussion in Section 3.1, we see that \((b_{1},\dots,b_{de+1})\) is an element of \(M_{m,\gamma}^{*}\) if and only if there exists \(h_{2}\) of degree exactly \(m\) and \(h_{1}\) of degree \(<m\) such that \(\alpha=b_{1}t^{-1}+\dots+b_{de+1}t^{-de-1}\) satisfies
\[\mathrm{ord}(h_{2}\alpha-h_{1})\leq-de-2+\gamma+m.\]
In other words, \(\alpha=\frac{h_{1}}{h_{2}}+\theta\) with \(\mathrm{ord}\,\theta\leq-de-2+\gamma\). Moreover, the coefficients of \(h_{1},h_{2}\) and \(\theta\) are algebraic functions of the coefficients of \(\alpha\).
**Lemma 6.1**.: _For any integers \(\gamma>0\) and \(m\geq 0\), we have_
\[M_{m,\gamma}=\left\{\begin{array}{rl}&b_{1}t^{-1}+\dots+b_{de+1}t^{-de-1}= \frac{h_{1}}{h_{2}}+\theta\\ (b_{1},\dots,b_{de+1}):&\gcd(h_{1},h_{2})=1\text{ and }h_{2}\text{ monic}\\ &\deg h_{1}<\deg h_{2}\leq m\\ &\mathrm{ord}\ \theta\leq-de-2+\gamma\end{array}\right\}.\]
Proof.: We proceed by induction on \(m\). The statement is clearly true for \(m=0\). Assume it is true for \(m-1\). Then by the above remark the inclusion of \(M_{m,\gamma}\) in the right hand side is obvious. Conversely, assume that \(b_{1}t^{-1}+\dots+b_{de+1}t^{-de-1}=\frac{h_{1}}{h_{2}}+\theta\) as in the statement of the lemma. If \(\deg h_{2}\leq m-1\), then we may conclude by induction that \((b_{1},\dots,b_{de+1})\in M_{m-1,\gamma}\subset M_{m,\gamma}\). If \(\deg h_{2}=m\), then we have \((b_{1},\dots,b_{de+1})\in M_{m,\gamma}^{*}\subset M_{m,\gamma}\), and so the reverse inclusion holds as well.
This characterisation of \(M_{m,\gamma}\) is what we will use in practice. It is similar to the set \(A_{m}^{de+1}\) that was discussed in Remark 3.4, the chief difference being that our bound on \(\mathrm{ord}\,\theta\) doesn't depend on the degree of \(h_{2}\).
The set \(M_{m,\gamma}\) can be stratified into subsets \(M_{m,m^{\prime},\gamma}\), where \(h_{2}\) has degree exactly \(m^{\prime}\). (In fact, there is an identification \(M_{m,m^{\prime},\gamma}=M_{m^{\prime},\gamma}^{*}\).) We are now ready to define the contribution from our major arcs. Bearing in mind Lemma 3.6, we shall take
\[N_{\mathrm{major}}=\sum_{0\leq m^{\prime}\leq e+1-\gamma}N_{\mathrm{major}}(m^ {\prime}), \tag{6.1}\]
where
\[N_{\mathrm{major}}(m^{\prime})=\mathbf{L}^{-de-1}\left[\mathrm{Poly}_{\leq e }^{n}\times M_{m,m^{\prime},\gamma},\mathrm{res}\left(\left(\frac{h_{1}}{h_{2 }}+\theta\right)f(g_{1},\dots,g_{n})\right)\right], \tag{6.2}\]
which is viewed as an element of \(\mathscr{E}xp\mathscr{M}_{\mathrm{Poly}_{\leq e}^{n}\times M_{m,m^{\prime}, \gamma}}.\) (We shall see that the condition \(m^{\prime}\leq e+1-\gamma\) arises naturally during the course of the argument.)
We shall ultimately take
\[\gamma=\left\lceil\frac{e+1}{2}\right\rceil, \tag{6.3}\]
although we'll often use \(\gamma\) to ease notation.
For any \(m\geq 0\), let
\[B_{m}=(\mathrm{Poly}_{<m}\times\mathrm{MPoly}_{m})_{*} \tag{6.4}\]
denote the space of pairs \((h_{1},h_{2})\) of coprime polynomials such that \(\deg h_{1}<\deg h_{2}=m\), with \(h_{2}\) monic. Then we may define the "exponential sum"
\[S_{m}(f)=\left[\mathrm{Poly}_{<m}^{n}\times B_{m},\mathrm{res}\left(\left( \frac{h_{1}}{h_{2}}\right)f(\overline{g}_{1},\dots,\overline{g}_{n})\right) \right]. \tag{6.5}\]
There is a piecewise isomorphism
\[M_{m,m^{\prime},\gamma}\simeq B_{m^{\prime}}\times\mathbf{A}^{(-de-2+\gamma,-de-2)},\]
where \(B_{m^{\prime}}\) is given by (6.4), which is obtained by sending \(\alpha\) to \((h_{1},h_{2},\theta)\). This induces natural morphisms
\[\mathscr{E}xp\mathscr{M}_{B_{m^{\prime}}}\to\mathscr{E}xp\mathscr{M}_{M_{m,m^ {\prime},\gamma}} \tag{6.6}\]
and
\[\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{(-de-2+\gamma,-de-2)}}\to\mathscr{E}xp \mathscr{M}_{M_{m,m^{\prime},\gamma}}. \tag{6.7}\]
With these, the sum \(S_{m^{\prime}}(f)\) can be viewed as an element of \(\mathscr{E}xp\mathscr{M}_{M_{m,m^{\prime},\gamma}}\), for each \(m^{\prime}\leq m\).
### Contribution of \(M_{m,m^{\prime},\gamma}\)
We will start by computing the contribution (6.2) for fixed \(m^{\prime}\leq m\leq e+1\). Note that \(N_{\mathrm{major}}(m^{\prime})\) is viewed as an element of \(\mathscr{E}xp\mathscr{M}_{\mathrm{Poly}_{\leq e}^{n}\times M_{m,m^{\prime}, \gamma}}\), but we will allow ourselves the slight abuse of notation of also denoting by \(N_{\mathrm{major}}(m^{\prime})\) the image of this element in some other Grothendieck rings. By Euclidean division, for each \(1\leq i\leq n\), we may write \(g_{i}=h_{2}q_{i}+\overline{g}_{i}\), which gives us an isomorphism
\[\mathrm{Poly}_{\leq e}^{n}\simeq\mathrm{Poly}_{\leq e-m^{\prime}}^{n}\times \mathrm{Poly}_{<m^{\prime}}^{n},\]
sending \((g_{1},\dots,g_{n})\) to \((q_{1},\dots,q_{n},\overline{g}_{1},\dots,\overline{g}_{n})\). In terms of classes in the Grothendieck ring, this implies
\[N_{\mathrm{major}}(m^{\prime}) =\mathbf{L}^{-de-1}\left[\mathrm{Poly}_{<m^{\prime}}^{n}\times M_{ m,m^{\prime},\gamma},\mathrm{res}\left(\left(\frac{h_{1}}{h_{2}}\right)f( \overline{g}_{1},\dots,\overline{g}_{n})\right)\right]\] \[\quad\cdot\left[\mathrm{Poly}_{\leq e-m^{\prime}}^{n}\times \mathrm{Poly}_{<m^{\prime}}^{n}\times M_{m,m^{\prime},\gamma},\mathrm{res} \left(\theta f(\overline{g}_{1}+h_{2}q_{1},\dots,\overline{g}_{n}+h_{2}q_{n}) \right)\right],\]
where the product is taken in \(\mathscr{E}xp\mathscr{M}_{\mathrm{Poly}_{<m^{\prime}}^{n}\times M_{m,m^{\prime},\gamma}}\). The following result shows that there is no dependence on \(\overline{g}_{1},\dots,\overline{g}_{n}\) in the second factor, under our assumption on \(m^{\prime}\).
**Lemma 6.2**.: _Assume that \(m^{\prime}\leq e+1-\gamma\). Then_
\[\mathrm{ord}(\theta(f(\overline{g}_{1}+h_{2}q_{1},\dots,\overline{g}_{n}+h_{2} q_{n})-f(h_{2}q_{1},\dots,h_{2}q_{n})))<-1.\]
Proof.: We note that the difference \(f(\overline{g}_{1}+h_{2}q_{1},\dots,\overline{g}_{n}+h_{2}q_{n})-f(h_{2}q_{1}, \dots,h_{2}q_{n})\) is a sum of monomials of degree \(d\) in \(\overline{g}_{1},\dots,\overline{g}_{n},h_{2}q_{1},\dots,h_{2}q_{n}\). Moreover, the degree in \(\overline{g}_{1},\dots,\overline{g}_{n}\) is at least \(1\). We know that \(\mathrm{ord}\,h_{2}q_{i}\leq e\) and \(\mathrm{ord}\,\overline{g}_{i}\leq m^{\prime}-1\leq m-1\leq e\). Thus, using the bound \(m^{\prime}-1\) for at least one of the \(\overline{g}_{i}\)-factors of each monomial, and the bound \(e\) for all the other factors, we get
\[\mathrm{ord}(f(\overline{g}_{1}+h_{2}q_{1},\dots,\overline{g}_{n}+h_{2}q_{n}) -f(h_{2}q_{1},\dots,h_{2}q_{n}))\leq(d-1)e+m^{\prime}-1.\]
Moreover, we know that \(\mathrm{ord}(\theta)\leq-de-2+\gamma\). Combining the two estimates, we get
\[\mathrm{ord}\left(\theta(f(\overline{g}_{1}+h_{2}q_{1},\dots, \overline{g}_{n}+h_{2}q_{n})-f(h_{2}q_{1},\dots,h_{2}q_{n}))\right) \leq(d-1)e+m^{\prime}-1-de-2+\gamma\] \[\leq-e-3+m^{\prime}+\gamma.\]
This is bounded by \(-2\) if \(m^{\prime}+\gamma\leq e+1\).
We may now write the contribution of \(M_{m,m^{\prime},\gamma}\) as
\[N_{\mathrm{major}}(m^{\prime})=\mathbf{L}^{-de-1}S_{m^{\prime}}(f)\cdot[\mathrm{Poly}_{\leq e-m^{\prime}}^{n}\times M_{m,m^{\prime},\gamma},\mathrm{res}(\theta f(h_{2}q_{1},\dots,h_{2}q_{n}))],\]
now viewed in \(\mathscr{E}xp\mathscr{M}_{M_{m,m^{\prime},\gamma}}\), where \(S_{m^{\prime}}(f)\) is given by (6.5). We now apply an averaging argument to the rightmost factor, in order to remove the dependence in \(m^{\prime}\) completely. We first of all re-introduce a factor \(\mathrm{Poly}_{<m^{\prime}}^{n}\simeq\mathbf{A}^{nm^{\prime}}\) to obtain
\[[\mathrm{Poly}_{\leq e-m^{\prime}}^{n}\times M_{m,m^{\prime},\gamma},\mathrm{res}(\theta f(h_{2}q_{1},\dots,h_{2}q_{n}))]\] \[\qquad\qquad\qquad=\mathbf{L}^{-nm^{\prime}}[\mathrm{Poly}_{\leq e-m^{\prime}}^{n}\times\mathrm{Poly}_{<m^{\prime}}^{n}\times M_{m,m^{\prime},\gamma},\mathrm{res}(\theta f(h_{2}q_{1},\dots,h_{2}q_{n}))].\]
Now we use Lemma 6.2, together with the isomorphism coming from Euclidean division, in order to get that the right hand side is
\[=\mathbf{L}^{-nm^{\prime}}[\operatorname{Poly}^{n}_{\leq e-m^{\prime}}\times\operatorname{Poly}^{n}_{<m^{\prime}}\times M_{m,m^{\prime},\gamma},\operatorname{res}(\theta f(\overline{g}_{1}+h_{2}q_{1},\dots,\overline{g}_{n}+h_{2}q_{n}))]\] \[=\mathbf{L}^{-nm^{\prime}}[\operatorname{Poly}^{n}_{\leq e}\times M_{m,m^{\prime},\gamma},\operatorname{res}(\theta f(g_{1},\dots,g_{n}))]\] \[=\mathbf{L}^{-nm^{\prime}}[\operatorname{Poly}^{n}_{\leq e}\times\mathbf{A}^{(-de-2+\gamma,-de-2)},\operatorname{res}(\theta f(g_{1},\dots,g_{n}))],\]
where the latter is an element of \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{(-de-2+\gamma,-de-2)}}\), which we view as an element of \(\mathscr{E}xp\mathscr{M}_{M_{m,m^{\prime},\gamma}}\) via the morphism (6.7). We end up with
\[N_{\operatorname{major}}(m^{\prime}) =\mathbf{L}^{-de-1}S_{m^{\prime}}(f)\mathbf{L}^{-nm^{\prime}}[ \operatorname{Poly}^{n}_{\leq e}\times\mathbf{A}^{(-de-2+\gamma,-de-2)}, \operatorname{res}(\theta f(g_{1},\dots,g_{n}))]\] \[=S_{m^{\prime}}(f)\mathbf{L}^{-nm^{\prime}}\int_{\theta}[ \operatorname{Poly}^{n}_{\leq e}\times\mathbf{A}^{(-de-2+\gamma,-de-2)}, \operatorname{res}(\theta f(g_{1},\dots,g_{n}))],\]
where the integral is the one from Section 2.4. Inserting this into (6.1), we are therefore led to the following result.
**Proposition 6.3**.: _We have_
\[N_{\operatorname{major}}=\left(\sum_{0\leq m^{\prime}\leq e+1-\gamma}S_{m^{ \prime}}(f)\mathbf{L}^{-nm^{\prime}}\right)\int_{\theta}U(\theta),\]
_in \(\mathscr{E}xp\mathscr{M}_{\mathbf{C}}\), where_
\[U(\theta):=[\operatorname{Poly}^{n}_{\leq e}\times\mathbf{A}^{(-de-2+\gamma,- de-2)},\operatorname{res}(\theta f(g_{1},\dots,g_{n}))]\in\mathscr{E}xp\mathscr{M}_{ \mathbf{A}^{(-de-2+\gamma,-de-2)}}. \tag{6.8}\]
The aim is now to analyse the two remaining factors to obtain motivic analogues of the singular series and the singular integral.
### Singular series
It is now time to treat the truncated singular series in Proposition 6.3. Recalling the definition (6.3) of \(\gamma\), we have
\[\sum_{0\leq m^{\prime}\leq e+1-\gamma}S_{m^{\prime}}(f)\mathbf{L}^{-nm^{ \prime}}=\sum_{0\leq m^{\prime}\leq\lfloor\frac{e+1}{2}\rfloor}S_{m^{\prime}} (f)\mathbf{L}^{-nm^{\prime}}.\]
We begin with a useful upper bound, which will also prove valuable in the treatment of the minor arcs.
**Lemma 6.4**.: _Let \(\Delta=\lfloor\frac{e+1}{2}\rfloor\). Then_
\[\max_{m\geq\Delta}\left(2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n\right) \leq\left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{d}(d-1).\]
Proof.: To begin with, we note that
\[2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n \leq 2^{d}\left(\left\lfloor\frac{m}{d-1}\right\rfloor+1\right)(d-1 )-\left\lfloor\frac{m}{d-1}\right\rfloor n\] \[\leq\left\lfloor\frac{m}{d-1}\right\rfloor(2^{d}(d-1)-n)+2^{d}(d -1).\]
Since \(n>2^{d}(d-1)\), the right-hand side is decreasing in \(\lfloor\frac{m}{d-1}\rfloor\), so the maximum over \(m\geq\Delta\) is attained at the minimal value \(m=\Delta=\lfloor\frac{e+1}{2}\rfloor\).
If \(2\mid e+1\), it now follows that
\[\max_{m\geq\Delta}\left(2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n\right) \leq\left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{d}(d-1),\]
as claimed in the lemma. Suppose next that \(2\nmid e+1\), so that \(\Delta=\frac{e}{2}\). Then
\[2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n \leq\left\lfloor\frac{e+2}{2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{ d}(d-1)\] \[\leq\left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{ d}(d-1),\]
if \(m\geq\Delta+1\), whereas
\[2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n=2^{d-1}e-\left\lfloor\frac{e}{2d -2}\right\rfloor n\]
if \(m=\Delta\). In order to complete the proof, it remains to prove that
\[2^{d-1}e-\left\lfloor\frac{e}{2d-2}\right\rfloor n\leq\left\lfloor\frac{e+1}{ 2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{d}(d-1). \tag{6.9}\]
Writing \(e=(2d-2)q+r\) for \(r<2d-2\), we must have \(r\leq 2d-4\), since \(e\) is even. But then
\[\left\lfloor\frac{e}{2d-2}\right\rfloor=\left\lfloor\frac{e+1}{2d-2}\right\rfloor =q,\]
whence (6.9) is equivalent to \(2^{d-1}r\leq 2^{d}(d-1)\), which is self-evident.
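Since Lemma 6.4 is a purely arithmetic statement, it can also be confirmed by brute force for small parameters. The following sketch is an informal check of ours, with the infinite range of \(m\) truncated to a window large enough to contain the maximum for the tested values of \(d\).

```python
# Informal brute-force check of the inequality in Lemma 6.4 (not part of the argument).
for d in range(2, 6):
    for e in range(1, 15):
        Delta = (e + 1) // 2
        for n in range(2 ** d * (d - 1) + 1, 2 ** d * (d - 1) + 20):
            # the quantity drops over each full period of length d-1 once n > 2^d (d-1),
            # so a window of length 200 above Delta certainly contains its maximum
            lhs = max(2 ** d * m - (m // (d - 1)) * n for m in range(Delta, Delta + 200))
            rhs = ((e + 1) // (2 * d - 2)) * (2 ** d * (d - 1) - n) + 2 ** d * (d - 1)
            assert lhs <= rhs, (d, e, n)
print("Inequality of Lemma 6.4 verified on the tested range of (d, e, n)")
```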
#### 6.3.1. Convergence of the series
Recall from (6.5) that
\[S_{m}(f)=\left[\operatorname{Poly}_{<m}^{n}\times B_{m},\operatorname{res} \left(\left(\frac{h_{1}}{h_{2}}\right)f(\overline{g}_{1},\ldots,\overline{g}_ {n})\right)\right],\]
where \(B_{m}\) is given by (6.4). We can use Proposition 5.4 to bound this sum, obtaining
\[w_{B_{m}}(S_{m}(f))\leq\frac{\max_{(h_{1},h_{2})\in B_{m}}\dim N(h_{1}/h_{2})+ nm(2^{d-1}-(d-1))}{2^{d-2}}.\]
We apply Lemma 5.7 with \(\theta=0\) (so that \(\psi=-\infty\)) and \(\rho=E=m\). It follows from Remark 5.9 that we can take
\[s=m-1-\left\lfloor\frac{m-1}{d-1}\right\rfloor.\]
Using this, it therefore follows that
\[\dim N(h_{1}/h_{2}) \leq(d-2)mn+\left(m-1-\left\lfloor\frac{m-1}{d-1}\right\rfloor \right)n\] \[\leq(d-1)mn-\left(1+\left\lfloor\frac{m-1}{d-1}\right\rfloor \right)n,\]
whence
\[w_{B_{m}}(S_{m}(f))\leq\frac{2^{d-1}nm-(1+\lfloor\frac{m-1}{d-1}\rfloor)n}{2^ {d-2}}=2nm-\frac{(1+\lfloor\frac{m-1}{d-1}\rfloor)n}{2^{d-2}}.\]
From this, noticing that \(\dim B_{m}=2m\), property (2) of Proposition 4.4 yields
\[w(S_{m}(f))-2nm\leq 4m-\frac{(1+\lfloor\frac{m-1}{d-1}\rfloor)n}{2^{d-2}}. \tag{6.10}\]
We now compute the radius of convergence of the series \(\sum_{m\geq 0}S_{m}(f)T^{m}\), in the sense of Section 4.4. We have \(1+\left\lfloor\frac{m-1}{d-1}\right\rfloor\geq\frac{m}{d-1}\), for any \(m\geq 0\). Hence
\[\frac{w(S_{m}(f))}{2m}\leq n+\left(\frac{2^{d}(d-1)-n}{2^{d-1}(d-1)}\right),\]
and so as soon as the condition \(n>2^{d}(d-1)\) is satisfied, we get
\[\limsup_{m\to\infty}\frac{w(S_{m}(f))}{2m}\leq n+\frac{2^{d}(d-1)-n}{2^{d-1}(d-1)}<n.\]
We may deduce that the radius of convergence of the series \(\sum_{m\geq 0}S_{m}(f)T^{m}\) is bounded by \(n-\epsilon\) for some \(\epsilon>0\), and the series converges for \(|T|<\mathbf{L}^{-n+\epsilon}\). In particular, the _motivic singular series_
\[\mathfrak{S}(f):=\sum_{m\geq 0}S_{m}(f)\mathbf{L}^{-nm} \tag{6.11}\]
is well-defined as an element of \(\widehat{\mathscr{E}xp\mathscr{M}_{\mathbf{C}}}\).
#### 6.3.2. Replacing by full series
These bounds also allow us to evaluate the error made by replacing the finite sum
\[\sum_{0\leq m\leq\lfloor\frac{e+1}{2}\rfloor}S_{m}(f)\mathbf{L}^{-nm}\]
by the full sum \(\mathfrak{S}(f)\) defined in (6.11). For this, it suffices to bound the weight of a term \(S_{m}(f)\mathbf{L}^{-mn}\) for \(m\geq\lfloor\frac{e+1}{2}\rfloor+1\). It follows from (6.10) that
\[w(S_{m}(f)\mathbf{L}^{-mn})\leq\frac{2^{d}(m-1)-\lfloor\frac{m-1}{d-1}\rfloor n }{2^{d-2}}+4-\frac{n}{2^{d-2}}.\]
Hence Lemma 6.4 implies that
\[\max_{m\geq\lfloor\frac{e+1}{2}\rfloor+1}w(S_{m}(f)\mathbf{L}^{-mn})\leq\frac {\left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n)+2^{d}(d-1)}{2^{d-2}} +4-\frac{n}{2^{d-2}}.\]
Property (1) of Proposition 4.4 allows us to conclude as follows.
**Lemma 6.5**.: _Recall the definition (3.2) of \(\tilde{\nu}\). Then_
\[w\left(\mathfrak{S}(f)-\sum_{0\leq m\leq\lfloor\frac{e+1}{2}\rfloor}S_{m}(f) \mathbf{L}^{-nm}\right)\leq 4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2} \right\rfloor\right).\]
_Remark 6.6_.: We can also conclude from our argument that \(\mathfrak{S}(f)\) has weight zero, equal to \(1\) plus an element of \(\widehat{\mathscr{Exp}\mathscr{M}_{\mathbf{C}}}\) of negative weight. To see this we use \(1+\left\lfloor\frac{m-1}{d-1}\right\rfloor\geq\frac{m}{d-1}\), for any \(m\geq 1\), so that (6.10) yields
\[w(S_{m}(f)\mathbf{L}^{-nm})\leq 4m-\frac{nm}{2^{d-2}(d-1)}=\left(\frac{2^{d}(d -1)-n}{2^{d-2}(d-1)}\right)m. \tag{6.12}\]
The condition \(n>2^{d}(d-1)\) therefore implies that
\[w(\mathfrak{S}(f)-1)=w\left(\sum_{m\geq 1}S_{m}(f)\mathbf{L}^{-mn}\right)\leq-\nu,\]
for a constant \(\nu>0\).
#### 6.3.3. Motivic Euler product decomposition
**Proposition 6.7** (Factorisation property).: _Let \(m_{1},m_{2}\) be positive integers, and let \(V_{m_{1},m_{2}}\) be the space of pairs of coprime monic polynomials \(l_{1},l_{2}\) such that \(\deg(l_{i})=m_{i}\) for \(i=1,2\). Let \(\pi_{i}:V_{m_{1},m_{2}}\to\mathrm{MPoly}_{m_{i}}\) be given by \((l_{1},l_{2})\mapsto l_{i}\), and let \(\pi_{12}:V_{m_{1},m_{2}}\to\mathrm{MPoly}_{m_{1}+m_{2}}\) be given by \((l_{1},l_{2})\mapsto l_{1}l_{2}\). Then, there is an isomorphism_
\[\pi_{12}^{*}S_{m_{1}+m_{2}}(f)\simeq\pi_{1}^{*}S_{m_{1}}(f)\times_{V_{m_{1}, m_{2}}}\pi_{2}^{*}S_{m_{2}}(f)\]
_of varieties with exponentials over \(V_{m_{1},m_{2}}\). Moreover, when \(m_{1}=m_{2}\), this isomorphism commutes with switching \(l_{1}\) and \(l_{2}\)._
Proof.: We follow the argument in [10, Lemma 3.5]. Recall from (6.5) that
\[S_{m}(f)=\left[\mathrm{Poly}_{<m}^{n}\times B_{m},\mathrm{res}\left(\left( \frac{h_{1}}{h_{2}}\right)f(g_{1},\ldots,g_{n})\right)\right],\]
for any positive integer \(m\), where \(B_{m}=(\mathrm{Poly}_{<m}\times\mathrm{MPoly}_{m})_{*}\).
To begin with, there is an isomorphism
\[(\mathrm{Poly}_{<m_{1}+m_{2}})^{n}\to(\mathrm{Poly}_{<m_{1}})^{n}\times(\mathrm{ Poly}_{<m_{2}})^{n}, \tag{6.13}\]
which is given by sending a tuple of polynomials \((g_{1},\ldots,g_{n})\) of degree \(<m_{1}+m_{2}\) to the pair of tuples
\[((g_{1,1},\ldots,g_{n,1}),(g_{1,2},\ldots,g_{n,2})),\]
where each \(g_{i,1}\) has degree \(<m_{1}\) (respectively, each \(g_{i,2}\) has degree \(<m_{2}\)), and such that \(g_{i}\equiv g_{i,1}\bmod l_{1}\) and \(g_{i}\equiv g_{i,2}\bmod l_{2}\), for \(1\leq i\leq n\). The inverse map is given by
\[((g_{1,1},\ldots,g_{n,1}),(g_{1,2},\ldots,g_{n,2}))\mapsto\left(l_{2}(g_{i,1}l _{2}^{-1}\bmod l_{1})+l_{1}(g_{i,2}l_{1}^{-1}\bmod l_{2})\right)_{1\leq i \leq n},\]
where the inverses are understood to be modulo \(l_{1}\) and \(l_{2}\), respectively. Since \(l_{1}\) and \(l_{2}\) are coprime, their inverses modulo each other are polynomial. Moreover, since they are monic it follows from Euclid's algorithm that the modulo operation is also polynomial.
We also obtain an isomorphism of \(V_{m_{1},m_{2}}\)-varieties
\[B_{m_{1}+m_{2}}\times_{\pi_{12}}V_{m_{1},m_{2}}\to B_{m_{1}}\times_{\pi_{1}}V_ {m_{1},m_{2}}\times_{\pi_{2}}B_{m_{2}}, \tag{6.14}\]
given by
\[((h_{1},l_{1}l_{2}),(l_{1},l_{2}))\mapsto\left((h_{1}\bmod l_{1},l_{1}),(l_{1},l_{2}),(h_{1}\bmod l_{2},l_{2})\right),\]
and with inverse
\[((h_{1,1},l_{1}),(l_{1},l_{2}),(h_{1,2},l_{2}))\mapsto\left((h_{1,1}l_{2}+h_{1,2}l_{1},l_{1}l_{2}),(l_{1},l_{2})\right).\]
To check that the isomorphisms (6.13) and (6.14) induce isomorphisms of varieties with exponentials, we compute, using the same notation:
\[\operatorname{res} \left(\frac{h_{1,1}}{l_{1}}f\left(g_{1,1},\ldots,g_{n,1}\right)+ \frac{h_{1,2}}{l_{2}}f\left(g_{1,2},\ldots,g_{n,2}\right)\right)\] \[=\operatorname{res}\left(\frac{h_{1,1}}{l_{1}}f\left(g_{1},\ldots,g_{n}\right)+\frac{h_{1,2}}{l_{2}}f\left(g_{1},\ldots,g_{n}\right)\right)\] \[=\operatorname{res}\left(\frac{h_{1,1}l_{2}+h_{1,2}l_{1}}{l_{1}l_ {2}}f\left(g_{1},\ldots,g_{n}\right)\right)\] \[=\operatorname{res}\left(\frac{h}{l_{1}l_{2}}f\left(g_{1},\ldots,g_{n}\right)\right).\]
The proposition follows on putting these together.
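The residue manipulation at the end of this proof can also be tested numerically. The sketch below is an informal check of ours over a prime field \(\mathbf{F}_{p}\) and is not part of the motivic argument: polynomials are handled as coefficient lists, \(\operatorname{res}(P/l)\) is computed as the coefficient of \(t^{\deg l-1}\) in \(P\bmod l\) for monic \(l\), and we verify on random data that \(\operatorname{res}\bigl(\tfrac{h_{1,1}l_{2}+h_{1,2}l_{1}}{l_{1}l_{2}}f(\mathbf{g})\bigr)=\operatorname{res}\bigl(\tfrac{h_{1,1}}{l_{1}}f(\mathbf{g}\bmod l_{1})\bigr)+\operatorname{res}\bigl(\tfrac{h_{1,2}}{l_{2}}f(\mathbf{g}\bmod l_{2})\bigr)\). The prime, the degrees and the sample polynomial \(f\) are arbitrary choices of ours; coprimality of \(l_{1},l_{2}\) is needed for the maps above to be isomorphisms but not for this particular identity, so it is not enforced here.

```python
# Informal numerical check over F_p of the residue compatibility used in the proof
# of Proposition 6.7 (not part of the motivic argument; p, f and the degrees are
# arbitrary choices).  Polynomials are coefficient lists, lowest degree first.
import random

p = 7
random.seed(0)

def trim(a):
    while a and a[-1] % p == 0:
        a = a[:-1]
    return a

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return trim(out)

def polyadd(a, b):
    out = [0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] = (out[i] + ai) % p
    for i, bi in enumerate(b):
        out[i] = (out[i] + bi) % p
    return trim(out)

def polymod(a, l):
    """Remainder of a modulo a monic polynomial l."""
    a = trim(list(a))
    while len(a) >= len(l):
        c, shift = a[-1], len(a) - len(l)
        for i, li in enumerate(l):
            a[shift + i] = (a[shift + i] - c * li) % p
        a = trim(a)
    return a

def res(P, l):
    """Coefficient of t^{-1} in P / l for monic l: the coefficient of t^{deg l - 1} in P mod l."""
    r = polymod(P, l)
    return r[len(l) - 2] if len(r) == len(l) - 1 else 0

def f_of(g1, g2):
    """Sample polynomial f(x1, x2) = x1^3 + x1*x2^2 + x2^3 evaluated at g1, g2."""
    terms = [polymul(polymul(g1, g1), g1),
             polymul(g1, polymul(g2, g2)),
             polymul(polymul(g2, g2), g2)]
    out = [0] * max((len(t) for t in terms), default=0)
    for tp in terms:
        for k, c in enumerate(tp):
            out[k] = (out[k] + c) % p
    return trim(out)

def rand_poly(deg_lt):
    return [random.randrange(p) for _ in range(deg_lt)]

m1, m2 = 2, 3
for _ in range(200):
    l1 = rand_poly(m1) + [1]                           # monic of degree m1
    l2 = rand_poly(m2) + [1]                           # monic of degree m2
    h11, h12 = rand_poly(m1), rand_poly(m2)
    g1, g2 = rand_poly(m1 + m2), rand_poly(m1 + m2)
    F = f_of(g1, g2)
    F1 = f_of(polymod(g1, l1), polymod(g2, l1))        # f(g mod l1)
    F2 = f_of(polymod(g1, l2), polymod(g2, l2))        # f(g mod l2)
    lhs = res(polymul(polyadd(polymul(h11, l2), polymul(h12, l1)), F), polymul(l1, l2))
    rhs = (res(polymul(h11, F1), l1) + res(polymul(h12, F2), l2)) % p
    assert lhs == rhs, "residue compatibility fails"
print("Residue identity from the proof of Proposition 6.7 verified on random data over F_7")
```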
We can now write the series \(\sum_{m\geq 0}S_{m}(f)T^{m}\) as a motivic Euler product. For this, for every \(m\geq 0\), we need to write \(S_{m}(f)\) as the \(m\)-th coefficient of a motivic Euler product; that is, as the sum of some configuration spaces. For \(i\geq 1\), define
\[U_{i}(f)=\left[\operatorname{Poly}_{<i}^{n}\times C_{i},\operatorname{res} \left(\frac{h}{(t-x)^{i}}f(g_{1},\ldots,g_{n})\right)\right], \tag{6.15}\]
where \(C_{i}=\{(h,x)\in\operatorname{Poly}_{<i}\times\mathbf{A}^{1}:h(x)\neq 0\}.\) We view this as an element of \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{1}}\) via the second projection \(C_{i}\to\mathbf{A}^{1}\). Note that if \(S_{i}(f)\) is viewed as an element of \(\mathscr{E}xp\mathscr{M}_{\operatorname{MPoly}_{i}}\) via the projection morphism \(B_{i}\to\operatorname{MPoly}_{i}\), given by \((h_{1},h_{2})\mapsto h_{2}\), then \(U_{i}(f)\) is the restriction of \(S_{i}(f)\) to the locus \(\Delta_{i}\simeq\mathbf{A}^{1}\) of polynomials of the form \((t-x)^{i}\).
Recalling the definition of motivic Euler products from Section 2.5, we have the following result.
**Proposition 6.8**.: _We have the motivic Euler product decomposition_
\[\mathfrak{S}(f)=\prod_{x\in\mathbf{A}^{1}}\left(1+\sum_{i\geq 1}U_{i}(f)_{x}T^{i }\right)\bigg{|}_{T=\mathbf{L}^{-n}}. \tag{6.16}\]
Proof.: Recalling the definition (6.11) of \(\mathfrak{S}(f)\), it suffices to prove that
\[1+\sum_{m\geq 1}S_{m}(f)T^{m}=\prod_{x\in\mathbf{A}^{1}}\left(1+\sum_{i\geq 1}U_{i}( f)_{x}T^{i}\right).\]
The element \(S_{m}(f)\) comes from a variety with exponential over \(\mathrm{MPoly}_{m}\). On the other hand, there is an identification \(\mathrm{MPoly}_{m}\simeq\mathrm{Sym}^{m}\mathbf{A}^{1}\), giving rise to a stratification of \(\mathrm{MPoly}_{m}\) into pieces \(\mathrm{Conf}^{\omega}(\mathbf{A}^{1})\) indexed by partitions \(\omega=(m_{i})_{i}\) of \(m\). We are going to exhibit an isomorphism
\[S_{m}(f)_{|\mathrm{Conf}^{\omega}(\mathbf{A}^{1})}\to\mathrm{Conf}^{\omega}(U_ {i}(f))_{i\geq 1}.\]
We define the variety
\[V_{\omega}=\{(l_{i,1},\ldots,l_{i,m_{i}})_{i\geq 1}:l_{i,j}\text{ relatively prime, monic, }\deg l_{i,j}=i\}=\left(\prod_{i\geq 1}( \mathrm{MPoly}_{i})^{m_{i}}\right)_{*}\]
where the subscript \(()_{*}\) indicates that we only take tuples of relatively prime polynomials. We also define \(W_{\omega}\subset V_{\omega}\) by
\[W_{\omega}=\{(l_{i,j})_{i,j}\in V_{\omega}:l_{i,j}\text{ is of the form }(t-x_{i,j})^{i}\}=\left(\prod_{i\geq 1}\Delta_{i}^{m_{i}}\right)_{*}.\]
Finally, we consider the morphisms
\[\pi_{i,j}:V_{\omega}\to\mathrm{MPoly}_{i},\quad(l_{p,q})_{p,q}\mapsto l_{i,j},\]
which restrict to \(\pi_{i,j}:W_{\omega}\to\Delta_{i}\), as well as
\[\pi_{\omega}:V_{\omega}\to\mathrm{MPoly}_{m},\quad(l_{p,q})_{p,q}\mapsto\prod_ {i,j}l_{i,j}.\]
By iterating the factorisation property in Proposition 6.7, we obtain an isomorphism
\[\pi_{\omega}^{*}S_{m}(f)\simeq\prod_{i\geq 1}\prod_{1\leq j\leq m_{i}}\pi_{i,j}^{ *}S_{i}(f),\]
which is compatible with any permutation of the \(l_{i,j}\) with fixed \(i\). Restriction to \(W_{\omega}\) now gives
\[\pi_{\omega}^{*}S_{m}(f)_{|W_{\omega}}\simeq\prod_{i\geq 1}\prod_{1\leq j\leq m_{i}}\pi_{i,j}^{*}U_{i}(f).\]
There is a natural permutation action of \(\mathfrak{S}_{\omega}=\prod_{i}\mathfrak{S}_{m_{i}}\) on \(V_{\omega}\), which restricts to \(W_{\omega}\), and such that, via the isomorphisms \(\Delta_{i}\simeq\mathbf{A}^{1}\), the quotient of \(W_{\omega}\) by this action is naturally identified with \(\mathrm{Conf}^{\omega}(\mathbf{A}^{1})\). Taking the quotient by this permutation action, we get the result.
#### 6.3.4. Interpretation as local densities
**Lemma 6.9**.: _Let \(x\in\mathbf{A}^{1}\). Then, for every \(N\geq 1\), we have_
\[1+\sum_{i=1}^{N}U_{i}(f)_{x}\mathbf{L}^{-in}=\mathbf{L}^{-N(n-1)}[\Lambda_{N}( f,x)],\]
_where \(\Lambda_{N}(f,x)\) is given by (1.2)._
Proof.: Recalling (6.15), we begin by observing the relation
\[U_{i}(f)_{x}=\mathbf{L}^{i-1}(\mathbf{L}-1)[\Lambda_{i}(f,x)]-\mathbf{L}^{i-1 }[\Lambda_{i}^{\prime}(f,x)],\]
in \(\mathscr{E}xp\mathscr{M}_{\mathrm{Poly}_{<i}^{n}}\), where
\[\Lambda_{i}^{\prime}(f,x)=\{\mathbf{g}\in\mathrm{Poly}_{<i}^{n}:f(\mathbf{g}) \equiv 0\text{ mod }(t-x)^{i-1}\text{ but }f(\mathbf{g})\not\equiv 0\text{ mod }(t-x)^{i}\}.\]
Next, on observing that \([\Lambda^{\prime}_{i}(f,x)]=\mathbf{L}^{n}[\Lambda_{i-1}(f,x)]-[\Lambda_{i}(f,x)]\), the left-hand side in the lemma leads to a telescopic sum, and the only term remaining is the one on the right-hand side.
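Lemma 6.9 has an evident point-counting analogue over a finite field \(\mathbf{F}_{q}\), obtained by replacing \(\mathbf{L}\) with \(q\), the classes \(U_{i}(f)_{x}\) with genuine exponential sums, and \([\Lambda_{N}(f,x)]\) with the corresponding count. The sketch below is an informal verification of ours of this analogue at \(x=0\), for a small prime \(q\) and a sample polynomial \(f\); none of the specific choices come from the text.

```python
# Informal verification of the point-count analogue of Lemma 6.9 over F_q at x = 0
# (not part of the motivic argument; q, n, N and f are arbitrary small choices).
import cmath
from itertools import product

q, n, N = 3, 2, 3
psi = lambda a: cmath.exp(2j * cmath.pi * (a % q) / q)       # additive character of F_q

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def f_of(g1, g2):
    """Sample f(x1, x2) = x1^3 + x1*x2^2 + x2^3 evaluated at polynomials g1, g2 (n = 2)."""
    terms = [polymul(polymul(g1, g1), g1),
             polymul(g1, polymul(g2, g2)),
             polymul(polymul(g2, g2), g2)]
    out = [0] * max(len(t) for t in terms)
    for tp in terms:
        for k, c in enumerate(tp):
            out[k] = (out[k] + c) % q
    return out

def U(i):
    """Exponential-sum analogue of U_i(f)_x at x = 0: sum over g in Poly_{<i}^n and h with h(0) != 0."""
    total = 0.0
    for g in product(product(range(q), repeat=i), repeat=n):
        F = f_of(*map(list, g))
        for h in product(range(q), repeat=i):
            if h[0] == 0:                     # h(0) != 0 is required in the definition of C_i
                continue
            r = sum(h[a] * F[i - 1 - a] for a in range(i)) % q   # res(h f(g) / t^i)
            total += psi(r).real              # the full sum is real, so real parts suffice
    return total

lam_N = sum(1 for g in product(product(range(q), repeat=N), repeat=n)
            if all(c == 0 for c in f_of(*map(list, g))[:N]))     # #Lambda_N(f, 0)
lhs = 1 + sum(U(i) * q ** (-i * n) for i in range(1, N + 1))
rhs = q ** (-N * (n - 1)) * lam_N
assert abs(lhs - rhs) < 1e-9, (lhs, rhs)
print("Lemma 6.9 analogue over F_%d verified: both sides equal %.6f" % (q, lhs))
```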
_Remark 6.10_.: Recall the motivic Euler product introduced in Proposition 6.8. Consider the companion motivic Euler product
\[\prod_{x\in\mathbf{A}^{1}}\left(1+\sum_{i\geq 1}U_{i}(f)_{x}\mathbf{L}^{-in}ST^{ i}\right), \tag{6.17}\]
with an extra variable \(S\). Then, by definition, the motivic Euler product \(\mathfrak{S}(f)\) is equal to the product (6.17) evaluated at \(S=1\) and then at \(T=1\). On the other hand, the exponential sum estimates from Section 5.3 apply in the same way as in the proof of (6.12), and allow us to deduce the bound
\[w(U_{i}(f)_{x}\mathbf{L}^{-in})\leq\frac{(2^{d-1}(d-1)-n)i}{2^{d-2}(d-1)}.\]
On assuming that \(n>2^{d-1}(d-1)\), this shows that we may evaluate the product (6.17) at \(T=1\) and then Lemma 6.9 shows that we obtain the motivic Euler product
\[\prod_{x\in\mathbf{A}^{1}}\left(1+\left(\lim_{N\to\infty}\mathbf{L}^{-N(n-1)} [\Lambda_{N}(f,x)]-1\right)S\right).\]
Hence, on evaluating at \(S=1\), we get that
\[\mathfrak{S}(f)=\prod_{x\in\mathbf{A}^{1}}\left(1+\left(\lim_{N\to\infty} \mathbf{L}^{-N(n-1)}[\Lambda_{N}(f,x)]-1\right)S\right)_{|S=1}.\]
Bearing in mind Notation 4.10, the following result is a consequence of Remark 6.10.
**Corollary 6.11**.: _We have_
\[\mathfrak{S}(f)=\prod_{x\in\mathbf{A}^{1}}\lim_{N\to\infty}\mathbf{L}^{-N(n-1 )}[\Lambda_{N}(f,x)],\]
_where \(\Lambda_{N}(f,x)\) is given by (1.2)._
### Singular integral
Returning to Proposition 6.3, we proceed by analysing the term
\[\int_{\theta}U(\theta),\]
where \(U(\theta)\) is given by (6.8). We want to rewrite it in a way that draws an obvious comparison with the singular integral in the classical circle method.
#### 6.4.1. Rewriting the sum (over polynomials) as an integral (over power series in \(t^{-1}\))
For this, we show that the function \((g_{1},\ldots,g_{n})\mapsto\operatorname{res}(\theta f(g_{1},\ldots,g_{n}))\) is invariant modulo \(\mathbf{T}^{n}\); i.e., for every \(\mathbf{x}\in(t^{e}k[[t^{-1}]])^{n}\) and every \(\sigma\in\mathbf{T}^{n}\), we claim that
\[\operatorname{ord}(\theta(f(\mathbf{x}+\sigma)-f(\mathbf{x})))<-1.\]
The polynomial \(f(\mathbf{x}+\sigma)-f(\mathbf{x})\) is a sum of monomials of global degree \(d\), with the degree in \(\mathbf{x}\) being at most \(d-1\). Thus, bounding the order of the components of \(\mathbf{x}\) by \(e\) and the order of the components of \(\sigma\) by \(-1\), we get
\[\operatorname{ord}((f(\mathbf{x}+\sigma)-f(\mathbf{x})))\leq(d-1)e-1,\]
whence
\[\operatorname{ord}(\theta(f(\mathbf{x}+\sigma)-f(\mathbf{x})))\leq-de-2+ \gamma+(d-1)e-1=-e-3+\gamma<-1,\]
since \(\gamma\leq e\). Thus, it makes sense to consider the element
\[[\mathbf{A}^{n(e,-1)}\times\mathbf{A}^{(-de-2+\gamma,-de-2)},(\mathbf{x},\theta) \mapsto\operatorname{res}(\theta f(\mathbf{x}))]\in\mathscr{E}xp\mathscr{M}_{ \mathbf{A}^{n(e,-1)}\times\mathbf{A}^{(-de-2+\gamma,-de-2)}},\]
and moreover, since there is an obvious isomorphism \(\mathbf{A}^{n(e,-1)}\simeq\operatorname{Poly}_{\leq e}^{n}\), we get the equality
\[U(\theta)=\int_{\mathbf{x}}[\mathbf{A}^{n(e,-1)}\times\mathbf{A}^{(-de-2+ \gamma,-de-2)},\operatorname{res}(\theta f(\mathbf{x}))]\]
in \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{(-de-2+\gamma,-de-2)}}\).
#### 6.4.2. Change of variables to get an integral over \(\mathbf{T}^{n}\)
Using Remark 2.6, the change of variables \(\mathbf{x}=t^{e+1}\mathbf{y}\), together with the homogeneity of the polynomial \(f\), gives the equality
\[U(\theta)=\mathbf{L}^{n(e+1)}\int_{\mathbf{y}}[\mathbf{A}^{n(-1,-e-2)}\times \mathbf{A}^{(-de-2+\gamma,-de-2)},\operatorname{res}(\theta t^{(e+1)d}f( \mathbf{y}))],\]
in \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{(-de-2+\gamma,-de-2)}}\).
#### 6.4.3. Integrating over the space of \(\theta\), and a change of variable
Now, using the change of variable \(\beta=t^{(e+1)d}\theta\) and Remark 2.6, we get
\[\int_{\theta}U(\theta)=\mathbf{L}^{n(e+1)}\mathbf{L}^{-d(e+1)}\int_{\mathbf{y }}\mathcal{J}(\mathbf{y}), \tag{6.18}\]
where
\[\mathcal{J}(\mathbf{y})=\int_{\beta}[\mathbf{A}^{n(-1,-e-2)}\times\mathbf{A}^{ (d-2+\gamma,d-2)},\operatorname{res}(\beta f(\mathbf{y}))].\]
We may now establish the following result.
**Proposition 6.12**.: _We have_
\[\int_{\theta}U(\theta)=\mathbf{L}^{-de-1+\gamma}[V_{d+\gamma-1}],\]
_where_
\[V_{d+\gamma-1}=\{\mathbf{y}\in\mathbf{A}^{n(-1,-e-2)}:\operatorname{ord}f( \mathbf{y})<-d-\gamma+1\}.\]
Proof.: By definition, we have
\[\mathcal{J}(\mathbf{y})=\mathbf{L}^{d-1}[\mathbf{A}^{n(-1,-e-2)}\times \mathbf{A}^{(d-2+\gamma,d-2)},\operatorname{res}(\beta f(\mathbf{y}))]\]
in \(\mathscr{E}xp\mathscr{M}_{\mathbf{A}^{n(-1,-e-2)}}\). We write
\[\beta=b_{d-2+\gamma}t^{d-2+\gamma}+\cdots+b_{d-1}t^{d-1}+t^{d-2}k[[t^{-1}]]\]
and
\[f(\mathbf{y})=c_{d}(\mathbf{y})t^{-d}+c_{d+1}(\mathbf{y})t^{-d-1}+\cdots,\]
where we have used the fact that \(f\) is homogeneous of degree \(d\) to deduce that \(f(\mathbf{y})\) only has terms of degrees \(\leq-d\). Then
\[\operatorname{res}(\beta f(\mathbf{y}))=b_{d-1}c_{d}(\mathbf{y})+b_{d}c_{d+1}( \mathbf{y})+\cdots+b_{d+\gamma-2}c_{d+\gamma-1}(\mathbf{y}).\]
Thus, by Lemma 2.3, for every \(\mathbf{y}\in\mathbf{A}^{n(-1,-e-2)}\), the value of \(\mathcal{J}(\mathbf{y})\) at \(\mathbf{y}\) is \(\mathbf{L}^{d-1+\gamma}\) if and only if
\[c_{d}(\mathbf{y})=c_{d+1}(\mathbf{y})=\cdots=c_{d+\gamma-1}(\mathbf{y})=0,\]
that is, if and only if \(\operatorname{ord}f(\mathbf{y})<-d-\gamma+1\), and \(0\) otherwise. Using Lemma 2.1, it now follows from (6.18) that
\[\int_{\theta}U(\theta) =\mathbf{L}^{n(e+1)}\mathbf{L}^{-de-1}\mathbf{L}^{-d+1}\int_{ \mathbf{y}}[V_{d+\gamma-1}]\mathbf{L}^{d-1+\gamma}\] \[=\mathbf{L}^{n(e+1)}\mathbf{L}^{-de-1}\mathbf{L}^{-d+1}\mathbf{L }^{-n(e+1)}[V_{d+\gamma-1}]\mathbf{L}^{d-1+\gamma},\]
with \(V_{d+\gamma-1}\) as in the statement. The proposition easily follows.
### Connection to jet spaces
For any integer \(N\), recall that the \(N\)th _jet space_ of \(X\) is given by
\[\mathcal{L}_{N}(X)=\{\mathbf{x}=\mathbf{x}_{0}+\mathbf{x}_{1}t+\cdots+\mathbf{x }_{N}t^{N}:f(\mathbf{x})\equiv 0\bmod t^{N+1}\}, \tag{6.19}\]
where \(\mathbf{x}_{0},\ldots,\mathbf{x}_{N}\) run over \(\mathbf{A}^{n}\). We take \(\mathcal{L}_{N}(X)=\varnothing\) when \(N<0\), and note that \(\mathcal{L}_{0}(X)=X\).
Jet spaces of smooth varieties are very well understood. In our setting, the variety \(X\) has one singular point at \(0\), so it is worth giving a few details on the structure of jet spaces for hypersurfaces defined by non-singular homogeneous polynomials \(f\in\mathbf{C}[x_{1},\ldots,x_{n}]\) of degree \(d\).
**Lemma 6.13**.: _For any \(N\geq 1\), we have_
\[\frac{[\mathcal{L}_{N}(X)]}{\mathbf{L}^{(N+1)(n-1)}}=\frac{[\mathcal{L}_{N-1 }(X)]}{\mathbf{L}^{N(n-1)}}+\mathbf{L}^{N-n-n[N/d]}\cdot\begin{cases}\mathbf{L }-1&\text{ if }d\nmid N\text{,}\\ \mathbf{L}\left([X]-\mathbf{L}^{n-1}\right)&\text{ if }d\mid N\text{.}\end{cases}\]
_in \(\mathscr{E}xp\mathscr{M}_{\mathbf{C}}\)._
Proof.: We shall write \(Z_{N}\) for the locus of points in \(\mathcal{L}_{N}(X)\) with \(\mathbf{x}_{0}=\mathbf{0}\) and we let \(U_{N}=\mathcal{L}_{N}(X)\setminus Z_{N}\). We claim that
\[Z_{N}\simeq\begin{cases}\mathbf{A}^{Nn}&\text{ if }0\leq N\leq d-1,\\ \mathcal{L}_{N-d}(X)\times\mathbf{A}^{(d-1)n}&\text{ if }N\geq d.\end{cases}\]
To see this, we note that \(Z_{N}\) is the set of \(\mathbf{x}_{1}t+\cdots+\mathbf{x}_{N}t^{N}\) such that \(f(\mathbf{x}_{1}t+\cdots+\mathbf{x}_{N}t^{N})\equiv 0\bmod t^{N+1}\). When \(N\leq d-1\) it is clear that \(Z_{N}\simeq\mathbf{A}^{Nn}\). Suppose that \(N\geq d\). On relabelling the variables, \(Z_{N}\) is equal to the space of \(\mathbf{x}=\mathbf{x}_{0}+\mathbf{x}_{1}t+\cdots+\mathbf{x}_{N-1}t^{N-1}\) such that \(f(\mathbf{x})\equiv 0\bmod t^{N+1-d}\), since \(f\) is homogeneous of degree \(d\). The claim is now obvious.
Now let \(N\geq 1\) and consider the truncation morphism \(\pi:U_{N}\to U_{N-1}.\) For each \(\mathbf{x}=\mathbf{x}_{0}+\mathbf{x}_{1}t+\cdots+\mathbf{x}_{N-1}t^{N-1}\in U _{N-1}\), the fibre \(\pi^{-1}(\mathbf{x})\) is the set of \(\mathbf{x}_{N}\in\mathbf{A}^{n}\) such that \(f(\mathbf{x}+\mathbf{x}_{N}t^{N})\equiv 0\bmod t^{N+1}\). Note that \(f(\mathbf{x})\equiv 0\bmod t^{N}\) and
\[f(\mathbf{x}+\mathbf{x}_{N}t^{N})=f(\mathbf{x})+t^{N}\mathbf{x}_{N}\cdot \nabla f(\mathbf{x}_{0})+O(t^{N+1}),\]
by Taylor's theorem. Since \(f\) is non-singular, \(\nabla f(\mathbf{x}_{0})\) does not vanish for \(\mathbf{x}_{0}\neq\mathbf{0}\). Hence \(\pi^{-1}(\mathbf{x})\simeq\mathbf{A}^{n-1}\). It follows that
\[[U_{N}]=[U_{N-1}]\mathbf{L}^{n-1}, \tag{6.20}\]
for any \(N\geq 1\), since \(\pi\) is a Zariski locally trivial fibration with fibre \(\mathbf{A}^{n-1}\).
To prove the lemma we write
\[\Psi_{N}=\frac{[\mathcal{L}_{N}(X)]}{\mathbf{L}^{(N+1)(n-1)}}-\frac{[\mathcal{ L}_{N-1}(X)]}{\mathbf{L}^{N(n-1)}},\]
for any \(N\geq 1\). To begin with, we assume that \(N>d\). Then
\[[\mathcal{L}_{N}(X)]=[U_{N}]+[Z_{N}]=[U_{N}]+[\mathcal{L}_{N-d}(X)]\mathbf{L}^ {(d-1)n},\]
and the same expression holds with \(N-1\) in place of \(N\). Thus
\[\Psi_{N}=\mathbf{L}^{-(N+1)(n-1)}\left([U_{N}]-[U_{N-1}]\mathbf{L}^{n-1} \right)+\mathbf{L}^{-(n-d)}\Psi_{N-d}=\mathbf{L}^{-(n-d)}\Psi_{N-d},\]
by (6.20). We next calculate \(\Psi_{N}\) when \(N\leq d\), for which we observe that
\[[Z_{N}]=\begin{cases}\mathbf{L}^{Nn}&\text{ if }0\leq N<d,\\ [X]\mathbf{L}^{(d-1)n}&\text{ if }N=d,\end{cases}\]
since \(\mathcal{L}_{0}(X)=X\). It easily follows from (6.20) that
\[\Psi_{d} =\mathbf{L}^{-(d+1)(n-1)}\left([Z_{d}]-[Z_{d-1}]\mathbf{L}^{n-1}\right)\] \[=\mathbf{L}^{-(d+1)(n-1)}\left([X]\mathbf{L}^{(d-1)n}-\mathbf{L}^ {dn-1}\right)\] \[=\mathbf{L}^{-(2n-d-1)}\left([X]-\mathbf{L}^{n-1}\right).\]
Similarly,
\[\Psi_{N} =\mathbf{L}^{-(N+1)(n-1)}\left([Z_{N}]-[Z_{N-1}]\mathbf{L}^{n-1}\right)\] \[=\mathbf{L}^{-(N+1)(n-1)}\left(\mathbf{L}^{Nn}-\mathbf{L}^{Nn-1}\right)\] \[=\mathbf{L}^{-(n-N)}\left(\mathbf{L}-1\right),\]
if \(1\leq N\leq d-1\).
We are now ready to complete the proof of the lemma. Any \(N\geq 1\) can be written \(N=qd+r\), where \(q=\lfloor N/d\rfloor\) and \(0\leq r<d\). If \(r>0\) then we may put the above calculations together to deduce that
\[\Psi_{N}=\mathbf{L}^{-(n-d)q}\Psi_{r}=\mathbf{L}^{N-n(q+1)}(\mathbf{L}-1)= \mathbf{L}^{N-n-n\lfloor N/d\rfloor}(\mathbf{L}-1)\]
On the other hand, if \(r=0\) then \(q\geq 1\) and
\[\Psi_{N}=\mathbf{L}^{-(n-d)(q-1)}\Psi_{d}=\mathbf{L}^{N-n+1-nN/d}\left([X]- \mathbf{L}^{n-1}\right).\]
This completes the proof of the lemma.
_Remark 6.14_.: Let \(N\geq 1\). If \(d\nmid N\) then
\[\dim\left(\mathbf{L}^{N-n-n\lfloor N/d\rfloor}\cdot(\mathbf{L}-1)\right)\leq( N+1)(1-n/d),\]
since \(\lfloor N/d\rfloor\geq N/d-1+1/d\). Moreover, if \(d\mid N\) then
\[\dim\left(\mathbf{L}^{N-n-n\lfloor N/d\rfloor}\cdot\mathbf{L}\left([X]- \mathbf{L}^{n-1}\right)\right)\leq N(1-n/d).\]
**Corollary 6.15**.: _Assume that \(n>d\). Then the \(N\)-th jet space \(\mathcal{L}_{N}(X)\) is irreducible and has dimension \((n-1)(N+1)\)._
Proof.: Since we are working over a field of characteristic \(0\), and since \(X\) is irreducible, the irreducibility of \(\mathcal{L}_{N}(X)\) follows from a theorem of Kolchin [13, Proposition 10 in Chapter IV]. Turning to the dimension, the lower bound \(\dim\mathcal{L}_{N}(X)\geq(n-1)(N+1)\) is trivial. It remains to show that \(\dim\mathcal{L}_{N}(X)\leq(n-1)(N+1)\), which we shall do by induction on \(N\). When \(N=0\) we have \(\mathcal{L}_{0}(X)=X\) and the claim is obvious. If \(N\geq 1\) it follows from Lemma 6.13, the induction hypothesis and Remark 6.14 that
\[\dim\left(\frac{[\mathcal{L}_{N}(X)]}{\mathbf{L}^{(N+1)(n-1)}}-\frac{[\mathcal{L}_{N-1}(X)]}{\mathbf{L}^{N(n-1)}}\right)\leq N(1-n/d)<0,\]
since \(n>d\). Combined with the induction hypothesis, this confirms that \(\dim\mathcal{L}_{N}(X)\leq(n-1)(N+1)\).
We now proceed by relating the jet spaces to the spaces that we met in Corollary 6.11 and Proposition 6.12.
**Lemma 6.16**.: _Let \(x\in\mathbf{A}^{1}\) and let \(N\geq 1\). Then we have the relation_
\[[\Lambda_{N}(f,x)]=[\Lambda_{N}(f,\infty)]=[\mathcal{L}_{N-1}(X)]\]
_in the Grothendieck ring of varieties, where \(\Lambda_{N}(f,x)\) is given by (1.2) and \(\Lambda_{N}(f,\infty)\) is given by (1.3)._
Proof.: The first equality is clear and so it suffices to prove that \(\Lambda_{N}(f,x)\simeq\mathcal{L}_{N-1}(X)\), for any \(x\in\mathbf{A}^{1}\). According to (1.2), \(\Lambda_{N}(f,x)\) is the space of \(\mathbf{g}=\mathbf{g}_{0}+\mathbf{g}_{1}t+\cdots+\mathbf{g}_{N-1}t^{N-1}\) such that \(f(\mathbf{g})\equiv 0\bmod(t-x)^{N}\), for \(\mathbf{g}_{0},\ldots,\mathbf{g}_{N-1}\in\mathbf{A}^{n}\). This is isomorphic to the space of \(\mathbf{g}^{\prime}=\mathbf{g}_{0}^{\prime}+\mathbf{g}_{1}^{\prime}(t-x)+ \cdots+\mathbf{g}_{N-1}^{\prime}(t-x)^{N-1}\) such that \(f(\mathbf{g}^{\prime})\equiv 0\bmod(t-x)^{N}\), via the map \(\mathbf{g}\mapsto\mathbf{g}^{\prime}\) given by
\[\mathbf{g}_{0}^{\prime} =\mathbf{g}_{0}+x\mathbf{g}_{1}+\cdots+x^{N-1}\mathbf{g}_{N-1},\] \[\mathbf{g}_{1}^{\prime} =\mathbf{g}_{1}+2x\mathbf{g}_{2}+3x^{2}\mathbf{g}_{3}+\cdots+(N- 1)x^{N-1}\mathbf{g}_{N-1},\] \[\mathbf{g}_{2}^{\prime} =\mathbf{g}_{2}+\binom{3}{2}x\mathbf{g}_{3}+\binom{4}{2}x^{2} \mathbf{g}_{4}+\cdots+\binom{N-1}{2}x^{N-1}\mathbf{g}_{N-1},\] \[\vdots\] \[\mathbf{g}_{N-1}^{\prime} =\mathbf{g}_{N-1}.\]
But this is just the jet space \(\mathcal{L}_{N-1}(X)\), which thereby completes the proof.
Turning to the space \(V_{d+\gamma-1}\) that appears in Proposition 6.12, we shall prove the following result, relating it to the variety \(\Lambda_{N}(f,\infty)\) in (1.3) for a suitable choice of \(N\).
**Lemma 6.17**.: _We have the relation_
\[[V_{d+\gamma-1}]=[\Lambda_{\gamma}(f,\infty)]\mathbf{L}^{n(e+1-\gamma)}\]
_in the Grothendieck ring of varieties._
Proof.: For any \(\mathbf{y}=\mathbf{y}_{1}t^{-1}+\cdots+\mathbf{y}_{e+1}t^{-e-1}\in V_{d+\gamma-1}\), the condition \(\operatorname{ord}f(\mathbf{y})<-d-\gamma+1\) says that the coefficients of the terms in \(f(\mathbf{y})\) of degrees \(-d,\ldots,-d-\gamma+1\) should all be zero. These conditions do not involve the coordinates \(\mathbf{y}_{\gamma+1},\ldots,\mathbf{y}_{e+1}\), because \(f\) is homogeneous of degree \(d\). Thus, there is an isomorphism
\[V_{d+\gamma-1}\simeq\{\mathbf{y}\in\mathbf{A}^{n(-1,-\gamma)}:\operatorname{ ord}f(\mathbf{y})<-d-\gamma+1\}\times\mathbf{A}^{n(e+1-\gamma)},\]
sending \(\mathbf{y}\) to \((\mathbf{y}_{1}t^{-1}+\cdots+\mathbf{y}_{\gamma}t^{-\gamma},(\mathbf{y}_{ \gamma+1},\ldots,\mathbf{y}_{e+1})).\) Writing \(\mathbf{y}_{1}t^{-1}+\cdots+\mathbf{y}_{\gamma}t^{-\gamma}=t^{-1}\mathbf{g}\), where \(\mathbf{g}=\mathbf{y}_{1}+\mathbf{y}_{2}t^{-1}+\cdots+\mathbf{y}_{\gamma}t^{ -(\gamma-1)}\), we see that \(\operatorname{ord}f(\mathbf{y})<-d-\gamma+1\) if and only if \(f(\mathbf{g})\in t^{-\gamma}\mathbf{C}[t^{-1}]\). Hence \(V_{d+\gamma-1}\simeq\Lambda_{\gamma}(f,\infty)\times\mathbf{A}^{n(e+1-\gamma)}\), in the notation of (1.3).
### Major arcs: conclusion
We are now ready to conclude the proof of Proposition 3.7. We return to Proposition 6.3, where \(\gamma\geq 1\) is given by (6.3). It follows from Proposition 6.12 and Lemma 6.17 that
\[N_{\mathrm{major}}=\mathbf{L}^{\mu(e)}\left(\sum_{0\leq m\leq\lfloor\frac{e+1 }{2}\rfloor}S_{m}(f)\mathbf{L}^{-nm}\right)\mathbf{L}^{-(n-1)\gamma}[\Lambda_ {\gamma}(f,\infty)],\]
where \(\mu(e)\) is given by (1.1).
Next, we observe that \(w(\mathbf{L}^{-(n-1)\gamma}[\Lambda_{\gamma}(f,\infty)])\leq 0\), as follows from Lemma 6.16 and Corollary 6.15. Appealing to Lemma 6.5, we conclude that
\[N_{\mathrm{major}}=\mathbf{L}^{\mu(e)}\left(\mathfrak{S}(f)\cdot\mathbf{L}^{- (n-1)\gamma}[\Lambda_{\gamma}(f,\infty)]+R_{e}^{\prime}\right),\]
where \(\mathfrak{S}(f)\) is the motivic Euler product described in Corollary 6.11 and
\[w\left(R_{e}^{\prime}\right)\leq 4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2} \right\rfloor\right),\]
with \(\tilde{\nu}\) given by (3.2). Inserting Lemma 6.16 into Lemma 6.13, it therefore follows from Remark 6.14 that
\[\mathbf{L}^{-(n-1)\gamma}[\Lambda_{\gamma}(f,\infty)]=\lim_{N\to\infty} \mathbf{L}^{-(n-1)N}[\Lambda_{N}(f,\infty)]+E,\]
in \(\widehat{\mathscr{E}xp\mathscr{M}_{\mathbf{C}}}\), where
\[w(E)\leq\begin{cases}-2\gamma(n/d-1)&\text{ if }d\mid\gamma,\\ -2(\gamma+1)(n/d-1)&\text{ if }d\nmid\gamma.\end{cases}\]
We now return to our expression for the major arcs and we recall from Remark 6.6 that \(w(\mathfrak{S}(f))=0\). This allows us to write
\[N_{\text{major}}=\mathbf{L}^{\mu(e)}\left(\mathfrak{S}(f)\cdot\lim_{N\to \infty}\mathbf{L}^{-(n-1)N}[\Lambda_{N}(f,\infty)]+R_{e}^{\prime\prime}\right),\]
where
\[w\left(R_{e}^{\prime\prime}\right)\leq\max\left(-\frac{(e+1+\kappa_{d,e})(n-d) }{d},\ 4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2}\right\rfloor\right)\right) \tag{6.21}\]
and
\[\kappa_{d,e}=\begin{cases}0&\text{ if }d\mid\gamma,\\ 2&\text{ if }d\nmid\gamma.\end{cases}\]
Suppose first that \(d\geq 3\). Then we claim that the second term always exceeds the first term in the maximum (6.21), as claimed in Proposition 3.7. To check the claim, we note that it is equivalent to
\[\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2d-2}\right\rfloor\right)\leq\frac{ (e+1+\kappa_{d,e})(n-d)}{d}+4.\]
On inserting the definition (3.2) of \(\tilde{\nu}\), and collecting together the coefficients of \(n\), we see that this holds if \(A_{d,e}\leq nB_{d,e}\), where \(A_{d,e}=\kappa_{d,e}-(e+1)-4\) and
\[B_{d,e}=\frac{e+1+\kappa_{d,e}}{d}-\frac{1}{2^{d-2}}\left(1+\left\lfloor\frac{ e+1}{2d-2}\right\rfloor\right).\]
Since \(A_{d,e}<0\), the claim will follow if we can prove that \(B_{d,e}\geq 0\) for \(e\geq 1\) and \(d\geq 3\). If \(d\nmid\gamma\) then \(\kappa_{d,e}=2\) and
\[B_{d,e}\geq(e+1)\left(\frac{1}{d}-\frac{1}{2^{d-1}(d-1)}\right)+\frac{2}{d}- \frac{1}{2^{d-2}}\geq\frac{4}{d}-\frac{d}{2^{d-2}(d-1)},\]
on taking \(e\geq 1\). Thus we deduce that \(B_{d,e}\geq 0\) for all \(d\geq 2\). If \(d\mid\gamma\) then \(\kappa_{d,e}=0\) and we must have \(e+1\geq 2d-1\geq 5\), if \(d\geq 3\). But then it follows that
\[B_{d,e}\geq(e+1)\left(\frac{1}{d}-\frac{1}{2^{d-1}(d-1)}\right)-\frac{1}{2^{d- 2}}\geq\frac{5}{d}-\frac{2d+3}{2^{d-1}(d-1)}.\]
This is non-negative for all \(d\geq 3\).
Finally, we suppose that \(d=2\). We may also suppose that \(2\mid\gamma\), since the previous paragraph suffices to handle the case \(2\nmid\gamma\), even when \(d=2\). Thus (6.21) becomes
\[w\left(R_{e}^{\prime\prime}\right) \leq\max\left(-\frac{(e+1)(n-2)}{2},\ 4-\tilde{\nu}\left(1+\left\lfloor\frac{e+1}{2}\right\rfloor\right)\right)\] \[\leq\max\left(-\frac{(e+1)(n-2)}{2},\ -\frac{n(e+2)}{2}+2e+8\right)\]
since \(\kappa_{2,e}=0\), \(\tilde{\nu}=n-4\) and \(1+\left\lfloor\frac{e+1}{2}\right\rfloor\geq e/2+1\). This completes the proof of Proposition 3.7.
## 7. The motivic minor arcs
Recall the expression for \(M_{m,\gamma}\) in Lemma 6.1 and put \(\Delta=\left\lfloor\frac{e+1}{2}\right\rfloor.\) In the previous section we took the full set of major arcs to be \(\mathfrak{M}:=M_{\Delta,\Delta}\). Our aim in this section is therefore to get a bound for
\[N_{\mathrm{minor}}=\mathbf{L}^{-de-1}[\mathrm{Poly}_{\leq e}^{n}\times S,\mathrm{res}(\alpha f(g_{1},\dots,g_{n}))],\]
where \(S=\mathbf{A}^{de+1}-\mathfrak{M}\). Now it follows from Proposition 5.4 that
\[w_{S}([\mathrm{Poly}_{\leq e}^{n}\times S,\mathrm{res}(\alpha f(g_{1},\dots,g_{n}))])\leq\frac{\max_{\alpha\in S}\dim N(\alpha)+N(2^{d-1}-(d-1))}{2^{d-2}}, \tag{7.1}\]
where \(N=(e+1)n\) and
\[N(\alpha)=\left\{(\mathbf{u}^{(1)},\dots,\mathbf{u}^{(d-1)})\in\left( \mathrm{Poly}_{\leq e}^{n}\right)^{d-1}:\begin{array}{l}\mathrm{ord}\{\alpha \Psi_{j}(\mathbf{u}^{(1)},\dots,\mathbf{u}^{(d-1)})\}<-e-1\\ \forall j\in\{1,\dots,n\}\end{array}\right\}.\]
### The minor arc bound
We recall the definition of \(A_{m}^{de+1}\) from Remark 3.4. In particular \(A_{m_{0}}^{de+1}=\mathbf{A}^{de+1},\) where \(m_{0}=\lfloor\frac{de+1}{2}\rfloor\). Bearing this in mind, we begin by proving the following result.
**Lemma 7.1**.: _Let \(\alpha\in A_{m+1}^{de+1}-A_{m}^{de+1}\), with \(m\leq m_{0}-1\). Then_
\[\dim N(\alpha)\leq\left(de+d-e-2-\left\lfloor\frac{m}{d-1}\right\rfloor\right)n.\]
Proof.: We seek to apply Lemma 5.7. Since \(\alpha\in A_{m+1}^{de+1},\) there exist coprime \(h_{1},h_{2}\) with \(\deg h_{1}<\deg h_{2}\leq m+1\) and \(\theta\) such that \(\mathrm{ord}\,\theta\leq-de-2+m+1-\deg h_{2}\). Moreover, since \(\alpha\not\in A_{m}^{de+1},\) we have the following dichotomy: either \(\deg h_{2}=m+1,\) or if \(\deg h_{2}\leq m,\) then we can't have \(\mathrm{ord}\,\theta\leq-de-2+m-\deg h_{2}\). Thus
\[\mathrm{ord}\,\theta=-de-2+m+1-\deg h_{2}\]
if \(\deg h_{2}\leq m\). Using this, we now prove a bound on \(N(\alpha)\) using Lemma 5.7 with \(E=e+1.\)
We place ourselves in the first case, where, using the notation of the lemma, \(\rho=m+1\) and \(\psi\leq-de-2.\) We start by checking condition (1) in the lemma. The first inequality will be satisfied provided that \((d-1)s\geq m-e.\) Using \(\psi\leq-de-2\), we see that the second inequality is also implied by \((d-1)s\geq m-e\). We now check condition (2) in the lemma. In fact, as we no longer have a useful lower bound on \(\psi\) in this setting, we will find a condition for the first inequality to hold, which is equivalent to
\[(d-1)e-m\leq(d-1)s.\]
We now treat the second case, where \(\rho\leq m\) and \(\psi=-de-2+m+1-\rho=-de-1+m-\rho\). We start by checking condition (1) in the lemma. The first inequality is implied by \((d-1)s\geq m-e\). The second inequality is equivalent to \((d-1)s\geq m-e\). We now check condition (2) in the lemma, noting that we have no useful lower bound on \(\rho\) in this setting. But we see that the second inequality holds if and only if
\[(d-1)e-m\leq(d-1)s.\]
In conclusion, we are able to apply Lemma 5.7 provided that \(s\) is an integer chosen to satisfy
\[(d-1)s\geq\max\left\{0,m-e,(d-1)e-m\right\}.\]
Since \(d\geq 2\), one checks that the maximum is \((d-1)e-m\) if \(m\leq m_{0}-1\). Hence, taking \(s=e-\left\lfloor\frac{m}{d-1}\right\rfloor\), we arrive at the upper bound
\[\dim N(\alpha)\leq(d-2)(e+1)n+\left(e-\left\lfloor\frac{m}{d-1}\right\rfloor \right)n=\left(de+d-e-2-\left\lfloor\frac{m}{d-1}\right\rfloor\right)n,\]
as claimed.
The bound in Lemma 7.1 becomes stronger as \(m\) increases. Thus, to optimise our minor arc estimate, we will stratify the minor arcs. We note from Remark 3.4 and Lemma 6.1 that \(A_{\Delta}^{de+1}\subset M_{\Delta,\Delta}\subset\mathfrak{M}\) and \(A_{m_{0}}^{de+1}=\mathbf{A}^{de+1}\), where \(m_{0}=\lfloor\frac{de+1}{2}\rfloor\). Since the spaces \(A_{m}^{de+1}\) form an increasing sequence, we may therefore cut up the complement of the major arcs as
\[\mathbf{A}^{de+1}-\mathfrak{M}=\left(\bigsqcup_{\Delta\leq m\leq m_{0}-1} \left(A_{m+1}^{de+1,\star}-A_{m}^{de+1,\star}\right)\right),\]
where \(A_{m}^{de+1,\star}:=A_{m}^{de+1}-\mathfrak{M}\).
We are now ready to produce our final bound for the minor arc contribution
\[N_{\text{minor}}=\mathbf{L}^{-de-1}\left[\text{Poly}_{\leq e}^{n}\times\left(\mathbf{A}^{de+1}-\mathfrak{M}\right),\text{res}(\alpha f(g_{1},\dots,g_{n}))\right].\]
Thus, using property (1) in Proposition 4.4, we get
\[w(N_{\text{minor}}) \leq-2de-2+w\left(\sum_{m=\Delta}^{m_{0}-1}[\text{Poly}_{\leq e} ^{n}\times(A_{m+1}^{de+1}-A_{m}^{de+1}),\text{res}(\alpha f(g_{1},\dots,g_{n}) )]\right)\] \[\leq-2de-2+\max_{\Delta\leq m\leq m_{0}-1}w([\text{Poly}_{\leq e} ^{n}\times(A_{m+1}^{de+1}-A_{m}^{de+1}),\text{res}(\alpha f(g_{1},\dots,g_{n}) )]).\]
According to Remark 3.5 we have \(\dim(A_{m+1}^{de+1}-A_{m}^{de+1})\leq 2(m+1)\). We now use successively property (2) of Proposition 4.4 and the inequality (7.1), to write
\[w([\text{Poly}_{\leq e}^{n}\times(A_{m+1}^{de+1}-A_{m}^{de+1}), \text{res}(\alpha f(g_{1},\dots,g_{n}))])\] \[\quad\leq w_{A_{m+1}^{de+1}-A_{m}^{de+1}}\left([\text{Poly}_{\leq e }^{n}\times(A_{m+1}^{de+1}-A_{m}^{de+1}),\text{res}(\alpha f(g_{1},\dots,g_{n} ))]\right)+4(m+1)\] \[\quad\leq\frac{\dim_{A_{m+1}^{de+1}-A_{m}^{de+1}}N(\alpha)+(e+1) n(2^{d-1}-(d-1))}{2^{d-2}}+4(m+1),\]
for any \(m\). It now follows from Lemma 7.1 that \(w(N_{\text{minor}})\) is
\[\leq 2-2de+\max_{\Delta\leq m\leq m_{0}-1}\left(\frac{\left(de+d-e-2-\left\lfloor\frac{m}{d-1}\right\rfloor\right)n+(e+1)n(2^{d-1}-(d-1))}{2^{d-2}}+4m\right)\] \[\leq 2-2de-\frac{n}{2^{d-2}}+2n(e+1)+\frac{1}{2^{d-2}}\max_{\Delta\leq m\leq m_{0}-1}\left(2^{d}m-\left\lfloor\frac{m}{d-1}\right\rfloor n\right).\]
We now appeal to Lemma 6.4, to conclude that
\[w(N_{\text{minor}}) \leq 2-2de-\frac{n}{2^{d-2}}+2n(e+1)+4(d-1)+\frac{1}{2^{d-2}} \left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n).\] \[=\underbrace{2n(e+1)-2(de+1)}_{2\times\text{expected dimension}}+4d-\frac{n}{2^{d-2}}+\frac{1}{2^{d-2}}\left\lfloor\frac{e+1}{2d-2}\right\rfloor(2^{d}(d-1)-n).\] \[=2\mu(e)+\frac{1}{2^{d-2}}\left((2^{d}d-n)+\left\lfloor\frac{e+ 1}{2d-2}\right\rfloor(2^{d}(d-1)-n)\right)\]
The statement of Proposition 3.8 now follows.
## 8. The space of morphisms
In this section we prove Theorem 1.3, which deals with the space
\[\text{Mor}_{e}(\mathbf{P}^{1},\tilde{X})=\left\{(g_{1},\dots,g_{n})\in(\text{ Poly}_{\leq e}^{n}-\{0\})/\mathbf{C}^{\times}:\begin{array}{l}\max\deg g_{i}=e\\ \gcd(g_{1},\dots,g_{n})=1\\ f(g_{1},\dots,g_{n})=0\end{array}\right\},\]
where \((\mathrm{Poly}_{\leq e}^{n}-\{0\})/\mathbf{C}^{\times}\) is the space of non-zero \(n\)-tuples of polynomials of degree \(\leq e\), viewed modulo the multiplication by a non-zero scalar. We begin with the following result, which allows us to pass from the classes of the naive moduli spaces \(M_{e}\) to the class of the moduli space \(\mathrm{Mor}_{e}(\mathbf{P}^{1},\tilde{X})\).
**Lemma 8.1**.: _Let \(e\geq 1\). Then_
\[[\mathrm{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]=\frac{[M_{e}]-(\mathbf{L}+1)[M_{e -1}]+\mathbf{L}[M_{e-2}]}{\mathbf{L}-1},\]
_with the convention that \(M_{-1}\) is a point._
Proof.: For every \(e\geq 1\), note that \(M_{e-1}\) is a subset of \(M_{e}\), and that a point \(g=(g_{1},\ldots,g_{n})\in M_{e}-M_{e-1}\) may be written in the form
\[g=(hg_{1}^{\prime},\ldots,hg_{n}^{\prime})\]
where \(h=\gcd(g_{1},\ldots,g_{n})\) and \((g_{1}^{\prime},\ldots,g_{n}^{\prime})\) defines, up to multiplication by a non-zero scalar, an element of \(\mathrm{Mor}_{e-\deg h}(\mathbf{P}^{1},\tilde{X})\). Thus, we have the decomposition
\[\left(M_{e}-M_{e-1}\right)/\mathbf{C}^{\times}=\bigsqcup_{i=0}^{e}\mathrm{ MPoly}_{i}\times\mathrm{Mor}_{e-i}(\mathbf{P}^{1},\tilde{X}),\]
where we recall that \(\mathrm{MPoly}_{i}\) is the space of degree \(i\) monic polynomials. Thus, in the Grothendieck ring, we have the relation
\[[M_{e}]-[M_{e-1}]=(\mathbf{L}-1)\sum_{i=0}^{e}\left([\mathrm{MPoly}_{i}]\times [\mathrm{Mor}_{e-i}(\mathbf{P}^{1},\tilde{X})]\right),\]
for every \(e\geq 1\).
In terms of generating series, we get the relation
\[\sum_{e\geq 1}([M_{e}]-[M_{e-1}])T^{e}=(\mathbf{L}-1)\sum_{e\geq 1}\left(\sum_{ i=0}^{e}[\mathrm{MPoly}_{i}]\times[\mathrm{Mor}_{e-i}(\mathbf{P}^{1},\tilde{X})] \right)T^{e}.\]
The left-hand side may be rewritten as
\[\sum_{e\geq 1}[M_{e}]T^{e}-\sum_{e\geq 0}[M_{e}]T^{e+1}=(1-T)\sum_{e\geq 0}[M_{e} ]T^{e}-[X],\]
using the fact that \(M_{0}=X\). As for the right-hand side, we rewrite it as
\[(\mathbf{L}-1)\left(\left(\sum_{i\geq 0}[\mathrm{MPoly}_{i}]T^{i}\right)\left( \sum_{i\geq 0}[\mathrm{Mor}_{i}(\mathbf{P}^{1},\tilde{X})]T^{i}\right)-[\tilde{X} ]\right),\]
since \(\mathrm{Mor}_{0}(\mathbf{P}^{1},\tilde{X})=\tilde{X}\). Noting that \([\mathrm{MPoly}_{i}]=\mathbf{L}^{i}\), we get that
\[\sum_{i\geq 0}[\mathrm{MPoly}_{i}]T^{i}=\sum_{i\geq 0}\mathbf{L}^{i}T^{i}=\frac{ 1}{1-\mathbf{L}T}.\]
We have \([\tilde{X}]=(\mathbf{L}-1)^{-1}([X]-1)\), from which it follows that
\[(1-T)(1-\mathbf{L}T)\left(\sum_{e\geq 0}[M_{e}]T^{e}\right)=(\mathbf{L}-1)\sum_{ e\geq 0}[\mathrm{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]T^{e}+1-\mathbf{L}T.\]
We expand and deduce that the left-hand side is
\[=(1-(1+\mathbf{L})T+\mathbf{L}T^{2})\left(\sum_{e\geq 0}[M_{e}]T^{e}\right)\] \[=\sum_{e\geq 0}[M_{e}]T^{e}-(\mathbf{L}+1)\sum_{e\geq 0}[M_{e}]T^{e+ 1}+\mathbf{L}\sum_{e\geq 0}[M_{e}]T^{e+2}\] \[=\sum_{e\geq 2}[M_{e}]T^{e}-(\mathbf{L}+1)\sum_{e\geq 2}[M_{e-1}]T^ {e}+\mathbf{L}\sum_{e\geq 2}[M_{e-2}]T^{e}+[M_{0}]+[M_{1}]T-(\mathbf{L}+1)[M_{0}]T\] \[=[M_{0}]+([M_{1}]-(\mathbf{L}+1)[M_{0}])T+\sum_{e\geq 2}([M_{e}]-( \mathbf{L}+1)[M_{e-1}]+\mathbf{L}[M_{e-2}])T^{e}.\]
The statement of the lemma easily follows.
Our next result expresses the product of local densities in a more convenient form.
**Lemma 8.2**.: _Assume that \(n>d\) and let \(x\in\mathbf{A}^{1}\). Then_
\[\lim_{N\to\infty}\mathbf{L}^{-N(n-1)}[\Lambda_{N}(f,x)] =\lim_{N\to\infty}\mathbf{L}^{-N(n-1)}[\Lambda_{N}(f,\infty)]\] \[=\mathbf{L}^{-(n-2)}[\tilde{X}](1-\mathbf{L}^{-1})(1-\mathbf{L}^ {-(n-d)})^{-1}.\]
Proof.: We prove the part involving \(\Lambda_{N}(f,x)\), the proof for \(\Lambda_{N}(f,\infty)\) being identical. Given \(x\in\mathbf{A}^{1}\) and integer \(N\geq 0\) we define
\[\Lambda_{N}^{*}(f,x)=\left\{\mathbf{g}\in\mathbf{C}[t]^{n}:\begin{array}{l} \deg(g_{1}),\ldots,\deg(g_{n})<N,\ f(\mathbf{g})\equiv 0\bmod(t-x)^{N}\\ (g_{1},\ldots,g_{n})\not\equiv(0,\ldots,0)\bmod(t-x)\end{array}\right\},\]
as an analogue of (1.2). Next, let us put
\[\Lambda_{N,i}(f,x)=\left\{\mathbf{g}\in\mathbf{C}[t]^{n}:\begin{array}{l} \deg(g_{1}),\ldots,\deg(g_{n})<N,\ f(\mathbf{g})\equiv 0\bmod(t-x)^{N}\\ \mathbf{g}\equiv\mathbf{0}\bmod(t-x)^{i},\ \mathbf{g}\not\equiv\mathbf{0} \bmod(t-x)^{i+1}\end{array}\right\},\]
for any \(0\leq i<N.\) If \(N>id+i\) then \(\Lambda_{N,i}(f,x)\) is isomorphic to the set of polynomials \(\mathbf{g}\in\mathbf{C}[t]^{n}\) of degree \(<N-i\) such that \(f(\mathbf{g})\equiv 0\bmod(t-x)^{N-di}\) and \(\mathbf{g}\not\equiv\mathbf{0}\bmod(t-x)\). Hence \(\Lambda_{N,i}(f,x)\simeq\mathbf{A}^{ni(d-1)}\times\Lambda_{N-di}^{*}(f,x)\) if \(N>id+i\). We conclude that
\[[\Lambda_{N}(f,x)]=\mathbf{L}^{n(N-\lceil\frac{N-1}{d}\rceil)}+\sum_{0\leq i \leq(N-1)/d}\mathbf{L}^{n(d-1)i}[\Lambda_{N-di}^{*}(f,x)],\]
from which it follows that
\[\mathbf{L}^{-N(n-1)}[\Lambda_{N}(f,x)]=\mathbf{L}^{N-n\lceil\frac{N-1}{d} \rceil}+\sum_{0\leq i\leq(N-1)/d}\mathbf{L}^{-(n-d)i}\mathbf{L}^{-(N-di)(n-1) }[\Lambda_{N-di}^{*}(f,x)]. \tag{8.1}\]
The proof of Lemma 6.16 goes through and yields \([\Lambda_{N}^{*}(f,x)]=[U_{N-1}]\), where we recall that
\[U_{N-1}=\{\mathbf{x}=\mathbf{x}_{0}+\mathbf{x}_{1}t+\cdots+\mathbf{x}_{N-1}t ^{N-1}:f(\mathbf{x})\equiv 0\bmod t^{N},\ \mathbf{x}_{0}\neq\mathbf{0}\}.\]
It now follows from (6.20) that
\[\mathbf{L}^{-(N+1)(n-1)}[\Lambda_{N+1}^{*}(f,x)]=\mathbf{L}^{-N(n-1)}[ \Lambda_{N}^{*}(f,x)],\]
for any \(N\geq 1\), whence
\[\mathbf{L}^{-(N-di)(n-1)}[\Lambda_{N-di}^{*}(f,x)]=\mathbf{L}^{-(n-1)}[ \Lambda_{1}^{*}(f,x)]=\mathbf{L}^{-(n-1)}(\mathbf{L}-1)[\tilde{X}]\]
in (8.1). Observing that
\[N-n\left\lceil\frac{N-1}{d}\right\rceil\leq N-\frac{n(N-1)}{d}=-N(n/d-1)+n/d,\]
and taking the limit \(N\to\infty\), it follows that
\[\lim_{N\to\infty}\mathbf{L}^{-N(n-1)}[\Lambda_{N}(f,x)] =\mathbf{L}^{-(n-1)}(\mathbf{L}-1)[\tilde{X}]\sum_{i\geq 0} \mathbf{L}^{-(n-d)i}\] \[=\mathbf{L}^{-(n-2)}[\tilde{X}](1-\mathbf{L}^{-1})(1-\mathbf{L}^ {-(n-d)})^{-1}.\]
The statement of the lemma follows.
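When working over a finite field \(k=\mathbf{F}_{q}\), so that classes can be specialised to point counts via \(\mathbf{L}\mapsto q\), the limit just computed takes the shape of the familiar local density of a non-singular hypersurface at a degree one place, namely
\[q^{-(n-2)}\#\tilde{X}(\mathbf{F}_{q})\,(1-q^{-1})(1-q^{-(n-d)})^{-1},\]
where the factor \((1-q^{-(n-d)})^{-1}\) again accounts for the contribution of the imprimitive solutions.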
Proof of Theorem 1.3.: Assume that \(d\geq 3\), \(e\geq 1\) and \(n>2^{d}(d-1)\). It will ease notation if we put
\[\sigma_{\infty}(f)=\lim_{N\to\infty}\mathbf{L}^{-(n-1)N}[\Lambda_{N}(f,\infty)],\]
where \(\Lambda_{N}(f,\infty)\) is given by (1.3). Then it follows from Theorem 1.1 that
\[[M_{e}]- (\mathbf{L}+1)[M_{e-1}]+\mathbf{L}[M_{e-2}]\] \[=\mathbf{L}^{\mu(e)}(\mathfrak{S}(f)\sigma_{\infty}(f)+R_{e})-( \mathbf{L}+1)\mathbf{L}^{\mu(e-1)}(\mathfrak{S}(f)\sigma_{\infty}(f)+R_{e-1})\] \[\quad+\mathbf{L}^{\mu(e-2)+1}(\mathfrak{S}(f)\sigma_{\infty}(f) +R_{e-2})\] \[=\mathbf{L}^{n-1+e(n-d)}\mathfrak{S}(f)\sigma_{\infty}(f)\;(1-( \mathbf{L}+1)\mathbf{L}^{-(n-d)}+\mathbf{L}^{-2(n-d)+1})\] \[\quad+\mathbf{L}^{n-1+e(n-d)}(R_{e}-(\mathbf{L}+1)\mathbf{L}^{-( n-d)}R_{e-1}+\mathbf{L}^{-2(n-d)+1}R_{e-2}).\]
Dividing by \(\mathbf{L}-1\), it follows from Lemma 8.1 that
\[[\mathrm{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]=\mathbf{L}^{\mu(e)-1}\left( \mathfrak{S}(f)\sigma_{\infty}(f)\frac{(1-\mathbf{L}^{-(n-d)})(1-\mathbf{L}^ {-(n-d)+1})}{1-\mathbf{L}^{-1}}+S_{e}\right), \tag{8.2}\]
where
\[S_{e}=\frac{R_{e}-(\mathbf{L}+1)\mathbf{L}^{-(n-d)}R_{e-1}+\mathbf{L}^{-2(n-d )+1}R_{e-2}}{1-\mathbf{L}^{-1}}.\]
We analyse the main term using Lemma 8.2. Denoting by \(F_{v}(T)\) the local factors of the motivic Euler product (6.16) of \(\mathfrak{S}(f)\), by multiplicativity of motivic Euler products and the fact that Kapranov's zeta function for \(\mathbf{A}^{1}\) is
\[Z_{\mathbf{A}^{1}}(T)=\prod_{v\in\mathbf{A}^{1}}(1-T)^{-1}=\frac{1}{(1- \mathbf{L}T)},\]
we therefore obtain
\[\mathfrak{S}(f)(1-\mathbf{L}^{-(n-d)+1}) =\prod_{v\in\mathbf{A}^{1}}F_{v}(\mathbf{L}^{-n})\prod_{v\in\mathbf{A}^{1}}(1-\mathbf{L}^{-(n-d)})\] \[=\left(\prod_{v\in\mathbf{A}^{1}}F_{v}(\mathbf{L}^{-n}T)(1-\mathbf{L}^{-(n-d)}T)\right)_{|T=1}.\]
Also used here is the compatibility of Euler products with transformations of the form \(T\mapsto\mathbf{L}^{r}T.\) By Remark 6.10 and Lemma 8.2, each local factor of the latter motivic Euler product is equal to \(\mathbf{L}^{-(n-2)}[\tilde{X}](1-\mathbf{L}^{-1})\). Combining this with an application of Lemma 8.2 to the contribution of \(\sigma_{\infty}\) and compatibility of motivic Euler products with finite products, we may write
\[[\mathrm{Mor}_{e}(\mathbf{P}^{1},\tilde{X})]=\frac{\mathbf{L}^{\mu(e)-1}}{1- \mathbf{L}^{-1}}\left(\prod_{v\in\mathbf{P}^{1}}c_{v}+S_{e}\right),\]
where \(c_{v}=(1-\mathbf{L}^{-1})\mathbf{L}^{-(n-2)}[\tilde{X}].\)
Turning to the error term, we deduce from Theorem 1.1 that
\[w(R_{e}) \leq 4-\frac{n-2^{d}(d-1)}{2^{d-2}}\left(1+\left\lfloor\frac{e+1}{2 d-2}\right\rfloor\right)\] \[\leq 4-\frac{n-2^{d}(d-1)}{2^{d-1}(d-1)}(e+1),\]
for any \(e\geq 1\). By convention \(R_{e}=\varnothing\) if \(e\leq 0\). Hence
\[w(S_{e})\leq\max\left(4-\nu(e+1),6-2(n-d)-\nu e,6-4(n-d)-\nu(e-1)\right)=4-\nu (e+1),\]
where
\[\nu=\frac{n-2^{d}(d-1)}{2^{d-1}(d-1)}>0.\]
The statement of Theorem 1.3 is now clear.
## 9. The variety of lines
Our task in this section is to prove Theorem 1.5. It follows from Lemma 8.1 that
\[[\operatorname{Mor}_{1}(\mathbf{P}^{1},\tilde{X})]=\frac{[M_{1}]-(\mathbf{L}+ 1)[M_{0}]+\mathbf{L}}{\mathbf{L}-1},\]
where \([M_{0}]=[X]=(\mathbf{L}-1)[\tilde{X}]+1\). The variety of lines \(F_{1}(\tilde{X})\) is obtained as a quotient of \(\operatorname{Mor}_{1}(\mathbf{P}^{1},\tilde{X})\) by the action of the automorphism group of \(\mathbf{P}^{1}\). The quotient map
\[\operatorname{Mor}_{1}(\mathbf{P}^{1},\tilde{X})\to F_{1}(\tilde{X})\]
is a Zariski-locally trivial fibration with fibre \(\operatorname{PGL}_{2}\). Since \([\operatorname{PGL}_{2}]=\mathbf{L}^{3}-\mathbf{L}\), it follows that
\[\begin{split}[F_{1}(\tilde{X})]=\frac{[\operatorname{Mor}_{1}( \mathbf{P}^{1},\tilde{X})]}{\mathbf{L}^{3}-\mathbf{L}}&=\frac{[M _{1}]-(\mathbf{L}^{2}-1)[\tilde{X}]-1}{(\mathbf{L}^{3}-\mathbf{L})(\mathbf{L} -1)}\\ &=\frac{[M_{1}]-1}{(\mathbf{L}^{3}-\mathbf{L})(\mathbf{L}-1)}- \frac{[\tilde{X}]}{\mathbf{L}(\mathbf{L}-1)}.\end{split} \tag{9.1}\]
We proceed by summarising what our work says about the class of \(M_{1}\). Let \(d\geq 3\) and assume that \(n>2^{d}(d-1)\). Rather than Theorem 1.1, we shall invoke a version in which the truncated singular series appears. This amounts to combining Proposition 3.8 with the contents of Section 6.6 in (3.1). Thus
\[[M_{1}]=\mathbf{L}^{\mu(1)}\left((1+S_{1}(f)\mathbf{L}^{-n})\mathbf{L}^{-(n-1 )}[X]+R_{1}\right),\]
where \(R_{1}\) is an error term satisfying
\[w(R_{1})\leq 4-\frac{n-2^{d}(d-1)}{2^{d-2}}.\]
Recalling that \(\mu(1)=2n-d-1\), by (1.1), we deduce that
\[[M_{1}]=\mathbf{L}^{n-d}(\mathbf{L}-1)[\tilde{X}]\left(1+S_{1}(f)\mathbf{L}^{ -n}\right)+\widetilde{R_{1}}, \tag{9.2}\]
where
\[\begin{split} w(\widetilde{R_{1}})&\leq\max\left(2 n-2d,4n-2d+2-\frac{n-2^{d}(d-1)}{2^{d-2}}\right)\\ &=4n-2d+2-\frac{n-2^{d}(d-1)}{2^{d-2}}.\end{split} \tag{9.3}\]
We proceed by computing \(S_{1}(f)\). Note that in this case \(B_{1}\simeq\mathbf{G}_{m}\times\mathbf{A}^{1}\) is just the space of pairs \((h,T-x)\) where \(h\) is a nonzero constant, and the polynomials \(g_{1},\dots,g_{n}\in\operatorname{Poly}_{<1}\) are just constants. Thus
\[\operatorname{res}\left(\frac{h}{T-x}f(g_{1},\dots,g_{n})\right)=hf(g_{1},\dots,g_{n}).\]
We therefore get
\[S_{1}(f) =\left[\operatorname{Poly}_{<1}^{n}\times B_{1}:\operatorname{ res}\left(\frac{h}{T-x}f(g_{1},\dots,g_{n})\right)\right]\] \[=[\operatorname{Poly}_{<1}^{n}\times\mathbf{G}_{m}\times\mathbf{A }^{1},hf(g_{1},\dots,g_{n})].\]
As an element of \(\mathscr{E}xp\mathscr{M}_{B_{1}}\), this is the pullback of
\[[\operatorname{Poly}_{<1}^{n}\times\mathbf{G}_{m},hf(g_{1},\dots,g_{n})]\in \mathscr{E}xp\mathscr{M}_{\mathbf{G}_{m}}\]
via \(B_{1}\to\mathbf{G}_{m}\) given by the projection \((h,T-x)\mapsto h\).
Applying the orthogonality relation, we see that
\[[\operatorname{Poly}_{<1}^{n}\times\mathbf{G}_{m},hf(g_{1},\dots,g_{n})] =[\operatorname{Poly}_{<1}^{n}\times\mathbf{A}^{1},hf(g_{1},\dots,g_{n})]-[\operatorname{Poly}_{<1}^{n}\times\{0\},0]\] \[=[X]\mathbf{L}-\mathbf{L}^{n},\]
whence
\[S_{1}(f)=[X]\mathbf{L}^{2}-\mathbf{L}^{n+1}. \tag{9.4}\]
_Remark 9.1_.: Applying (6.10) with \(m=1\), it follows that our work provides the bound
\[w([X]\mathbf{L}^{2}-\mathbf{L}^{n+1})\leq 2n+4-\frac{n}{2^{d-2}}.\]
That is, for any hypersurface \(X\subset\mathbf{A}^{n}\) defined by a non-singular form of degree \(d\), we have the weight bound
\[w([X]-\mathbf{L}^{n-1})\leq 2(n-1)-\frac{n-2^{d-1}}{2^{d-2}}.\]
When working over a finite field \(k=\mathbf{F}_{q}\), this should be compared with the bound
\[\#X(\mathbf{F}_{q})-q^{n-1}=O_{d,n}(q^{n/2}),\]
that follows from Deligne's resolution of the Weil conjectures.
Proof of Theorem 1.5.: It follows from (9.4) that \(S_{1}(f)=\mathbf{L}^{2}(\mathbf{L}-1)[\tilde{X}]+\mathbf{L}^{2}-\mathbf{L}^{ n+1}\). Combining this with (9.2), we deduce that
\[[M_{1}] =\mathbf{L}^{n-d}(\mathbf{L}-1)[\tilde{X}]\left(1+\mathbf{L}^{-n +2}(\mathbf{L}-1)[\tilde{X}]+\mathbf{L}^{-n+2}-\mathbf{L}\right)+\widetilde{ R_{1}}\] \[=(\mathbf{L}-1)^{2}\left(\mathbf{L}^{-d+2}[\tilde{X}]^{2}-\mathbf{ L}^{n-d}[\tilde{X}]\right)+\widetilde{\widetilde{R_{1}}},\]
where \(\widetilde{\widetilde{R_{1}}}\) has the same weight as \(\widetilde{R_{1}}\) in (9.3). Inserting this into (9.1), it follows that
\[\mathbf{L}^{2}[F_{1}(\tilde{X})] =\frac{\mathbf{L}([M_{1}]-1)}{(\mathbf{L}^{2}-1)(\mathbf{L}-1)}- \frac{\mathbf{L}[\tilde{X}]}{\mathbf{L}-1}\] \[=\frac{\mathbf{L}^{-d+2}[\tilde{X}]^{2}-\mathbf{L}^{n-d}[\tilde{X }]}{1+\mathbf{L}^{-1}}-\frac{[\tilde{X}]}{1-\mathbf{L}^{-1}}-\frac{1}{( \mathbf{L}^{2}-1)(1-\mathbf{L}^{-1})}+\frac{\mathbf{L}\widetilde{\widetilde{R_ {1}}}}{(\mathbf{L}^{2}-1)(\mathbf{L}-1)}.\]
The term \((1-\mathbf{L}^{-1})^{-1}[\tilde{X}]\) has weight \(2n-4\) and the third term has weight \(-4\). Both of these are dominated by the upper bound we have for the weight of the last term on
the right hand side. Dividing both sides by \(\mathbf{L}^{2}\), we thereby arrive at the statement of Theorem 1.5.
Proof of Corollary 1.6.: As described in [1, Corollary 17.2.2], for example, it is well-known that the Hodge-Deligne polynomial of the hypersurface \(\tilde{X}\) takes the shape
\[\operatorname{HD}(\tilde{X})=(uv)^{n-2}(1+(uv)^{-1}+(uv)^{-2}+\cdots)+g_{1}(u, v),\]
where \(g_{1}\in\mathbf{Z}[u,v][[(uv)^{-1}]]\) has only terms of total degree at most \(n-2\) in \(u\) and \(v\). Noting that
\[(uv)^{n-2}(1+(uv)^{-1}+(uv)^{-2}+\cdots)=\frac{(uv)^{n-2}}{1-(uv)^{-1}},\]
we thereby deduce that
\[\operatorname{HD}([\tilde{X}]^{2})=\frac{(uv)^{2n-4}}{(1-(uv)^{-1})^{2}}+g_{2 }(u,v),\]
where \(g_{2}\in\mathbf{Z}[u,v][[(uv)^{-1}]]\) only has terms of total degree at most \(2(n-2)+n-2=3n-6\).
Thus, there exists \(g_{3}\in\mathbf{Z}[u,v][[(uv)^{-1}]]\) with terms of total degree at most \(3n-2d-6\) such that
\[\operatorname{HD}\left(\frac{\mathbf{L}^{-d}[\tilde{X}]^{2}- \mathbf{L}^{n-d-2}[\tilde{X}]}{1+\mathbf{L}^{-1}}\right) =\frac{(uv)^{2n-d-5}}{(1-(uv)^{-1})^{2}(1+(uv)^{-1})}+g_{3}(u,v)\] \[=\frac{(uv)^{2n-d-5}}{(1-(uv)^{-1})(1-(uv)^{-2})}+g_{3}(u,v)\] \[=(uv)^{2n-d-5}\sum_{k,l\geq 0}(uv)^{-k-2l}+g_{3}(u,v)\] \[=\sum_{m\geq 0}\left(\left\lfloor\frac{m}{2}\right\rfloor+1 \right)(uv)^{2n-d-5-m}+g_{3}(u,v).\]
Using Theorem 1.5, we see that this should agree with the Hodge-Deligne polynomial of \(F_{1}(\tilde{X})\) up to terms \(u^{p}v^{q}\), with \(p+q\leq 4n-2d-6-\frac{n-2^{d}(d-1)}{2^{d-2}}\), which thereby concludes the proof.
_Remark 9.2_.: We now assume \(d=3\), so that we are in the case of a smooth cubic hypersurface \(\tilde{X}\subset\mathbf{P}^{n-1}\). Recall Galkin and Shinder's relation (1.4). We can use work of Burillo to calculate \(\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X}))\) and so show that Corollary 1.6 is consistent with it. Indeed, Burillo's formula [1, (2.4)] states that \(\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X}))\) is given by the coefficient of \(t^{2}\) in
\[\prod_{p,q\geq 0}(1-(-1)^{p+q}u^{p}v^{q}t)^{(-1)^{p+q+1}h^{p,q}(\tilde{X})}.\]
Since \(h^{p,q}(\tilde{X})=\delta_{p,q}\), for \(p+q>n-2\) and \(p+q\leq 2(n-2)\), we see that modulo terms of total degree at most \(n-2\) in \(u,v\), the polynomial \(\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X}))\) is the coefficient of \(t^{2}\) in the expansion
\[\prod_{0\leq p\leq n-2}(1-(uv)^{p}t)^{-1} =(1+(uv)^{n-2}t+(uv)^{2(n-2)}t^{2}+\cdots)\] \[\qquad\times(1+(uv)^{n-3}t+(uv)^{2(n-3)}t^{2}+\cdots)\cdots.\]
Thus there exists a polynomial \(g_{1}\in\mathbf{Z}[u,v]\) of degree \(\leq n-2\) such that
\[\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X}))=\sum_{n-2\geq p\geq p^{ \prime}\geq 0}(uv)^{p+p^{\prime}}+g_{1}(u,v).\]
This may be rewritten
\[\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X})) =\sum_{0\leq q\leq q^{\prime}}(uv)^{2(n-2)-q-q^{\prime}}+g_{2}(u,v)\] \[=\sum_{m\geq 0}\left(\left\lfloor\frac{m}{2}\right\rfloor+1 \right)(uv)^{2n-4-m}+g_{2}(u,v),\]
where \(g_{2}(u,v)\in\mathbf{Z}[u,v][[(uv)^{-1}]]\) has terms of total degree at most \(n-2\) in \(u\) and \(v\).
Taking \(d=3\) and \(n\geq 17\) in Corollary 1.6, we compute the Hodge-Deligne polynomial of \(\mathbf{L}^{2}[F_{1}(\tilde{X})]+(1+\mathbf{L}^{n-2})[\tilde{X}]\), modulo degree \(\leq\frac{7}{2}n\) in \(u,v\). This shows that there exists \(h\in\mathbf{Z}[u,v][[(uv)^{-1}]]\) with terms of degrees \(\leq\frac{7}{2}n\) such that
\[\operatorname{HD}(\mathbf{L}^{2}[F_{1}(\tilde{X})]+(1+\mathbf{L }^{n-2})[\tilde{X}])= \sum_{m\geq 0}\left(\left\lfloor\frac{m}{2}\right\rfloor+1 \right)(uv)^{2n-6-m}\] \[+(uv)^{2(n-2)}(1+(uv)^{-1}+(uv)^{-2}+\cdots)+h(u,v).\]
This is easily seen to agree with our expression for \(\operatorname{HD}(\operatorname{Sym}^{2}(\tilde{X}))\) above.
## Appendix A Geometry of numbers over function fields
### Basic facts
Let \(k\) be a field. In this note we discuss basic facts from the geometry of numbers in the setting of the function field \(k(t)\). A non-archimedean absolute value \(|\cdot|:k(t)\to\mathbf{R}_{\geq 0}\) is given by taking \(|0|=0\) and \(|x|=2^{\deg(p)-\deg(q)}\), if \(x=p/q\) for polynomials \(p,q\in k[t]\), with \(q\neq 0\). The completion of \(k(t)\) with respect to this absolute value is the field of Laurent series \(K_{\infty}=k((t^{-1}))\), whose elements take the shape
\[x=\sum_{-\infty<i\leq M}x_{i}t^{i},\]
for \(x_{i}\in k\) and \(M\in\mathbf{Z}\). The absolute value is extended to \(K_{\infty}\) by taking
\[|x|=\begin{cases}0&\text{ if }x=0,\\ 2^{M}&\text{ if }x=\sum_{-\infty<i\leq M}x_{i}t^{i}\in K_{\infty}\text{ with }x_{M}\neq 0. \end{cases}\]
We shall extend this to vectors by setting
\[|\mathbf{x}|=\max(|x_{1}|,\dots,|x_{n}|),\]
for any \(\mathbf{x}\in K_{\infty}^{n}\). This provides a distance function on \(K_{\infty}^{n}\).
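For instance, \(|t^{3}+1|=2^{3}\), \(|1/(t^{2}+t)|=2^{-2}\) and, for vectors, \(|(t,t^{-3})|=\max(2,2^{-3})=2\).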
Mahler [10] initiated an extensive investigation of lattices \(\mathsf{\Lambda}\subset K_{\infty}^{n}\), proving analogues of many results from the classical setting of lattices in \(\mathbf{R}^{n}\). In this section we summarise some of the basic facts that will be needed in our application. (In doing so, we shall recover work of Lee [11], which is concerned with the special case \(k=\mathbf{F}_{q}\).)
A (full rank) _lattice_\(\mathsf{\Lambda}\subset K_{\infty}^{n}\) is defined to be a set of the form
\[\mathsf{\Lambda}=\{u_{1}\mathbf{x}_{1}+\dots+u_{n}\mathbf{x}_{n}:u_{1},\dots, u_{n}\in k[t]\},\]
where \(\mathbf{x}_{1},\dots,\mathbf{x}_{n}\in K_{\infty}^{n}\) are \(k(t)\)-linearly independent vectors. This set of vectors is called a _basis_ of the lattice. Let \(\mathbf{M}=(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\) be the associated \(n\times n\) matrix of basis vectors. The _determinant_ of the lattice is defined to be
\[\det(\mathsf{\Lambda})=|\det\mathbf{M}|.\]
This does not depend on the choice of basis for \(\mathsf{\Lambda}\), as proved in [10, Section 8]. Next, the _successive minima_ \(\sigma_{1},\dots,\sigma_{n}\) associated to \(\mathsf{\Lambda}\) are defined as follows. We take \(2^{\sigma_{1}}\) to be the minimum of \(|\mathbf{x}_{1}|\), for non-zero \(\mathbf{x}_{1}\in\mathsf{\Lambda}\). Next, \(2^{\sigma_{2}}\) is the minimum of \(|\mathbf{x}_{2}|\), for \(\mathbf{x}_{2}\in\mathsf{\Lambda}\setminus\operatorname{Span}_{k(t)}(\mathbf{x}_{1})\). One continues in this way, ultimately defining \(2^{\sigma_{n}}\) to be the
minimum of \(|\mathbf{x}_{n}|\), for \(\mathbf{x}_{n}\in\mathsf{\Lambda}\setminus\operatorname{Span}_{k(t)}(\mathbf{x}_{ 1},\ldots,\mathbf{x}_{n-1})\). It is clear from the construction that \(\sigma_{1},\ldots,\sigma_{n}\) are integers satisfying
(A.1) \[-\infty<\sigma_{1}\leq\cdots\leq\sigma_{n}.\]
The analogue of Minkowski's theorem is
(A.2) \[\det(\mathsf{\Lambda})=2^{\sigma_{1}+\cdots+\sigma_{n}},\]
which is established in [14, Section 9].
For lattices in \(\mathbf{R}^{n}\) there is a wealth of literature around the problem of counting the number of lattice points that are constrained to lie in Euclidean balls of growing radius in \(\mathbf{R}^{n}\). In the function field setting, we shall be interested in the "size" of the set
\[\left\{\mathbf{x}\in\mathsf{\Lambda}:|\mathbf{x}|<2^{R}\right\}.\]
This set forms a finitely generated \(k\)-vector space and we may consider its _dimension_ as a \(k\)-vector space. The key object of interest is then the quantity
(A.3) \[\nu(\mathsf{\Lambda},R)=\dim\left\{\mathbf{x}\in\mathsf{\Lambda}:|\mathbf{x}| <2^{R}\right\},\]
as \(R\to\infty\). For example, when \(\mathsf{\Lambda}=k[t]^{n}\) we see that the monomials \(1,t,t^{2},\ldots,t^{R-1}\) are linearly independent over \(k\), whence \(\nu(k[t]^{n},R)=nR\).
Our starting point for the analysis of \(\nu(\mathsf{\Lambda},R)\) is the identification of a suitable basis for the lattice \(\mathsf{\Lambda}\). The following result is adapted from an argument of Davenport [13, Lemma 12.3] (as already adapted to the setting \(k=\mathbf{F}_{q}\) by Lee [10]).
**Lemma A.1**.: _Let \(\mathsf{\Lambda}\subset K_{\infty}^{n}\) be a lattice and let \(\sigma_{1},\ldots,\sigma_{n}\) be the successive minima of \(\mathsf{\Lambda}\). Then there exists a basis \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) of \(\mathsf{\Lambda}\) with_
\[\mathbf{x}_{i}=(x_{i,1},\ldots,x_{i,i},0,\ldots,0),\quad\text{for $1\leq i\leq n $},\]
_such that \(|\mathbf{x}_{i}|=|x_{i,i}|=2^{\sigma_{i}},\) for \(1\leq i\leq n\)._
Proof.: Choose vectors \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\in\mathsf{\Lambda}\) such that \(|\mathbf{y}_{i}|=2^{\sigma_{i}}\), for \(1\leq i\leq n\). On applying a suitable unimodular linear transformation we can assume that these vectors take the shape
\[\mathbf{y}_{i}=(y_{i,1},\ldots,y_{i,i},0,\ldots,0),\]
for \(1\leq i\leq n\). We take \(\mathbf{x}_{1}=\mathbf{y}_{1}\). Next, we choose \(\mathbf{x}_{2}\in\mathsf{\Lambda}\cap\operatorname{Span}_{k(t)}(\mathbf{y}_{ 1},\mathbf{y}_{2})\) such that
\[\operatorname{Span}_{k[t]}(\mathbf{x}_{1},\mathbf{x}_{2})=\mathsf{\Lambda} \cap\operatorname{Span}_{k(t)}(\mathbf{y}_{1},\mathbf{y}_{2}).\]
It is clear that there exists \(q,u_{1},u_{2}\in k[t]\), with \(q\neq 0\), such that
(A.4) \[q\mathbf{x}_{2}=u_{1}\mathbf{y}_{1}+u_{2}\mathbf{y}_{2}.\]
This implies that \(qx_{2,2}=u_{2}y_{2,2}\), from which it follows that \(u_{2}\neq 0\), since \(q,x_{2,2},y_{2,2}\) are all non-zero.
We claim that the choice of \(\mathbf{x}_{2}\) can be made in such a way that \(|u_{1}|,|u_{2}|\leq|q|\), which will then imply that
\[|\mathbf{x}_{2}|\leq|q|^{-1}\max\left\{|u_{1}||\mathbf{y}_{1}|,|u_{2}|| \mathbf{y}_{2}|\right\}\leq|\mathbf{y}_{2}|=2^{\sigma_{2}},\]
by the ultrametric inequality. To prove the claim, we note that
\[\mathbf{y}_{2}\in\operatorname{Span}_{k[t]}(\mathbf{x}_{1},\mathbf{x}_{2})= \operatorname{Span}_{k[t]}(\mathbf{y}_{1},\mathbf{x}_{2}).\]
Hence, there exist \(v_{1},v_{2}\in k[t]\) such that \(\mathbf{y}_{2}=v_{1}\mathbf{y}_{1}+v_{2}\mathbf{x}_{2}.\) But then (A.4) implies that
\[(q-u_{2}v_{2})\mathbf{x}_{2}=(u_{1}+u_{2}v_{1})\mathbf{y}_{1},\]
whence \((q-u_{2}v_{2})x_{2,1}=(u_{1}+u_{2}v_{1})y_{1,1}\) and \((q-u_{2}v_{2})x_{2,2}=0\), since \(y_{1,2}=0\). Thus it follows that \(q=u_{2}v_{2}\) and \(u_{1}=-u_{2}v_{1}\). Once inserted into (A.4) and recalling that \(u_{2}\neq 0\), we obtain
\[v_{2}\mathbf{x}_{2}=-v_{1}\mathbf{y}_{1}+\mathbf{y}_{2}.\]
Moreover, \(v_{2}\neq 0\). Writing \(-v_{1}=w_{1}v_{2}+r_{1}\), for \(w_{1},r_{1}\in k[t]\) such that \(|r_{1}|<|v_{2}|\), we therefore obtain
\[v_{2}(\mathbf{x}_{2}-w_{1}\mathbf{y}_{1})=r_{1}\mathbf{y}_{1}+\mathbf{y}_{2}.\]
The claim follows on redefining \(\mathbf{x}_{2}-w_{1}\mathbf{y}_{1}=\mathbf{x}_{2}-w_{1}\mathbf{x}_{1}\) to be \(\mathbf{x}_{2}\).
In a similar way, for \(3\leq i\leq n\), \(\mathbf{x}_{i}\in\mathsf{\Lambda}\cap\operatorname{Span}_{k(t)}(\mathbf{y}_{ 1},\ldots,\mathbf{y}_{i})\) can be chosen so that
\[\operatorname{Span}_{k[t]}(\mathbf{x}_{1},\ldots,\mathbf{x}_{i})=\mathsf{ \Lambda}\cap\operatorname{Span}_{k(t)}(\mathbf{y}_{1},\ldots,\mathbf{y}_{i}),\]
with \(|\mathbf{x}_{i}|\leq 2^{\sigma_{i}}.\) Proceeding in this way, we have
\[|x_{i,i}|\leq|\mathbf{x}_{i}|\leq 2^{\sigma_{i}},\]
for \(1\leq i\leq n\).
It remains to prove the lower bound
\[|x_{i,i}|\geq 2^{\sigma_{i}},\]
for \(1\leq i\leq n\). The vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) form a basis for \(\mathsf{\Lambda}\). Thus
\[\det(\mathsf{\Lambda})=|\det(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})|=|x_{1,1} \ldots x_{n,n}|.\]
It follows that
\[\det(\mathsf{\Lambda})\leq 2^{\sigma_{1}+\cdots+\sigma_{i-1}}|x_{i,i}|2^{ \sigma_{i+1}+\cdots+\sigma_{n}},\]
for any \(i\in\{1,\ldots,n\}\). Appealing to (A.2), we deduce that \(|x_{i,i}|\geq 2^{\sigma_{i}}\), as required.
We are now ready to prove our key lattice point counting result.
**Lemma A.2**.: _Let \(\mathsf{\Lambda}\subset K_{\infty}^{n}\) be a lattice and let \(\sigma_{1},\ldots,\sigma_{n}\) be the successive minima of \(\mathsf{\Lambda}\). Then for any \(R\in\mathbf{Z}_{>0}\), we have_
\[\nu(\mathsf{\Lambda},R)=\sum_{i=1}^{n}\max\{0,R-\sigma_{i}\}.\]
Proof.: Let \(\mathsf{\Lambda}\subset K_{\infty}^{n}\) be a lattice and let \(\sigma_{1},\ldots,\sigma_{n}\) be the successive minima of \(\mathsf{\Lambda}\). Choose a basis \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) for \(\mathsf{\Lambda}\), as in Lemma A.1. Thus
\[\mathbf{x}_{i}=(x_{i,1},\ldots,x_{i,i},0,\ldots,0),\quad\text{for $1\leq i\leq n $},\]
with \(|\mathbf{x}_{i}|=|x_{i,i}|=2^{\sigma_{i}}\), for \(1\leq i\leq n\). Any \(\mathbf{x}\in\mathsf{\Lambda}\) has the form
\[\mathbf{x}=u_{1}\mathbf{x}_{1}+\cdots+u_{n}\mathbf{x}_{n},\]
for \(u_{1},\ldots,u_{n}\in k[t]\). The condition \(|\mathbf{x}|<2^{R}\) is then equivalent to the system of inequalities
\[|u_{1}+u_{2}x_{2,1}/x_{1,1}+\cdots+u_{n}x_{n,1}/x_{1,1}| <2^{R-\sigma_{1}},\] \[\vdots\] \[|u_{n-1}+u_{n}x_{n,n-1}/x_{n-1,n-1}| <2^{R-\sigma_{n-1}},\] \[|u_{n}| <2^{R-\sigma_{n}}.\]
The set of polynomials \(u_{1},\ldots,u_{n}\in k[t]\) satisfying these constraints clearly defines a \(k\)-vector space of dimension \(\sum_{i=1}^{n}\max\{0,R-\sigma_{i}\}\), as claimed in the statement of the lemma.
Let \(\mathsf{\Lambda}\subset K_{\infty}^{n}\) be a lattice with basis matrix \(\mathbf{M}\). The _dual lattice_\(\mathsf{\Lambda}^{*}\) is the lattice with basis matrix \(\mathbf{M}^{\mathrm{adj}}\), obtained by taking the adjoint matrix of \(\mathbf{M}\). We clearly have \(\det(\mathsf{\Lambda}^{*})=1/\det(\mathsf{\Lambda})\). In fact the successive minima of \(\mathsf{\Lambda}\) share a close correspondence with the successive minima \(\sigma_{1}^{*},\ldots,\sigma_{n}^{*}\) of the dual lattice. The relation
(A.5) \[\sigma_{i}=-\sigma_{n-i+1}^{*},\quad\text{for $1\leq i\leq n$},\]
is established in [14, Section 10].
### Davenport's shrinking lemma
Given any \(x=\sum_{-\infty<i\leq M}x_{i}t^{i}\in K_{\infty}\) we define the _distance to the nearest integer_ function to be
\[\|x\|=\left|\sum_{-\infty<i\leq-1}x_{i}t^{i}\right|.\]
Associated to any \(a,b\in\mathbf{Z}\) such that \(b>0\), and any symmetric \(n\times n\) matrix \(\mathbf{U}\) with entries in \(K_{\infty}\), we set \(\mathsf{\Lambda}_{a,b}(\mathbf{U})\subset K_{\infty}^{2n}\) for the lattice with underlying basis matrix
\[\mathbf{M}_{a,b}(\mathbf{U})=\begin{pmatrix}t^{-a}\mathbf{I}_{n}&\mathbf{0}\\ t^{b}\mathbf{U}&t^{b}\mathbf{I}_{n}\end{pmatrix},\]
where \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix. Recalling (A.3), we then see that \(\nu(\mathsf{\Lambda}_{a,b}(\mathbf{U}),0)\) is the dimension of the \(k\)-vector space of \(\mathbf{x}\in k[t]^{n}\) for which \(|\mathbf{x}|<2^{a}\) and \(\|\mathbf{U}\mathbf{x}\|<2^{-b}\). The following version of Davenport's _shrinking lemma_ generalises [1, Lemma 5.3] to arbitrary function fields.
**Lemma A.3**.: _Let \(\mathbf{U}\) be a symmetric \(n\times n\) matrix with entries in \(K_{\infty}\). Let \(a,b,s\in\mathbf{Z}\) be such that \(b>0\) and \(s\geq 0\). Then_
\[\nu(\mathsf{\Lambda}_{a,b}(\mathbf{U}),0)\leq\nu(\mathsf{\Lambda}_{a-s,b+s}( \mathbf{U}),-s)+ns+n\max\left\{\left\lfloor\frac{a-b}{2}\right\rfloor,0\right\}.\]
Proof.: The inequality is trivial when \(a\leq 0\) since then the left hand side is \(0\). Hence we may assume that \(a>0\). Let \(-\infty<\sigma_{1}\leq\cdots\leq\sigma_{2n}\) be the successive minima of \(\mathsf{\Lambda}_{a,b}(\mathbf{U})\) and let \(-\infty<\sigma_{1}^{*}\leq\cdots\leq\sigma_{2n}^{*}\) be the successive minima of the dual lattice \(\mathsf{\Lambda}_{a,b}(\mathbf{U})^{*}\subset K_{\infty}^{2n}\), with underlying matrix
\[\mathbf{M}_{a,b}(\mathbf{U})^{\mathrm{adj}}=\begin{pmatrix}t^{a}\mathbf{I}_{ n}&-t^{a}\mathbf{U}\\ \mathbf{0}&t^{-b}\mathbf{I}_{n}\end{pmatrix}.\]
We note that
\[t^{b-a}\mathbf{M}_{a,b}(\mathbf{U})^{\mathrm{adj}}=\begin{pmatrix}t^{b} \mathbf{I}_{n}&-t^{b}\mathbf{U}\\ \mathbf{0}&t^{-a}\mathbf{I}_{n}\end{pmatrix}=\begin{pmatrix}\mathbf{0}& \mathbf{I}_{n}\\ -\mathbf{I}_{n}&\mathbf{0}\end{pmatrix}\mathbf{M}_{a,b}(\mathbf{U})\begin{pmatrix} \mathbf{0}&\mathbf{I}_{n}\\ -\mathbf{I}_{n}&\mathbf{0}\end{pmatrix}^{-1}.\]
It follows that the lattice with underlying basis matrix \(t^{b-a}\mathbf{M}_{a,b}(\mathbf{U})^{\mathrm{adj}}\) is equal to the one with basis matrix \(\mathbf{M}_{a,b}(\mathbf{U})\), up to left and right multiplication by a matrix in \(\mathrm{SL}_{2n}(k)\). Hence the associated lattices share the same successive minima, whence \(2^{\sigma_{i}}=2^{b-a+\sigma_{i}^{*}}\), for \(1\leq i\leq 2n\). Appealing to (A.5), it follows that \(\sigma_{i}+\sigma_{2n-i+1}=b-a\), for \(1\leq i\leq 2n\). Taking \(i=n+1\), we deduce that
(A.6) \[\sigma_{n+1}\geq\left\lceil\frac{b-a}{2}\right\rceil.\]
We now apply Lemma A.2 to deduce that
\[\nu(\mathsf{\Lambda}_{a,b}(\mathbf{U}),0)=\sum_{i=1}^{2n}\max\{0,-\sigma_{i}\}\]
and
\[\nu(\mathsf{\Lambda}_{a-s,b+s}(\mathbf{U}),-s)=\sum_{i=1}^{2n}\max\{0,-s-\sigma_{i }\}.\]
For \(1\leq i\leq n\), it is clear that
\[\max\{0,-\sigma_{i}\}-\max\{0,-s-\sigma_{i}\}\leq s.\]
Moreover, for \(n+1\leq i\leq 2n\) we have
\[\max\{0,-\sigma_{i}\}-\max\{0,-s-\sigma_{i}\}\leq\max\left\{\left\lfloor\frac {a-b}{2}\right\rfloor,0\right\},\]
by (A.6). The statement of the lemma is now clear.
|
2303.13862 | Two-level Graph Network for Few-Shot Class-Incremental Learning | Few-shot class-incremental learning (FSCIL) aims to design machine learning
algorithms that can continually learn new concepts from a few data points,
without forgetting knowledge of old classes. The difficulty lies in that
limited data from new classes not only lead to significant overfitting issues
but also exacerbates the notorious catastrophic forgetting problems. However,
existing FSCIL methods ignore the semantic relationships between sample-level
and class-level. % Using the advantage that graph neural network (GNN) can mine
rich information among few samples, In this paper, we designed a two-level
graph network for FSCIL named Sample-level and Class-level Graph Neural Network
(SCGN). Specifically, a pseudo incremental learning paradigm is designed in
SCGN, which synthesizes virtual few-shot tasks as new tasks to optimize SCGN
model parameters in advance. Sample-level graph network uses the relationship
of a few samples to aggregate similar samples and obtains refined class-level
features. Class-level graph network aims to mitigate the semantic conflict
between prototype features of new classes and old classes. SCGN builds
two-level graph networks to guarantee the latent semantic of each few-shot
class can be effectively represented in FSCIL. Experiments on three popular
benchmark datasets show that our method significantly outperforms the baselines
and sets new state-of-the-art results with remarkable advantages. | Hao Chen, Linyan Li, Fan Lyu, Fuyuan Hu, Zhenping Xia, Fenglei Xu | 2023-03-24T08:58:08Z | http://arxiv.org/abs/2303.13862v1 | # Two-level Graph Network for Few-Shot Class-Incremental Learning
###### Abstract
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points, without forgetting knowledge of old classes. The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problem. However, existing FSCIL methods ignore the semantic relationships between the sample level and the class level. In this paper, we design a two-level graph network for FSCIL named Sample-level and Class-level Graph Neural Network (SCGN). Specifically, a pseudo incremental learning paradigm is designed in SCGN, which synthesizes virtual few-shot tasks as new tasks to optimize SCGN model parameters in advance. The sample-level graph network uses the relationships among a few samples to aggregate similar samples and obtain refined class-level features. The class-level graph network aims to mitigate the semantic conflict between prototype features of new classes and old classes. SCGN builds two-level graph networks to guarantee that the latent semantics of each few-shot class can be effectively represented in FSCIL. Experiments on three popular benchmark datasets show that our method significantly outperforms the baselines and sets new state-of-the-art results with remarkable advantages. Code is available at [https://github.com/sukechenhao/SCGN](https://github.com/sukechenhao/SCGN).
## I Introduction
In the real world, artificial intelligence systems often encounter novel classes [1, 2]. When the model is updated with new classes, a fatal problem occurs, namely catastrophic forgetting [3, 4, 5], _i.e._, the discriminability of old classes drastically declines. To adapt to new knowledge, Class-Incremental Learning (CIL) [6, 7, 8] recognizes new classes while maintaining discriminability over old classes, and has become an important research area. Most solutions to CIL problems assume abundant training samples. However, in practical applications, the cost of collecting and labeling instances is sometimes unbearable, so an incremental class may have only a few samples. This CIL task with few training samples is called Few-Shot Class-Incremental Learning (FSCIL). As in CIL, learning new classes can lead to catastrophic forgetting of previous classes. In addition, due to the lack of new-class instances, overfitting on these limited inputs is easily observed, which increases the difficulty of learning incremental tasks.
It is unwise to directly adopt CIL methods in FSCIL, where limited training samples result in serious overfitting and poor performance on old classes [9]. In recent years, several works [10, 11] have been designed for FSCIL, which classify FSL tasks through the class mean (prototype feature) to alleviate the problem of overfitting. However, these methods struggle to distinguish few-shot classes well, because the prototype alone can hardly mine the latent semantic similarities and dissimilarities among a few samples. In traditional FSL, a Graph Neural Network (GNN) can express complex interactions between samples by aggregating features from neighbors, mining refined information from the few support and query samples. However, unlike traditional FSL, the training data of FSCIL arrive incrementally and in sequence, and the data of past classes are unavailable. That is, FSCIL not only needs to solve the few-shot problem, but also needs to overcome semantic interference across tasks. The GNN used in traditional FSL cannot effectively solve these two problems at the same time.
In this paper, we propose a novel two-level graph network, SCGN, for FSCIL. As shown in Fig. 1, the two levels are the Sample-level Graph Network (SGN) and the Class-level Graph Network (CGN). Specifically, we propose a pseudo incremental paradigm based on meta-learning to simulate FSCIL learning scenarios during base training. In the pseudo
Fig. 1: Illustration of our proposed two-level graph network for FSCIL. Top: the setting of FSCIL. Bottom: Sample-level to class-level graphs. Square nodes represent sample-level features, and circular nodes represent class-level features. Sample-level features are learned through sample-level graph networks to obtain class-level features, and class-level features are used to achieve class incremental learning through class-level graph networks.
incremental process, we randomly sample FSL tasks from the base dataset and generate virtual FSL tasks as new FSCIL tasks. Then, during meta-learning, SGN learns the FSL task: it calculates the similarity between samples, gathers samples of the same category, and distinguishes samples of different categories to obtain refined features. Moreover, to alleviate the semantic gap between tasks, CGN calibrates the categories affected by the semantic gap according to the relationship between old and new classes, reducing the impact of new classes on old ones. Experiments on benchmark datasets under various settings are conducted, validating the effectiveness of our method.
## II Related Work
**Few-Shot Learning**. Few-shot learning aims at rapidly generalizing to new tasks with limited samples, leveraging the prior knowledge learned from a large-scale base dataset. The existing methods can be divided into two groups. Optimization-based methods [12, 13] try to enable fast model adaptation with few-shot data. Metric-based algorithms [14, 15] utilize a pretrained backbone for feature extraction, and employ proper distance metrics between support and query instances. Recent research tries to leverage GNNs to explore complex similarities among examples. DPGN [16] builds up a dual graph to model distribution-level relations of examples for FSL. ECKPN [17] proposes an end-to-end transductive GNN to explore the class-level knowledge.
**Class-Incremental Learning**. Class-Incremental Learning aims to learn from a sequence of new classes without forgetting old ones, which is now widely discussed in various computer vision tasks. Current CIL algorithms can be divided into three groups. The first group estimates the importance of each parameter and prevents important ones from being changed [7, 18]. The second group utilizes knowledge distillation to maintain the model's discriminability [3]. Other methods rehearse former instances to overcome forgetting [19, 20]. Pernici _et al._ [21] pre-allocate classifiers for future classes, which requires extra memory for feature tuning and is unsuitable for FSCIL.
**Few-Shot Class-Incremental Learning** Few-Shot Class-Incremental Learning is recently proposed to address the few-shot inputs in the incremental learning scenario. TOPIC [9] uses the neural gas structure to preserve the topology of features between old and new classes to resist forgetting. [22] treats the word embedding as auxiliary information, and builds knowledge distillation terms to resist forgetting. CEC [10] utilizes an extra graph model to propagate context information between classifiers for adaptation. FACT [11] efficiently incorporates new classes with forward compatibility and meanwhile resists the forgetting of old ones.
## III Method
### _Problem Description and Pretraining_
**Problem Description**. We denote \(X\), \(Y\) and \(Z\) as the training set, the label set and the test set, respectively. The FSCIL task is to train a model from a continuous data stream in a class-incremental form, _i.e._, from training sets \(X^{0},X^{1},\ldots,X^{n}\), where the samples of a set \(X^{i}\) are drawn from the label set \(Y^{i}\) and \(n\) indexes the incremental session. The incremental classes are disjoint, _i.e._, \(Y^{i}\bigcap Y^{j}=\varnothing\) for \(i\neq j\). For the base session, \(X^{0}\) has sufficient samples. For each class in the subsequent sessions, we have only a few samples (e.g., 5 samples). To evaluate an FSCIL model, we calculate the classification accuracy on the test set \(Z^{i}\) at each session \(i\).
**Pretraining**. First, we pretrain on the base dataset to obtain the class-level feature graph of the base session. In this stage, the input of the model is only the query image \(Q\) to be predicted. We train a feature extractor \(f_{e}\) parameterized by \(\theta_{e}\) with a fully-connected layer as the classifier by minimizing the standard cross-entropy loss on the training samples of \(X^{0}\) under the supervision of the target label \(T\). We measure the relationship between the representation and the learnable class-level features \(\theta_{p}\) of all classes as \(d(f_{e}(Q),\theta_{p})\), where \(d(\cdot,\cdot)\) is a similarity measure; we use cosine similarity. The pre-training optimization objective can be expressed as:
\[\theta_{*}=\arg\min_{\theta}L(d(f_{e}(Q),\theta_{p}),T). \tag{1}\]
Here \(\theta\) includes the above \(\theta_{e}\) and \(\theta_{p}\), and \(L\) is the cross-entropy loss function. To boost the ability to learn new classes in future tasks, we design a pseudo-incremental training paradigm for base training based on meta-learning, which makes the model learn how to learn a new class from a few samples.
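To make this pretraining step concrete, the sketch below implements a cosine-similarity classifier trained with cross-entropy in the spirit of Eq. (1). The backbone, feature dimension, temperature scale, and all variable names are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn.functional as F

class CosineClassifier(torch.nn.Module):
    """Cosine-similarity classifier computing d(f_e(Q), theta_p) as in Eq. (1)."""
    def __init__(self, feat_dim, num_base_classes, scale=16.0):
        super().__init__()
        # theta_p: one learnable class-level feature (prototype) per base class
        self.theta_p = torch.nn.Parameter(torch.randn(num_base_classes, feat_dim))
        self.scale = scale  # softmax temperature (assumed hyper-parameter)

    def forward(self, features):
        f = F.normalize(features, dim=-1)
        p = F.normalize(self.theta_p, dim=-1)
        return self.scale * f @ p.t()       # cosine similarities used as logits

# toy usage: a linear layer stands in for the backbone f_e
feat_dim, num_classes, batch = 64, 60, 8
backbone = torch.nn.Linear(3 * 32 * 32, feat_dim)          # placeholder for f_e
classifier = CosineClassifier(feat_dim, num_classes)
opt = torch.optim.SGD(list(backbone.parameters()) + list(classifier.parameters()),
                      lr=0.1, momentum=0.9)

images = torch.randn(batch, 3 * 32 * 32)                   # query images Q (dummy data)
targets = torch.randint(0, num_classes, (batch,))          # target labels T

loss = F.cross_entropy(classifier(backbone(images)), targets)  # L(d(f_e(Q), theta_p), T)
opt.zero_grad()
loss.backward()
opt.step()
```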
### _Two-level Graph Network (SCGN)_
**Pseudo incremental learning**. In the FSCIL task, the model should be able to adapt to new classes and expand to new knowledge. However, it is difficult to acquire this ability from only a few samples. Therefore, we simulate the FSCIL learning situation and design a pseudo-incremental learning paradigm in the base session to enhance the model's ability to adapt to new FSL tasks. Specifically, we randomly sample two N-way K-shot (N classes, K samples for each class) FSL tasks, _i.e._, \(C_{1}\) and \(C_{2}\), from the base training set \(X^{0}\) in each iteration, with \(Y^{c_{1}}\bigcap Y^{c_{2}}=\varnothing\). These two FSL tasks serve as base tasks in the pseudo-incremental process.
Motivated by [23], we fuse instances by manifold mixup and treat the fused instances as virtual incremental classes. We decouple the embedding into two parts at the hidden layer \(f_{e}(x)=g(h(x))\). We fuse two FSL tasks to generate a new virtual FSL task \(C_{3}\) :
\[r_{i}^{c_{3}}=\sum\nolimits_{i}^{NK}g[\lambda h(x_{i}^{c_{1}})+(1-\lambda)h(x _{i}^{c_{2}})], \tag{2}\]
where \(\lambda\in[0,1]\) is sampled from a Beta distribution and \(r_{i}^{c_{3}}\) represents the features of a sample in the virtual FSL task. The pseudo-incremental learning paradigm enables the two-level graph network to build graph relationships among samples and classes in FSCIL.
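A minimal sketch of this virtual-task construction (cf. Eq. (2)) might look as follows; the split of the embedding into \(h\) and \(g\), the Beta parameters, and all tensor shapes are assumptions made for illustration only.

```python
import torch

def make_virtual_task(h, g, x_c1, x_c2, alpha=2.0):
    """Fuse two disjoint N-way K-shot tasks into a virtual task C3 (cf. Eq. (2)).

    h, g   : the two halves of the embedding f_e(x) = g(h(x))
    x_c1/2 : tensors of shape (N*K, ...) holding the samples of the two tasks
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()   # lambda ~ Beta
    mixed = lam * h(x_c1) + (1.0 - lam) * h(x_c2)            # manifold mixup at the hidden layer
    return g(mixed)                                          # virtual features r_i^{c3}

# toy usage: linear layers stand in for h and g
h = torch.nn.Linear(128, 64)
g = torch.nn.Linear(64, 64)
x_c1 = torch.randn(5 * 5, 128)    # task C1: 5-way 5-shot
x_c2 = torch.randn(5 * 5, 128)    # task C2, with label set disjoint from C1
r_c3 = make_virtual_task(h, g, x_c1, x_c2)
print(r_c3.shape)                 # torch.Size([25, 64])
```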
**Sample-Level Graph Network (SGN)**. As shown in Fig. 2, we obtain the class-level graph of the base task through pretraining.
Then, we introduce the Sample-level Graph Network (SGN) to learn the FSL task. SGN aggregates samples of the same class and distinguishes samples of different classes by exploring the relationship between a few samples, so as to mine refined class-level features. This not only improves the performance of FSL tasks, but also increases the extensibility of feature representations. The formula for the relationship between samples in each FSL task is as follows:
\[e_{ij}^{c}=f_{r}((r_{i}^{c}-r_{j}^{c})^{2}), \tag{3}\]
where \(r_{i}^{c}\) and \(r_{j}^{c}\) denote the features of the \(i\)-th and \(j\)-th samples of the FSL task \(C\), \(C\in\{C_{1},C_{2}\}\), and \(f_{r}\) is the encoding network that transforms the instance similarity to a certain scale. \(f_{r}\) contains two Conv-BN-ReLU blocks. We update the sample representation through the relationship parameters between samples. The obtained embeddings are averaged for each class as a class-level feature:
\[R_{s}^{c}=\text{mean}(\text{SGN}(r_{i}^{c}+\sum\nolimits_{j}^{NK}e_{ij}^{c} \cdot r_{j}^{c})), \tag{4}\]
where SGN is the aggregation network with parameter set \(\theta_{s}\).

**Class-Level Graph Network (CGN)**. In the process of FSCIL incremental learning, the model should adjust itself to the new FSL task and still perform well on old tasks. However, the incremental model is optimized on the many-shot old classes and is therefore tailored to depict the old classes' features. As a result, there exists a semantic gap between the old classifiers and the extracted prototypes of the new classes. To bridge this semantic gap between the old and new classes, we introduce the Class-level Graph Network (CGN).
CGN should reflect the context relationship between old and new classes, so as to adjust the embedding space of the prototype features of new classes in the class-level graph. In our implementation, we combine the Transformer [24] with the GNN. Specifically, we use the multi-head attention mechanism to construct the relationship between the old class and the new class, and use the GNN to aggregate this information to calibrate the prototype features of the new class. The Transformer operates on triplets of the form (query \(Q\), key \(K\), value \(V\)). We set these to \(V=[\theta_{p},R_{s}^{c_{3}}]\), \(K=W_{k}^{T}R_{s}^{c_{3}}\), \(Q=W_{q}^{T}V\), where \(R_{s}^{c_{3}}\) denotes the class-level features obtained by SGN on the virtual FSL task \(C_{3}\), and \(W_{k}\) and \(W_{q}\) are the learnable parameters of the linear projection functions. The class-level features after CGN calibrates the virtual FSL task \(C_{3}\) are given by:
\[\widetilde{R_{s}}^{c_{3}}=\text{CGN}(R_{s}^{c_{3}}+\sum\nolimits_{k}\alpha_{ kq}\cdot V_{k}). \tag{5}\]
Fig. 2: Our incremental prototype learning scheme for few-shot class-incremental learning. (a) an overview of the SCGN framework (b) Sample-Level Graph Network(SGN) (c) Class-Level Graph Network(CGN)
where \(\alpha_{kq}\propto\text{exp}(\frac{KQ^{T}}{\sqrt{d}})\) represents the association weight between the old class features and the new class features, and CGN is the aggregation network with parameter set \(\theta_{c}\).
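A compact sketch of the two graph modules (Eqs. (3)-(5)) is given below. The relation encoder, the aggregation networks, and all dimensions are stand-ins chosen for illustration (e.g., the paper's \(f_{r}\) uses Conv-BN-ReLU blocks, replaced here by a small MLP); the attention follows the scaled dot-product form implied by Eq. (5).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_samples, n_old = 64, 25, 60

# --- SGN: pairwise relations (Eq. (3)) and aggregation (Eq. (4)) ---
f_r = torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.ReLU(),
                          torch.nn.Linear(dim, 1))   # stand-in for the relation encoder f_r
sgn = torch.nn.Linear(dim, dim)                       # stand-in for the SGN aggregator

r = torch.randn(n_samples, dim)                       # sample features r_i^c of one task
diff = (r.unsqueeze(1) - r.unsqueeze(0)) ** 2         # (r_i - r_j)^2 for all pairs
e = f_r(diff).squeeze(-1)                             # relation scores e_ij, shape (25, 25)
updated = sgn(r + e @ r)                              # r_i + sum_j e_ij * r_j, then aggregate
R_s = updated.mean(dim=0, keepdim=True)               # class-level feature (single class here)

# --- CGN: calibrate the new class-level feature against old prototypes (Eq. (5)) ---
theta_p = torch.randn(n_old, dim)                     # old class-level features
W_k = torch.nn.Linear(dim, dim, bias=False)
W_q = torch.nn.Linear(dim, dim, bias=False)
cgn = torch.nn.Linear(dim, dim)                       # stand-in for the CGN aggregator

V = torch.cat([theta_p, R_s], dim=0)                  # values: [theta_p, R_s^{c3}]
K = W_k(R_s)                                          # key from the new class feature
Q = W_q(V)                                            # queries from all class features
alpha = F.softmax(Q @ K.t() / dim ** 0.5, dim=0)      # association weights alpha_kq
R_s_calibrated = cgn(R_s + alpha.t() @ V)             # calibrated class-level feature
```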
### _FSCIL Training using SCGN_
Fig. 2 shows the training schematic of SCGN. SGN matches the learned class-level features with the base class graph, which not only strengthens SGN's ability to learn FSL tasks but also reduces interference with other classes. CGN extends the calibrated class-level features to the base class graph and predicts the constructed virtual samples, ensuring performance while mitigating interference with old classes. We define the following loss function to learn SGN:
\[\mathcal{L}_{1}=-\sum\nolimits_{i}^{2N}\text{cos}(\text{tanh}(R_{s}^{c_{1}} \cup R_{s}^{c_{2}}),\text{tanh}(\theta_{p})). \tag{6}\]
With continuous optimization, sample features of the same class in the base dataset will become more compact. To keep the distinction between the new class and the old class, we define the following loss function to learn CGN:
\[\mathcal{L}_{2}=L(d(r_{i}^{c_{3}},[\theta_{p},\widetilde{R_{s}}^{c_{3}}]),T), \tag{7}\]
where \([\cdot]\) denotes the concatenation operation, \(L\) is the cross-entropy loss function, and \(T\) is the label of the constructed virtual sample.
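The two objectives of Eqs. (6) and (7) can be sketched as follows; the matching of class-level features to their prototypes and the cosine-based logits are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def sgn_loss(R_s_c1, R_s_c2, theta_p_matched):
    """Eq. (6): align the refined class-level features of the two sampled base
    tasks with the corresponding base-class prototypes via cosine similarity."""
    feats = torch.tanh(torch.cat([R_s_c1, R_s_c2], dim=0))        # 2N class-level features
    protos = torch.tanh(theta_p_matched)                           # prototypes of the same 2N classes
    return -F.cosine_similarity(feats, protos, dim=-1).sum()

def cgn_loss(r_c3, theta_p, R_s_c3_calibrated, targets):
    """Eq. (7): classify the virtual samples against the old prototypes extended
    by the calibrated virtual-class features, using cross-entropy."""
    classifiers = torch.cat([theta_p, R_s_c3_calibrated], dim=0)   # [theta_p, R~_s^{c3}]
    logits = F.normalize(r_c3, dim=-1) @ F.normalize(classifiers, dim=-1).t()
    return F.cross_entropy(logits, targets)

# toy usage: 5-way tasks, 60 old classes, 64-dim features
R1, R2 = torch.randn(5, 64), torch.randn(5, 64)
print(sgn_loss(R1, R2, torch.randn(10, 64)))
print(cgn_loss(torch.randn(25, 64), torch.randn(60, 64), torch.randn(5, 64),
               torch.randint(60, 65, (25,))))
```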
## IV Experiment
### _Implementation Details_
**Dataset**: We evaluate on MiniImageNet, CUB200-2011 and CIFAR100. MiniImageNet is a subset of ImageNet with 100 classes. CUB200-2011 is a fine-grained image classification task with 200 classes. CIFAR100 contains 60,000 images from 100 classes.
**Dataset Split**: For MiniImageNet and CIFAR100, 100 classes are divided into 60 base classes and 40 new classes. The new classes are formulated into eight 5-way 5-shot incremental tasks. For CUB200, 200 classes are divided into 100 base classes and 100 incremental classes, and the new classes are formulated into ten 10-way 5-shot incremental tasks.
**Compared methods**: We compare to classical CIL methods iCaRL [3], EEIL [25], and Rebalancing [26]. Besides, we also compare to current SOTA FSCIL algorithms: TOPIC [9], SPPR [30], Decoupled-DeepEMD/Cosine/NegCosine [14, 27, 31], CEC [10], LIMIT [29] and FACT [11].
**Training details**: All methods are implemented with Pytorch. For CIFAR100, we use ResNet20, while for others we use ResNet18. We optimize with SGD+momentum, and the learning rate is set to 0.1 and decays with cosine annealing.
**Evaluation Protocol**: We evaluate models after each session on the test set \(Z^{i}\) and report the Top 1 accuracy. We also use a performance dropping rate (**PD**) that measures the absolute accuracy drop in the last session w.r.t. the accuracy in the first session, \(\text{PD}=\mathcal{A}_{0}-\mathcal{A}_{N}\), where \(\mathcal{A}_{0}\) is the classification accuracy in the base session and \(\mathcal{A}_{N}\) is the accuracy in the last session.
### _Major Comparison_
We report the performance over the benchmark datasets in Fig. 3. We can infer from Fig. 3 that SCGN consistently outperforms the current SOTA method, _i.e._, FACT [11], on the benchmark datasets. We also report the detailed values on the MiniImageNet dataset in Table I. The performance of the SCGN
Fig. 4: The accuracy of each session in MiniImageNet dataset is in the FSCIL task learning process. SCGN demonstrates superior performance in few-shot tasks within FSCIL tasks compared to other methods.
Fig. 3: Comparison of our classification results with other methods on MiniImageNet, CIFAR100 and CUB200-2011. From the experimental results, it can be seen that SCGN outperforms the state-of-the-art (SOTA) methods.
method is higher than that of the other methods in each session, and its performance dropping rate is lower. The poor performance of CIL methods (such as iCaRL) indicates that methods designed for many-shot tasks are not suitable for FSL tasks. SCGN performs better than Decoupled-DeepEMD/Cosine/NegCosine [14, 27, 31], CEC [10] and FACT [11]. This reveals that in FSCIL it is important to train the FSL tasks well, which strengthens the constraints of the new task and reduces the impact on old tasks. As shown in Fig. 4, we compare the per-session accuracy on the MiniImageNet dataset with the CEC [10] and FACT [11] methods. It can be seen from the figure that, throughout the FSCIL learning process, SCGN's per-session performance is higher than that of the other methods. This further proves the superiority of the SCGN method.
### _Ablation Study_
We analyze the importance of each component of SCGN on the MiniImageNet, CIFAR100 and CUB-200-2011 datasets in Fig. 5. We separately construct models with different combinations of the core elements in SCGN. Baseline denotes directly learning the FSCIL tasks with the backbone network. From Fig. 5 we can infer that the CGN module effectively alleviates the baseline's catastrophic forgetting in FSCIL tasks. The SGN module improves the learning performance of the FSL task and significantly improves the overall performance of each session, which also demonstrates the importance of training the FSL tasks well. The combination of the two modules not only improves the learning performance of FSL tasks but also addresses the semantic conflict between old and new classes caused by data imbalance and other factors. The ablation experiments validate that the SGN and CGN modules are both helpful for FSCIL tasks.
### _Visualization of Incremental Session_
We visualize the learned decision boundaries with t-SNE on the CUB-200-2011 dataset in Fig. 6. Fig. 6(a) shows the decision boundary on the training set, where we train on five old classes and three new classes with few samples. Circles represent the embedded samples, and stars represent the class-level prototypes. We find that the few samples of each new class are clustered, because SGN learns more refined features through the associations between samples. In addition, CGN calibrates highly similar categories through the connection between the old and new classes. It can be seen from the visualization that the class-level characteristics of the old and new classes
Fig. 5: Ablation study on MiniImageNet, CIFAR100 and CUB-200-2011. Every part in SCGN improves the performance of FSCIL.
Fig. 6: Visualization of decision boundary of training set and test set on CUB200-2011. Circles represent sample features, stars represent class-level features, and different colors represent different categories.
remain distinguishable. Fig. 6(b) shows the trained FSCIL model evaluated on the test set. It can be seen that SCGN helps to adapt the prototypes and calibrate the decision boundary between old and new classes.
## V Conclusion
In this paper, we proposed a novel two-level graph network SCGN for FSCIL. SCGN builds a pseudo-incremental learning paradigm that simulates FSCIL during base training. SGN builds relationships between samples in the FSL task to mine more favorable refined features, adapts to the learning paradigm of the FSCIL task, and offers strong model expansion capability. CGN aligns features across tasks, bridges the semantic gap between old and new classes, and alleviates the catastrophic forgetting problem in FSCIL tasks. SCGN enhances the long-term learning ability of the model, making it consistent with real-world scenarios. Experimental results show that our model is superior to the SOTA methods in both performance and adaptability.
|
2305.02439 | A composite measurement scheme for efficient quantum observable
estimation | Estimation of the expectation value of observables is a key subroutine in
quantum computing and is also the bottleneck of the performance of many
near-term quantum algorithms. Many works have been proposed to reduce the
number of measurements needed for this task and they provide different
measurement schemes for generating the measurements to perform. In this paper,
we propose a new approach, composite measurement scheme, which composes
multiple measurement schemes by distributing shots to them with a trainable
ratio. As an example of our method, we study the case where only Pauli
measurements are allowed and propose Composite-LBCS (C-LBCS), a composite
measurement scheme made by composing locally-biased classical shadows. We
numerically demonstrate C-LBCS on molecular systems up to $\mathrm{CO}_2$ (30
qubits) and show that C-LBCS outperforms the previous state-of-the-art methods
despite its simplicity. We also show that C-LBCS can be efficiently optimized
by stochastic gradient descent and is trainable even when the observable
contains a large number of terms. We believe our method opens up a reliable way
toward efficient observable estimation on large quantum systems. | Zi-Jian Zhang, Kouhei Nakaji, Matthew Choi, Alán Aspuru-Guzik | 2023-05-03T21:50:36Z | http://arxiv.org/abs/2305.02439v1 | # A composite measurement scheme for efficient quantum observable estimation
###### Abstract
Estimation of the expectation value of observables is a key subroutine in quantum computing and is also the bottleneck of the performance of many near-term quantum algorithms. Many works have been proposed to reduce the number of measurements needed for this task and they provide different measurement schemes for generating the measurements to perform. In this paper, we propose a new approach, composite measurement scheme, which composes multiple measurement schemes by distributing shots to them with a trainable ratio. As an example of our method, we study the case where only Pauli measurements are allowed and propose Composite-LBCS (C-LBCS), a composite measurement scheme made by composing locally-biased classical shadows. We numerically demonstrate C-LBCS on molecular systems up to CO\({}_{2}\) (30 qubits) and show that C-LBCS outperforms the previous state-of-the-art methods despite its simplicity. We also show that C-LBCS can be efficiently optimized by stochastic gradient descent and is trainable even when the observable contains a large number of terms. We believe our method opens up a reliable way toward efficient observable estimation on large quantum systems.
## I Introduction
Quantum technology makes it possible to create and measure entangled quantum states living in high-dimensional Hilbert spaces, enabling the implementation of quantum algorithms fundamentally faster than their classical counterparts [1; 2; 3]. In these algorithms, the measurement of quantum states plays a critical role. On one hand, it converts intangible quantum states to classical results that can be recognized and serve as outputs. On the other hand, the destructive nature of quantum measurements forces one to prepare the quantum states multiple times, forming the bottleneck of many quantum applications. Significantly, the emergence of near-term intermediate-scale quantum (NISQ) devices [4; 5] puts efficient measurement methods in increasing importance since near-term quantum algorithms typically involve estimating the expectation value of complicated observables [6; 7; 8; 9; 10; 5; 11]. There are also error mitigation methods [12; 13; 14; 8] proposed for trading the number of measurements with the accuracy of the result, making measurement methods a significant factor in the overall performance of near-term algorithms.
The problem of estimating the expectation value of observables on a quantum state can be formulated as follows: there is an observable \(O=\sum_{j}a_{j}O_{j}\), where \(\vec{a}\) are real coefficients and \(\{O_{j}\}\) are easily measurable fragments of the observable \(O\). Given a quantum state \(|\psi\rangle\) that we can prepare, the problem is how to estimate the expectation value \(\langle O\rangle=\langle\psi|O|\psi\rangle\) to a certain accuracy with fewer copies (shots) of \(|\psi\rangle\). We note that for quantum computing with near-term quantum devices, we usually limit the family of measurements to Pauli measurements so that they can be easily implemented; correspondingly, the observable is decomposed into a sum of the tensor product of Pauli operators (Pauli strings) as \(O=\sum_{j}a_{j}P_{j}\). If one naively estimates the expectation value by independently performing the estimation on each Pauli string, to a fixed accuracy, the required number of measurements scales quadratically with \(\|\vec{a}\|_{1}\). This will be problematic when larger systems of practical interest are considered.
Various measurement methods have been proposed to mitigate the problem above and there are roughly two important families of them. The first family outputs a probability distribution on a small set of measurements for a given observable. The measurements to be made can then be sampled from the set. The methods in the family can be represented by the largest degree first (LDF) grouping [15], overlapped group measurement (OGM) [16], and other recent approaches [17; 18]. An additional large family of methods is characterized by using ideas from classical shadow (CS) [19] and locally-biased classical shadow (LBCS) [20]. These methods are not associated with a small set of measurements to sample from. Instead, they employ distributions on all Pauli measurements. This family of methods provides a feasible way to measure the observables even when they contain exponentially many terms [19]. However, these methods usually only offer distributions of measurements with a simple structure and cannot provide the best performance for molecular Hamiltonians [16]. This drawback makes them currently not the best choice for applications such as variational quantum eigensolver (VQE) [7]. Therefore, how to enhance the representation power of these methods so that they have a bet
ter shot efficiency for complicated observables, becomes an interesting question.
In this work, we introduce a new approach, the composite measurement scheme, for designing efficient measurement methods. The composite measurement scheme combines multiple measurement generators (measurement schemes) together by a weighted combination of them. Based on this methodology, we are able to scale up a type of measurement scheme by combining many of them. As an example of our methodology, we study a method we call the composite-LBCS (C-LBCS), which combines multiple LBCS schemes. We numerically show that by optimizing the weights of the combination and the parameters in each LBCS scheme together, with a gradient rescaling strategy and a two time-scale update rule (TTUR) [21], the combined measurement scheme can provide state-of-the-art measurement efficiency, even when stochastic gradient descent with a small batch size is adopted.
The rest of the paper is organized as follows. In Sec. II, we present the framework and our methodology. Then, in Sec. III, we propose several optimization strategies that are tailored to our case. In Sec. IV, we demonstrate a specific method (C-LBCS) that is based on our methodology and compare its performance with previous methods on various molecule Hamiltonians. Finally, in Sec. V, we conclude with discussion and a general outlook.
## II Framework
### Measurement schemes
In this work, we define the term _measurement schemes_ (MS) as measurement generators that input the required number of measurements and output a set of measurements to perform.
**Definition 1** (Measurement scheme).: _A measurement scheme \(S\) is a generator of measurements that outputs a multi-set of measurements on the objective quantum system given the required number of measurements (shot number) and optionally other information._
A measurement scheme may contain optimizable parameters. For example, in the LBCS method [20], the probabilities of generating each Pauli operator on each qubit are the parameters to be optimized. In OGM [16], the parameters are the probabilities of generating each Pauli measurement (group) that is constructed by the method. We note that we formulate these optimizable parameters as included within a measurement scheme rather than provided as inputs.
Here, we emphasize the difference between the term _measurement scheme_ and the term _measurement method_. By _measurement method_, we imply it is an end-to-end protocol for estimating the expectation value of observables, which includes the optimization of the parameters, the measurements generation, the measurements process and the post-processing protocol that synthesizes the final result from measurement outcomes. In this sense, previously proposed methods such as OGM [16] and LBCS [20] can be recognized as measurement methods that contain a measurement scheme. Measurement schemes, on the other hand, are just generators of measurements. Based on this definition, there is a special family of measurement schemes that has a memory-less (Markovian) structure.
**Definition 2** (Simple measurement scheme).: _A simple measurement scheme \(S\) is a measurement scheme that can be implemented by sampling measurements from a fixed distribution for each required measurement._
A simple measurement scheme only inputs the number of measurements needed and the generation of each measurement is independent. Many proposed measurement schemes (e.g. the ones used in OGM and LBCS) can be modelled by simple measurement schemes and their simple structure makes them easy to analyze and serve as a base for generalization.
### Composite measurement schemes
Next, we introduce a way to combine multiple simple measurement schemes to make a _composite measurement scheme_ (CMS). Suppose \(\vec{S}\) is a list of schemes to be combined. A natural way to combine multiple schemes is to assign each scheme a probability; when generating a measurement from the combined scheme, we sample a scheme according to the probability and generate a measurement from the sampled scheme. Specifically, we propose the \(\mathrm{SampleProd}\) operator to represent this combination.
**Definition 3** (Composite measurement scheme).: _Suppose there is a list of simple measurement schemes \(\vec{S}\) and there is a distribution of the schemes in \(\vec{S}\) represented by the probabilities \(\vec{r}\), in which \(r_{k}\) corresponds to \(S_{k}\). We define \(S^{\prime}=\mathrm{SampleProd}(\vec{r},\vec{S})\) to be the composite measurement scheme which generates each measurement by first sampling a sub-scheme \(S_{k^{\prime}}\) from \(\vec{S}\) by the probabilities \(\vec{r}\) and then generating a measurement from \(S_{k^{\prime}}\)._
Figure 1: A diagrammatic representation of the composite measurement scheme defined in Definition 3. The total number of measurements \(M\) is distributed to sub-measurement-schemes (Sub-MS) by the probabilities \(\vec{r}\). The composite scheme gathers the measurements generated by all the sub-schemes as the output.
CMS can be used to form interesting new types of measurement schemes. For example, it is viable to combine shadow methods with grouping methods by making \(\mathrm{SampleProd}\) of their measurement schemes.
More importantly, CMS provides a way to scale up measurement schemes. In this work, we will focus on demonstrating \(\mathrm{SampleProd}\) by applying it to LBCS schemes as an example of CMS. The measurement scheme of LBCS has a distribution \(\beta_{i}\) on the set of Pauli operators (\(X,Y\) and \(Z\)) for each qubit \(i\) in the system. When generating a measurement, for the \(i\)-th qubit, the scheme samples a Pauli operator on the qubit by the distribution \(\beta_{i}\). The Pauli measurement generated will be the tensor product of the Pauli operators sampled for each qubit. Suppose \(\vec{S}\) is a list of LBCS schemes, \(\mathrm{SampleProd}(\vec{r},\vec{S})\) offers a natural generalization of single LBCS schemes, which we call by composite-LBCS (C-LBCS) in this work.
**Protocol 1:** Composite locally-biased classical shadow (C-LBCS)

Suppose the composite scheme is made by \(\mathrm{SampleProd}(\vec{r},\vec{S})\), in which each sub-scheme \(S_{k}\) is a LBCS scheme with adjustable distributions \(\{\beta_{i}^{k}\}\). When \(M\) measurements are required, repeat the following process \(M\) times:

1. Sample a sub-scheme \(S_{k^{\prime}}\) by the probabilities \(\vec{r}\). Initialize a Pauli string \(Q\) with \(Q[i]=I\) for all \(i\).
2. For each qubit \(i\), decide the Pauli operator \(Q[i]\) on it by sampling a Pauli operator \(P\) from \(\beta_{i}^{k^{\prime}}\) and setting \(Q[i]\) to \(P\).
3. Output \(Q\) as the Pauli measurement to carry out.
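A minimal sketch of Protocol 1, assuming the probabilities \(\vec{r}\) and \(\{\beta_{i}^{k}\}\) are supplied as plain numpy arrays, might look as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_c_lbcs(r, betas, M):
    """Generate M Pauli measurements from SampleProd(r, S).

    r     : (n_sub,) probabilities of the LBCS sub-schemes
    betas : (n_sub, n_qubits, 3) per-qubit distributions over X, Y, Z
    Returns a list of Pauli strings such as 'XZY...'.
    """
    paulis = np.array(["X", "Y", "Z"])
    n_sub, n_qubits, _ = betas.shape
    measurements = []
    for _ in range(M):
        k = rng.choice(n_sub, p=r)                                           # step 1: pick a sub-scheme
        Q = [rng.choice(paulis, p=betas[k, i]) for i in range(n_qubits)]     # step 2: fill every qubit
        measurements.append("".join(Q))                                      # step 3: output Q
    return measurements

# toy usage: 2 sub-schemes on 4 qubits
n_sub, n_qubits = 2, 4
betas = rng.random((n_sub, n_qubits, 3))
betas /= betas.sum(axis=-1, keepdims=True)
r = np.array([0.7, 0.3])
print(sample_c_lbcs(r, betas, M=3))
```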
C-LBCS has more parameters and stronger representation power than LBCS. Moreover, the representation power of a C-LBCS scheme can be adjusted by the number of sub-schemes. In the extreme case when there is only one sub-scheme, C-LBCS will degenerate to the original LBCS.
C-LBCS differs from previous methods by providing a _top-down_ approach to improve the measurement efficiency, whereas most of the previous methods use the _bottom-up_ approach. Previous methods usually solve the problem by considering how to improve a certain measurement scheme (the _bottom_) by certain heuristics. For example, in grouping methods, the scheme is usually \(l_{1}\) sampling and commuting relations are leveraged to improve it. In the Derand method [22], one improves the distribution from the classical shadow by greedily optimizing a cost function. These methods usually do not involve the concept of an optimal scheme in their derivation since there is no direct way to involve it in its heuristics. However, in C-LBCS, we can directly consider how to approximate an optimal simple scheme (the _top_) and provide heuristics in a top-down manner.
In the following, we show that C-LBCS is capable of representing any distribution of Pauli measurements that are applied to the whole system.
**Theorem 1** (Universality of C-LBCS).: _Suppose there are \(n_{q}\) qubits in a system. When there are \(3^{n_{q}}\) sub-schemes, a C-LBCS can simulate any distribution of Pauli measurements if every Pauli measurement \(Q_{k}\) in the distribution acts nontrivially on every qubit._
Proof.: Denote the set of Pauli measurements in the distribution and their probability to be simulated by \(\vec{Q}\) and \(\vec{p}\). To simulate this distribution, one just needs to set sub-scheme \(S_{k}\) in the C-LBCS scheme to output \(Q_{k}\) and set \(\vec{r}=\vec{p}\). Notice that there are at most \(3^{n_{q}}\) different Pauli measurements that act on every qubit in the system. Therefore, at most \(3^{n_{q}}\) sub-schemes are needed.
Though C-LBCS might need exponentially many sub-schemes to simulate the optimal simple scheme, in Sec. IV.1, we show that state-of-the-art efficiency can be provided by C-LBCS with as many sub-schemes as groups in the OGM method. This implies the number of sub-schemes we need to achieve good efficiency can be far less than exponentially large. In practice, one can adjust the number of sub-schemes by the available computational resources and the required efficiency.
## III Optimization of CMS
One problem with CMS is how to determine the probability of each sub-scheme and calculate its optimal parameters. In the following, we discuss how to optimize CMS, including both the parameters of the sub-schemes and their probabilities, with the assumption that all the measurements involved are Pauli measurements and the state to be measured is totally unknown. We introduce a cost function which has a simple physical interpretation and can be constructed without the knowledge of the target quantum states. Then, we analyze the structure of the cost function and investigate how to perform gradient descent with it.
### Average one-shot variance
A straightforward way to quantify the performance of a measurement method is using the variance of the estimation it produces, given a fixed number of shots. However, as we mentioned, a post-processing protocol must be specified for measurement schemes producing estimations. In the following, we formalize and adopt a post-processing protocol that is widely adopted [17; 23], in which estimations of terms in the observable are generated and summed up to the estimation of the whole observable. This post-processing protocol is different from the one adopted in Ref. [19; 20], in which only estimations to the whole observable are generated and averaged.
We note that throughout this work, we use \(\widetilde{\cdots}\) to represent random variables. Suppose there is a list of Pauli measurement \(\{Q_{k}\}\) generated from the measurement scheme and \(\{\widetilde{x}_{k}\}\) are the corresponding results in the form of bitstrings. Also, we define the following relation that characterizes what
Pauli measurements can produce estimations for the expectation value of a Pauli string.
**Definition 4** (Qubit-wise covering).: _For a Pauli string \(P\) and a Pauli measurement represented by the Pauli string \(Q\), we say \(Q\) covers \(P\), or equivalently \(P\triangleright Q\), when \(P[i]\) equals to either \(Q[i]\) or identity for all qubit \(i\)._
With the above notations, we present Protocol 2, which is set to be the post-processing protocol for demonstrating all measurement schemes in this work.
**Protocol 2:** Post-processing of measurement outcomes

1. For each term \(P_{j}\) in the observable \(O=\sum_{j}a_{j}P_{j}\), generate an estimation \(\widehat{\langle P_{j}\rangle}\) for it by averaging one-shot estimations \[\widehat{\langle P_{j}\rangle}:=\frac{1}{m_{j}}\sum_{k,P_{j}\triangleright Q_{k}}\mu(P_{j},\widetilde{x}_{k}), \tag{1}\] where \(m_{j}=\sum_{k,P_{j}\triangleright Q_{k}}1\) is the number of available one-shot estimations and \[\mu(P_{j},\widetilde{x}_{k})=\prod_{i,P_{j}[i]\neq I}(-1)^{\widetilde{x}_{k}[i]}, \tag{2}\] is the one-shot estimation generated by each measurement outcome \(\widetilde{x}_{k}\). Here, \(\widetilde{x}_{k}[i]\) is used to denote the \(i\)-th bit of \(\widetilde{x}_{k}\).
2. Output the estimation of \(\langle O\rangle\) by summing up the estimations of each term: \[\widehat{\langle O\rangle}=\sum_{j}a_{j}\widehat{\langle P_{j}\rangle}. \tag{3}\]
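For concreteness, the covering relation of Definition 4 and the estimator of Protocol 2 can be sketched as below; representing Pauli strings and outcomes as plain Python strings and bit arrays is an encoding chosen here purely for illustration.

```python
import numpy as np

def covers(P, Q):
    """True if the measurement Q covers the Pauli string P (Definition 4)."""
    return all(p == "I" or p == q for p, q in zip(P, Q))

def one_shot(P, x):
    """mu(P, x): product of (-1)^x[i] over qubits where P acts nontrivially (Eq. (2))."""
    return np.prod([(-1) ** x[i] for i, p in enumerate(P) if p != "I"])

def estimate(coeffs, terms, measurements, outcomes):
    """Protocol 2: sum of per-term averages of one-shot estimations."""
    total = 0.0
    for a_j, P_j in zip(coeffs, terms):
        shots = [one_shot(P_j, x) for Q, x in zip(measurements, outcomes) if covers(P_j, Q)]
        if shots:                        # only terms with m_j > 0 contribute
            total += a_j * np.mean(shots)
    return total

# toy usage on 2 qubits
coeffs, terms = [0.5, 0.3], ["XI", "ZZ"]
measurements = ["XZ", "ZZ", "XX"]
outcomes = [np.array([0, 1]), np.array([1, 1]), np.array([0, 0])]
print(estimate(coeffs, terms, measurements, outcomes))
```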
For a list of Pauli measurements \(\{Q_{k}\}\), the output \(\widehat{\langle O\rangle}\) of Protocol 2 is random due to the random nature of quantum mechanics. The variance of the estimate of \(\widehat{\langle O\rangle}\) can be calculated by (See Sec. C for derivation)
\[\begin{split}\mathrm{Var}(\widehat{\langle O\rangle})& =\sum_{j,\ell}a_{j}a_{\ell}\operatorname{Cov}(\widehat{\langle P_{j} \rangle},\widehat{\langle P_{\ell}\rangle})\\ &=\sum_{j,\ell}a_{j}a_{\ell}\frac{m_{j\ell}}{m_{jj}m_{\ell\ell}} (\langle P_{j}P_{\ell}\rangle-\langle P_{j}\rangle\langle P_{\ell}\rangle), \end{split} \tag{4}\]
where
\[m_{j\ell}:=\sum_{k,P_{j}\triangleright Q_{k},P_{\ell}\triangleright Q_{k}}1, \tag{5}\]
and \(\langle\cdots\rangle\) here are the true expectation values that depend on the state measured. \(m_{jj}\) equals the \(m_{j}\) defined above, and \(m_{j\ell}\) (\(j\neq\ell\)) is the number of measurements that generate a one-shot estimation for both \(\langle P_{j}\rangle\) and \(\langle P_{\ell}\rangle\).
When the measurement scheme is stochastic, \(m_{j\ell}\) needs to be modelled as a random variable \(\widetilde{m}_{j\ell}\) and the variance then needs to be obtained by applying the law of total variance.
\[\mathrm{Var}(\widehat{\langle O\rangle})=\sum_{j,\ell}a_{j}a_{\ell}\mathbb{E} [\frac{\widetilde{m}_{j\ell}}{\widetilde{m}_{jj}\widetilde{m}_{\ell\ell}}]( \langle P_{j}P_{\ell}\rangle-\langle P_{j}\rangle\langle P_{\ell}\rangle), \tag{6}\]
The above equation involves \(\langle P_{j}\rangle\) and \(\langle P_{j}P_{\ell}\rangle\) which depends on the state that is measured. As we assume the target state is unknown, it cannot be directly used as the cost function. Therefore, we propose to use the variance averaged over the whole Hilbert space with the Haar measure using the following lemma.
**Lemma 1**.: _Suppose there are \(n_{q}\) qubits in the system, then_
\[\int_{\mathrm{Haar}}\Big{(}\langle P_{j}P_{\ell}\rangle-\langle P_{j}\rangle \langle P_{\ell}\rangle\Big{)}\mathrm{d}|\psi\rangle=\delta_{j\ell}\frac{2^{n_ {q}}}{2^{n_{q}}+1}, \tag{7}\]
_where \(\delta_{j\ell}\) is the Kronecker delta function._
A proof of the lemma is given in Sec. B. With Lemma 1, we have
\[\mathbb{E}_{|\psi\rangle\sim\mathrm{Haar}}[\mathrm{Var}[\widehat{\langle \psi|O|\psi\rangle}]]=\sum_{j}a_{j}^{2}\mathbb{E}[\frac{1}{\widetilde{m}_{j}} ]\frac{2^{n_{q}}}{2^{n_{q}}+1}, \tag{8}\]
which is independent of the state to be measured. We remark that we can also show that the variance of the variance \(\mathrm{Var}(\widehat{\langle O\rangle})\) over the whole Hilbert space with the Haar measure is exponentially suppressed (see Section 4 in [24]). Therefore, \(\mathrm{Var}(\widehat{\langle O\rangle})\) tends to take the average value Eq. (8) when sampling \(|\psi\rangle\) according to the Haar measure.
Notice that \(\mathbb{E}[\frac{1}{\widetilde{m}_{j}}]\) in the right-hand side of Eq. (8) is ill-defined when the probability that \(\widetilde{m}_{j}\) equals \(0\) is non-zero, which is usual when \(M\) is finite. To make a well-defined quantifier, we define the ratio between \(\mathbb{E}_{|\psi\rangle\sim\mathrm{Haar}}[\mathrm{Var}[\widehat{\langle\psi|O| \psi\rangle}]]\) and \(1/M\) in the limit that \(M\rightarrow+\infty\):
\[V=\lim_{M\rightarrow+\infty}\sum_{j}a_{j}^{2}\mathbb{E}[\frac{M}{\widetilde{m} _{j}+\epsilon}]\frac{2^{n_{q}}}{2^{n_{q}}+1}, \tag{9}\]
in which the random variable \(\frac{M}{\widetilde{m}_{j}+\epsilon}\) will converge to \(\frac{M}{\widetilde{m}_{j}}\) when \(M\rightarrow+\infty\) if we set \(\epsilon\in o(M)\). In Sec. D, we show that if \(\epsilon\) is a positive number in \(\Theta(M^{\frac{2}{3}})\) and \(\widetilde{m}_{j}\) is a well-behaved random variable whose expectation value and variance are both in \(\Theta(M)\), we have
\[\lim_{M\rightarrow+\infty}\mathbb{E}[\frac{M}{\widetilde{m}_{j}+\epsilon}]=\lim_{M\rightarrow+\infty}\frac{M}{\mathbb{E}[\widetilde{m}_{j}]}=\frac{1}{h_{j}}, \tag{10}\]
where \(h_{j}\) can be interpreted as the average probability for the term \(P_{j}\) being covered by a measurement generated by the scheme. As long as \(h_{j}>0\) for all \(j\), \(V\) is well-defined as expected. This formula holds for all simple measurement schemes, since in that case, \(\widetilde{m}_{j}\) obeys the binomial distribution whose expectation value and variance grow linearly with \(M\) (therefore in \(\Theta(M)\)). Therefore, in the following, we can reasonably define the cost function we use in this work.
**Definition 5** (Average one-shot variance).: _Suppose there is a \(n_{q}\) qubit observable \(O=\sum_{j}a_{j}P_{j}\) and a simple measurement scheme \(S\). The average one-shot variance \(V\) of \(S\) for \(O\) is_
\[V=\sum_{j}\frac{a_{j}^{2}}{h_{j}}\frac{2^{n_{q}}}{2^{n_{q}}+1}, \tag{11}\]
_where \(h_{j}=\lim_{M\rightarrow\infty}\mathbb{E}[\widetilde{m}_{j}]/M\)._
The physical meaning of \(V\) is the scaling factor of the variance of \(\widehat{\langle O\rangle}\) averaged over the Haar measure in the limit as the number of measurements tends to infinity. A scheme can be optimized with this cost function as long as \(\vec{h}\) can be efficiently estimated given the parameters of the scheme. Notably, the structure of \(\mathrm{SampleProd}\) allows a further decomposition of \(\vec{h}\) by the contribution of each sub-scheme. Define \(\widetilde{m}_{j}^{k}(M_{k})\) to be the number of measurements that cover \(P_{j}\) generated by the sub-scheme \(S_{k}\) with \(M_{k}\) measurements assigned to it. Denote \(h_{j}^{k}=\lim_{M_{k}\rightarrow+\infty}\mathbb{E}[\widetilde{m}_{j}^{k}(M_{k})]/M_{k}\). \(h_{j}\) can be decomposed as
\[h_{j} =\lim_{M\rightarrow+\infty}\sum_{k}\mathbb{E}[\widetilde{m}_{j }^{k}(M_{k})]/M \tag{12}\] \[=\sum_{k}\lim_{M\rightarrow+\infty}(\mathbb{E}[\widetilde{m}_{j }^{k}(M_{k})]/M_{k})(M_{k}/M)\] (13) \[=\sum_{k}r_{k}h_{j}^{k}, \tag{14}\]
where we used that \(\lim_{M\rightarrow+\infty}M_{k}/M=r_{k}\). In this way, \(V\) can be rewritten as
\[V=\frac{2^{n_{q}}}{2^{n_{q}}+1}\sum_{j}\frac{a_{j}^{2}}{\sum_{k}r_{k}h_{j}^{k}}, \tag{15}\]
which will be adopted in all the following sections. In the case of C-LBCS, \(h_{j}^{k}\) is simply the probability that \(P_{j}\) is covered by the sampled measurement when the \(k\)-th LBCS scheme has been sampled. We put the detail of its calculation in Sec. A.
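For C-LBCS, \(h_{j}^{k}\) factorizes over qubits into a product of the per-qubit probabilities \(\beta_{i}^{k}\), so the cost of Eq. (15) can be evaluated as in the following sketch (the data layout and names are assumptions made for illustration).

```python
import numpy as np

PAULI_INDEX = {"X": 0, "Y": 1, "Z": 2}

def coverage_probs(betas, terms):
    """h_j^k: probability that sub-scheme k covers term P_j.

    betas : (n_sub, n_qubits, 3) per-qubit distributions over X, Y, Z
    terms : list of Pauli strings, e.g. 'XIZY'
    Returns an array of shape (n_terms, n_sub).
    """
    n_sub = betas.shape[0]
    h = np.ones((len(terms), n_sub))
    for j, P in enumerate(terms):
        for i, p in enumerate(P):
            if p != "I":                        # identity qubits are always covered
                h[j] *= betas[:, i, PAULI_INDEX[p]]
    return h

def average_one_shot_variance(r, betas, coeffs, terms, n_qubits):
    """Cost function V of Eq. (15)."""
    h = coverage_probs(betas, terms)            # (n_terms, n_sub)
    h_j = h @ r                                 # sum_k r_k h_j^k
    prefactor = 2 ** n_qubits / (2 ** n_qubits + 1)
    return prefactor * np.sum(np.asarray(coeffs) ** 2 / h_j)

# toy usage: 2 sub-schemes, 3 qubits, 2 Hamiltonian terms
rng = np.random.default_rng(1)
betas = rng.random((2, 3, 3)); betas /= betas.sum(-1, keepdims=True)
r = np.array([0.6, 0.4])
print(average_one_shot_variance(r, betas, coeffs=[0.5, 1.2], terms=["XIZ", "ZZY"], n_qubits=3))
```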
### Optimization strategy
After setting the cost function, we discuss how to efficiently optimize a CMS made by \(\mathrm{SampleProd}\). In \(V\), there are two sets of parameters to be optimized, the probabilities \(\vec{r}\) and the parameters \(\{\theta_{i}^{k}\}\) for each sub-scheme. Here, \(\theta_{i}^{k}\) denotes the \(i\)-th parameter for the sub-scheme \(S_{k}\).
#### ii.2.1 Gradient rescale
A straightforward way to optimize \(\vec{r}\) and \(\{\theta_{i}^{k}\}\) is to apply a gradient descent directly. To this end, we calculate the gradient on the parameter \(\theta_{i}^{k}\) as
\[\frac{\partial V}{\partial\theta_{i}^{k}}=\sum_{j}\frac{\partial V}{\partial h _{j}}\frac{\partial\sum_{k}r_{k}h_{j}^{k}}{\partial\theta_{i}^{k}}=\sum_{j} \frac{\partial V}{\partial h_{j}}r_{k}\frac{\partial h_{j}^{k}}{\partial \theta_{i}^{k}}. \tag{16}\]
The gradient on \(\theta_{i}^{k}\) can be very small when \(r_{k}\) is close to zero, which may happen when parameters in \(S_{k}\) are poorly initialized and \(r_{k}\) is optimized to nearly zero for avoiding \(S_{k}\) being sampled. In this way, \(S_{k}\) will be frozen out from the composite scheme as its parameters stop updating and its probability to be sampled is nearly zero. This situation should be avoided as we want to utilize all sub-schemes. Thus, we adopt a strategy to rescale the gradient by \(1/r_{k}\) when optimizing the parameters of \(S_{k}\). Specifically, we use
\[\frac{1}{r_{k}}\frac{\partial V}{\partial\theta_{i}^{k}}=\sum_{j}\frac{ \partial V}{\partial h_{j}}\frac{\partial h_{j}^{k}}{\partial\theta_{i}^{k}}. \tag{17}\]
instead of Eq. (16) in the gradient descent. The rescaled gradient ensures that each sub-scheme continues to be optimized even if their \(r_{k}\) is small.
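With an automatic-differentiation framework, the rescaling simply divides each sub-scheme's gradient by its current \(r_{k}\) before the update. A minimal PyTorch sketch is given below; the softmax parameterization, the random toy observable, and the plain gradient step are illustrative simplifications (the paper parameterizes probabilities with SoftPlus and normalization).

```python
import torch

torch.manual_seed(0)
n_sub, n_qubits, n_terms = 4, 6, 50

# raw parameters; probabilities are obtained here via softmax for brevity
theta = torch.randn(n_sub, n_qubits, 3, requires_grad=True)   # sub-scheme parameters {theta_i^k}
rho = torch.zeros(n_sub, requires_grad=True)                   # raw parameters behind r

# a random toy observable: which Pauli acts on which qubit for each term
coeffs = torch.randn(n_terms)
pauli = torch.randint(0, 3, (n_terms, n_qubits))               # 0/1/2 = X/Y/Z
support = torch.rand(n_terms, n_qubits) < 0.5                  # qubits where each term is nontrivial

def average_one_shot_variance():
    r = torch.softmax(rho, dim=0)
    betas = torch.softmax(theta, dim=-1)                       # per-qubit distributions beta_i^k
    idx = torch.arange(n_qubits).expand(n_terms, n_qubits)
    probs = betas[:, idx, pauli]                                # beta_i^k(P_j[i]), shape (n_sub, n_terms, n_qubits)
    factors = torch.where(support, probs, torch.ones_like(probs))
    h = factors.prod(dim=-1)                                    # h_j^k, shape (n_sub, n_terms)
    h_j = (r.unsqueeze(1) * h).sum(dim=0)                       # sum_k r_k h_j^k
    return (coeffs ** 2 / h_j).sum() * 2 ** n_qubits / (2 ** n_qubits + 1)

V = average_one_shot_variance()
V.backward()
with torch.no_grad():
    # Eq. (17): divide each sub-scheme's gradient by its r_k so that rarely
    # sampled sub-schemes keep being optimized
    theta.grad /= torch.softmax(rho, dim=0).view(-1, 1, 1)
    # an optimizer step on theta and rho would follow here
```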
#### ii.2.2 Two time-scale update rule
As the second technique, we introduce the two time-scale update rule (TTUR) [21], in which the parameters of the model are divided into two parts and optimized with different learning rates. Before going into detail, we clarify the feature of the cost function Eq. (15) in terms of convexity. As the term \(r_{k}h_{j}^{k}\) is included, the cost function Eq. (15) is non-convex, and the optimization may be trapped in a local minimum. However, we show that the cost function \(V\) is convex as a function of \(\vec{r}\) (with \(\{\theta_{i}^{k}\}\) and therefore \(\{h_{j}^{k}\}\) fixed) or \(\{h_{j}^{k}\}\) (with \(\vec{r}\) fixed).
Let us first prove the convexity with respect to \(\{h_{j}^{k}\}\) when \(\vec{r}\) is fixed. The same proof can be used to show the convexity of \(V\) when \(\{h_{j}^{k}\}\) is fixed. Recall that a summation of convex functions weighted by positive numbers is also a convex function. Since \(V\) is a summation of the terms \(\{1/(\sum_{k}r_{k}h_{j}^{k})\}\) weighted by positive numbers, we just need to prove the convexity of each term. Then, notice that \(1/(\sum_{k}r_{k}h_{j}^{k})\) is just a composition of \(f(x)=1/x\) and \(g_{j}(\{h_{j}^{k}\})=\sum_{k}r_{k}h_{j}^{k}\). Because \(g_{j}\) is concave and positive on its domain, combined with the fact that \(f(x)\) is convex and non-increasing when \(x>0\), we know that \(f(g_{j}(\{h_{j}^{k}\}))=1/(\sum_{k}r_{k}h_{j}^{k})\) is convex. Since \(\{h_{j}^{k}\}\) is defined on a convex set (\(h_{j}^{k}\in[0,1]\)), combined with the above arguments, we see \(V\) is a convex function of \(\{h_{j}^{k}\}\) when \(\vec{r}\) is fixed.
The structure that \(V\) is convex when \(\vec{r}\) or \(\{h_{j}^{k}\}\) is fixed implies the optimization will be easier when only \(\vec{r}\) or \(\{h_{j}^{k}\}\) (or \(\{\theta_{i}^{k}\}\)) is optimized. This situation is similar to the training of a generative adversarial network (GAN) [25], in which the training can also be divided into two parts (the generator and the discriminator), whose optimization is easier with the parameters of the other part fixed. Therefore, we propose to adopt the same strategy in the optimization of the probabilities \(\vec{r}\) and parameters in the sub-schemes and set different learning rates for \(\vec{r}\) and \(\{\theta_{i}^{k}\}\) when optimizing a CMS. Throughout this work, we will set the learning rate of \(\vec{r}\) to be 10 times smaller than the learning rate of \(\{\theta_{i}^{k}\}\).
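In practice, TTUR can be realized with optimizer parameter groups; the sketch below uses the 10x learning-rate ratio quoted above, while the choice of Adam and the tensor shapes are assumptions made for illustration.

```python
import torch

# theta: parameters of all sub-schemes, rho: raw parameters behind the probabilities r
theta = torch.randn(4, 6, 3, requires_grad=True)
rho = torch.zeros(4, requires_grad=True)

# two time-scale update rule: r is updated 10 times more slowly than the sub-schemes
optimizer = torch.optim.Adam([
    {"params": [theta], "lr": 5e-3},   # sub-scheme parameters {theta_i^k}
    {"params": [rho],   "lr": 5e-4},   # probabilities r, via their raw parameters
])
```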
#### ii.2.3 Stochastic gradient descent
Stochastic gradient descent (SGD) is a widely adopted strategy in machine learning, in which the gradient provided to the optimizer is generated from only part of the whole dataset. SGD provides a general approach for training models when the dataset is large. In our case, the models (measurement schemes) are trained with the terms from the observable. An observable might contain a large number of terms in near-term quantum algorithms. For electronic structure problems (the main target of VQE), the Hamiltonian contains \(\mathcal{O}(n_{O}^{4})\) terms [26], with \(n_{O}\) being the number of spin-orbitals. If the Hamiltonian is not split, it will be hard to fit the computation in the memory of a GPU.
Therefore, we propose to use SGD in the training of measurement schemes. Specifically, in each epoch, we randomly split the terms of observable \(O\) into batches \(\{O_{j}\}\), with each of them containing a nearly fixed number of terms and \(\sum_{j}O_{j}=O\). Then, we iterate over \(\{O_{j}\}\) and calculate the gradient in each step with one batch. After iterating all the batches, we split \(O\) with a different seed for the next epoch. We show in Sec. IV.2 that C-LBCS can perform well even when the batch size is very small.
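The epoch structure described above can be sketched as follows; the `gradient_step` callback is a placeholder standing in for the actual parameter update.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_with_sgd(coeffs, terms, batch_size, n_epochs, gradient_step):
    """Iterate over random splits of the observable's terms, one batch per step.

    gradient_step(batch_coeffs, batch_terms) is assumed to update the scheme
    parameters using only the given batch of Hamiltonian terms.
    """
    n_terms = len(terms)
    for _ in range(n_epochs):
        order = rng.permutation(n_terms)                 # re-split O with a new seed each epoch
        for start in range(0, n_terms, batch_size):
            idx = order[start:start + batch_size]
            gradient_step([coeffs[i] for i in idx], [terms[i] for i in idx])

# toy usage
coeffs = list(rng.normal(size=10))
terms = [f"P{i}" for i in range(10)]                     # placeholder term labels
train_with_sgd(coeffs, terms, batch_size=4, n_epochs=2,
               gradient_step=lambda c, t: print(len(t), "terms in this batch"))
```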
## IV Numerical examples for C-LBCS
In the following, we numerically demonstrate the performance of C-LBCS with the average one-shot variance as the quantifier. In C-LBCS, the number of sub-schemes affects the performance of the composite scheme. We show that a C-LBCS can outperform previous state-of-the-art methods by using a proper number of sub-schemes. We also analyze how different training strategies can affect the convergence of optimization.
Throughout this section, by default, both the Rescale and TTUR strategies are adopted in the training. All probabilities (\(\vec{r}\) and \(\{\beta_{i}^{k}\}\)) are calculated by passing parameters into \(\mathrm{SoftPlus}\) and normalization layers (See Sec. E). The batch size for stochastic gradient descent is set to \(500\) Pauli strings and the learning rate for all optimizations is \(5\times 10^{-3}\) for sub-scheme parameters and \(5\times 10^{-4}\) for \(\vec{r}\). Optimizations terminate when the cost function fails to decrease more than \(0.1\%\) in the past \(1000\) steps. The initial parameters for the LBCS schemes are set so that the C-LBCS scheme resembles the \(l_{1}\) sampling scheme. For a C-LBCS scheme with \(n_{S}\) sub-schemes, we select the \(n_{S}\) terms with the largest weights (\(|a_{j}|\)) in \(O\) and set the LBCS sub-schemes to generate them initially. The probabilities \(\vec{r}\) of each sub-scheme are set to be proportional to the weight of those terms correspondingly.
The molecular Hamiltonians are generated by mapping the Fermionic Hamiltonian for the electronic structure problem with Jordan-Wigner (JW) [27] and Bravyi-Kitaev (BK) [28] transformations. All molecules are in their equilibrium geometry and under the STO-3G basis set, except for hydrogen chains, in which the hydrogens are spaced by \(2\) Bohr radius and the STO-6G basis set is used. The equilibrium geometries are retrieved from [29].
### Performance
We test our method by comparing its performance on molecular Hamiltonians against previous methods, including the derandomized classical shadow (Derand) [22], OGM and ShadowGrouping (SG) [30]. The number of sub-schemes of C-LBCS is set to be the same as the number of groups generated in OGM. Since the average one-shot variance is defined in the limit of infinitely many measurements, it is hard to estimate it in the exact same way for Derand and SG because the \(\{h_{j}\}\) in these methods cannot be analytically calculated. Here, for a Hamiltonian with \(n_{H}\) terms, we choose to generate \(3n_{H}\) measurements for each method and calculate the average one-shot variance by the measurement scheme that uniformly samples the \(3n_{H}\) measurements. We note that \(3n_{H}\) is a number much larger than that used in the original works of Derand and SG (1000 shots). Some of the exact numbers of \(n_{H}\) are shown in Table 3.
The results of the experiments are shown in Table 1. We see that in all cases except for the \(\mathrm{H}_{2}\mathrm{O}\) molecule with the BK transformation, C-LBCS outperforms the previous methods, which confirms the validity of composite measurement schemes. Our results agree with those reported in the work on SG, although that work does not use the average one-shot variance as its metric.
We also study how the average one-shot variance of C-LBCS changes with the number of sub-schemes. The results are shown in Fig. 2, with SG as a reference of variance (red line) and OGM as a reference of the number of sub-schemes (yellow line). We found that C-LBCS constantly outperforms SG with a number of sub-schemes comparable to the number of OGM groups. Fig. 2 also implies that better measurement efficiency can be obtained by increasing the number of sub-schemes. This property of C-LBCS distinguishes it from many previous methods, in which no hyperparameters can be used to trade measurement efficiency with computational resources.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \hline Molecule & Enc. & Derand & OGM & SG & C-LBCS \\ \hline \multirow{2}{*}{\(\mathrm{LiH}(12)\)} & JW & 8.26 & 7.62 & 6.85 & **6.53** \\ & BK & 10.68 & 7.85 & 7.16 & **6.77** \\ \hline \multirow{2}{*}{\(\mathrm{H}_{6}(12)\)} & JW & 32.64 & 30.47 & 27.35 & **24.93** \\ & BK & 48.25 & 35.91 & 35.21 & **30.68** \\ \hline \multirow{2}{*}{\(\mathrm{H}_{2}\mathrm{O}(14)\)} & JW & 712 & 479 & 432 & **430** \\ & BK & 891 & 518 & **454** & 455 \\ \hline \multirow{2}{*}{\(\mathrm{NH}_{3}(16)\)} & JW & 833 & 357 & 289 & **287** \\ & BK & 1111 & 399 & 312 & **309** \\ \hline \multirow{2}{*}{\(\mathrm{N}_{2}(20)\)} & JW & 1571 & 1115 & 827 & **811** \\ & BK & 2002 & 1045 & 852 & **841** \\ \hline \multirow{2}{*}{\(\mathrm{C}_{2}\mathrm{H}_{2}(24)\)} & JW & 1708 & 880 & 638 & **580** \\ & BK & 2424 & 879 & 645 & **614** \\ \hline \multirow{2}{*}{\(\mathrm{C}_{2}\mathrm{H}_{4}(28)\)} & JW & 3692 & 1481 & 1004 & **928** \\ & BK & 4908 & 1641 & 1040 & **1018** \\ \hline \multirow{2}{*}{\(\mathrm{CO}_{2}(30)\)} & JW & 11655 & 3500 & 2442 & **2335** \\ & BK & 16349 & 4169 & 2754 & **2677** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table for the average one-shot variance with various measurement methods. C-LBCS with a moderate number of sub-schemes outperforms the other methods in nearly all the systems.
### Training
We numerically study how different strategies for parameter optimization affect the convergence of training. We perform ablation tests in which _Rescale_ or _TTUR_ is turned off during training. The other training settings are kept the same as those used for Table 1. All Hamiltonians are encoded with the JW transformation and the results are shown in Table 2. We find that keeping both strategies provides the best result in most cases.
We then study how different batch sizes affect the training of C-LBCS. We set the batch sizes \(n_{b}\) to be \(1/128\), \(1/64\) and \(1/32\) of the number of terms in the Hamiltonian \(n_{H}\). To make a fair comparison, we adjust the stop criteria proportionally, so that the optimization terminates when the cost function fails to decrease more than \(0.1\%\) in the past \(1000\times\frac{500}{n_{b}}\) steps. The Hamiltonians are encoded with the BK transformation. We show in Table 3 that the cost function converges to similar values, which implies the training of C-LBCS is not sensitive to batch size for molecular Hamiltonians. Our result indicates that C-LBCS can be applied on much larger molecular Hamiltonians and trained efficiently on GPUs.
## V Discussion & Outlook
In this work, we proposed a new approach, the composite measurement scheme, to combine multiple measurement schemes and explored a new and scalable path for improving the efficiency of observable estimation. As an example of the CMS, based on the SampleProd operator, we found an easily trainable composite measurement scheme, C-LBCS, and numerically showed how C-LBCS can outperform previous state-of-the-art methods. We studied the trainability of C-LBCS, proposed to rescale the gradients for optimization and revealed a two-level structure in the cost function. We also found that C-LBCS can be trained by stochastic
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Molecule & None & Rescale & TTUR & Both \\ \hline \(\mathrm{N_{2}}\)(20) & 825 & **807** & 815 & 811 \\ \hline \(\mathrm{C_{2}H_{2}}\)(24) & 606 & 592 & 586 & **580** \\ \hline \(\mathrm{C_{2}H_{4}}\)(28) & 977 & 945 & 935 & **928** \\ \hline \(\mathrm{CO_{2}}\)(30) & 2379 & **2335** & 2346 & **2335** \\ \hline \(\mathrm{H_{6}}\)(12) & 25.67 & 25.35 & 25.59 & **24.93** \\ \hline \(\mathrm{H_{8}}\)(16) & 91.52 & 88.76 & 83.71 & **83.54** \\ \hline \(\mathrm{H_{10}}\)(20) & 251 & 229 & 222 & **219** \\ \hline \hline \end{tabular}
\end{table}
Table 2: The average one-shot variance by C-LBCS with different strategies in optimization. We find that applying the _Rescale_ and _TTUR_ strategies can significantly improve the result of training. In most cases, applying both of these strategies provides the best result and in a few cases applying _Rescale_ alone gives the best result.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline Molecule & \(n_{H}\) & \(1/128\) & \(1/64\) & \(1/32\) \\ \hline \(\mathrm{H_{2}O}\)(14) & 1085 & **452.73** & 454.39 & 454.85 \\ \hline \(\mathrm{NH_{3}}\)(16) & 2936 & **307.15** & 307.44 & 307.56 \\ \hline \(\mathrm{N_{2}}\)(20) & 2238 & **837.86** & 838.31 & 839.16 \\ \hline \(\mathrm{C_{2}H_{2}}\)(24) & 5184 & 611.07 & **610.78** & 611.42 \\ \hline \(\mathrm{C_{2}H_{4}}\)(28) & 8918 & 1014.12 & **1013.51** & 1014.39 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The average one-shot variance by C-LBCS with batch sizes to be \(1/128\), \(1/64\) and \(1/32\) of the number of terms in the Hamiltonian (\(n_{H}\)). It can be seen that the batch size does not significantly affect the training result of C-LBCS.
Figure 2: The performance of C-LBCS with different numbers of sub-schemes, quantified by the average one-shot variance. All the Hamiltonians are encoded by JW transformation. The vertical yellow line represents the number of groups produced by the OGM method. The horizontal red line represents the average one-shot variance of the ShadowGrouping method.
gradient descent with small batch sizes, so the model will be trainable with much larger observables.
We used the average one-shot variance as our cost function and performance quantifier, which does not involve any information about the state to be measured. This is different from many previous works [16, 20, 30] which use the variance concerning a certain state (e.g. the ground state of the Hamiltonian) as the performance quantifier. We chose not to follow this approach because we do not expect the ground state (or its approximation) to be always available, especially when 40 or more qubits are involved. For the same reason, we also adopted a cost function that does not involve the information from a pre-calculated state. However, in practice, it might be advantageous to use the information of states collected from measurement results to improve the estimation efficiency [23]. We leave improving C-LBCS this way as a future question to be investigated.
To generalize this work further: as we only demonstrated one type of composite measurement scheme, it will be interesting to see whether better measurement schemes can be produced by applying \(\mathrm{SampleProd}\) to other types of sub-schemes with more complex structure [31, 32]. Also, as we assumed that only Pauli measurements are allowed owing to their ease of implementation, it will be interesting to see how our framework can be generalized to more types of measurements, such as Clifford measurements.
###### Acknowledgements.
The authors thank Anders G. Froseth for his generous support. K.N. acknowledges the support of Grant-in-Aid for JSPS Research Fellow 22J01501. A.A.-G. also acknowledges the generous support of Natural Resources Canada and the Canada 150 Research Chairs program.
|
2302.11384 | The internal clock of many-body delocalization | After a decade of many claims to the opposite, there now is a growing
consensus that generic disordered quantum wires, e.g. the XXZ-Heisenberg chain,
do not exhibit many-body localization (MBL) - at least not in a strict sense
within a reasonable window of disorder values $W$. Specifically, computational
studies of short wires exhibit an extremely slow but unmistakable flow of
physical observables with increasing time and system size (``creep") that is
consistently directed away from (strict) localization. Our work sheds fresh
light on delocalization physics: Strong sample-to-sample fluctuations indicate
the absence of a generic time scale, i.e. of a naive ``clock rate"; however,
the concept of an ``internal clock" survives, at least in an ensemble sense.
Specifically, we investigate the relaxation of the imbalance $\mathcal{I}(t)$
and its temporal fluctuations $\mathcal{F}(t)$, the entanglement and Renyi
entropies, $\mathcal{S}_{\mathrm{e}}(t)$ and $ \mathcal{S}_2(t)$, in a 1D
system of interacting disordered fermions. We observe that adopting
$\mathcal{S}_{\mathrm{e}}(t), \mathcal{S}_2(t)$ as a measure for the internal
time per sample reduces the sample-to-sample fluctuations but does not
eliminate them. However, a (nearly) perfect collapse of the average
$\overline{\mathcal{I}}(t)$ and $\overline{\mathcal{F}}(t)$ for different $W$
is obtained when plotted against $\overline{\mathcal{S}}_{\mathrm{e}}(t)$ or
$\overline{\mathcal{S}}_2(t)$, indicating that the average entropy
appropriately models the ensemble-averaged internal clock. We take the tendency
for faster-than-logarithmic growth of $\overline{\mathcal{S}}_{\mathrm{e}}(t)$
together with smooth dependency on $W$ of all our observables within the entire
simulation window as support for the cross-over scenario, discouraging an MBL
transition within the traditional parametric window of computational studies. | Ferdinand Evers, Ishita Modak, Soumya Bera | 2023-02-22T13:57:44Z | http://arxiv.org/abs/2302.11384v2 | # The internal clock of many-body (de-)localization
###### Abstract
After a decade of many claims to the opposite, there now is a growing consensus that generic disordered quantum wires, e.g. the XXZ-Heisenberg chain, do not exhibit many-body localization (MBL) - at least not in a strict sense within a reasonable window of disorder values \(W\). Specifically, computational studies of short wires exhibit an extremely slow but unmistakable flow of physical observables with increasing time and system size ("creep") that is consistently directed away from (strict) localization. Our work sheds fresh light on delocalization physics: Strong sample-to-sample fluctuations indicate the absence of a generic time scale, i.e. of a naive "clock rate"; however, the concept of an "internal clock" survives, at least in an ensemble sense. Specifically, we investigate the relaxation of the imbalance \(\mathcal{I}(t)\) and its temporal fluctuations \(\mathcal{F}(t)\), the entanglement and Renyi entropies, \(\mathcal{S}_{\mathrm{e}}(t)\) and \(\mathcal{S}_{2}(t)\), in a 1D system of interacting disordered fermions. We observe that adopting \(\mathcal{S}_{\mathrm{e}}(t),\mathcal{S}_{2}(t)\) as a measure for the internal time per sample reduces the sample-to-sample fluctuations but does not eliminate them. However, a (nearly) perfect collapse of the average \(\overline{\mathcal{I}}(t)\) and \(\overline{\mathcal{F}}(t)\) for different \(W\) is obtained when plotted against \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) or \(\overline{\mathcal{S}}_{2}(t)\), indicating that the average entropy appropriately models the ensemble-averaged internal clock. We take the tendency for faster-than-logarithmic growth of \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) together with smooth dependency on \(W\) of all our observables within the entire simulation window as support for the cross-over scenario, discouraging an MBL transition within the traditional parametric window of computational studies.
## I Introduction
Many-body localization (MBL) is a spatio-temporal phenomenon that is believed to exist in interacting fermion systems at strong enough disorder [1; 2; 3; 4; 5; 6; 7; 8]. It manifests itself as a strongly reduced (preMBL) or even fully inhibited tendency (proper MBL) towards thermalization at large enough disorder strength. The requirement of strong disorder implies that all remnants of relaxation dynamics in the preMBL-regime are necessarily very slow, reminiscent of the familiar behavior of glasses. "Slowness" originates from the fact that long-time behavior typically is dominated by collective reorganization processes in a highly disordered many-body-energy landscape. The salient collective events are associated with a broad distribution of time scales; there is no single characteristic rate that would lend itself as a measure of time.
We briefly elaborate on the issue of time- and length scales by formulating a scenario: Consider an ensemble of disordered quantum wires of a finite length \(L\). At strong enough disorder, typical many-body states can be thought of essentially as single (dressed) Slater determinants constructed from localized single-particle wavefunctions; this is the gist of the celebrated concept of local integrals of motion (LIOM) [9; 10; 11; 12; 13]. While many, perhaps even most, wires may follow the LIOM paradigm, a fraction \(\mathfrak{q}(L)\) of the wires will not; they form a thermalizing sub-ensemble that we refer to as "ergodic bubbles" [14; 15]. A new ensemble of wires with the length \(2L\) can be formed by combining samples, giving rise to a new fraction \(\mathfrak{q}(2L)\). The evolution of \(\mathfrak{q}(2L)\) with system size has been investigated in RG-studies of toy models [16; 17; 18; 19; 20; 21].
We emphasize two important implications of this rough picture for observations, numerical and experimental, in thermalizing phases when growing the system size at \(\mathfrak{q}(L)\ll 1\): (i) strong finite-size effects occur because thermalization sets in only once the system is long enough to include an ergodic bubble; (ii) the time scale for relaxation can be exponentially long because it reflects how long it takes a bubble to thermalize a large, almost localized sample region. We refer to this slow relaxation process as 'creep'. Since creep emerges from destabilizing an interacting but localized (i.e. LIOM-dominated) sample region via remote thermal bubbles, creep indicates the dominant mechanism for many-body de-localization and is prevalent in the preMBL-regime.
Signals of creep are the exponentially long observation times required in numerical and experimental simulations, as well as an appreciable dependency of transport-sensitive observables on the simulation volume. We interpret the gradual increase of the estimated disorder strength beyond which the localization transition was expected to occur as a signature of creep: while early works favored a value \(W\)\(\approx\)3.8 [22; 23], later authors reported significant finite size effects and favored larger values \(W\)\(\approx 4.5-5.5\)[24; 25; 26; 27].
Creep manifests, in particular, in the spatio-temporal behavior of correlation functions; in the preMBL-regime, they exhibit a spatial decay much slower than exponential with tails slowly increasing with time; for a discussion of how creep appears in various observables see Weiner et al. [28]. It is only recently that creep in its diverse manifestations and the resulting implications for the (pre)MBL phenomenology in finite size systems have been appreciated: A weak tendency towards equilibration has been analyzed in the relaxation dynamics even at large disorder in the local charge density, the sublattice imbalance, and the density matrix [28; 29; 30; 31; 32; 33]; the evolution of the entanglement and the number entropy has been investigated [34; 35; 36; 37; 38; 39; 40; 41]; attempts have been made to interpret the shift of intersection points ("critical points") and the flow of the spectral function with increasing system sizes [42; 43; 44; 45; 46; 47; 26]. The status as we see it is summarized in the phase diagram Fig. 1.
An implication of creep is the absence of characteristic time scales; one rather expects broad distributions of rates characterizing dynamical processes. This poses the question of at which time it is meaningful to compare the dynamical status of two samples that nominally belong to the same ensemble but differ by the specific disorder realization. We here pursue the implications of the following hypothesis: The time evolution in both samples can at least partially be synchronized when introducing an 'internal sample time'. We here propose to model the internal time by a form of entropy. While the model has its limitations when adopted on a per-sample level, it works incredibly well for the disorder-averaged 'internal ensemble time'. To illustrate the power of the concept, we plot in Fig. 2 the average charge imbalance \(\overline{\mathfrak{J}}(t)\) over the entanglement entropy \(\overline{\mathbb{S}}_{\rm e}(t)\): a data collapse is observed in a wide time window for a range of disorder values situated in the (transient) sub-diffusive regime (Fig. 1).
On a more intuitive level, two considerations motivate us to consider an entropic entity \(\mathcal{S}_{L}(t)\) as a model of the internal clock of a system with spatial extension \(L\): (i) The time evolution of the entanglement entropy is, in a sense, the fastest and most stable relaxation process: It is relatively fast because, unlike the redistribution of, e.g., energy and particle number, it is not restricted by a local conservation law: even in the proper MBL phase, one expects a logarithmic-in-time growth of the entanglement entropy, when energy and particle densities have long ceased to relax. The saturation dynamics of physical observables will thus be represented by a plateau when plotted against \(\mathcal{S}_{\rm e}\) in the thermodynamic limit. (ii) On a more formal level, the rate of entropy production can be considered as a generalized driving force for the relaxation of (ensemble-averaged) physical observables [49]. Subjecting the synchronization concept to a further test on a second observable, we apply it to the time fluctuations of the local density, \(\overline{\mathcal{F}}(t)\). Again an excellent scaling collapse is observed, demonstrating the power of the concept.
Further, we find that \(\overline{\mathcal{F}}\propto\overline{\mathcal{S}}_{\rm e}^{-\rho}\), where the exponent depends on the disorder strength, \(\rho(W)\); a similar relationship also holds for typical values of \(\mathcal{F}\). The corresponding exponent functions, \(\rho_{\rm ave}(W)\) and \(\rho_{\rm typ}(W)\), reveal opposing trends with increasing disorder strength; in particular, they intersect at a disorder value \(W_{\mathcal{F}}\). Since \(W_{\mathcal{F}}\) is sufficiently close to \(W_{c}\), we take this as fresh evidence in favor of the existence of (at least) two subphases in the thermalizing regime indicated in the phase diagram Fig 1 ('Subphase' is meant here in a weak sense, indicating that the thermalizing phase has regions with qualitatively different relaxation behavior that are connected via a cross-over.).
Figure 2: Evolution of the ensemble-averaged imbalance after quenching a product (Néel) state in a disordered wire of length \(L\)=\(16,20,24,26\) at four moderate disorder strengths \(W\)=\(1.5,2.0,2.5,3.0\). Time is measured in terms of the average entanglement entropy in units of the Page value \(\mathcal{S}_{\rm Page}:=\ln 2\,(L/2)-1/2\).
Figure 1: Schematic phase diagram of the disordered t-V (isotropic Heisenberg) model (1). Shaded region: thermalizing regime exhibiting (transient) power-laws, e.g., for the decay of imbalance (or time-fluctuations) and growth of entanglement and second Renyi entropy. \(W_{c}\)\(\gtrsim 4.5\): crossover from accelerating to decelerating dynamics as indicated by the temporal increase/decrease of an effective diffusion exponent \(\beta(t)\)[28; 29]. Beyond \(W_{\rm freeze}\)\(\approx\)10: fractal dimension of the dynamically active part of the many-body Hilbert space vanishes (freezing) [48]. This region is largely unexplored in finite size and time numerical studies and hosts the proper MBL phase, if it exists.
## II Theoretical setting
### Model and Method
We consider spinless fermions within the \(t{-}V\) model
\[\mathcal{H}=-\frac{1}{2}\sum_{i=1}^{L-1}\left(c_{i}^{\dagger}c_{i+1}+\text{h.c.}\right)+\sum_{i=1}^{L}\epsilon_{i}(n_{i}-1/2)+V\sum_{i=1}^{L-1}(n_{i}-1/2)(n_{i+1}-1/2), \tag{1}\]
where \(n_{i}\coloneqq c_{i}^{\dagger}c_{i}\), \(i\) denotes the site index, \(L\) the system size and \(V\) the nearest neighbor interaction strength; \(V{=}1\) throughout this work. The on-site potentials, \(\epsilon_{i}\), are uncorrelated and distributed within \([-W,W]\). All calculations are done at half-filling, \(N/L{=}1/2\). We study quench dynamics starting from the (product state) charge density wave state \(|\Psi\rangle=|1010\ldots\rangle\). Time propagation employs the standard Chebyshev expansion of the time evolution operator, which reduces the exponentiation of \(\mathcal{H}\) to several sparse matrix-vector multiplications [29, 50]. A comparison of the Chebyshev method with exact time evolution is presented in Appendix C.
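For concreteness, the following minimal Python sketch (our illustration, not the authors' production code) builds the half-filled \(t{-}V\) Hamiltonian of Eq. (1) in the occupation-number basis and propagates the Néel state with a Chebyshev expansion of the propagator; the system size, random seed, and truncation order are illustrative choices, and for the small \(L\) shown the result can be cross-checked against exact diagonalization.

```python
import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix, csr_matrix, identity as sparse_identity
from scipy.sparse.linalg import eigsh
from scipy.special import jv

def build_basis(L):
    """All half-filled occupation bitstrings (bit i = site i) as integers."""
    states = [sum(1 << i for i in occ) for occ in combinations(range(L), L // 2)]
    return states, {s: n for n, s in enumerate(states)}

def build_hamiltonian(L, W, V=1.0, rng=None):
    """Sparse t-V Hamiltonian, Eq. (1), with box-distributed on-site energies."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.uniform(-W, W, size=L)
    states, index = build_basis(L)
    H = lil_matrix((len(states), len(states)))
    for a, s in enumerate(states):
        n = np.array([(s >> i) & 1 for i in range(L)], dtype=float)
        H[a, a] = eps @ (n - 0.5) + V * np.sum((n[:-1] - 0.5) * (n[1:] - 0.5))
        for i in range(L - 1):                       # -1/2 (c^+_i c_{i+1} + h.c.)
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):
                H[a, index[s ^ (0b11 << i)]] = -0.5  # nearest-neighbour hop (no JW sign)
    return csr_matrix(H), states

def chebyshev_evolve(H, psi, t, n_terms=None):
    """exp(-i H t)|psi> via the Chebyshev expansion of the propagator."""
    e_min = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
    e_max = eigsh(H, k=1, which='LA', return_eigenvectors=False)[0]
    a = 0.5 * (e_max - e_min) * 1.01                 # spectral half-width (safety margin)
    b = 0.5 * (e_max + e_min)
    Hn = (H - b * sparse_identity(H.shape[0], format='csr')) / a
    if n_terms is None:
        n_terms = int(a * abs(t)) + 40               # Bessel coefficients decay fast beyond a*t
    phi_prev, phi = psi, Hn @ psi
    out = jv(0, a * t) * phi_prev + 2 * (-1j) * jv(1, a * t) * phi
    for k in range(2, n_terms):
        phi_prev, phi = phi, 2 * (Hn @ phi) - phi_prev
        out = out + 2 * (-1j) ** k * jv(k, a * t) * phi
    return np.exp(-1j * b * t) * out

# quench from the Neel state |1010...> (even sites occupied) at small L
L, W = 12, 2.0
H, states = build_hamiltonian(L, W, rng=np.random.default_rng(0))
psi0 = np.zeros(H.shape[0], dtype=complex)
psi0[states.index(sum(1 << i for i in range(0, L, 2)))] = 1.0
psi_t = chebyshev_evolve(H, psi0, t=10.0)
```

The basis conventions (`build_basis`, `states`) are reused in the observable sketches below.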
### Observables
In most of this work, we focus on two main observables. For the "internal clock" we adopt the entanglement entropy
\[\mathcal{S}_{\text{e}}\coloneqq-\text{Tr}_{\text{A}}\rho_{\text{A}}\ln\rho_{ \text{A}}, \tag{2}\]
where \(\rho_{\text{A}}\) denotes the density operator of subsystem A as obtained after integrating out the complement of A, i.e. subsystem B. We use an equal partition \(L_{\text{A}}{=}L_{\text{B}}{=}L/2\). With an eye to the measurement of internal times, we mention that \(\mathcal{S}_{\text{e}}(t)\) behaves similarly to the second Renyi-entropy \(\mathcal{S}_{2}\coloneqq-\ln\text{Tr}_{\text{A}}\hat{\rho}_{\text{A}}^{2}\), see Appendix B. Recently, \(\mathcal{S}_{\text{e}}\)[51] as well as the Renyi entropy \(\mathcal{S}_{2}\) have been measured [52] in ion trap experiments.
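As an illustration of how Eq. (2) and the second Renyi entropy are evaluated numerically from a pure state, the following sketch (our own; the embedding helper assumes the basis conventions of the Hamiltonian sketch above) extracts the Schmidt spectrum of an equal bipartition by a single reshape and SVD.

```python
import numpy as np

def embed_full_space(psi_sector, states, L):
    """Embed a number-conserving state (amplitudes over the bitstring list
    `states`, as returned by build_basis above) into the full 2**L Fock space."""
    full = np.zeros(2 ** L, dtype=complex)
    full[np.array(states)] = psi_sector
    return full

def bipartite_entropies(psi_full, L, LA=None):
    """Entanglement entropy S_e (Eq. (2)) and second Renyi entropy S_2 for the
    cut between the first LA sites (low bits) and the rest."""
    LA = L // 2 if LA is None else LA
    M = psi_full.reshape(2 ** (L - LA), 2 ** LA)     # rows: subsystem B, columns: A
    p = np.linalg.svd(M, compute_uv=False) ** 2      # Schmidt spectrum = eigenvalues of rho_A
    p = p[p > 1e-14]
    return -np.sum(p * np.log(p)), -np.log(np.sum(p ** 2))

# example: S_e, S_2 = bipartite_entropies(embed_full_space(psi_t, states, L), L)
```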
Our second observable is the sublattice imbalance
\[\mathcal{I}(t)\coloneqq\sum_{j}^{L}(-)^{j}\ n_{j}(t) \tag{3}\]
and derived quantities such as the density fluctuations
\[\mathcal{F}(t)\coloneqq 1/L\sum_{j=1}^{L}\{[n_{j}(t)-\{n_{j}(t)\}_{\Delta t}]^{2} \}_{\Delta t}, \tag{4}\]
where \(\{\ldots\}_{\Delta t}\) denotes a sliding time window average over a set of given time traces \(n_{j}(t),j=1,\ldots L\), also see [48]; \(\Delta t=12.5\) in our calculations. Observables averaged over different disorder configurations will be denoted by an overline, e.g., \(\overline{\mathcal{S}}_{\text{e}}(t),\overline{\mathcal{I}}(t)\).
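A minimal sketch of how the observables of Eqs. (3) and (4) can be evaluated from site-resolved density traces is given below (our illustration; the centered sliding window of width \(\Delta t=12.5\) follows the text, its centering is our convention, and the basis conventions are those of the sketches above).

```python
import numpy as np

def site_densities(psi_sector, states, L):
    """Expectation values <n_j> in a number-conserving state."""
    occ = np.array([[(s >> j) & 1 for j in range(L)] for s in states], dtype=float)
    return (np.abs(psi_sector) ** 2) @ occ

def imbalance(n_t):
    """Sublattice imbalance, Eq. (3); n_t has shape (L,) or (T, L)."""
    return n_t @ ((-1.0) ** np.arange(n_t.shape[-1]))

def density_fluctuations(times, n_traces, dt_window=12.5):
    """Temporal density fluctuations F(t), Eq. (4): site-averaged variance of
    n_j(t) within a sliding window of width dt_window; n_traces has shape (T, L)."""
    F = np.full(len(times), np.nan)
    for k, t in enumerate(times):
        window = np.abs(times - t) <= dt_window / 2
        if window.sum() > 1:
            F[k] = np.mean(np.var(n_traces[window], axis=0))
    return F
```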
## III Clean case - results & discussion
Figs. 3 and 4 display the evolution of \(\mathcal{S}_{\text{e}}(t)\) and \(\mathcal{I}(t)\) in the clean system, i.e., \(W{=}0\).
### Entanglement entropy - evolution in time and system size
We identify three windows of characteristic times in the traces \(\mathcal{S}_{\text{e}}(t)\), see Fig. 3(b): Let the characteristic velocity of pair-excitations be \(v\); following the kinetic energy given in Eq. (1) one expects \(v\approx 1\). In the intermediate window, \(1\lesssim t\lesssim L/2v\), we have \(\mathcal{S}_{\text{e}}(t)=c_{\text{S}}t+\text{subleading terms}\), reflecting the asymptotic behavior expected for the large-system limit, \(L\to\infty\), of the ballistic system [53]. Based on our data, Fig. 3(b), we obtain \(c_{\text{S}}\approx 0.52\). Due to the limited time window and the unknown subleading terms, reliable error bars are difficult to establish.
Relaxation processes in the short time window occur on time scales given by the inverse bandwidth; they reflect the local physics of the clean system. As seen in Fig. 3a), the imbalance decays on these ultra-short time scales.
At large times, \(t\gtrsim L/2v\), the entanglement entropy \(\mathcal{S}_{\text{e}}(t)\) reaches a saturation value, \(\mathcal{S}_{\text{e}}^{\infty}(L)\coloneqq\mathcal{S}_{\text{e}}(L;t\to\infty)\), that exhibits pronounced system size effects. Its analytical form is well described by
\[\mathcal{S}_{\text{e}}^{\infty}(L)=a_{\text{V}}L/2+a_{\text{S}}\ln(L/2a)+\ldots \tag{5}\]
where \(\ldots\) indicate terms that vanish in the thermodynamic limit and \(a\) denotes a microscopic scale that accounts for terms \(\mathcal{O}(L^{0})\). The first term incorporates the expectation that the entanglement entropy of the ballistic system after saturation is an extensive observable, so the leading behavior is described by a volume law. Based on quantum field theory, one would expect a prefactor ratio \(a_{\rm V}/c_{\rm S}=1\), which is consistent with our numerical estimate, \(\approx 96\%\), within our extrapolation uncertainties [53]. An earlier estimate of this ratio, \(88\%\), has been reported for free fermions (XX model) [54].

Figure 3: (a) Time evolution of the sublattice imbalance, \(\mathcal{I}(t)\) and (b) the entanglement entropy, \(\mathcal{S}_{\text{e}}(t)\), after a quench in a clean quantum wire, Eq. (1). \(\alpha\) denotes the saturation value of \(\mathcal{S}_{\text{e}}\) in units of the Page value \(\mathcal{S}_{\text{Page}}\) for the respective system sizes. The inset shows the infinite-system size extrapolation of the saturation value, \(\mathcal{S}_{\text{e}}^{\infty}(L)\), as read off the main plot. We take the curvature seen in the data as an indication of corrections to scaling that are non-analytic in \(1/L\), possibly logarithmic [53]. Fitting the trace to Eq. (5) yields \(a_{\text{V}}=0.50(1),a_{\text{S}}=-0.29(4),a=1.8(1)\).
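The fit of Eq. (5) can be reproduced along the following lines; the sketch below (our own) feeds the clean-limit saturation values listed in Tab. 1 into a standard least-squares fit, with starting values suggested by the caption of Fig. 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation_scaling(L, a_V, a_S, a):
    """Eq. (5): volume law plus logarithmic surface correction."""
    return a_V * L / 2 + a_S * np.log(L / (2 * a))

L_vals = np.array([16.0, 20.0, 24.0, 26.0])
S_inf = np.array([3.568, 4.504, 5.453, 5.929])     # W = 0 saturation values from Tab. 1
popt, pcov = curve_fit(saturation_scaling, L_vals, S_inf, p0=[0.5, -0.3, 2.0],
                       bounds=([-np.inf, -np.inf, 1e-3], [np.inf, np.inf, np.inf]))
a_V, a_S, a = popt                                 # compare a_V=0.50(1), a_S=-0.29(4), a=1.8(1)
```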
The second term in Eq. (5) we read as a manifestation of interface (surface) effects: we interpret \(L^{d-1}\) in the limit \(d{\rightarrow}1\) as a logarithm. It accounts for interaction mediated long-range correlations in the one-dimensional bulk phase that lead to anomalously strong surface effects [53].
For a fully chaotic system, \(\mathcal{S}_{\rm e}^{\infty}(L)\) is expected to saturate at the Page value, \(\mathcal{S}_{\rm Page}\coloneqq\ln(2)L/2-1/2\)1. Confirming results of earlier authors [56], we find that clean systems do not exhaust the Page limit, meaning \(\alpha<1\), see Fig. 3b), with \(\alpha(L)\coloneqq\mathcal{S}_{\rm e}(t\rightarrow\infty)/\mathcal{S}_{\rm Page}\) and \(\alpha(L\rightarrow\infty)=a_{\rm V}/\ln 2\). For the latter coefficient, we here report an estimate significantly below unity, \(\alpha\approx 0.72\). We interpret this result, \(\alpha{<}1\), as a consequence of the fact that the clean model exhibits local conservation laws, such as momentum conservation, that (apparently) can limit the phase-space volume accessible during relaxation as compared to a fully chaotic situation.
Footnote 1: In Ref. [55] it is shown that the average entanglement entropy of random pure states to leading order is of the form
\[\mathcal{S}_{m,n}=\ln m-\frac{m^{2}-1}{2mn}+\ldots\]
where \(n,m\) is the Hilbert space dimensions of the respective subsystems assumed to be large. For the present case, we have \(n=m=2^{L/2}\) motivating the definition of \(\mathcal{S}_{\rm Page}\) given in the text.
### Clock rate and scaling
With an eye on the disordered case and its internal clock, we mention that entanglement has been introduced as a suitable time unit already in the clean case. Calabrese and Cardy [53] observe that the entanglement evolutions \(\mathcal{S}_{\rm e}(L;t)\) corresponding to quenches with different initializing states collapse onto a master curve after normalization with respect to the asymptotic value, i.e. when plotting the ratio \(\mathcal{S}_{\rm e}(L;t)/\mathcal{S}_{\rm e}^{\infty}(L)\). At intermediate times, \(1\lesssim t\lesssim L/2v\), \(\mathcal{S}_{\rm e}(t)\propto t\) and therefore the collapse implies \(\mathcal{S}_{\rm e}(t)/\mathcal{S}_{\rm e}^{\infty}\) independent of time and the initial condition. In this sense, one might say that the rate of entropy production is controlled by the (saturation) entanglement \(\mathcal{S}_{\rm e}^{\infty}\).
Taking the idea further, Kim and Huse [56] have postulated a scaling form
\[\mathcal{S}_{\rm e}(t)\approx\mathcal{S}_{\rm e}^{\infty}(L)\ F(t/\mathcal{S }_{\rm e}^{\infty}(L)) \tag{6}\]
for a situation in which the system size \(L\) rather than the initializing state is varied; \(F(x)\) denotes a scaling function. Also here, the saturation value \(\mathcal{S}_{\rm e}^{\infty}\) sets the rate for entanglement growth.
In Fig. 4 we test the hypothesis (6) and show the scaling function corresponding to the data given in Fig. 3. While a reasonable collapse is observed, corrections to scaling are appreciable that we tentatively assign to logarithmic corrections, such as displayed in (5). Hence, existing reports on quenches in clean systems lend support to the main hypothesis of our work, i.e. that the entanglement entropy can be a good model for an internal clock in interacting quantum wires.
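Testing the scaling hypothesis (6) amounts to a simple rescaling of both axes; a short sketch (ours, with placeholder input data) reads:

```python
import matplotlib.pyplot as plt

def plot_kim_huse_collapse(data):
    """Collapse test of Eq. (6); `data` maps L to a tuple (times, S_e_trace, S_e_inf)."""
    for L, (t, S_e, S_inf) in sorted(data.items()):
        plt.plot(t / S_inf, S_e / S_inf, label=f"L = {L}")
    plt.xlabel(r"$t/\mathcal{S}_e^\infty(L)$")
    plt.ylabel(r"$\mathcal{S}_e(t)/\mathcal{S}_e^\infty(L)$")
    plt.xscale("log")
    plt.legend()
    plt.show()
```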
## IV Disorder - results & discussion
### Time evolution of entanglement entropy
We will begin our analysis of disordered systems with the ensemble-averaged entanglement entropy \(\overline{\mathcal{S}}_{\rm e}(t)\) displayed in Figs. 5(a) and 6. Similar to the clean case, also here the time traces suggest introducing three time regimes: In the short time regime, all traces (nearly) collapse irrespective of disorder \(W\) (ballistic period). This regime we will here leave unattended since disorder effects are (relatively) weak. Our focus is on the sub-diffusive regime (significant disorder effects, system size converged) and on the saturation regime (strong effects of disorder and system sizes).
#### iv.1.1 Sub-diffusive Regime
The sub-diffusive regime will eventually display the asymptotic dynamics of bulk systems at large enough \(L\) and long enough times, i.e. the thermodynamic limit.
Figure 4: (Near) scaling collapse of the entanglement entropy after employing the saturation value, \(\mathcal{S}_{\rm e}^{\infty}(L){:=}\mathcal{S}_{\rm e}(L;t\rightarrow\infty)\), as a measure of time. Alternatively, the inset employs \(L/2v\) for units in order to highlight the finite-size convergence near \(t\approx L/2v\) consistent with expectations based on Ref. [53].
Our nomenclature reflects an important result of our investigations also indicated in Fig. 1: In the parametric window that we have investigated we consistently observe dynamical signatures, which are expected for the preMBL-regime; MBL proper is never observed.
Early researchers have characterized the time evolution of observables in the sub-diffusive regime by power-laws, e.g. \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\propto t^{1/z_{\mathrm{ee}}}\)[57, 58], especially in the range up to \(W\simeq 3\) displayed in Fig. 5. However, as is clearly visible in Fig. 5(a) for the paradigmatic case of entanglement entropy, the putative time window of power-law dynamics has not yet opened up in the range of computationally available system sizes. 2 It thus is apparent that finite size effects in disordered wires are dramatically stronger as compared to the clean case, Fig. 3.
Footnote 2: To be extracted from this data is only a lower bound for the dynamical exponent \(z_{\mathrm{ee}}\), i.e. the slope at the turning point in the (sub-)diffusive regime.
The same conclusion also holds at larger disorder in the regime of decelerating dynamics shown in Fig. 6(a). Notice the left-hand side (lhs) curvature in the intermediate time window where the traces are nearly system-size converged (subdiffusive regime): the crucial observation is that the lhs curvature becomes more pronounced in a larger window of time with ever growing system size. As illustrated in Fig. 6(b) the curvature signals an emergent power law with effective exponents that rapidly increase with the growing system size: a change in system size by 50% increases the effective exponent at long times by 200% (\(W\)=4) resp. 50% (\(W\)=6). The phase diagram proposed in Fig. 1 is based on the assumption that the trend observed in Fig. 6 is indicative of the thermodynamic limit and, therefore, does not reverse at much larger system sizes. While trend reversal towards insulating behavior ("reentrance") cannot be rigorously excluded in finite-size studies, we don't see any hints pointing toward its existence in numerical work in the MBL context 3. In this sense, our result Fig. 6 is inconsistent with the expected behavior in the MBL phase, \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\propto\ln t\)[60, 61, 62], and indicates the absence of many-body localization in the respective window of disorder values.
Footnote 3: A kind of trend reversal, if only a different one, is observed in the model of regular-random graphs (RRG) [59]: this reversal is similar to creep in the sense that the reversed flow in RRG is directed away from the localized to the ergodic regime. Opposed to this, the reentrance described in the main text implies a flow reversal towards localizing behavior.
Figure 6: (a) Traces similar to Fig. 5(a) for disorder in the regime of decelerating dynamics Fig. 1. For \(W=4.0,6.0\) system sizes are \(L=16,20,24\) and else \(L=20,24\). (b) Logarithmic time derivative of \(\overline{\mathcal{S}}_{\mathrm{e}}\) for \(W=4.0,6.0\). The plot highlights the emergence of a power-law increase with effective exponents that quickly grow with the system size \(L\). An increase faster than \(\ln t\) rules out MBL within the respective disorder regime.

Figure 5: Ensemble-averaged entanglement entropy \(\mathcal{S}_{\mathrm{e}}(L,t)\) for system sizes \(L\)=\(16,20,24\) and four moderate disorder values \(W\). (a) Three temporal regimes are identified: ballistic, (sub-)diffusive, and saturating. (b-c): logarithmic time derivative \(\dot{\mathcal{S}}_{\mathrm{e}}\) reveals a (nearly) power-law asymptotics. The shaded region indicates the window underlying the fits following Eq. (7). (Parameters in Tab. 1.) (d) Saturation value is seen in the left panel in units of the Page limit \(\mathcal{S}_{\mathrm{Page}}\) over the linear system size.

We mention the implications of our result Fig. 6 for Refs. [34, 35]. These authors are quenching random half-filled product states and studying the number entropies \(\mathcal{S}_{\mathrm{N}}^{(\alpha)}\) that they relate to the second Renyi entropy, \(\overline{\mathcal{S}}_{\mathrm{N}}^{(2)}\sim\ln\overline{\mathcal{S}}_{2}\), and to the entanglement entropy, \(\overline{\mathcal{S}}_{\mathrm{N}}^{(1)}\sim\ln\overline{\mathcal{S}}_{\mathrm{e}}\)[34; 35]. Fitting their data in the time domain, they conclude a double-logarithmic growth of the number entropies, \(\overline{\mathcal{S}}_{\mathrm{N}}^{(\alpha)}\sim\ln\ln t\). These statements imply \(\overline{\mathcal{S}}_{\mathrm{e}}(t),\overline{\mathcal{S}}_{2}(t)\propto\ln t\), which taken at face value is inconsistent with our claim of growth of \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) faster than logarithmic, Fig. 6.
We assign this apparent discrepancy to a significant difference in data analysis. The \(\ln t\) behavior in Refs. [34; 35] has been extracted from time traces obtained from system sizes up to \(L\)=24. In particular, the fit for \(L\)=24 was targeting the largest observation times, similar to our data Fig. 6(a). Indeed, in this regime also our data is consistent with a logarithmic fitting, as is seen by the lack of curvature in the time traces for \(L\)=24 in Fig. 6(a) at \(t\gtrsim 10^{2}\). In this sense, there is no discrepancy here.
However, as is also seen in Fig. 6(a) the regime of large times is strongly affected by finite-size effects: with increasing system sizes the traces quickly develop an ever-growing tendency for lhs-curvature that connects smoothly to the lhs-curvature already secured in the sub-diffusive time window. In Fig. 6(b) this flow away from logarithmic towards power-law dynamics is particularly apparent.
The main result of Refs. [34; 35], i.e. the absence of MBL-proper in the investigated parameter regime, remains unchallenged here; the reported \(\overline{\mathbb{S}}_{\mathrm{e}}(t)\sim\ln t\) from our point of view represents a lower bound 4.
Footnote 4: In our analysis we adopt the perspective that there is no qualitative difference between quenching a Neel-state - as we do - and a random product state as done in Ref. [34; 35] for the entanglement dynamics. However, subsequent work noted that averaging over random product states could make it more difficult to extrapolate the long-time dynamics [36].
#### iii.1.2 Saturation regime - disorder enhanced entanglement
Confirming earlier results, we readily observe in Fig. 5 that also in the disordered situation, \(\overline{\mathbb{S}}_{\mathrm{e}}(t)\) does not exhaust the Page-value, \(\mathcal{S}_{\mathrm{Page}}\), in the limit of long observation times [63]. To extract the saturation value, \(\overline{\mathbb{S}}_{\mathrm{e}}(L,W)\), we feed the asymptotic form
\[\overline{\mathbb{S}}_{\mathrm{e}}(L,W;t)=\overline{\mathbb{S}}_{\mathrm{e}}( L,W)-c(L,W)/t^{\gamma_{L}(W)} \tag{7}\]
into a three-parameter fit of the data Fig. 5 (a-c). To estimate the amplitude \(c(L,W)\) and the exponent \(\gamma_{L}(W)\), we employ the asymptotics of the time derivative, \(\dot{\overline{\mathcal{S}}}_{\mathrm{e}}(L,W;t)\), reproduced in Fig. 5(b,c). With this input, the saturation value \(\overline{\mathcal{S}}_{\mathrm{e}}^{\infty}(L,W)\) is fitted by adopting (7) to the data in the main panel, Fig. 5(a); the fitting parameters are reproduced in the appendix, Tab. 1.
We emphasize that the exponent \(\gamma_{L}(W)\) is an effective one. Close scrutiny of Fig. 5(b,c) reveals a slow time evolution of the slope (exponent) that is nearly but (perhaps) not fully converged within our window of observation times. As a consequence, the fitting parameters contain a systematic error that is not accounted for in the error estimates given in the appendix, Tab. 1.
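The two-step procedure described above can be sketched as follows (our paraphrase of the described method, not the authors' code; the fit window is an input choice and must lie where the entropy still grows):

```python
import numpy as np

def fit_saturation(times, S_e, t_fit_min):
    """Two-step fit of Eq. (7), S_e(t) = S_inf - c / t**gamma: first c and gamma
    from the asymptotics of the logarithmic time derivative, then S_inf."""
    mask = times >= t_fit_min
    t, S = times[mask], S_e[mask]
    dS_dlnt = np.gradient(S, np.log(t))              # equals c*gamma / t**gamma for Eq. (7)
    slope, intercept = np.polyfit(np.log(t), np.log(dS_dlnt), 1)
    gamma = -slope
    c = np.exp(intercept) / gamma
    S_inf = np.mean(S + c / t ** gamma)              # Eq. (7) rearranged for S_inf
    return S_inf, c, gamma
```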
The saturation values of the entanglement entropy, \(\overline{\mathbb{S}}_{\mathrm{e}}(L,W)\) are reproduced in Fig. 5(d). A striking aspect revealed here is _disorder enhanced entanglement_; there is a wide regime in the \(L,W\)-plane, in which the subsystem-entanglement _grows_ with the disorder. This is always the case in the limit of weak disorder, where it perhaps is less surprising: weak disorder destroys integrability and therefore brings the system closer towards chaotic dynamics, so \(\overline{\mathbb{S}}_{\mathrm{e}}(L,W)\) moves upwards towards the Page limit. This increase has been seen previously in Ref. [64]. Remarkably, the trend appears to continue even to moderate and strong disorder, \(1.5\lesssim W\lesssim 3.0\), provided the system size is large enough - as seen from the traces with family parameter \(W\) intersecting each other in Fig. 5(d).
### Entanglement entropy as an internal clock
In this subsection, we explore the potential of entanglement entropy as a model for an internal clock. We begin by pointing out a possible advantage of using internal clocks. Time traces that display the relaxation of two different transport-related observables both measured in a given sample tend to be correlated. For example, consider \(\mathcal{S}_{\mathrm{e}}(t)\) and \(\mathcal{I}(t)\): When employing \(\mathcal{S}_{\mathrm{e}}(t)\) as a measure of time on a per-sample basis, sample-to-sample fluctuations should reduce considerably, because the slow growth of the former will, in general, indicate strong localization and therefore hints at slow growth of the latter; ergo, \(\mathcal{I}(\mathcal{S}_{\mathrm{e}})\) fluctuates less than \(\mathcal{I}(t)\) between different samples and therefore should be easier to interpret. The reduction of sample-to-sample fluctuations is indeed seen in Fig. 7, rightmost column.
We elaborate on the qualitative argument. It can be rephrased employing concepts underlying the traditional mode-coupling theory. It foresees that two slow observables in their time evolution can each follow their own internal clock rate, but only if they are 'orthogonal', i.e., sufficiently decoupled. The generic expectation is that orthogonality is an exception; at least quantities deriving from the same (conserved) field, e.g. the particle density, follow the same clock rate.
Adopting \(\mathcal{S}_{\mathrm{e}}\) as a model for the internal clock illustrates a benefit of the concept, i.e. reducing sample-to-sample fluctuations in Fig. 7 significantly. At the heart of the concept is the hypothesis that improved models (as compared to \(\mathcal{S}_{\mathrm{e}}\)) exist, at least in principle, that further reduce sample-to-sample fluctuations, nearly eliminating them in generic observables at large enough \(L\). The optimal model for the internal time on a per-sample level still has to be found. However as we will see in the
next paragraphs, \(\overline{\mathcal{S}}_{\mathrm{e}}\) is close to an optimal model for the disorder-averaged internal clock, i.e. the 'internal ensemble time'.
#### iv.1.1 Excursion: the arrow of time - progress and regression
A conservative interpretation of \(\mathcal{S}_{\mathrm{e}}(t)\) as the internal system clock requires that the time evolution of the entanglement entropy exhibits a monotonous growth not only on average but also on a sample-to-sample basis. This requirement is not obviously met, because the entanglement entropy is not bound to grow strictly in time sufficiently far from equilibrium. As seen in Fig. 7, first and third column, similar to the charge imbalance \(\mathcal{I}(t)\) [48], also the time evolution of the entanglement entropy \(\mathcal{S}_{\mathrm{e}}(t)\) exhibits strong sample-to-sample fluctuations. And indeed, as seen in the third column of Fig. 7, at strong disorder \(W\gtrsim 4\), \(\mathcal{S}_{\mathrm{e}}(t)\) becomes highly non-monotonous. Hence, a conservative interpretation of \(\mathcal{S}_{\mathrm{e}}(t)\) as an internal system clock will confine itself to moderate disorder \(W\lesssim 4\).
We would like to argue, however, that strict monotony is not a necessary condition for an observable to be a useful measure of time. To this end we recall that the arrow of time is bound to point in a single direction only in the macroscopic limit; relaxing finite-size systems may always exhibit transient periods of entropy shrinkage ('regression'), so that the condition of strict monotony can be released, at least in principle.
#### iv.1.2 Collapsing averaged imbalance traces - \(\overline{\mathcal{I}}(t)\) over \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\)
In the spirit of the preceding paragraphs, we adopt \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) as a legitimate model for the internal clock of the ensemble dynamics also at larger disorder values, even if individual samples display periods of regression. In fact, we will continue to bravely use \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) also at large disorder as displayed, e.g., in Fig. 9.
As we have argued, internal clocks are useful to the extent that they reveal similarities in the time evolution of two different samples that are not apparent when using the lab-time \(t\). A particularly strong case in favor of internal clocks can be made if similarities can be revealed in the dynamics of systems - or ensembles - that nominally exhibit strong differences, e.g., in the disorder strength, so that at least 'naively' similarities are not expected. Such a strong validation of the "internal clock" concept has been given with Figs. 2 and 8: imbalance traces \(\overline{\mathcal{I}}(t)\) taken from samples of fixed length \(L\), and for disorder varying from the moderate (\(W\)\(\approx\)1.5) to the beginning of the strong disorder regime (\(W\)\(\approx\)3), collapse to a master-curve - within the available window of observation times.

Figure 7: Plot illustrating extremely strong sample-to-sample fluctuations. Left: Imbalance \(\mathcal{I}(t)\) over simulation time \(t\) for 10 samples and four disorder values. The plot illustrates the logarithmically wide distribution of \(\mathcal{I}\) at large times. Second column: Logarithmic derivative (effective exponent \(\beta(t)\)) of data shown in lhs column. The logarithmically broad distribution of \(\beta\) is illustrated. Third column: Corresponding entanglement entropy \(\mathcal{S}_{\mathrm{e}}(t)\). At large disorder, at a given time \(\mathcal{S}_{\mathrm{e}}\) is logarithmically distributed. Right: Imbalance as a function of entanglement \(\mathcal{S}_{\mathrm{e}}(t)\); sample-to-sample fluctuations are reduced if times with similar values for \(\mathcal{S}_{\mathrm{e}}(t)\) are compared.
For collapsing \(\overline{\mathfrak{J}}\), rescaling of the abscissa is not required, which reflects the absence of relevant microscopic time scales - at least within the observation window and outside the short-time regime. The fact that the data collapse requires a rescaling of the ordinate is not unexpected, but still merits a remark: Our computations operate in the limit of infinite temperature, \(T^{-1}\)=0; the corresponding equilibrium density is spatially homogeneous, so that the equilibrium imbalance vanishes, \(\mathfrak{J}\)=0. Therefore, this observable or quantities derived thereof do not lend themselves to compare to the relaxation behavior seen in \(\overline{\mathfrak{J}}(t)\). Due to the apparent lack of an obvious scale derived from an equilibrium quantity, it is not inconceivable that the scaling factor of the ordinate, \(f_{L}(W)\) (inset of Fig. 2), reflects an initial condition, here the Neel state, and changes upon other choices.
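The ordinate rescaling factor \(f_{L}(W)\) used for the master curve can be determined, for instance, by least squares against a reference disorder value; the following sketch (our own, assuming monotonically increasing ensemble-averaged \(\overline{\mathcal{S}}_{\mathrm{e}}\) traces) interpolates two traces onto a common internal-time grid and returns the optimal scalar.

```python
import numpy as np

def collapse_factor(S_ref, I_ref, S_e, I, n_grid=200):
    """Scalar f minimizing sum (I_ref - f*I)^2 on the overlapping range of the
    internal-clock variable S_e (cf. the rescaling f_L(W) of Fig. 2)."""
    lo = max(S_ref.min(), S_e.min())
    hi = min(S_ref.max(), S_e.max())
    grid = np.linspace(lo, hi, n_grid)
    y_ref = np.interp(grid, S_ref, I_ref)            # reference imbalance vs internal time
    y = np.interp(grid, S_e, I)                      # trace to be rescaled
    return float(np.dot(y_ref, y) / np.dot(y, y))
```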
We mention that at late times when entanglement approaches its saturation value, the master curve exhibits a large - possibly diverging - slope. The implication is that entanglement propagates fast and may saturate long before other physical observables do, which are subject to a local conservation law.
#### iv.2.3 Collapsing of density fluctuations - \(\overline{\mathcal{F}}\) over \(\overline{\mathcal{S}}_{\mathrm{e}}\)
Traditional mode-coupling theory suggests that if \(\overline{\mathfrak{S}}_{\mathrm{e}}\) is suitable as an internal clock for \(\overline{\mathfrak{J}}\) then it should also be for related observables that derive from the evolution of density modes. In this spirit, we show in Fig. 9 that also the time evolution of imbalance fluctuations (4) can be collapsed employing the entanglement evolution, \(\overline{\mathfrak{S}}_{\mathrm{e}}\), as an internal clock.
Since we have already demonstrated the concept of a system-internal clock for \(\overline{\mathfrak{J}}(t)\) in the regime of accelerated dynamics, \(W\lesssim 3\), the excellent scaling observed for \(\overline{\mathfrak{F}}\) in this regime is comforting but not, perhaps, surprising. Interestingly, a reasonably good data collapse is seen for \(\overline{\mathfrak{F}}(t)\) even in the regime of large disorder, \(W\approx 8\), which we take as an additional encouragement for the concept of internal clocks, here proposed.
Incidentally, it is implied here that at least with respect to the observable density fluctuations (as seen within our observation time) there is no indication of a phase transition all the way from moderate to strong disorder.
### Intermediate power-laws in average & typical \(\overline{\mathfrak{F}}\)
The raw data underlying the master curve is displayed in Fig. 10. The data confirms a trend already observed in our previous work Nandy _et al._[48]: the imbalance fluctuations exhibit a rather benign convergence with the system size. We, therefore, are confident to identify for each trace an intermediate regime between short times (plateau region) and longest times (pronounced system-size dependency) that defines an (effective) power law
\[\log\overline{\mathfrak{F}}/\overline{\mathfrak{F}}_{1}=-(\rho_{\mathrm{ave}} \ \xi_{\mathrm{sp}}\log 2)\log\overline{\mathfrak{S}}_{\mathrm{e}} \tag{8}\]
with \(\overline{\mathfrak{F}}_{1}\) denoting the prefactor; \(\xi_{\mathrm{sp}}\) is the non-interacting localization length extracted in the long time limit of the second moment of the density-density correlation function at infinite temperature [29]. The exponent function \(\rho_{\mathrm{ave}}(W)\) is extracted fitting the data Fig. 10 to the form (8); an analogous fitting procedure can also be performed for the typical traces of \(\mathcal{F}\) and \(\mathcal{S}_{\mathrm{e}}\) yielding \(\rho_{\mathrm{typ}}(W)\).
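In practice, the exponent functions can be extracted by a straight-line fit in the log-log plane over the intermediate window; a sketch (ours, reading the logarithms of Eq. (8) as natural logarithms) is:

```python
import numpy as np

def extract_rho(S_e, F, xi_sp, fit_range):
    """Effective exponent rho of Eq. (8) from a power-law fit of the fluctuations
    against the entanglement entropy within fit_range = (S_min, S_max)."""
    mask = (S_e >= fit_range[0]) & (S_e <= fit_range[1])
    slope, _ = np.polyfit(np.log(S_e[mask]), np.log(F[mask]), 1)
    return -slope / (xi_sp * np.log(2))
```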
The results for both exponents are displayed in Fig. 11. We first note that the evolution of the traces with the disorder is rather smooth and slow; there is no indication of a nearby MBL-transition. 5

Figure 8: Data from Fig. 2 plotted over the second Renyi-entropy. A good collapse is achieved here, also, confirming that \(\overline{\mathfrak{S}}_{\mathrm{e}}(t)\) and \(\overline{\mathfrak{S}}_{2}(t)\) model the internal clock with comparable quality. A direct comparison of \(\overline{\mathfrak{S}}_{\mathrm{e}}(t)\) and \(\overline{\mathfrak{S}}_{2}\) is given in the appendix, Fig. 13. Parameters: \(L\)=\(16,20,24,26\) at four moderate disorder strengths \(W\)=\(1.5,2.0,2.5,3.0\).

Figure 9: Approximate collapse to a single master curve of the traces for \(\overline{\mathfrak{F}}(t)\) for different disorder values \(W\)=\(1.5,2.0,2.5,3.0,4.0,5.0,6.0\) displayed in Fig. 10 for \(L\) = 24.
Footnote 5: One might ask why we expect \(\rho(W)\) to exhibit a signature of the transition into the proper MBL phase: the definition of the exponent, \(\overline{\mathcal{F}}\propto\overline{\mathcal{S}}_{\mathrm{e}}^{\rho}\), at least partially reflects the fact that \(\overline{\mathcal{F}}(t)\) and \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) exhibit a power-law growth at long times (in a large-enough system). At least for the entanglement entropy, \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\propto t^{1/z_{\mathrm{ee}}}\), the growth becomes slower than any power, \(1/z_{\mathrm{ee}}\to 0\), so that generically the proportionality \(\overline{\mathcal{F}}\propto\overline{\mathcal{S}}_{\mathrm{e}}^{\rho}\) should cease to hold.
Remarkably, the exponent functions display opposing trends with increasing disorder. The typical value follows the _naive_ expectation, which is that with the increasing disorder, the interaction-induced damping of temporal fluctuations is less effective, so \(\rho_{\mathrm{typ}}\) has a tendency to decrease. While the typical exponent \(\rho_{\mathrm{typ}}(W)\) decreases, the average exponent \(\rho_{\mathrm{ave}}(W)\) increases with disorder and there is, indeed, a common intersection point \(W_{\mathcal{F}}\approx 6\). At weaker disorder, \(W{<}W_{\mathcal{F}}\), rare samples exist that exhibit rather long decoherence times, so in this regime \(\rho_{\mathrm{typ}}{>}\rho_{\mathrm{ave}}\). In the other regime, \(W{>}W_{\overline{\mathcal{F}}}\) the situation is reversed and the average is dominated by rare samples in which damping is unusually effective. Notice that the crossover value is situated near the regime in which the system dynamics changes from accelerated to decelerated, \(W_{\overline{\mathcal{F}}}\approx W_{c}\), cf. Fig. 1.
Summarizing, we interpret Fig. 11 in conjunction with Fig. 6 as supplying fresh support in favor of the phase-diagram Fig. 1 and its main claim: In the range \(W_{c},W_{\overline{\mathcal{F}}}\approx 3-6\) there is a crossover between two markedly different thermalizing regimes; there is no evidence that the \(t-V\)-model exhibits a phase transition at disorder values below \(W\lesssim 10\). We thus advocate the point of view that until recently this crossover has been widely misinterpreted in numerical work as indicating an ergodicity-breaking (i.e. MBL-) transition, e.g. in Refs. [22; 23; 24; 26; 27; 65; 66; 67; 68; 69; 70; 71]. Recent numerical work claims that the MBL transition, if it exists, occurs at a disorder strength \(W^{*}\gtrsim 20\)[32; 33]. We notice that already at disorder value \(W{\approx}3\) the non-interacting localization length \(\xi_{\mathrm{sp}}\) is of the order of the lattice spacing \(a\), which implies \(\xi_{\mathrm{sp}}(W^{*})\ll a\). In this sense, if the MBL transition occurs at all, it for sure takes place at an extremely strong disorder. This immediately prompts the question of its physical significance.
### Sample-to-sample fluctuations and thermalization
#### iii.4.1 Distribution of exponents
The importance of sample-to-sample fluctuations is demonstrated in Fig. 7, lhs column. The plot shows the distribution of \(\mathcal{I}\) being logarithmically broad at large observation times already at moderate disorder, \(W{=}1.5\). In fact, a fraction of samples exhibit traces \(\mathcal{I}(t)\) that do not indicate a discernible trend towards equilibration, \(\mathcal{I}(t){\to}0\) in the limit \(t\to\infty\), at all. This point is also illustrated in Fig. 7, second column. It shows an effective exponent
\[\beta(t)\coloneqq\frac{\partial\log\mathcal{I}(t)}{\partial\log t} \tag{9}\]
that characterizes the time-evolution of \(\mathcal{I}(t)\) for an individual sample; the corresponding (logarithmic) distribution functions \(\mathcal{P}(\log\beta)\) are given in Fig. 12. As is clearly illustrated by the data, the exponent distribution is logarithmically wide. Extremely strong sample-to-sample fluctuations have been observed also by other authors. As an interesting example, we mention Doggen _et al._[25]: these authors use a machine-learning algorithm in order to distinguish localized from thermal samples. Within a regime of intermediate disorder values and system sizes, the algorithm identifies the existence of two different sample types: those with insulating character and manifestly thermalizing others.

Figure 11: (Effective) exponents characterizing the evolution of the imbalance fluctuations with the entanglement entropy, \(\mathcal{F}{\propto}\overline{\mathcal{S}}_{\mathrm{e}}^{\rho}\), for typical values, \(\rho_{\mathrm{typ}}\), and average values \(\rho_{\mathrm{ave}}\). The intersection point indicates the crossover from accelerated to decelerated dynamics, see Fig. 1.

Figure 10: Ensemble-averaged temporal fluctuations of the imbalance \(\mathcal{I}(t)\) as defined in (4). Following our earlier work [48], we have dressed \(\overline{\mathcal{F}}\) with a power \(1/\xi_{\mathrm{sp}}\log 2\). For selected traces, two system sizes, \(L=20\) (lines), \(24\) (symbols), are given to expose finite-size effects.
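A sketch of how the effective exponent of Eq. (9) and its distribution (Fig. 12) can be evaluated from per-sample imbalance traces is given below (our illustration; the light smoothing and the sign convention, quoting the decay exponent as positive, are our choices).

```python
import numpy as np

def effective_exponent(times, I_t, smooth=5):
    """beta(t) = d log I / d log t, Eq. (9), from one sample's imbalance trace."""
    kernel = np.ones(smooth) / smooth
    I_s = np.convolve(np.abs(I_t), kernel, mode="same")   # running average tames noise
    return np.gradient(np.log(I_s), np.log(times))

def exponent_distribution(times, traces, t_eval, bins=40):
    """Histogram of log beta at time t_eval over disorder realizations (cf. Fig. 12);
    traces has shape (n_samples, T); only decaying samples (beta > 0) enter."""
    k = np.argmin(np.abs(times - t_eval))
    betas = np.array([-effective_exponent(times, tr)[k] for tr in traces])
    betas = betas[betas > 0]
    return np.histogram(np.log(betas), bins=bins, density=True)
```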
#### iv.4.2 Restoration of self-averaging
The tails of \(\mathcal{P}\) are seen to be shrinking with increasing system size in Fig. 12, if only exceedingly slowly. The general expectation is that at \(W<W^{*}\) the imbalance, \(\mathfrak{J}\), in interacting, disordered wires is self-averaging. The statement implies that in the limit of large systems, \(L\gg 32\), the distribution \(\mathcal{P}\) acquires zero width, eventually. In other words, in the thermodynamic limit all samples, except for a set of measure zero, are expected to thermalize with a relaxation behavior that is characterized by the same (\(W\)-dependent) exponents.
Due to computational limitations in system sizes and observation times, we can neither confirm nor dispute this expectation. If it is true, then all traces shown in Fig. 11 will undergo a slow evolution so that they eventually collapse in the thermodynamic limit. However, also in this case it should not go unnoticed that for the range of system sizes traditionally studied in numerics, e.g. \(L\lesssim 32\), individual samples hardly ever show the typical behavior because the exponent distribution \(\mathcal{P}\) is so wide.
### Relation to previous work: creep and RG
For further discussion, we recall the RG-scenario outlined in the introduction. It assumes that samples fall into roughly two categories, thermalizing (ergodic) "bubbles" and non-thermalizing "grains". Fig. 7, lhs column supports this picture and allows rough quantitative estimates: the thermalizing fraction of samples comprises seven or eight out of ten samples at \(W\)=1.5, so \(\mathfrak{q}(L{=}24,W{=}1.5){\approx}80\%\); this fraction is rapidly decreasing for a larger disorder, e.g., we have only \(\mathfrak{q}(L{=}24,W{=}2.5){\approx}50\%\) at \(W{=}2.5\) and \(\mathfrak{q}(L{=}24,W{=}5.0){\approx}0\).
One can consider growing the sample size, e.g., by merging two samples of \(L=24\), each. The RG assumes that the bubble and grain scenario continues to hold after merging and then predicts the evolution of \(\mathfrak{q}(L,W)\) with \(L\) by postulating phenomenological growth rules [16, 17, 18, 19, 20]. Of interest to us is the general picture of the relaxation dynamics that take place immediately after merging two samples. Specifically, at moderate to strong disorder, \(W\gtrsim 2.5\), most samples behave like grains since \(\mathfrak{q}<1/2\). Hence, even when doubling the sample size the most likely outcome is that a grain is added to a grain, so the relaxation dynamics after merging is extremely slow and does not support thermalization. Only upon growing the system size even further will a bubble eventually be added, which can foster the delocalization process. Since a rather small bubble will need to equilibrate a large grain, thermalization is especially slow. An exponentially slow equilibration process that is enhanced by growing the system size is the hallmark of "creep" [28, 29]. In this sense, our numerical observations of large sample-to-sample fluctuations seen in Fig. 7, the slow flow seen in Fig. 12, and creep are all qualitatively consistent with the simplified RG-picture of thermalization through bubbles and grains. 6
Footnote 6: In recent numerical works, attempts have been made to substantiate the picture and identify correlated/entangled clusters in disordered samples [72, 69, 73]. Since the data analysis in these papers underlies the assumption of a critical point near \(W\approx 3.8\), it remains to be seen to what extent the conclusions are affected once creep is taken into account.
The alert reader will have realized that we adopted from the RG-concept its intuitive phenomenological building blocks, i.e. bubbles and grain, and left aside an important result of the reported RG-flow, which predicts a critical point separating a localized from an ergodic phase [17, 19, 20]. We have neglected this part of the RG studies, because, within the parameter regimes investigated, we don't see support for a critical scenario provided by "ab-initio" simulations of microscopic models. We mention two possible reasons for the discrepancy - other than reentrance behavior at a larger system size outside of our observation window. (i) The critical point might be situated at extremely large disorder values, as suggested by Morningstar _et al._[32] and Sels [33]. (ii) The phenomenological RG-equations are oversimplifications that miss relevant terms with a delocalizing effect.
## V Conclusion
Figure 12: Distribution function \(\mathcal{P}\) of the effective exponents \(\beta(t)\) (Eq. (9)) that characterize the long-term behavior of the imbalance \(\mathfrak{J}(t)\) after a quench in systems of sizes \(L=16,20,24\) and for two disorder values in the (sub-)diffusive regime.

Short quantum wires at relatively weak interactions and intermediate disorder strength can exhibit localization behavior: initial deviations from the equilibrium density are highly resilient against equilibration; a non-vanishing fraction of them may not, in fact, equilibrate at all. Understanding the fate of shorter samples with respect to their relaxation dynamics when successively growing their length is the central theme of many-body (de-)localization (MBL). The numerical investigation we have presented in this work motivates three statements:
(i) We follow how the relaxation dynamics of the sublattice imbalance \(\overline{\mathfrak{J}}\) and its fluctuations \(\overline{\mathfrak{F}}\) evolve over a large range of disorder values, reaching from below the clean bandwidth to four times its value. The temporal flow of the ensemble dynamics is conveniently parameterized by \(\overline{\mathfrak{S}}_{\mathrm{e}}\), which acts as a model for an internal (ensemble) clock. The usefulness of the "internal clock" concept becomes most apparent by demonstrating that for a broad range of disorder values time traces \(\overline{\mathfrak{J}}(t)\) and \(\overline{\mathfrak{S}}_{\mathrm{e}}(t)\) can be collapsed to a single master curve - for fixed system size. The collapse works within the entire window of investigated disorder values, which implies the absence of a localization transition even when the disorder exceeds the clean bandwidth by a factor of four.
(ii) We observe extremely strong sample-to-sample fluctuations: For systems with \(L=24\) sites and moderate disorder non-ergodic samples (grains) coexist with highly ergodic samples (bubbles). Previous work has reported an extremely slow flow toward equilibration when growing the system size ("creep") [28]. In this work we have argued that the creep phenomenon is closely related to the strong sample-to-sample fluctuations here observed; the relationship has been established by borrowing ideas from a real-space renormalization group approach and the avalanche concept [15].
(iii) While the existing evidence points to a thermodynamic limit that represents an equilibrating thermal phase, the flow towards this limit suggests the existence of subphases. Specifically, when the disorder strength reaches about two times the bandwidth, accelerated dynamics gives way to decelerated dynamics as indicated in Fig. 1. In this work, we present additional numerical evidence for this scenario, based on the observation that the evolution of \(\mathcal{F}(t)\) exhibits two regimes: At weak disorder, typical fluctuations are damped more strongly than average fluctuations; at larger disorder, it is the other way round.
_Outlook._ As an outlook, we express our belief that the neglect of creep in many, if not most, of the earlier computational studies of MBL, invalidates in parts the data interpretation that has been offered in these works, presumably often in crucial ways. At the time being it is too early writing an MBL review from the perspective of many-body de-localization (MBdL), i.e. "creep". Nonetheless, a firm ground has been laid that allows for a more careful analysis of the physical phenomena to be encountered. Many fascinating ideas have been expressed in the past, some of them formulated as mathematical theorems, some of them in terms of toy models, all of them making predictions that merit a careful computational test. Of course, creep will have to be included as a hallmark of delocalization physics in the data analysis - and no longer be ignored.
The most important statement supporting the existence of MBL goes back to Imbrie [74; 13]. While Imbrie's proof is believed to be rigorous, it also relies upon assumptions, e.g., concerning the spectral statistics of the sample Hamiltonian. It will be interesting to see in future work, whether the lack of evidence for MBL in the XXZ-Heisenberg model is due to the disorder being still too weak, due to the proof being not fully complete yet, or due to a much simpler reason, which is that the theorem does not apply to the XXZ-chain, because its assumptions are not met.
Another pressing open question concerns the nature of MBL in correlated disorder, such as represented, e.g., by the André-Aubry potential (AA). In our previous studies Weiner _et al._[28] of charge-density relaxations, we did not detect a qualitative difference between correlated and fully uncorrelated randomness; similarly, also here we have no indication that the dominating physics is related to the effects of rare regions. Both observations together prompt the expectation that the AA-model exhibits creep, i.e., MBdL. If true, the interpretation of cold-atom experiments in terms of the observation of MBL proper is challenged [75; 76; 77].
One more crucial issue that merits closer scrutiny relates to the choice of the initial state used for time propagation, e.g., a Neel state. In thermalizing phases, it is tempting to assume that the qualitative dynamics seen in a quench mostly reflect properties of the phase rather than the initial state. To what extent this remains true also in marginally thermalizing situations remains to be seen.
We conclude with a caveat: In this article, we have adopted a manner of speaking that is established in the MBL community according to which a phase is "equilibrating" or "thermal" if the temporal evolution of the entanglement entropy is at large times faster than \(\ln t\) and if simultaneously the sublattice imbalance and derived quantities in the thermodynamic limit decay to zero. It should be noted that the traditional meaning of equilibration implies that _all_ local observables relax towards their equilibrium value, eventually. Whether the equilibrating/thermalizing phase(s) indicated in Fig. 1 also satisfies such a stronger condition is a matter of ongoing research.
## VI Acknowledgments
We would like to thank J. Bardarson, I. Gornyi, A. Mirlin, M. Kiefer-Emmanouilidis, J. Sirker, and J. Zakrzewski for critical reading of the manuscript, and for their valuable comments, which improved our manuscript considerably. SB would like to thank S. Nandy for several discussions, and for an earlier collaboration on a closely related topic. FE expresses his gratitude to A. Rosch for an enjoyable set of conversations on the topic. SB acknowledges support from SERB-DST, India, through Matrics (No. MTR/2019/000566), and MPG for funding through the Max Planck Partner Group
at IITB. Funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through EV30/11-2, EV30/12-1, EV30/14-1, EV30/14-2 is gratefully acknowledged.
## Appendix A Fitted parameters
Tab. 1 lists the parameters obtained from fitting the power-law correction, Eq. (7), to the data given in Fig. 5.
Details of the procedure are described in the main text; the ratio \(\overline{\mathcal{S}}_{\mathrm{e}}^{\infty}/\mathcal{S}_{\mathrm{Page}}\) is plotted in Fig. 5(d). At weak disorder, the saturation values have been read directly from the original traces. At stronger disorder, \(W>3.0\) the saturation behavior moves out of the time window where the ansatz (7) would apply.
To get a reasonable estimate of the error in the fitting parameters, we use the bootstrap method [78, 79]. This involves generating 100 new random sample traces around the original trace, with a variance given by the sampling error. All these traces are then fitted with Eq. (7) to obtain a spread, quoted as a 95% confidence interval, in the fitting parameters \(\overline{\mathcal{S}}_{\mathrm{e}}^{\infty},\gamma_{\mathrm{L}}(W)\).
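For readers who wish to reproduce this error estimate, a minimal sketch of the bootstrap loop is given below. It assumes the power-law ansatz takes the form \(\mathcal{S}(t)\approx\overline{\mathcal{S}}_{\mathrm{e}}^{\infty}-c_{0}\,t^{-\gamma}\) and operates on a synthetic trace; the function names and numerical values are illustrative and not taken from the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-law ansatz in the spirit of Eq. (7): S(t) ~ S_inf - c0 * t**(-gamma).
def ansatz(t, s_inf, c0, gamma):
    return s_inf - c0 * t ** (-gamma)

rng = np.random.default_rng(0)

# Illustrative "measured" trace with a known per-point sampling error.
t = np.logspace(0.5, 3, 60)
sampling_err = 0.02 * np.ones_like(t)
s_obs = ansatz(t, 5.0, 3.0, 0.3) + rng.normal(0.0, sampling_err)

# Bootstrap: resample 100 synthetic traces around the observed one,
# refit each, and take the 2.5/97.5 percentiles of the fit parameters.
n_boot = 100
params = []
for _ in range(n_boot):
    s_resampled = s_obs + rng.normal(0.0, sampling_err)
    popt, _ = curve_fit(ansatz, t, s_resampled, p0=(5.0, 3.0, 0.3), maxfev=10000)
    params.append(popt)
params = np.array(params)

lo, hi = np.percentile(params, [2.5, 97.5], axis=0)
for name, l, h in zip(("S_inf", "c0", "gamma"), lo, hi):
    print(f"{name}: 95% CI [{l:.3f}, {h:.3f}]")
```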
## Appendix B \(\mathcal{S}_{2}\) vs \(\mathcal{S}_{\mathrm{e}}\)
As a model for the internal clock, we have adopted the entanglement entropy \(\mathcal{S}_{\mathrm{e}}(t)\). Alternative choices are available, such as the generalized Rényi entropies

\[\mathcal{S}^{(\alpha)}\coloneqq(1-\alpha)^{-1}\ln\mathrm{Tr}_{\mathrm{A}} \hat{\rho}_{\mathrm{A}}^{\alpha} \tag{10}\]

with \(\mathcal{S}_{2}=\mathcal{S}^{(2)}\) and \(\mathcal{S}_{\mathrm{e}}=\mathcal{S}^{(1)}\). Therefore, a question arises concerning the equivalence of different choices. To address this issue, we depict in Fig. 13 the ratio \(\overline{\mathcal{S}}_{2}/\overline{\mathcal{S}}_{\mathrm{e}}\). The ratio deviates from (roughly) a constant at short times and at long times, where system-size effects enter. We conclude that the entropy model for the internal clock has the best chance to work in an intermediate time regime where the ratio adopts a plateau-type behavior. Finite-size effects set in as soon as \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\propto\mathcal{S}_{\mathrm{Page}}\) with a prefactor that shrinks with increasing \(W\).
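For completeness, a short numerical sketch of Eq. (10) for a small random pure state is shown below; the bipartition, the state, and the system size are illustrative and unrelated to the XXZ data discussed above.

```python
import numpy as np

def renyi_entropy(psi, n_a, n_b, alpha):
    """Renyi entropy S^(alpha) of subsystem A for a pure state psi on A (x) B.
    alpha = 1 is treated as the von Neumann limit S_e."""
    m = psi.reshape(2 ** n_a, 2 ** n_b)      # bipartite reshaping
    rho_a = m @ m.conj().T                    # reduced density matrix Tr_B |psi><psi|
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(evals * np.log(evals)))
    return float(np.log(np.sum(evals ** alpha)) / (1.0 - alpha))

rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** 8) + 1j * rng.normal(size=2 ** 8)
psi /= np.linalg.norm(psi)

s_e = renyi_entropy(psi, 4, 4, 1.0)   # S_e = S^(1)
s_2 = renyi_entropy(psi, 4, 4, 2.0)   # S_2 = S^(2)
print(f"S_e = {s_e:.3f}, S_2 = {s_2:.3f}, ratio S_2/S_e = {s_2 / s_e:.3f}")
```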
Figure 14: Time evolution of the entanglement entropy and the imbalance for both the exact and the Chebyshev method, for two randomly chosen disorder realizations (red and blue), \(L=16\) and \(W=1.5\).
Figure 13: The time evolution of the (second) Renyi entropy in units of \(\overline{\mathcal{S}}_{\mathrm{e}}\) in a broad range of disorder values, and system sizes \(L=16,20,24\).
\begin{table}
\begin{tabular}{c c|c c c c} \(W\) & \(L\) & \(\overline{\mathcal{S}}_{\mathrm{e}}^{\infty}\) & \(\overline{\mathcal{S}}_{\mathrm{e}}^{\infty}/\mathcal{S}_{\mathrm{Page}}\) & \(c_{0}\) & \(\gamma_{\mathrm{L}}(W)\) \\ \hline \hline
0.0 & 16 & 3.568(4) & 0.707 & - & - \\ & 20 & 4.504(21) & 0.700 & - & - \\ & 24 & 5.453(12) & 0.697 & - & - \\ & 26 & 5.929(6) & 0.696 & - & - \\ \hline
1.25 & 16 & 3.90(1) & 0.773 & - & - \\ & 20 & 5.118(17) & 0.796 & - & - \\ & 24 & 6.253(20) & 0.800 & - & - \\ & 26 & 6.848(24) & 0.805 & - & - \\ \hline
1.5 & 16 & 3.871(3) & 0.767 & - & - \\ & 20 & 5.158(5) & 0.802 & - & - \\ & 24 & 6.387(8) & 0.817 & 83(10) & 0.852(40) \\ & 26 & 7.012(8) & 0.824 & 74(4) & 0.796(23) \\ \hline
2.0 & 16 & 3.524(7) & 0.698 & 40(10) & 0.807(48) \\ & 20 & 5.117(10) & 0.796 & 13.96(30) & 0.445(54) \\ & 24 & 6.645(30) & 0.850 & 18.66(24) & 0.40(9) \\ & 26 & 7.476(9) & 0.878 & 19.68(13) & 0.369(3) \\ \hline
2.5 & 16 & 2.928(1) & 0.580 & 19.32(7) & 0.653(1) \\ & 20 & 4.450(5) & 0.692 & 10.95(10) & 0.362(4) \\ & 24 & 6.307(5) & 0.807 & 13.13(4) & 0.269(1) \\ & 26 & 7.372(10) & 0.866 & 14.58(8) & 0.239(2) \\ \hline
3.0 & 16 & 2.371(4) & 0.470 & 10.75(80) & 0.545(8) \\ & 20 & 3.497(11) & 0.544 & 9.61(41) & 0.356(7) \\ & 24 & 5.187(10) & 0.663 & 9.96(19) & 0.226(7) \\ & 26 & 6.70(20) & 0.788 & 10.87(17) & 0.166(10) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters obtained from fitting the expression (7) for \(\overline{\mathcal{S}}_{\mathrm{e}}(t)\) to the data given in Fig. 5a). Fits of the clean limit, Fig. 3 right column, have also been included.
## Appendix C Convergence of time evolution
The validity of the conclusions presented in this work relies upon the reliability of our simulation data.
To demonstrate the accuracy of the Chebyshev expansion as we have implemented it, we have performed a per-sample comparison with data from exact diagonalization (ED). Figure 14 displays the result for two typical, i.e., randomly chosen, samples. As seen there, within our simulation time \(t\sim 10^{3}\) the traces are indistinguishable. The main reason for using the Chebyshev expansion is that it allows treating systems larger than those affordable with ED, i.e., \(L\gtrsim 16\).
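Such a comparison is easy to reproduce in miniature. The sketch below propagates a small random Hermitian matrix both exactly and with the standard Chebyshev expansion of the propagator, \(e^{-iHt}=e^{-ibt}\sum_{k}(2-\delta_{k0})(-i)^{k}J_{k}(at)\,T_{k}(\tilde{H})\) with \(H=a\tilde{H}+b\); the matrix, the propagation time, and the expansion order are illustrative choices, not the parameters of the production code used here.

```python
import numpy as np
from scipy.special import jv

rng = np.random.default_rng(2)

# Small random Hermitian "Hamiltonian" so that exact diagonalization is cheap.
dim = 64
h = rng.normal(size=(dim, dim))
h = (h + h.T) / 2.0

# Rescale the spectrum into [-1, 1]: H = a * H_tilde + b.
e_min, e_max = np.linalg.eigvalsh(h)[[0, -1]]
a, b = (e_max - e_min) / 2.0, (e_max + e_min) / 2.0
h_tilde = (h - b * np.eye(dim)) / a

def chebyshev_propagate(psi0, t, order=200):
    """|psi(t)> = e^{-iHt}|psi0> via the Chebyshev expansion of the propagator."""
    phi_prev, phi = psi0, h_tilde @ psi0                  # T_0 psi and T_1 psi
    psi_t = jv(0, a * t) * phi_prev + 2 * (-1j) * jv(1, a * t) * phi
    for k in range(2, order):
        phi_prev, phi = phi, 2 * h_tilde @ phi - phi_prev  # Chebyshev recursion
        psi_t = psi_t + 2 * (-1j) ** k * jv(k, a * t) * phi
    return np.exp(-1j * b * t) * psi_t

# Exact reference via full diagonalization.
evals, evecs = np.linalg.eigh(h)
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

t = 5.0
psi_exact = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
psi_cheb = chebyshev_propagate(psi0, t, order=200)
print("max deviation:", np.max(np.abs(psi_exact - psi_cheb)))
```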
|
2306.15415 | Quantum Fourier Networks for Solving Parametric PDEs | Many real-world problems, like modelling environment dynamics, physical
processes, time series etc., involve solving Partial Differential Equations
(PDEs) parameterised by problem-specific conditions. Recently, a deep learning
architecture called Fourier Neural Operator (FNO) proved to be capable of
learning solutions of given PDE families for any initial conditions as input.
However, it results in a time complexity linear in the number of evaluations of
the PDEs while testing. Given the advancements in quantum hardware and the
recent results in quantum machine learning methods, we exploit the running
efficiency offered by these and propose quantum algorithms inspired by the
classical FNO, which result in time complexity logarithmic in the number of
evaluations and are, therefore, expected to be substantially faster than their
classical counterpart. At their core, we use the unary encoding paradigm and
orthogonal quantum layers and introduce a circuit to perform quantum Fourier
transform in the unary basis. We propose three different quantum circuits to
perform a quantum FNO. The proposals differ in their depth and their similarity
to the classical FNO. We also benchmark our proposed algorithms on three PDE
families, namely Burgers' equation, Darcy's flow equation and the Navier-Stokes
equation. The results show that our quantum methods are comparable in
performance to the classical FNO. We also perform an analysis on small-scale
image classification tasks where our proposed algorithms are at par with the
performance of classical CNNs, proving their applicability to other domains as
well. | Nishant Jain, Jonas Landman, Natansh Mathur, Iordanis Kerenidis | 2023-06-27T12:21:02Z | http://arxiv.org/abs/2306.15415v1 | # Quantum Fourier Networks for Solving Parametric PDEs
###### Abstract
Many real-world problems, like modelling environment dynamics, physical processes, time series etc., involve solving Partial Differential Equations (PDEs) parameterised by problem-specific conditions. Recently, a deep learning architecture called Fourier Neural Operator (FNO) proved to be capable of learning solutions of given PDE families for any initial conditions as input. However, it results in a time complexity linear in the number of evaluations of the PDEs while testing. Given the advancements in quantum hardware and the recent results in quantum machine learning methods, we exploit the running efficiency offered by these and propose quantum algorithms inspired by the classical FNO, which result in time complexity logarithmic in the number of evaluations and are, therefore, expected to be substantially faster than their classical counterpart. At their core, we use the unary encoding paradigm and orthogonal quantum layers and introduce a circuit to perform quantum Fourier transform in the unary basis. We propose three different quantum circuits to perform a quantum FNO. The proposals differ in their depth and their similarity to the classical FNO. We also benchmark our proposed algorithms on three PDE families, namely Burgers' equation, Darcy's flow equation and the Navier-Stokes equation. The results show that our quantum methods are comparable in performance to the classical FNO. We also perform an analysis on small-scale image classification tasks where our proposed algorithms are at par with the performance of classical CNNs, proving their applicability to other domains as well.
## I Introduction
### Fourier Neural Network
Solving Partial Differential Equations (PDEs) has been a crucial step in understanding the dynamics of nature. PDEs are widely used to model natural phenomena such as heat transfer, fluid flow, and electromagnetism.
Each PDE is an equation, along with some initial conditions, for which the solution is a function \(f\) of space and time \((x,t)\), for instance. A _PDE family_ is determined by the equation itself, such as Burgers' equation or the Navier-Stokes equation. An _instance_ of a given PDE family is the aforementioned equation along with a specific initial condition, represented, for instance, as \(f(x,t_{0})\). Modifying this initial condition leads to a new PDE instance and, therefore, to a new solution \(f(x,t)\). Note also that the solution is highly dependent on some physical parameters (_e.g._, viscosity in fluid dynamics).
In practical scenarios, a closed-form solution for most PDE families' instances is difficult to find. Therefore, classical solvers often rely on discretising the input space and performing many approximations to model the solution. A large number of computations for each PDE instance are required, depending on the chosen _resolution_ of the input space.
Recently, considerable research effort into approximating a PDE's solution has been based on neural networks. The main idea is to let a neural network become the solution of the PDE by training it either for a fixed PDE instance or with various instances of a PDE family. The network is trained in a supervised way by trying to match the solutions computed with classical solvers. The first attempts [18, 19] aimed at finding the PDE's solution \(f(x,t)\) for an input \((x,t)\) given a specific initial condition (one PDE instance); later work [20, 17, 1] targeted a specific discretisation resolution for all instances of a PDE family. In the first case, once trained, the neural network can output solution function values at any resolution for the instance it was trained on. However, it has to be optimised for each instance (new initial condition) separately. In the latter case, the neural network can predict solution function values for any instance of the PDE family, but only at the fixed resolution on which it was trained.
A recent proposal named the _Fourier Neural Operator_ (FNO) [14] overcame these limitations and posed the problem as learning a function-to-function mapping for _parametric_ PDEs. Parametric PDEs are families of PDEs for which the initial condition can be seen as a parametric function. Given any initial condition function of one such PDE family sampled at any resolution, the neural network can predict the solution function values at the sampled locations.
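To make the classical reference point concrete, the following is a minimal numpy sketch of a single Fourier layer of the kind used by the FNO: transform the lifted input to Fourier space, mix the lowest few modes with learned complex weights, transform back, add a pointwise linear path, and apply a nonlinearity. The mode count, channel width, random weights, and the name `fourier_layer` are illustrative assumptions, not the configuration of [14].

```python
import numpy as np

rng = np.random.default_rng(0)

n_s, channels, modes = 64, 8, 12   # illustrative sizes, not those of [14]

def fourier_layer(v, w_fourier, w_pointwise):
    """One Fourier layer: FFT along the spatial axis, mix the lowest `modes`
    frequencies with learned complex weights, inverse FFT, add a pointwise
    linear path, then a nonlinearity. v has shape (n_s, channels)."""
    v_hat = np.fft.rfft(v, axis=0)                        # (n_s//2 + 1, channels)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = np.einsum("kc,kcd->kd", v_hat[:modes], w_fourier)
    spectral = np.fft.irfft(out_hat, n=n_s, axis=0)
    return np.maximum(spectral + v @ w_pointwise, 0.0)    # ReLU

# Randomly initialised "learned" parameters (training is omitted in this sketch).
w_fourier = (rng.normal(size=(modes, channels, channels))
             + 1j * rng.normal(size=(modes, channels, channels))) / channels
w_pointwise = rng.normal(size=(channels, channels)) / channels

# Lift a sampled initial condition f(x, t0) into `channels` features and apply the layer.
x = np.linspace(0.0, 1.0, n_s)
f0 = np.sin(2 * np.pi * x)                                 # toy initial condition
lift = rng.normal(size=(1, channels))
v = f0[:, None] @ lift
print(fourier_layer(v, w_fourier, w_pointwise).shape)      # (64, 8)
```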
The input is usually the initial condition \(f(x,t_{0})\) itself. It is encoded as a vector of a certain length \(N_{s}\) by sampling it uniformly at \(N_{s}\) locations \(x\) of the input space, given some resolution. This input is also called an _evaluation_ of the initial condition function. The number of samples \(N_{s}\) is key in analysing the computational complexity as it is the neural network input size. Note that sometimes the initial condition is also sampled for several times \(t\) as well. The output of the neural network is the corresponding PDE's solution \(f(x,t)\) applied at all \(x\) sampled and for a fixed \(t\). Experiments on widely pop |
2308.03903 | Average Estimates in Line Graphs Are Biased Toward Areas of Higher
Variability | We investigate variability overweighting, a previously undocumented bias in
line graphs, where estimates of average value are biased toward areas of higher
variability in that line. We found this effect across two preregistered
experiments with 140 and 420 participants. These experiments also show that the
bias is reduced when using a dot encoding of the same series. We can model the
bias with the average of the data series and the average of the points drawn
along the line. This bias might arise because higher variability leads to
stronger weighting in the average calculation, either due to the longer line
segments (even though those segments contain the same number of data values) or
line segments with higher variability being otherwise more visually salient.
Understanding and predicting this bias is important for visualization design
guidelines, recommendation systems, and tool builders, as the bias can
adversely affect estimates of averages and trends. | Dominik Moritz, Lace M. Padilla, Francis Nguyen, Steven L. Franconeri | 2023-08-07T20:35:30Z | http://arxiv.org/abs/2308.03903v1 | # Average Estimates in Line Graphs Are Biased Toward Areas of Higher Variability
###### Abstract
We investigate _variability overweighting_, a previously undocumented bias in line graphs, where estimates of average value are biased toward areas of higher variability in that line. We found this effect across two preregistered experiments with 140 and 420 participants. These experiments also show that the bias is reduced when using a dot encoding of the same series. We can model the bias with the average of the data series and the average of the points drawn along the line. This bias might arise because higher variability leads to stronger weighting in the average calculation, either due to the longer line segments (even though those segments contain the same number of data values) or line segments with higher variability being otherwise more visually salient. Understanding and predicting this bias is important for visualization design guidelines, recommendation systems, and tool builders, as the bias can adversely affect estimates of averages and trends.
bias, lines graph, ensemble perception, average
## 1 Introduction
Since William Playfair invented line graphs in 1786 [23], they have become one of the most common data visualization types. Designers use line graphs to visualize stocks, sensor data, machine learning metrics, and human vitals (e.g., heart rate). Line graphs show a continuous variable's change over another continuous variable, typically time, as the changing position of a line mark.
We generally assume that visualizations, especially those using effective visual encoding channels such as position, are perceived imperfectly but without systematic bias [5]. The popularity of line graphs may stem from the fact that visually encoding a time series as the position of a line is considered effective relative to other visual encoding channels, such as hue, depending on the task. However, designers should be cognizant of perceptual biases that can lead to misinterpretation of visualizations [29, 14]. For example, prior work demonstrates that the background color can bias the perception of the color of marks [28], and continuous rainbow color maps are perceived as discrete categories [17, 25].
There may be unexplored biases in line charts as well. When drawing a line, the length of the line drawn varies not only with the duration of the visualized time series but also with the variability of the values (and the resulting variability of the line graph). For example, take two time series of regularly sampled values over the same duration. The first value may be constant while the second value oscillates. Both time series have the same number of values (the same duration), but in the visualization as a line graph, the second line has a longer overall length--we call this the _arc length_ of the line. The arc length is the sum of the length of all line segments. Steeper line segments are longer than other line segments of the same length along x. The arc length of a line affects how much visual weight a line has (how much "ink" is needed to draw it) and how much it draws viewers' attention [30]. Within a single line, periods of the same length may have a longer or shorter arc length depending on how much the line goes up and down, which depends on the amount of variability in the visualized time series.
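As a concrete illustration of this point, the short sketch below computes the arc length of two series with the same number of samples; the axis scaling is an assumption, since the drawn arc length also depends on how data units map to pixels.

```python
import numpy as np

def arc_length(y, x=None):
    """Sum of the lengths of the line segments that would be drawn for series y."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float) if x is None else np.asarray(x, dtype=float)
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))

n = 120
x = np.arange(n, dtype=float)
flat = np.full(n, 50.0)                                        # constant series
wiggly = 50.0 + 20.0 * np.sin(np.linspace(0, 20 * np.pi, n))   # same duration, higher variability

# Same number of samples (same duration), but very different amounts of "ink".
print(arc_length(flat, x), arc_length(wiggly, x))
```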
Estimates of average values may be biased by design features of the marks that draw viewers' attention, as found in prior work [13], and increased variability in a visualized time series may capture attention. Our bottom-up attention is generally attracted to visual information that contrasts with its surroundings [30]. Marks can vary in contrast to the background and other elements, which dictates how strongly they capture our attention, referred to as _salience_. For example, areas of a line graph with high variability also have more ink (often in color) and more edges, creating high contrast with the background. Therefore, we hypothesize that average estimates in lines are biased toward areas of line graphs that have a longer relative arc length (i.e., that have a longer arc length for the same duration or that use more ink). Put differently, we hypothesize that increased variability in higher values increases the average estimate of a time series (and vice versa) in line graphs and that the bias is consistent with the salience of the line.
We tested this hypothesis in two experiments. Our first experiment showed that average estimates are biased toward the area of the line that visualizes more variable data. In the second experiment, we sought to understand the reasons for the observed bias. We hypothesized that average estimates in line graphs are consistent with the salience of a line. We, therefore, hypothesized that average estimates of points drawn along the arc of a line are more consistent with average estimates
on lines than points drawn at regular intervals along the time dimension of a graph. In other words, the bias to variability may decrease from a line graph encoding to a point encoding of the same data. The results of the second experiment confirm this hypothesis, and a demonstration of the findings is shown in Figure 1. We preregistered the experiments on the Open Science Framework (Experiment 1 and Experiment 2) and have made our study materials available at (OSF Link).
Although we are not the first to document that different time series have different arc lengths, and that arc length affects aggregates [12, 21], we experimentally show the effect and reveal how much average estimates in line graphs can be biased by variability in the time series. Understanding this bias is important because people often use line graphs (as one of the most common visualization types) to visually assess whether values are, on average, above or below critical thresholds or to estimate future trends. Our results show that variability in line graph biases the perception of averages and, therefore, conclusions people draw. While the variability affects audiences' perception, it may be an artifact of an irrelevant factor that should not affect their conclusions. We discuss these implications and potential designs that could reduce the observed bias.
## 2 Related Work
Line graphs are a common visualization--especially of time series data--in various domains [24], appearing in papers, reports, monitoring dashboards, and visual analysis systems. They are generally considered an effective visualization for time series data [31]. Mackinlay describes a visualization as _effective_ when the information it conveys is more readily perceived than with other visualizations [18]. A visualization is always effective only with respect to a particular _task_. In this paper, the task is to estimate the average of a time series, which corresponds to "compute derived value" in Amar et al.'s popular low-level task taxonomy [3]. Whether a visualization is effective depends on choosing the right visual mark and effective visual encoding channels. Position is considered the most precise visual encoding channel [5].
However, research also demonstrates the limitations of line graphs for various tasks (e.g., [1, 2, 5, 6, 9, 11, 15, 33]; reviewed in [24]). Several studies compared line charts to other encodings and found that the efficacy of line charts depends on the task [2, 6, 11, 15]. Albers, Correll, and Gleicher found that line graphs are best suited for identifying the min, max, and range while less effective for average estimation [2] (see also [6]). Studies have shown that positional encoding may be a precise visual encoding channel, but it can produce systematic biases regarding how averages are perceived [33] and remembered [19]. Researchers investigated bias in composed displays with line and bar graphs [33]. When comparing two curved lines, work shows that the steepness of the lines causes a perceptual illusion making it challenging to estimate differences between the lines visually [5].
We are not the first to recognize that line graphs dedicate more visual weight to steeper lines. This effect becomes critical when summarizing large ensembles of line graphs. Heinrich and Weiskopf reduced the salience of steeper lines in density visualizations of parallel coordinate plots [12]. Moritz and Fisher aggregated line graphs to create density visualizations of large time series [21]. To avoid visual artifacts of steeper lines, they normalized each line by the arc length such that each time series contributes equally to the density visualization. Zhao et al. proposed an effective density computation and extended density visualizations with interactivity [34]. However, none of these works experimentally confirm that average position estimates in line graphs are biased due to the increased salience.
## 3 Experiments
In two experiments, we investigated the perception of average values in line graphs. The goal of the first experiment was to determine if the perception of averages is biased toward variability and whether we can predict this bias. To examine one possible source of the bias, in Experiment 2, we aimed to identify the contribution of the line encoding. To test this, we conducted a study comparing the bias of three mark types: 1) points equally spaced along the x-axis, or _Cartesian spaced_, 2) points equally spaced along the arc of the line, and 3) a line.
### _Experiment 1_
The first experiment aimed to determine if there was a previously undocumented bias in perceptual line graph average estimation. We termed this potential bias _variability-overweighting_, which is when people believe that the average value of the data set is closer to the more variable data. For example, in Figure 1 center, if a participant were to indicate that the average value was located at the red line, they would be incorrect. In the figure, the true average is lower, but it could be the case that people are biased toward the more variable data.
We hypothesized that individuals would have a skewed perception of averages toward sets of values with higher variance. High variance increases the amount of ink in a line graph and, therefore, the visual saliency of the line. If, in a line graph, the variance correlates with the value plotted along the same axis (here y), then we expect people to estimate that most values are where the high-variance data is located.
To test if variability-overweighting occurs with line graphs, in Experiment 1, we showed participants line graphs of synthetic stock data that we modified to induce increased variability. We created stimuli that included more variability in the higher y-values and then reflected the stimuli to create graphs with more variability in the lower y-values (see Figure 2). We will refer to the stimuli with variability in the higher y-values as "variability upper" and those with variability in the lower y-values as "variability lower." Participants were tasked with estimating the average y-value of the stock data using a dragable line (Figure 3). We used a 2 (variability upper vs. lower) \(\times\) 2 (more vs. less variability) within-subjects design for a total of 4 stimuli types of interest.
We generated the stimuli from 12 seeds to create 48 trials to ensure test-retest reliability. Creating images reflected vertically allowed us to test if variability-overweighting occurs similarly for higher or lower areas on the y-axis. Participants were shown 48 images in a randomized order and estimated the average y-value for each image. We calculated each judgment's Euclidean distance estimation error by subtracting the actual average from the estimated average. The direction of the error was preserved, such that positive values indicated an overestimation of the average value, and negative values represented an underestimation.

Fig. 2: First 8 of 48 stimuli for Experiment 1. The stimuli included two variability levels for each seed and conditions where the graphs were mirrored, creating stimuli with variability in the higher and lower y-values.

Fig. 3: Example stimulus. Participants can move the grabber up and down to where they estimate the line's average to be. Each series is shown as a 500px by 200px graph.
### Experiment 2
In Experiment 2, we aimed to determine if we could influence the degree of variability-overweighting by changing the mark type. To test this, we replicated Experiment 1, but we encoded the data using 1) Cartesian spaced points, 2) points equally spaced along the arc of the line, and 3) the same line encoding used in Experiment 1 (see Figure 4).
We hypothesize that by encoding the time series data as Cartesian spaced points, we can reduce the bias toward more variable data. Each data point is rendered as one point mark without a connection in this encoding. Therefore, two rendered points have the same salience regardless of their distance in y. In lines, the salience of the mark depends on the length of the arc. We also rendered points at equally spaced intervals along the arc (points along the arc) to simulate this behavior in our experiment. With points along arc, the average y-position of the points is heavily biased toward more variability since lines between neighboring points with more different values are longer. Therefore more points are along the arc of lines with more variability. In this encoding, a perfectly accurate viewer cannot estimate the true average of the underlying data series. We also included a line encoding to replicate Experiment 1.
As in Experiment 1, we showed participants graphs of synthetic time series. We then ask them to estimate the average using a dragable line. We used a 3 (point along x, point along arc, line) \(\times\) 2 (variability upper vs. lower) \(\times\) 2 (more variability vs. no variability) design.
We generated the stimuli types from 12 seeds to create 144 total trials. Participants viewed 48 images of one mark type from the 144 trials, and we calculated the error similarly to Experiment 1. We switched to a between-subject experiment to limit the number of graphs each participant saw and reduce the possibility of potential bias of viewing multiple graph types.
## 4 Stimuli generation
We generated the stimuli for both experiments with the same process. This process used a simulation to generate realistically-looking line charts that we add linearly-interpolated noise to. We re-scaled the series to correct for a subtle yet important bias introduced by the noise. The code for our stimuli generation for Experiment 1 and Experiment 2 are available online and as supplemental material.
### Experiment 1
The stimuli (with a sample shown in Figure 2) are line graphs of randomly generated series of numbers. Each series has 120 data points. A series is generated from a base series to which we add noise. We generated base series from a geometric Brownian motion stochastic process [32], a process used to generate realistic-looking stock data. We set \(\mu=0\) and \(\sigma=1\). We then applied a moving average over 30 points to smooth the base series. To get 120 data points in a series, we generate \(120+30=150\) data points from geometric Brownian motion. We scale the base series, so all values are between zero and one. To the base series (consisting of data points \(\text{base}_{i}\)), we added uniform random noise (centered around 0). The amount of noise increases linearly with the value of the base series for positive y-alignment (linear interpolation between \(\text{lowVariability}\) and \(\text{highVariability}\)).
\[\begin{split}\text{noise}_{i}&=\text{lowVariability} \times(1-\text{base}_{i})+\text{highVariability}\times\text{base}_{i}\\ \text{dataPoint}_{i}&=\text{base}_{i}+(\text{rand} ()-0.5)\times\text{noise}_{i}\end{split} \tag{1}\]
Low variability series have less variability (0.15) than high variability series (0.4). We seeded the random number generator for the geometric Brownian motion stochastic process and noise to reproduce the same series. We curated the set of seeds to generate diverse line graphs with different shapes.
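A condensed sketch of this generation procedure (geometric Brownian motion, 30-point moving average, value-dependent uniform noise as in Eq. (1)) is shown below; the GBM discretisation step and the function name are our own illustrative assumptions, and the released stimulus code remains the authoritative version.

```python
import numpy as np

def make_series(seed, variability, n=120, window=30,
                low_variability=0.15, dt=0.01):
    """Base series from geometric Brownian motion (mu=0, sigma=1), smoothed by a
    moving average, rescaled to [0, 1], then perturbed by uniform noise whose
    amplitude interpolates between low_variability and `variability` (Eq. (1))."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(loc=-0.5 * dt, scale=np.sqrt(dt), size=n + window)
    gbm = np.exp(np.cumsum(increments))                      # GBM path with mu=0, sigma=1
    base = np.convolve(gbm, np.ones(window) / window, mode="valid")[:n]
    base = (base - base.min()) / (base.max() - base.min())   # scale base series to [0, 1]
    noise = low_variability * (1.0 - base) + variability * base
    return base + (rng.random(n) - 0.5) * noise

low = make_series(seed=7, variability=0.15)
high = make_series(seed=7, variability=0.4)
print(low.shape, high.shape, low.mean(), high.mean())
```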
We generated a series for negative y-alignment by mirroring the series vertically. We mirror each series to understand whether the average estimates may be biased toward higher and lower y-values and counter-balance this bias in our experiments to measure bias toward higher variability.
\[\text{mirroredDataPoint}_{i}=1-\text{dataPoint}_{i} \tag{2}\]
### Scaling the averages
To allow for comparison across stimuli, we set the values between zero and one. We could naively scale the generated series to \([0,1]\), but this would invalidate our experiment. To understand why, assume without loss of generality that the averages of the base series are around 0.5 and that the generation procedure adds noise to larger y-values (for non-mirrored series). The generated series then range between 0 and a value \(\geq 1\) (with the exact amount depending on the noise). Therefore, if we rescaled the data to \([0,1]\), we would push the average values of the series to lower y-values.
Let us assume our participants respond randomly or always estimate the average at 0.5. In both cases, we get the same result. Since the true averages are overall lower, we would find that estimates are biased to be higher than the average. We believe that every experiment that investigates bias should test its analysis with random responses. With random responses, any observed human bias should disappear.
To overcome the issue, we scale (multiply) the averages of the low-noise data to have the same averages as the high-noise data.
\[\text{scalingFactor}=\frac{\text{average}(\text{highNoiseData})}{\text{ average}(\text{lowNoiseData})} \tag{3}\]
For mirrored series, we scale by \((1-\text{average}(\text{highNoiseData}))/(1-\text{average}(\text{lowNoiseData}))\). After this scaling, all stimuli with the same seed (and mirroring) have the same average. If we simulate an experiment where participants always estimate 0.5, we find no bias toward higher-variable areas.

Figure 4: First 8 of 144 stimuli for Experiment 2. For each seed, we included two variability levels, three mark types, and stimuli with variability in the higher and lower y-values.
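Continuing in the same sketch style, the rescaling of Eq. (3) and the random-responder sanity check can be written as follows; the toy series and the handling of the mirrored case are illustrative assumptions.

```python
import numpy as np

def rescale_low_noise(low_noise, high_noise, mirrored=False):
    """Scale the low-noise series so its average matches the high-noise series (Eq. (3)).
    The mirrored branch is one plausible reading of the description in the text."""
    if mirrored:
        factor = (1.0 - high_noise.mean()) / (1.0 - low_noise.mean())
        return 1.0 - factor * (1.0 - low_noise)
    return low_noise * (high_noise.mean() / low_noise.mean())

rng = np.random.default_rng(0)
# Toy stand-ins for a low- and a high-noise series from the same seed.
base = rng.random(120) * 0.6 + 0.2
low_noise = base + (rng.random(120) - 0.5) * 0.15 * base
high_noise = base + (rng.random(120) - 0.5) * 0.40 * base

low_scaled = rescale_low_noise(low_noise, high_noise)
assert np.isclose(low_scaled.mean(), high_noise.mean())

# Sanity check: a simulated participant who always answers 0.5 shows no
# systematic difference between conditions once the averages are matched.
print(0.5 - low_scaled.mean(), 0.5 - high_noise.mean())
```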
### Experiment 2
We use the same stimuli generation procedure as in Experiment 1 and the same seeds. In addition to the line encoding, we created graphs with points sampled at equal distances along the x-axis and sampled along the arc of the lines. This design resulted in three stimuli types (lines, Cartesian spaced points, and arc spaced points). Like in Experiment 1, we used two--albeit different--levels of variability in the graphs (no additional variability, and 0.4). We chose no additional variability for the first level to understand the generated series' baseline bias. We used the same level of variability for the high variability series to replicate Experiment 1. The series had 60 data points and were scaled as in Experiment 1. Figure 4 shows example stimuli.
## 5 Design, Procedure, and Participants
### Design
**Experiment 1:** We used a 2 (variability: .15 and .4) \(\times\) 2 (variability upper vs. lower) within-subjects design. Average estimation error was collected as the dependent variable. This design resulted in a total of four trial types, which were generated 12 times (48 total trials) to ensure test-retest reliability.

**Experiment 2:** We used a 2 (variability: 0 and .4) \(\times\) 2 (variability: upper vs. lower) \(\times\) 3 (mark type: line graph, Cartesian spaced points, arc spaced points) mixed design. The between-subjects measure was mark type; the within-subjects conditions were variability (0 vs. .4) and variability upper vs. lower, the latter serving as a manipulation check. Mean estimation error was collected as the dependent variable. Each participant completed the task with graphs that included variability 0/.4 and variability upper/lower in a randomized order. The total number of trials was the same as in Experiment 1.
### Procedure
In both experiments, participants completed this study online on their personal machines. After giving Institutional Review Board (IRB) approved consent to participate, individuals were given three types of instructions. The first set of instructions prompted participants to set their browser window to 100% zoom. The second set of instructions pertained to the task, which was:
"_Experiment Instructions_. _Please read the following paragraphs carefully. You will be asked questions about the information in the paragraphs._
_Scenario: Assume that you are a stock market investor. You are investing your own money in stocks, and you want to determine the average price of a stock over time in order to pick the best investment._
_Task: In this experiment, you will be shown graphs of stock prices over a one-year period like the one below. Your task is to determine the average stock price for that year. What is the average stock price? (Click and drag the line to indicate the average stock price)_
_Response: To indicate the average stock price, use your mouse to drag the line on the chart. Move the line to where you think the average stock price is for that year. You can readjust the line by clicking and dragging. Once you are happy with your judgment of the average stock price, click the next button._"
The final set of instructions was an attention check, where participants were asked to fill in a blank with the word "stock". The sentence was, "_During this study, you will be asked to look at graphs of_ -- _prices._" Following the instructions, participants completed 48 estimation judgments in a randomized order. They indicated their judgments using a horizontal slider that was superimposed on the stimuli (shown in Figure 3) to estimate the average data value in the graphs. The trials included text reminding the participants about the task. If participants failed to move the slider, they would be prompted to do so and restricted from progressing until they made their judgment. They received no feedback as to the accuracy of their judgments.
Following the main experiment, participants answered open-ended questions about their strategy and what they thought the experiment was about. They also reported their gender and age.
### Participants
Based on the effect size calculated from pilot data, a power analysis was conducted using G*Power to determine an adequate sample size, and it was preregistered. At an alpha of 0.05, power of 0.95, 4 predictors, and an effect size of adjusted r-squared of 0.13, the minimum number of participants needed is 132, which we rounded to 140. For Experiment 1, participants were 142 people from Amazon's Mechanical Turk, with participation criteria set to workers in the US who were 18 years of age or older. Participant demographics were 98 male and 44 female, with an average age of 39 (\(SD=9\)).
For Experiment 2, participants were 420 (140 per between-subjects group) people from Amazon's Mechanical Turk. Of those who chose to answer, 46% identified as female, with an average age for the whole sample of 41 (\(SD=11\)). IRB approval for this research was obtained from (removed for anonymization) University's IRB. Participants were paid in accordance with (removed for anonymization) minimum wage.
## 6 Results
To answer our primary analysis question, whether the perception of averages in lines is biased toward variability and whether we can manipulate and predict this bias, we will detail the results of the two experiments. In each experiment, we will begin with descriptive statistics about estimation error. We then show the results of statistical tests of our preregistered hypotheses. For all of the analyses, we did not remove any participants.
Following the preregistered analysis, we detail the thematic analysis of participants' strategies, including examining the variability-overweighting exhibited by participants who reported using the correct strategy. We also conducted a sensitivity analysis to determine if individuals who guessed the purpose of the study biased the results. We conclude the analysis with model comparisons that use the average along the arc to predict the observed biases.
#### 6.0.1 Accuracy Calculation
We computed the error for each participant's estimates as the difference between the estimated average and the true average.
\[\text{Error}=\text{Estimated Average}-\text{Average} \tag{4}\]
We also calculated whether a participant overestimates the average. We specify that they overestimated when the estimated average is higher than the true average of the time series data.
\[\text{Overestimated}=\begin{cases}\text{Overestimated},&\text{if Error}>0\\ \text{Underestimated},&\text{otherwise}\end{cases} \tag{5}\]
To always have the high variability data at the higher y values, we also compute a normalized average as:
\[\text{Normalized Average}=\begin{cases}-\text{Average}+1,&\text{if variability upper vs. lower}\\ \text{Average},&\text{otherwise}\end{cases} \tag{6}\]
We similarly compute a normalized error where a positive error indicates an estimated average toward higher variability.
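These quantities are straightforward to compute from a response table; a short sketch follows, in which the column names are assumptions rather than the study's actual data format.

```python
import numpy as np
import pandas as pd

# Illustrative response table; column names are assumptions, not the study's export format.
df = pd.DataFrame({
    "estimated_average": [0.52, 0.47, 0.61],
    "true_average":      [0.50, 0.50, 0.55],
    "variability_lower": [False, True, False],   # assumed flag: True when the graph was mirrored
})

df["error"] = df["estimated_average"] - df["true_average"]                 # Eq. (4)
df["overestimated"] = np.where(df["error"] > 0, "Overestimated",
                               "Underestimated")                           # Eq. (5)
# Normalize so positive values always point toward the high-variability region (Eqs. (6) and following).
df["normalized_average"] = np.where(df["variability_lower"],
                                    1.0 - df["true_average"], df["true_average"])
df["normalized_error"] = np.where(df["variability_lower"],
                                  -df["error"], df["error"])
print(df)
```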
### Experiment 1
**Descriptive statistics**. As a preliminary analysis of estimation error, we counted how many times participants over or underestimated the average. Of the 6816 responses, 3239 (48%) were overestimated, and 3577 (52%) were underestimated (see Figure 5). Participants generally underestimated averages.
Since our experimental data contains graphs with variability in the upper and lower y-values, we broke down these counts by this condition to determine whether the estimates are toward or away from the higher
variability. As shown in Figure 5, for graphs where the variability was in the higher y values, participants overestimated 2305 (68%) trials, and they underestimated 1103 (32%). In the condition where the variability was in the lower y values, 934 (27%) were overestimated, and 2474 (73%) were underestimated. In both conditions, the estimation error was consistent with the variability.
**Statistical tests of preregistered hypotheses.** To determine if the findings from the descriptive statistics are robust, we conducted a statistical analysis of our preregistered hypotheses. We preregistered three hypotheses on the Open Science Framework for the first experiment1.
Footnote 1: In the original pre-registration, we used the term _noise_ rather than _variability_. We updated the term here to be consistent.
1. _"Estimation error will be significantly different than zero."_
2. _"There will be significantly more estimation error for trials with higher variability compared to lower variability."_
3. _"Estimation error will be observed in the direction of the increased variability (i.e., positive errors will be observed when the area of highest variability is above the average y-value and negative errors will occur when the highest variability is lower on the y-axis than the average.)"_
To test these hypotheses, we conducted the preregistered analysis, in which a linear regression model was fit to the data using the R function lmer [26] with restricted maximum likelihood estimation procedures [27]. Note that we used multi-level linear regression models to account for correlations between participants' responses instead of the more simplistic pre-registered linear regression models. Linear regression assumptions were tested and met. The model included _variability_ size (.15 vs. .4), _variability position_ (upper vs. lower), their interaction, and random intercepts for each participant to predict errors in participants' average estimations. The referents were .15 and variability in the upper y-values. The resultant model in R notation was: \(Error\sim variabilitySize*variabilityPosition+(1|Id)\).
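An analogous model can be fit outside R, for example with statsmodels' mixed-effects routine; the sketch below uses simulated responses and assumed column names and is not the authors' preregistered analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated long-format responses; column names are illustrative assumptions.
n_participants, n_trials = 30, 48
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "variability_size": np.tile(rng.choice([0.15, 0.4], n_trials), n_participants),
    "variability_upper": np.tile(rng.choice([0, 1], n_trials), n_participants),
})
df["error"] = (0.02 * df["variability_size"] * np.where(df["variability_upper"], 1, -1)
               + rng.normal(0, 0.05, len(df)))

# Error ~ variabilitySize * variabilityPosition + (1 | Id), i.e. random intercepts per participant.
model = smf.mixedlm("error ~ variability_size * variability_upper",
                    data=df, groups=df["participant"])
print(model.fit(reml=True).summary())
```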
**Test of H1: _estimation error will be significantly different than zero._** The results revealed a significant intercept of the model (\(b=-.036\), \(t(6,811)=-6.6\), \(p<.001\), 95% CI \([-.047,-.025]\)), providing evidence that the absolute estimation error (3.6%) for the referent conditions was meaningfully different than zero (supporting H1). This effect can be seen in Figure 6 (left panel), which displays estimation errors for each condition, with none of the conditions overlapping zero.
**Test of H2: _significantly more estimation error for trials with higher variability._** The results also revealed a significant main effect of variability (\(b=-.022\), \(t(6,811)=-6.5\), \(p<.001\), 95% CI \([-.029,-.016]\)). This effect can be seen in Figure 6, where there is a meaningful separation between the two variability types for the variability upper vs. lower conditions (denoted with H2). This finding supports H2, suggesting significantly more estimation error for trials with higher variability than lower variability.
**Test of H3: _estimation error will occur in the direction of the increased variability._** There was also a significant interaction between _variability .15 vs. .4_ and _variability upper vs. lower_ (\(b=.034\), \(t(6,811)=7\), \(p<.001\), 95% CI \([.025,.044]\)). To unpack the interaction, we ran the same model as above but with the variability in the lower y-value graphs as the referent. This model yielded a significant effect of variability but in the opposite direction (\(b=\mathbf{.012}\), \(t(6,811)=3.39\), \(p=.001\), 95% CI \([.005,.018]\)) compared to the prior model (\(b=\mathbf{-.022}\)). As seen in Figure 6, errors occurred in the direction of the increased variability, supporting H3. We found positive errors when the area of highest variability was above the average y-value and negative errors when the highest variability was lower on the y-axis than the average.
### _Experiment 2_
To examine one possible source of the variability-overweighting, in Experiment 2, our goal was to identify the contribution of the line encoding. We predicted that there would be an interaction between variability and the mark type, such that the effect of variability will be smaller for graphs with points spaced along the x-axis than graphs with points spaced along the arc and line graphs.
**Descriptive statistics.** Using the same methods as in Experiment 1, we counted how many times participants were biased toward variability (see Figure 7). Of the 6720 responses per mark type, 3724 (55%) for Point, 4226 (63%) for Line, and 4738 (71%) for Point Arc were biased toward variability.
**Statistical tests of preregistered hypotheses.** To test the reliability of the descriptive statistics, we preregistered two hypotheses on the Open Science Framework for the second experiment.
1. _"There will be significantly more estimation error for trials with higher variability than no additional variability."_
2. "_The least variability-overweighting will occur in graphs with points that are equally spaced along the x-axis."_
We used a multilevel model to fit the data using the _lmer_ package [4] in R, which is appropriate for mixed designs with between- and within-subjects variables. The model used variability (0 and.4) to predict normalized estimation error (testing H4). We calculated the normalized estimation error for each condition using the absolute error (the error is computed in the same way as in Experiment 1) when the graph was vertically mirrored. We used _normalized error_ rather than _error_ and removed the _variability upper vs. lower_ term to reduce the complexity of the model, which was preregistered. To evaluate H5, we also included an interaction term between mark type and variability and the necessary lower-order terms. Finally, we included random intercepts for each participant. The resultant model in R notation was: \(NormalizedEstimationError\sim markType* variability+(1|Id)\). The referents of the model were the line mark and zero variability.
**Test of H4: _significantly more estimation error for trials with higher variability._** Replicating Experiment 1, the model results revealed a main effect of variability (\(b=0.03\), \(t(20,153)=10.03\), \(p<.001\), 95% CI \([.024,.036]\)), indicating that graphs with more variability had greater estimation error. Figure 8 shows that, when collapsed across the mark types, estimation error increased by .03 from graphs with no additional variability to .4 variability (confirming H4). The meaningful increase in error with the more variable graphs can also be seen in Figure 6 (right panel), which shows the impact of variability on each mark type.
**Test of H5: _least variability-overweighting in graphs with equally spaced points._** As shown in Figure 6 (right panel), Point (Cartesian spaced) had the smallest change in normalized error from charts with low to high variability (0 to.4). Our results revealed the change from low to high variability was meaningfully larger for Line vs. Point (\(b=-.023\), \(t(20,153)=-5.45\), \(p<.001\), 95% CI \([-.031,-.015]\)). The change in normalized error from low to high variability was also meaningfully larger for Point Arc vs. Point (\(b=.05\), \(t(20,153)=10.74\), \(p<.001\), 95% CI \([-.031,-.015]\)). Point Arc showed the largest increase in the normalized error of 5%, followed by Line (3%), and then Point (.69%).
Fig. 5: Counts of the over- and under-estimations in Experiment 1, broken down by the variability upper vs. lower condition.
We conducted a follow-up regression analysis to determine if the small increase in normalized estimation error was meaningful for Point. This analysis revealed a meaningful but small bias for Point (\(b=.007\), \(t(6,718)=2.24\), \(p=.025\)). In sum, these results support H5, indicating that the points equally spaced along the x-axis had the least bias (.69%), with the line (3%) and points along the arc showing greater bias (5%).
### Open Responses for Experiments 1 and 2
After completing the estimation judgments, participants answered an open-ended question about their strategies in the task. The question was, "_We are very interested in how you made your decisions about the average stock price. Please list all the things you considered when making your judgments._" Two raters read the responses and coded them based on the six most common strategies to analyze these data. The following sections report the six most frequent strategies and include example responses. In this analysis, we identify that a small proportion of the participants reported using the correct strategy. We conducted a follow-up analysis to determine if those who were consciously aware of the correct strategy displayed less variability overweighting.
Participants also reported their beliefs concerning the purpose of the experiment to determine if any participants intentionally biased their judgments. The question text included, "_What do you think the experiment was about?_" In the second part of this section, we report the proportion of participants who were aware of the purpose of the study. Then we conducted a sensitivity analysis to determine if the people who guessed the purpose of the study meaningfully impacted the findings.
#### 6.3.1 Reported strategies
We identified six main strategies that participants used to estimate the average of the stock data. Using the strategy codes, we computed inter-rater reliability scores (_IRR_, Cohen's Kappa) [8] for the codes to determine the level of agreement between the raters (shown in the bottom row of Table I). The average inter-rater reliability for the six questions was .83 and ranged from .70 to .89. This range of inter-rater reliability scores indicates a substantial level of agreement between the two raters [20]. The codes were not mutually exclusive, and many times participants indicated that they used several strategies, in which case they received multiple codes. The proportion of strategies reported in Table I is for the codes the two raters agreed on.
**Mental averaging.** The most commonly reported strategy was mentally computing the average using visual perception. For example, a participant wrote, "_I marked the point on the graphs where it seemed like the generalized average would fall if the points on the graph were boiled down into numbers and you wanted to find the average of those numbers._" Another participant described, "_I tried to get a visual sense of where the average would fall. I looked for a good mid point of the overall graph._" Table I shows that this was the most commonly reported strategy for each mark type.
**Focusing on extrema.** The second most common strategy was to focus on the max and min points and select a location between those extrema. For example, a participant wrote, "_I looked at the highest and lowest point and went with the middle._" Another example includes, "_I looked at the lowest mark and the highest mark and then the middle of that, but looked to see if there were upper or lower trends and adjusted accordingly to that..._" Focusing on the extrema is not the most effective strategy. It is surprising to see that, on average, roughly 17% of the participants indicated that they incorporated the high and low points into their average estimations.
**Incorporating variability.** Roughly 15% of participants reported incorporating variability into their judgments. However, they incorporated the variability in different ways. For example, one type of strategy included incorporating the areas with both high and low variability. For example, a participant wrote, "_I tried to find out the relatively stable parts of the graph, these were useful when they extended over a long
period of time. I also considered the effect of the crests and troughs and the depth of these extreme occurrences. Using these as a metric, I tried to estimate the average._"

Fig. 6: Experiments 1 and 2 results, showing the impact of variability, variability upper vs. lower, and mark type on estimation error. The left panel details the findings of Experiment 1 with annotations describing confirmed hypotheses 1-3. The right panel shows Experiment 2 with annotations describing confirmed hypotheses 4-5. The black bars within the density plots show 95% CIs with a mean dot.

Fig. 7: Counts of the number of estimates that were biased toward variability in Experiment 2 for each mark type.

Fig. 8: The meaningful main effect of variability in Experiment 2, averaged over the mark types, including annotations describing confirmed H4. The black bars within the density plots show 95% CIs with a mean dot.
In contrast, another group of participants focused more on areas with low variability. For example, "_Most of the time the stock price comes to certain point, and jumps again or full back, I consider the price where it is often stable for more time._" or "_It was easier to make the average when the stock prices were not changing much and the graph was more even. When the prices were more "jumping", I tried to find the phase where these trends stayed the longest and put my average around it._" Participants who reported this type of strategy seemed averse to variability or uncertainty, which is a well-known bias in psychology [7, 16].
**Equal number of points or line below and above.** A strategy we did not anticipate was ensuring an equal number of points or an equal line length above and below the judgment indicator. To indicate their responses, participants dragged a line on the graph. By allowing participants to place a line directly on the graphs, participants could then easily count the number of points above and below the line. One participant simply wrote, "_I tried to have the number of points above and below the line be approximately equal._" Unsurprisingly, this strategy was most common for participants who viewed graphs with points equally spaced along the x-axis (24%). Although less common, some participants who viewed the line encoding also used this strategy (5-8%). A participant explains, "_I tried to get half of the trend line above and half of the trend line below the average line and where I placed it._"
**Beginning and endpoints.** Another suboptimal strategy was to focus on the beginning and ending values of the time series. While a small proportion of participants used this strategy (roughly 3%), it is noteworthy because it reflects a misconception. For example, one participant wrote, "_I mainly looked at the stock at the beginning and end of the year. Afterwards, I tried to make an educated guess on what the average stock price would be._" Another person describes also being confused about the impact of data at the end of the time series. They wrote, "_Depending on how the end of the chart looks, I draw a different strategy. If the chart is rallying, I believe the average price is at the low before this rally. If the chart is going down, I place it at the lowest low there was throughout the chart._"
**Equal area (correct strategy).** The correct strategy was to select a location with equal _area_ above and below the estimated average. Only a small number of people reported using this strategy (roughly 4%). An example is, "_I just tried to make the volume of the areas above and below the line approximately equal. That was my only strategy. Think I learnt it in a maths or stats course._"
As the equal-area strategy is the correct approach, we wanted to determine if participants who used it showed less bias in their judgments. To compare performance between those who used the equal-area strategy to those who did not, we computed the bias for the two groups for Experiment 2. Figure 9 shows the nine participants with the correct strategy in the Point Arc and Line groups and the two in the Point group compared to a distribution representing all the other strategies. Note that there is one distribution for all people with the incorrect strategies compared to individual distributions for those with the correct ones. We did this to clarify that a small number of people had the correct strategy and meaningful variation exists between them.
Taking the individual distributions from those with the correct strategy as a whole compared to the distributions for those with the incorrect strategy, we found that people with the correct strategy showed 12.7% less bias (.028 normalized error) than those with the incorrect strategy (.032 normalized error). The disparity between those with the correct strategy (.021 normalized error) and without (.041 normalized error) was most pronounced for the Line encoding with .4 variability (change of .019, or a 46% reduction). We opted not to do a statistical analysis on these groups as they were highly unbalanced (20 participants vs. 398) and were not equally distributed across the groups. However, visual analysis reveals a general tendency where using the correct strategy leads to less bias.
#### 6.3.2 Knowledge about the experiment purpose
Several people in each experimental condition made guesses somewhat close to the actual experiment goals in response to the question, "What do you think the experiment was about?" For example, one person wrote, "_How people picture averages differently when there are smooth transitions versus spikes in the graph._" and another person wrote "_I think it was about how accurate people can estimate the average of a line and if different the conditions affect the accuracy, such as jagged line vs. smooth line..._"
In Experiment 1, three people guessed the purpose of the study, and ten correctly guessed in Experiment 2. We conducted a sensitivity analysis to determine if those participants biased the findings. In this analysis, we removed the participants who relatively accurately guessed the manipulations of the study and reran the preregistered analysis for Experiment 2. Across all the findings, there was no meaningful impact of removing the participants who guessed correctly. To illustrate these effects, in Figure 10, we show the original data from Experiment 2 with density plots. Overlaid on the density plots are quantile dot plots that show the data after removing the participants who guessed the manipulations in the study. As seen in Figure 10, where all the distributions for each condition overlap, removing the participants did not meaningfully impact the results.
To help designers and tool builders anticipate how much average estimates may be biased without running their own perceptual experiments, we sought to build a model that predicts participants' responses. We aim to predict the bias and average estimate only based on properties of the data we can observe in a given line chart rather than based on the parameters of the data generation method, since the latter is typically not known. We also chose to use only a simple model with few features (rather than, e.g., the whole time series as input) since we are interested in the model's generalizability.
We hypothesize that such a model is possible. If the salience of longer line segments drives the estimates of averages, we may be able to predict the estimates of averages using the true average and the average of the values along the arc--_arc average_ for short.
To understand whether the arc average is a meaningful predictor, we computed the Pearson correlation between the average error of the average estimate for each stimulus and the error of the arc average estimate. For Experiment 1, the correlation is 0.85 (\(p<.001\)), and for Experiment 2, the correlation is 0.64 (\(p<.001\)). This suggests that the arc average is a meaningful predictor of the variability-overweighting bias.
To predict the estimated average, we created two linear regression models: a first model that used the average of the data points to predict participants' responses, and a second model that additionally included the arc average. The goal of the second model was to evaluate whether the arc average accounted for meaningfully more variability in participants' responses than the average of the data set alone. We then statistically compared the two models to determine if the model with the arc average was a significantly better fit, using the data from Experiments 1 and 2.
For Experiment 1, we fit a linear regression model using the average of the data points to predict participants' estimates.
The average of the data points meaningfully predicted participants' responses (\(b=.53\), \(t(6814)=51.80\), \(p<.001\)) with a model adjusted r-squared of .28. For the second model that included arc average, both the average of the data points (\(b=.45\), \(t(6813)=39.42\), \(p<.001\)) and arc average (\(b=.25\), \(t(6813)=15.70\), \(p<.001\)) meaningfully accounted for variance in participants' judgments. The second model had an adjusted r-squared of .31. This result suggests that after accounting for the meaningful impact of the average of the data points, for every one unit change in arc average, participants' judgments were biased by .25. We then compared the two models using an ANOVA. This comparison revealed that the second model, which included arc average, had a significantly better fit than the first model (\(F(2,6813)=246.58\), \(p<.001\)).
We also completed the same sequence of model comparisons for the data in Experiment 2 using only the data for the line stimuli. For the first model, the impact of the true average was \(b=.69\), \(t(6718)=76.07\), \(p<.001\), with an adjusted r-squared of .46. For the second model, the effect of the true average (\(b=.39\), \(t(6717)=11.62\), \(p<.001\)) was larger than the impact of the arc average (\(b=.34\), \(t(6717)=9.18\), \(p<.001\)), with an adjusted r-squared of .47. This result suggests that after accounting for the meaningful impact of the true average, for every one unit change in arc average, participants' judgments were biased by .34 (compared to .25 from Experiment 1). When comparing the two models, we found that the model that included the arc average had a meaningfully better fit than the one that did not (\(F(2,6717)=84.30\), \(p<.001\)).
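The nested-model comparison described above can be reproduced with standard OLS tooling. Below is a minimal sketch using statsmodels; the data frame and its column names (`estimate`, `true_avg`, `arc_avg`) are hypothetical placeholders for a long-format table with one row per response:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per (participant, stimulus) response.
df = pd.read_csv("responses.csv")  # assumed columns: estimate, true_avg, arc_avg

m1 = ols("estimate ~ true_avg", data=df).fit()            # average of the data only
m2 = ols("estimate ~ true_avg + arc_avg", data=df).fit()  # additionally the arc average

print(m1.rsquared_adj, m2.rsquared_adj)  # adjusted R^2 of the two models
print(sm.stats.anova_lm(m1, m2))         # F-test: does adding arc_avg improve the fit?
```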
Figure 11: The two response functions of a generalized additive model (GAM) trained on the line data from Experiments 1 and 2. The response functions resemble linear functions, making linear models appropriate for these data.
Figure 10: Sensitivity analysis for Experiment 2, in which the original data from Experiment 2 is displayed with density plots, and the data that excludes people who guessed the purpose of the experiment is shown with quantile dotplots. The black bars within the density plots show 95% CIs with a mean dot.
Figure 9: Strategy analysis for Experiment 2, where individual participants with the correct strategy (gray background) are shown compared to the other participants (foreground density plots). These data are broken down by mark type and variability. The horizontal bars within the density plots show 95% CIs with a mean dot. The dashed line denotes zero normalized error.
We then created a linear model from the data for both experiments with three parameters: intercept (\(b=.14\), \(t(13533)=32.58\), \(p<.001\)), average (\(b=.40\), \(t(13533)=34.36\), \(p<.001\)), and arc average (\(b=.31\), \(t(13533)=21.87\), \(p<.001\)), and an adjusted r-squared of .39. We selected a linear model because we found that a more sophisticated GAM [10] used nearly linear feature functions (Figure 11).
For most stimuli (90%), the model predicted the correct direction of the bias. The predicted estimated averages fit well with the observed estimated averages (Figure 12). The model's mean absolute error is .014, which means that with values in the range of [0,1], the prediction is only off by 1.4%. The RMSE is .019, also indicating a good fit.
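As a rough illustration of how the pooled model can be applied, the sketch below plugs the reported coefficients (intercept .14, data average .40, arc average .31) into a prediction function; the helper name and the convention that the bias is the difference between the predicted estimate and the true average are our assumptions:

```python
def predict_estimated_average(true_avg, arc_avg):
    """Pooled linear model reported above (coefficients rounded as in the text)."""
    return 0.14 + 0.40 * true_avg + 0.31 * arc_avg

true_avg, arc_avg = 0.45, 0.55           # example inputs in the normalized [0, 1] range
pred = predict_estimated_average(true_avg, arc_avg)
bias = pred - true_avg                   # assumed sign convention for the bias
print(round(pred, 3), "over-estimate" if bias > 0 else "under-estimate")
```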
## 7 Discussion
The results of Experiment 1 (Figure 6, left) support our hypothesis that estimation error is significantly biased toward the direction of larger variability in data. This bias could be caused by more variability leading to steeper line segments that use more ink and are more visually salient. Prior work has also found that areas of higher salience in scatter plots can bias average estimation [13]. To test this theory, we conducted Experiment 2, using a dot plot instead of a line chart to encode the series data. In the dot plot, the amount of salience is proportional to the amount of data in each x-interval and independent of the steepness of the line segment.
Experiment 2 (Figure 6, right) replicates the findings from Experiment 1. The experiment additionally supports our hypothesis that we can reduce the bias by encoding the series data as a dot plot instead of a line chart. To simulate the higher salience of steep line segments, we also tested a design that spaces points along the arc of a line. We found that the bias for the line was significantly higher than the bias for the dot plot but lower than the bias for the points spaced along the arc. These results support our theory that the bias is toward more visually salient areas of the chart but cannot yet explain the full extent of the bias.
We generated the stimuli for our experiments using different levels of variability. Since these parameters are typically unknown, it would be impossible to model the bias in real-world applications. However, if we assume that the bias is caused by the salience of steep line segments, we can compute the direction of the bias directly from the average of the points along the arc of the line. We can also estimate the magnitude of the bias as a function of the average of the series and the average of the points along the arc using a simple regression model (Section 6.4). Future work could refine this model using more features.
Our experiments show that average estimates are biased toward higher variability. We believe that we could similarly bias trend estimation in line graphs. For example, a line graph could have more variability for smaller values in the first half and higher variability for larger values in the second half. Since we found that the estimates of averages for the first half are lower than the true average and that estimates for the second half are higher, we can expect that a person also perceives a more extreme increase (stronger trend) than there is. Our experiment only investigated average estimation in isolated charts. As such, future work must confirm that this bias exists in combined charts. If trend estimation can be biased by variability, then malicious actors who can affect the variability of a series could influence decisions that other people make based on trends in the data, such as in stock trading.
The results of our experiments have implications for the design of charts in applications where people estimate the average or trend of a series. Designers should consider whether they can replace a line chart with a dot plot to reduce the bias. However, the dot plot design makes the data order and the delta between consecutive points less clear. There may also be other ways to reduce the bias, such as using a different type of line chart that de-emphasizes steep line segments using thinner line segments or lines with lower opacity.
We asked participants about their strategies for estimating the average and found that people used a variety of strategies, the majority of which were incorrect. The high proportion of misconceptions observed in participants' strategies is concerning. We also found that those who used the correct strategy of aiming for an equal area between the average line and the data line seemed to be less biased (Section 6.3.1). While we have too few people to draw firm conclusions, this insight suggests that people may be able to learn to reduce the bias by using the correct strategy. When we initially developed this work, we hypothesized that the biases would be driven by visual salience, a bottom-up attentional process. While such unconscious processes are certainly part of the cause, these data provide some indication that strategies may play a role. One limitation of this work is that we cannot disambiguate the effects of visual salience and strategies. The interconnection between the two is consistent with theories in visual attention that suggest strategies and bottom-up processes are intrinsically interconnected, forming a feedback loop [22, 30]. It is also possible that both the mark type and response method bias participant strategies, which could have impacted attention and responses. Despite these limitations, our findings point to the possibility that visual literacy training might benefit from teaching people to use the correct strategy.
We carefully designed the stimuli of the experiment such that simulated random responses did not show the bias we expected in real responses (Section 4.2). We only found this subtle issue after some initial data generation and pilots and would therefore encourage everyone who runs experiments that test human biases to test their experiments with random data.
## 8 Conclusion and Future work
This paper shows that average estimates are biased toward areas of higher salience, caused by increased variability in the visualized data. Since this bias can affect the conclusions drawn from data, visualization designers who create line graphs must be aware of it. This bias is not only statistically significant but also practically relevant. The amount of variability in a line graph may be due to factors that are irrelevant to the conclusions, such as inconsistencies in data collection like varying sensor noise. In the worst case, a malicious actor could introduce small amounts of noise to mask larger changes or nudge analysts toward seeing larger changes.
By quantifying the bias, we can consider showing viewers warnings when we expect the bias to affect the conclusions drawn from a graph or consider alternative visual encodings that do not have the bias shown in this paper. For example, we showed how points instead of lines reduce the bias. Another idea could be to reduce the salience of steep lines by varying the opacity or line width. Alternatively, designers could consider annotating graphs with averages or other visual encodings when average estimates are needed. However, designers need to consider potential biases that additional visual encodings could introduce.
We discussed that biased average estimates could also lead to biased trend estimates. Future work should investigate how manipulations of time series data visualized as line graphs affect trend estimates. Participants in our study represent a general population of people with some but not expert-level visualization literacy. If trend estimates can be affected, we should also investigate whether experts such as scientists, doctors who look at vitals, and stock traders are as affected as the general population. An avenue for investigating this effect could be to analyze historical data such as stocks and see whether increased variability affected traders' investments.
Figure 12: Predicted and estimated averages for all stimuli.
## Acknowledgments
This work was supported in part by grants from the NSF (#2238175 and #1901485).
|
2304.14172 | Berge-$k$-factors in tough hypergraphs | Chv\'atal in 1973 introduced the concept of graph toughness and initiated the
study of sufficient toughness conditions for the existence of hamiltonian
cycles in graphs.
Over the years, numerous results related to graph toughness have been proved.
Notably, Enomoto, Jackson, Katerinis, and Saito (1985) established a renowned
theorem stating that for every integer $k\ge 1$, any $k$-tough graph $G$
possesses a $k$-factor if $k |V(G)|$ is even and $|V(G)|\ge k+1$.
In this paper, we initiate the study of toughness conditions for the
existence of substructures in hypergraphs. Specifically, we extend the
aforementioned result concerning $k$-factors in graphs to Berge-$k$-factors in
hypergraphs. The proof of this extension presents a unique challenge due to the
inherent non-uniformity of hypergraphs compared to graphs, and our proof
techniques are of independent interest for similar investigations. | Yuping Gao, Songling Shan, Gexin Yu | 2023-04-27T13:15:17Z | http://arxiv.org/abs/2304.14172v2 | # Toughness and the existence of \(k\)-factors in hypergraphs
###### Abstract
The study of the existence of hamiltonian cycles and factors in graphs in terms of toughness was initiated by Chvatal in 1973 and has been a popular topic since then. The study of Berge cycles and factors in hypergraphs has attracted quite a bit of attention in recent years, and much progress has been made in terms of conditions such as degrees. In this article, we propose to study toughness conditions for a hypergraph to have a Berge \(k\)-factor. Our main result is that, for every integer \(k\geq 1\), every \(k\)-tough hypergraph \(H\) has a Berge \(k\)-factor if \(k|V(H)|\) is even and \(|V(H)|\geq k+1\). This extends a similar result on graphs from 1985 by Enomoto, Jackson, Katerinis, and Saito.
**Keywords**: Toughness; \(k\)-factor; Parity factor
## 1 Introduction
A hypergraph \(H\) is a family \(E(H)\) of subsets of a ground set \(V(H)\), where we call \(E(H)\) the _edges_ of \(H\) and \(V(H)\) the _vertices_ of \(H\). For an integer \(r\geq 1\), we say that \(H\) is \(r\)-_uniform_ if every edge of \(H\) contains exactly \(r\) vertices. A graph is then a \(2\)-uniform hypergraph. For \(v\in V(H)\), \(N_{H}(v)\) is the set of edges of \(H\) containing \(v\), and \(d_{H}(v)\), the _degree_ of \(v\) in \(H\), is the number of edges of \(H\) containing \(v\). The minimum degree, \(\delta(H)\), is the minimum of the degrees over all vertices of \(H\).
Let \(G\) be a graph. For two disjoint subsets \(S\) and \(T\) of \(V(G)\), \(e_{G}(S,T)\) denotes the number of edges in \(G\) with one endvertex in \(S\) and the other in \(T\). If \(S=\{x\}\), we simply write \(e_{G}(\{x\},T)\) as \(e_{G}(x,T)\). Also for a subgraph \(D\) of \(G\) with \(V(D)\cap S=\emptyset\), we write \(e_{G}(S,V(D))\) as \(e_{G}(S,D)\). For \(S\subseteq V(G)\), we denote by \(G[S]\) the subgraph of \(G\) induced by \(S\) and \(G-S\) the subgraph \(G[V(G)\setminus S]\). We also let \(N_{G}(S)=\bigcup_{v\in S}N_{G}(v)\). For integers \(p\) and \(q\), we let \([p,q]=\{i\in\mathbb{Z}:p\leq i\leq q\}\).
Throughout this paper, if not specified, we will assume \(t\) to be a nonnegative real number. The number of components of \(G\) is denoted by \(c(G)\). The graph \(G\) is said to be \(t\)_-tough_ if \(|S|\geq t\cdot c(G-S)\) for each \(S\subseteq V(G)\) with \(c(G-S)\geq 2\). The _toughness_\(\tau(G)\) is the largest
real number \(t\) for which \(G\) is \(t\)-tough, or is \(\infty\) if \(G\) is complete. This concept was introduced by Chvatal [1] in 1973, as a measure of graph connectivity and "resilience" under removal of vertices.
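To make the definition concrete, the following brute-force sketch computes \(\tau(G)\) for a small connected graph by enumerating all vertex subsets (exponential time, so only for illustration; networkx is assumed):

```python
import itertools
import networkx as nx

def toughness(G):
    """tau(G): the largest t with |S| >= t * c(G - S) for every cutset S with
    c(G - S) >= 2, i.e. the minimum of |S| / c(G - S); infinity if no such S."""
    best = float("inf")
    nodes = list(G.nodes)
    for r in range(1, len(nodes)):
        for S in itertools.combinations(nodes, r):
            H = G.copy()
            H.remove_nodes_from(S)
            c = nx.number_connected_components(H)
            if c >= 2:
                best = min(best, len(S) / c)
    return best

print(toughness(nx.cycle_graph(6)))      # 1.0: cycles are exactly 1-tough
print(toughness(nx.complete_graph(4)))   # inf: complete graphs have no cutset
```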
A \(k\)-regular spanning subgraph is called a \(k\)-_factor_ of \(G\). In [2, 3, 4, 5], the authors gave sufficient conditions, in terms of toughness, for the existence of a \(k\)-factor in a graph. In particular, the following classic result was proved.
**Theorem 1.1** ([5]).: _For any integer \(k\geq 1\), every \(k\)-tough graph \(G\) has a \(k\)-factor if \(k|V(G)|\) is even and \(|V(G)|\geq k+1\)._
We will extend Theorem 1.1 to hypergraphs. Let us start with some definitions. Let \(H\) be a hypergraph. The concept of _adjacency, incidence_, and _connectedness_ for hypergraphs are defined the same way as for graphs. For \(S\subseteq V(H)\), we define \(H-S\) to be the hypergraph obtained from \(H\) by deleting all vertices in \(S\) and all edges intersecting \(S\), and define \(c(H-S)\) to be the number of components of \(H-S\). We say that \(H\) is _complete_ if for any vertex set \(S\subseteq V(H)\) with \(|S|\leq|V(H)|-2\), \(H-S\) is connected. The hypergraph \(H\) is _\(t\)-tough_ if \(|S|\geq t\cdot c(H-S)\) for every subset \(S\subseteq V(H)\) with \(c(H-S)\geq 2\). The _toughness_\(\tau(H)\) of \(H\) is the maximum real number \(t\) for which \(H\) is \(t\)-tough or is \(\infty\) if \(H\) is complete. Given a graph \(G\), we say a hypergraph \(H\) on the same vertex set contains a _Berge_\(G\) if there exists an injection \(\varphi:E(G)\to E(H)\) such that \(e\subseteq\varphi(e)\) for each \(e\in E(G)\). Now the counterpart of Theorem 1.1 for hypergraphs is stated below.
**Theorem 1.2**.: _For any integer \(k\geq 1\), every \(k\)-tough hypergraph \(H\) has a Berge \(k\)-factor if \(k|V(H)|\) is even and \(|V(H)|\geq k+1\)._
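To make the Berge containment in Theorem 1.2 concrete: checking whether a hypergraph contains a Berge copy of a fixed graph \(F\) reduces to a bipartite matching between \(E(F)\) and \(E(H)\). Below is a small sketch with networkx; the function name and the toy hypergraph are ours:

```python
import networkx as nx
from networkx.algorithms import bipartite

def contains_berge(F_edges, H_edges):
    """Does the hypergraph H (list of vertex sets) contain a Berge copy of the
    graph F (list of 2-element sets)?  We need an injection phi: E(F) -> E(H)
    with e contained in phi(e), i.e. a matching saturating E(F)."""
    B = nx.Graph()
    left = [("F", i) for i in range(len(F_edges))]
    B.add_nodes_from(left)
    B.add_nodes_from(("H", j) for j in range(len(H_edges)))
    for i, e in enumerate(F_edges):
        for j, h in enumerate(H_edges):
            if set(e) <= set(h):
                B.add_edge(("F", i), ("H", j))
    matching = bipartite.hopcroft_karp_matching(B, top_nodes=left)
    return sum(1 for v in left if v in matching) == len(F_edges)

F = [{1, 2}, {3, 4}]                       # a 1-factor on vertices 1..4
H = [{1, 2, 3}, {2, 3, 4}, {1, 3, 4}]      # a 3-uniform hypergraph
print(contains_berge(F, H))                # True: {1,2}->{1,2,3}, {3,4}->{2,3,4}
```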
It was proved in [5] that the toughness condition in Theorem 1.1 is best possible. As a consequence, the toughness condition in Theorem 1.2 is also best possible. The remainder of this paper is organized as follows. In Section 2, we prepare tools for proving Theorem 1.2, and then in Section 3, we prove the theorem.
## 2 Parity factors in bipartite graphs
Theorem 1.2 will be proved by translating the statement into a statement about bipartite graphs. So in this section, we prepare all the corresponding concepts for bipartite graphs.
Let \(H\) be a hypergraph. The _incidence bipartite graph_ \(\mathcal{I}(H)\) of \(H\) has bipartitions \(X=E(H)\) and \(Y=V(H)\), where a vertex \(x\in X\) is adjacent in \(\mathcal{I}(H)\) to a vertex \(y\in Y\) if \(x\), as an edge in \(H\), contains \(y\), as a vertex in \(H\). We now translate the relevant concepts in hypergraphs to corresponding concepts for bipartite graphs. For generality, we will define them for arbitrary bipartite graphs.
Let \(G[X,Y]\) be a bipartite graph with bipartitions \(X\) and \(Y\), and with no isolated vertices from \(X\). For \(S\subseteq Y\), we define \(G\dot{-}S=G-(S\cup N_{G}(S))\), and call this a _dot-deletion_. Notice that every component of \(G\dot{-}S\) contains a vertex from \(Y\), as \(G\) has no isolated vertices from \(X\). If \(c(G\dot{-}S)\geq 2\), we call \(S\) a _\(Y\)-cutset_ of \(G\). The _\(Y\)-toughness_ of \(G\), denoted by \(\tau_{Y}(G)\), is the maximum real number \(t\) such that \(|S|\geq t\cdot c(G\dot{-}S)\) for any \(Y\)-cutset \(S\) if \(S\) exists or is \(\infty\) if \(G\) has no \(Y\)-cutset. Let \(k\geq 1\) be an integer. A _\((2,k)\)-factor_ of \(G\) is a subgraph \(F\subseteq G\) such that for any \(x\in V(F)\cap X\), \(d_{F}(x)=2\) and for any \(y\in Y\), \(d_{F}(y)=k\). From the definitions, we see that for a hypergraph \(H\), it holds that \(\tau(H)=\tau_{V(H)}(\mathcal{I}(H))\) and that a Berge \(k\)-factor of \(H\) is a \((2,k)\)-factor of \(\mathcal{I}(H)\) and vice versa. Thus Theorem 1.2 can be restated by using the language defined for bipartite graphs.
**Theorem 2.1**.: _Let \(G[X,Y]\) be a bipartite graph with no isolated vertices from \(X\) and \(k\geq 1\) be an integer. If \(\tau_{Y}(G)\geq k\), \(k|Y|\) is even and \(|Y|\geq k+1\), then \(G\) has a \((2,k)\)-factor._
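The reduction behind Theorem 2.1 is easy to carry out explicitly. The sketch below builds the incidence bipartite graph \(\mathcal{I}(H)\) of a small hypergraph with networkx; the node-labelling scheme is our own choice:

```python
import networkx as nx

def incidence_bipartite(hyperedges):
    """Build I(H): one X-node per hyperedge and one Y-node per vertex,
    joined whenever the hyperedge contains the vertex."""
    G = nx.Graph()
    for i, e in enumerate(hyperedges):
        x = ("edge", i)
        G.add_node(x, side="X")
        for v in e:
            G.add_node(("vertex", v), side="Y")
            G.add_edge(x, ("vertex", v))
    return G

# The 3-uniform hypergraph from the earlier sketch, on vertices 1..4.
H = [{1, 2, 3}, {2, 3, 4}, {1, 3, 4}]
I = incidence_bipartite(H)
print(I.number_of_nodes(), I.number_of_edges())  # 7 nodes, 9 edges

# A Berge k-factor of H corresponds to a (2,k)-factor of I(H): a subgraph in
# which every Y-node ("vertex", v) has degree k and every X-node it uses has
# degree 2 (the two endpoints assigned to that hyperedge).
```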
To prove Theorem 2.1, we will use the notion of _partial parity \((g,f)\)-factors_ and a necessary and sufficient condition for their existence. Let \(G\) be a graph. For a given subset \(W\subseteq V(G)\), let \(g,f:V(G)\rightarrow\mathbb{Z}\) be two functions such that \(g(x)\leq f(x)\) for all \(x\in V(G)\) and \(g(y)\equiv f(y)\) (mod 2) for all \(y\in W\). Then a spanning subgraph \(F\) of \(G\) is called a _partial parity \((g,f)\)-factor_ with respect to \(W\) if
* \(g(x)\leq d_{F}(x)\leq f(x)\) for all \(x\in V(G)\) and
* \(d_{F}(y)\equiv g(y)\equiv f(y)\) (mod 2) for all \(y\in W\).
Kano and Matsuda [6] gave a necessary and sufficient condition for a graph to have a partial parity \((g,f)\)-factor.
**Theorem 2.2** ([6]).: _Let \(G\) be a graph, \(W\subseteq V(G)\), and \(g,f:V(G)\rightarrow\mathbb{Z}\) be two functions satisfying_
\[g(x)\leq f(x)\text{ for all }x\in V(G)\text{ and }g(y)\equiv f(y)\pmod{2}\text{ for all }y\in W\text{.}\]
_Then \(G\) has a partial parity \((g,f)\)-factor with respect to \(W\) if and only if for all disjoint subsets \(A\) and \(B\) of \(V(G)\),_
\[\delta_{G}(A,B):=\sum_{x\in A}f(x)-\sum_{y\in B}g(y)+\sum_{y\in B}d_{G-A}(y)-h _{W}(A,B)\geq 0, \tag{1}\]
_where \(h_{W}(A,B)\) denotes the number of components \(D\) of \(G-(A\cup B)\) such that \(g(x)=f(x)\) for all \(x\in V(D)\backslash W\) and_
\[\sum_{x\in V(D)}f(x)+e_{G}(D,B)\equiv 1\pmod{2}. \tag{2}\]
Following the notation in Theorem 2.2, a component \(D\) of \(G-(A\cup B)\) is called an _odd component_ if it satisfies (2) and is called an _even component_ if
\[\sum_{x\in V(D)}f(x)+e_{G}(D,B)\equiv 0\pmod{2}.\]
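Theorem 2.2's quantity \(\delta_{G}(A,B)\) is straightforward to evaluate on small instances. The sketch below is a direct transcription of the formula for a simple graph, with \(f\), \(g\) given as dictionaries and \(W\) as a set; it is an illustration of the condition, not an efficient factor algorithm:

```python
import networkx as nx

def delta(G, A, B, f, g, W):
    """delta_G(A,B) = sum_{x in A} f(x) - sum_{y in B} g(y)
                       + sum_{y in B} d_{G-A}(y) - h_W(A,B)."""
    A, B, W = set(A), set(B), set(W)
    deg_minus_A = sum(sum(1 for u in G[y] if u not in A) for y in B)
    rest = set(G.nodes) - A - B
    h = 0
    for comp in nx.connected_components(G.subgraph(rest)):
        if any(g[x] != f[x] for x in comp if x not in W):
            continue  # such components are not counted by h_W
        e_D_B = sum(1 for x in comp for u in G[x] if u in B)
        if (sum(f[x] for x in comp) + e_D_B) % 2 == 1:
            h += 1    # an odd component in the sense of Eq. (2)
    return sum(f[x] for x in A) - sum(g[y] for y in B) + deg_minus_A - h

# For the bipartite setting of Lemma 2.3 below, take W = X and
#   f[x] = 2, g[x] = 0 for x in X;  f[y] = g[y] = k for y in Y.
# Then, by Theorem 2.2, G has a (2,k)-factor iff delta(G, A, B, f, g, X) >= 0
# for all disjoint A, B.
```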
**Lemma 2.3**.: _Let \(G[X,Y]\) be a bipartite graph and \(k\geq 1\) be an integer such that \(k|Y|\) is even. Define \(W=X\) and \(f,g:V(G)\rightarrow\mathbb{Z}\) by_
\[f(v)=\left\{\begin{array}{ll}2,&v\in X;\\ k,&v\in Y;\end{array}\right.\quad\text{and}\quad g(v)=\left\{\begin{array}{ ll}0,&v\in X;\\ k,&v\in Y.\end{array}\right. \tag{3}\]
_Then \(\delta_{G}(A,B)\equiv 0\pmod{2}\) for any two disjoint subsets \(A,B\subseteq V(G)\)._
Proof.: Since \(f=g\) on \(Y\) and \(W=X\), every component \(D\) of \(G-(A\cup B)\) satisfies \(g(x)=f(x)\) for all \(x\in V(D)\setminus W\). Hence
\[h_{W}(A,B)=\sum_{D\text{ is a component of }G-(A\cup B)}\left(\left(\sum_{x\in V (D)}f(x)+e_{G}(D,B)\right)\pmod{2}\right),\]
where \(\left(\sum\limits_{x\in V(D)}f(x)+e_{G}(D,B)\right)\pmod{2}\) is taken to be either \(0\) or \(1\). Let \(U=V(G)\setminus(A\cup B)\). Thus \(h_{W}(A,B)\equiv\sum\limits_{x\in U}f(x)+e_{G}(B,U)\pmod{2}\). Then
\[\delta_{G}(A,B) \equiv \sum_{x\in A}f(x)-\sum_{y\in B}g(y)+\sum_{y\in B}d_{G-A}(y)-h_{W}(A,B)\] \[\equiv \sum_{x\in A}f(x)-\sum_{y\in B}g(y)+2|E(G[B])|+e_{G}(B,U)-\sum_{x\in U}f(x)-e_{G}(B,U)\] \[\equiv \sum_{x\in A}f(x)-\sum_{y\in B}g(y)-\left(\sum_{x\in V(G)}f(x)-\sum_{x\in A}f(x)-\sum_{x\in B}f(x)\right)\] \[\equiv \sum_{x\in A}f(x)-\sum_{y\in B}g(y)-\sum_{x\in V(G)}f(x)+\sum_{x\in A}f(x)+\sum_{x\in B}f(x)\] \[\equiv -k|B\cap Y|+k|B\cap Y|-\sum_{x\in V(G)}f(x)\equiv-k|Y|\equiv 0\pmod{2},\] where the last congruence holds because \(k|Y|\) is even by assumption.
Thus for a bipartite graph \(G[X,Y]\) with \(k|Y|\) even, \(W=X\), and \(f\) and \(g\) defined as in Lemma 2.3, \(\delta_{G}(A,B)\) is even for any disjoint subsets \(A,B\) of \(V(G)\). If there exist disjoint
subsets \(A,B\subseteq V(G)\) such that \(\delta_{G}(A,B)<0\) and so \(\delta_{G}(A,B)\leq-2\), we call \((A,B)\) a _barrier_ of \(G\). A _biased barrier_ is a barrier \((A,B)\) such that among all barriers, we have (i) \(\delta_{G}(A,B)\) is minimum, (ii) subject to (i) \(|B|\) is minimum, and (iii) subject to (i) and (ii) \(|A|\) is maximum. From the definition we see that a \((2,k)\)-factor in \(G\) is a partial parity \((g,f)\)-factor with respect to \(W=X\) and \(f,g\) defined in Lemma 2.3 and vice versa.
Let \((A,B)\) be a biased barrier of \(G\). For \(Z\subseteq A\cap X\), we let \(h(Z)=|N_{G}(Z)\cap B|+|\{D:D\) is an odd component of \(G-(A\cup B)\) and \(e_{G}(Z,D)\geq 1\}|\). When \(Z=\{z\}\), we simply write \(h(z)\) for \(h(\{z\})\). The following lemma shows that \((A,B)\) satisfies some nice properties.
**Lemma 2.4**.: _Let \(G[X,Y]\) be a bipartite graph, \(W=X\), and \(f\) and \(g\) be defined the same as in Lemma 2.3. Suppose \((A,B)\) is a biased barrier of \(G\). Then each of the following statements holds:_
1. \(B\subseteq Y\)_;_
2. _For every odd component_ \(D\) _of_ \(G-(A\cup B)\) _and_ \(v\in V(D)\)_,_ \(e_{G}(v,B)\leq 1\)_;_
3. _For every even component_ \(D\) _of_ \(G-(A\cup B)\) _and_ \(v\in V(D)\)_,_ \(e_{G}(v,B)=0\)_._
4. _For any_ \(Z\subseteq A\cap X\) _with_ \(N_{G}(Z)\cap B=\emptyset\)_, we have_ \(h(Z)\geq 2|Z|\)_._
Proof.: Let \(U=V(G)\setminus(A\cup B)\). For (i), suppose that there exists a vertex \(v\in B\cap X\). Since \((A,B)\) is a biased barrier, we have \(\delta_{G}(A,B\setminus\{v\})\geq\delta_{G}(A,B)\). As \(h_{W}(A,B\setminus\{v\})\geq h_{W}(A,B)-e_{G}(v,U)\), we get
\[\delta_{G}(A,B\setminus\{v\}) = \sum_{x\in A}f(x)-\sum_{y\in B\setminus\{v\}}g(y)+\sum_{y\in B \setminus\{v\}}d_{G-A}(y)-h_{W}(A,B\setminus\{v\})\] \[\leq \sum_{x\in A}f(x)-\sum_{y\in B}g(y)+g(v)+\sum_{y\in B}d_{G-A}(y)- e_{G}(v,B\setminus\{v\})-h_{W}(A,B)\] \[= \delta_{G}(A,B)+g(v)-e_{G}(v,B\setminus\{v\})\] \[\leq \delta_{G}(A,B),\]
a contradiction to the choice of \((A,B)\).
We prove statements (ii) and (iii) together. Let \(D\) be a component of \(G-(A\cup B)\). As \(G\) is bipartite, \(e_{G}(v,B)=0\) if \(v\in V(D)\cap Y\). So assume that \(v\in V(D)\cap X\). By the assumption
that \((A,B)\) is a biased barrier, we know that \(\delta_{G}(A\cup\{v\},B)\geq\delta_{G}(A,B)\). Furthermore,
\[\delta_{G}(A\cup\{v\},B)=\sum_{x\in A\cup\{v\}}f(x)-\sum_{y\in B}g(y )+\sum_{y\in B}d_{G-(A\cup\{v\})}(y)-h_{W}(A\cup\{v\},B)\] \[= \sum_{x\in A}f(x)+f(v)-\sum_{y\in B}g(y)+\sum_{y\in B}d_{G-A}(y)-e_ {G}(v,B)-h_{W}(A\cup\{v\},B)\] \[\leq \left\{\begin{array}{ll}\sum\limits_{x\in A}f(x)+f(v)-\sum \limits_{y\in B}g(y)+\sum\limits_{y\in B}d_{G-A}(y)-e_{G}(v,B)-h_{W}(A,B)+1& \mbox{if $D$ is odd};\\ \sum\limits_{x\in A}f(x)+f(v)-\sum\limits_{y\in B}g(y)+\sum\limits_{y\in B}d_ {G-A}(y)-e_{G}(v,B)-h_{W}(A,B)&\mbox{if $D$ is even};\end{array}\right.\] \[= \left\{\begin{array}{ll}\delta_{G}(A,B)+f(v)-e_{G}(v,B)+1&\mbox{ if $D$ is odd};\\ \delta_{G}(A,B)+f(v)-e_{G}(v,B)&\mbox{if $D$ is even}.\end{array}\right.\]
Since \(\delta_{G}(A\cup\{v\},B)\) is even and \(f(v)=2\), we get \(\delta_{G}(A\cup\{v\},B)\leq\delta_{G}(A,B)\) if \(e_{G}(v,B)\geq 2\) when \(D\) is odd, or if \(e_{G}(v,B)\geq 1\) when \(D\) is even. Then we get a contradiction to the choice of \((A,B)\). Therefore \(e_{G}(v,B)\leq 1\) when \(D\) is an odd component and \(e_{G}(v,B)=0\) when \(D\) is an even component.
To prove (iv), suppose to the contrary that \(h(Z)\leq 2|Z|-1\). By the assumption that \((A,B)\) is a biased barrier, we know that \(\delta_{G}(A\setminus Z,B)\geq\delta_{G}(A,B)\). As \(h_{W}(A\setminus Z,B)\geq h_{W}(A,B)-h(Z)\), we get
\[\delta_{G}(A,B) \leq \delta_{G}(A\setminus Z,B)=\sum_{x\in A}f(x)-2|Z|-\sum_{y\in B}g( y)+\sum_{y\in B}d_{G-A}(y)-h_{W}(A\setminus Z,B)\] \[\leq \delta_{G}(A,B)-2|Z|+h(Z)\] \[\leq \delta_{G}(A,B)-1,\]
a contradiction.
## 3 Proof of Theorem 2.1
Assume to the contrary that \(G\) has no \((2,k)\)-factor. Thus for \(W=X\) and \(f\) and \(g\) defined as in Lemma 2.3, \(G\) has a biased barrier \((A,B)\) by Theorem 2.2. If \(G\) has no \(Y\)-cutset, let \(S\) be any subset of \(Y\) with \(|S|=|Y|-2\) and \(Y\setminus S=\{u,v\}\). Then \(u\) and \(v\) belong to the same component in \(G\dot{-}S\). Thus for any two distinct vertices \(u,v\in Y\), there is a vertex \(x\in X\) such that \(x\) is adjacent in \(G\) only to \(u\) and \(v\). Therefore, if we contract all vertices in \(X\) which have degree 2 in \(G\), we get a graph \(G^{\prime}\) such that \(G^{\prime}[Y]\) contains a spanning complete subgraph. Thus \(G^{\prime}\) has a \(k\)-factor and so \(G\) contains a \((2,k)\)-factor, a contradiction. Therefore we may assume that \(G\) has a \(Y\)-cutset.
Let \(\mathcal{C}\) be the set of all odd components of \(G-(A\cup B)\), and \(\mathcal{C}_{1}\) be the set of all odd components of \(G-(A\cup B)\) that contains only one vertex which is from \(X\). By Lemma 2.4(ii)
and Equation (2), \(e_{G}(D,B)=1\) for any \(D\in\mathcal{C}_{1}\). Let
\[U=V(G)\setminus(A\cup B)\quad\text{and}\quad U_{0}=\bigcup_{D\in\mathcal{C}_{1}}V (D).\]
Let \(\mathcal{C}\setminus\mathcal{C}_{1}=\{D_{1},D_{2},\ldots,D_{\ell}\}\). Then we have \(h_{W}(A,B)=\ell+|\mathcal{C}_{1}|\). For those components \(D_{i}\), we have the claim below.
**Claim 3.1**.: _If \(D\) is an odd component of \(G-(A\cup B)\) and \(V(D)=\{v\}\subseteq Y\), then \(e_{G}(D,B)=0\) and \(k\equiv 1\pmod{2}\). Furthermore, for each \(i\in[1,\ell]\), we have \(|V(D_{i})|\geq e_{G}(B,D_{i})\) and each \(x\in V(D_{i})\cap N_{G}(B)\) has a neighbor in \(D_{i}\) from \(V(D_{i})\cap Y\)._
Proof.: If \(V(D)=\{v\}\subseteq Y\), then \(e_{G}(D,B)=0\) by \(B\subseteq Y\). From Equation (2), we have \(f(v)+e_{G}(D,B)\equiv 1\pmod{2}\). As \(f(v)=k\), we get \(k\equiv 1\pmod{2}\).
By Lemma 2.4 (i) and (ii), we have \(|V(D_{i})\cap X|\geq e_{G}(B,D_{i})\). As \(D_{i}\) is a component of \(G-(A\cup B)\) and so is connected, each \(x\in V(D_{i})\cap N_{G}(B)\) has a neighbor in \(D_{i}\) from \(V(D_{i})\cap Y\).
Denote by
\[a=|A|,\quad b=|B|,\quad|A_{1}|=|A\cap X|=:a_{1},\quad|A_{2}|=|A\cap Y|=:a_{2}.\]
Then
\[0 > \delta_{G}(A,B)=\sum_{x\in A}f(x)-\sum_{y\in B}g(y)+\sum_{y\in B} d_{G-A}(y)-h_{W}(A,B) \tag{4}\] \[= 2|A\cap X|+k|A\cap Y|-k|B|+e_{G}(B,U)-h_{W}(A,B)\] \[= 2a_{1}+ka_{2}-kb+e_{G}(B,U)-\ell-|\mathcal{C}_{1}|\] \[= 2a_{1}+ka_{2}-kb+e_{G}(B,U\setminus U_{0})-\ell.\]
As vertices in \(U_{0}\) belong to \(X\), components in \(\mathcal{C}_{1}\) are irrelevant when we dot-delete a special \(Y\)-cutset while targeting components to be a subset of \(B\). Thus in our argument, we will discard all the components in \(\mathcal{C}_{1}\) and all the vertices in \(U_{0}\).
We consider three cases regarding the value of \(k\).
**Case 1**: \(k=1\).
Then we have \(b-a_{1}>a_{1}+a_{2}+e_{G}(B,U\setminus U_{0})-\ell\) from (4). For each vertex \(x\in A_{1}\) such that \(x\) is adjacent in \(G\) to a vertex from \(B\), we let \(v_{x}\in B\) be an arbitrary vertex from \(N_{G}(x)\cap B\). For each \(x\in A_{1}\) such that \(x\) is adjacent in \(G\) only to a vertex from \(U\), we let \(v_{x}\in U\) be an arbitrary vertex from \(N_{G}(x)\cap U\). Let \(B_{1}=\{v_{x}:x\in A_{1}\}\). Then \(|B_{1}|\leq|A_{1}|\) and \(N_{G}(B_{1})\cap A_{1}=N_{G}(B\cup U)\cap A_{1}\). For each component \(D_{i}\in\mathcal{C}\setminus\mathcal{C}_{1}\) with \(e_{G}(B,D_{i})\geq 2\)
we let \(W_{i}\subseteq V(D_{i})\cap Y\) be such that at least \(e_{G}(B,D_{i})-1\) of the vertices from \(N_{G}(B)\cap V(D_{i})\) have in \(D_{i}\) a neighbor from \(W_{i}\). Then \(W_{i}\) dominates at least \(e_{G}(B,D_{i})-1\) vertices from \(N_{G}(B)\cap V(D_{i})\), and \(W_{i}\) can be chosen so that \(|W_{i}|\leq e_{G}(B,D_{i})-1\). Set \(W_{i}=\emptyset\) if \(e_{G}(B,D_{i})\leq 1\). Let \(\ell^{*}\) be the number of components in \({\cal C}\setminus{\cal C}_{1}\) that each contains a vertex of \(Y\) but is not adjacent in \(G\) to any vertex from \(B\), and let \(U^{*}\) be the set of vertices of those components. Denote \(B_{1}^{*}=B_{1}\cap U^{*}\). Then by Lemma 2.4 (iv), \(|B_{1}^{*}|\leq\frac{1}{2}\ell^{*}\). Let
\[G_{1}=G\dot{-}\left(A_{2}\cup B_{1}\cup\left(\bigcup_{i=1}^{\ell}W_{i}\right) \right).\]
Since \(e_{G\dot{-}W_{i}}(B,D_{i}\dot{-}W_{i})\leq 1\), it follows that
\[c(G_{1})\geq b-|B_{1}\cap B|+(|U^{*}|-|B_{1}^{*}|)\geq b-(a_{1}-|B_{1}^{*}|)+ \ell^{*}-|B_{1}^{*}|=b-a_{1}+\ell^{*}.\]
If \(b-a_{1}+\ell^{*}\geq 2\), then we get a contradiction to \(\tau_{Y}(G)\geq 1\), since \(\left|A_{2}\cup B_{1}\cup\left(\bigcup_{i=1}^{\ell}W_{i}\right)\right|\leq a_{2}+|B_{1}|+e_{G}(B,U\setminus U_{0})-\ell+\ell^{*}<b-a_{1}+\ell^{*}\). Thus we have \(b-a_{1}+\ell^{*}\leq 1\). As \(b-a_{1}+\ell^{*}>a_{1}+a_{2}+e_{G}(B,U\setminus U_{0})-\ell+\ell^{*}\geq 0\), we have \(a_{1}=a_{2}=0\), \(e_{G}(B,U\setminus U_{0})+\ell^{*}=\ell\) and \(b+\ell^{*}=1\). If \(b=0\), then \((A,B)=(\emptyset,\emptyset)\) and \(G\) is an odd component as \(\delta_{G}(A,B)=-h_{W}(\emptyset,\emptyset)<0\). This is a contradiction to Equation (2) and the assumption that \(k|Y|\) is even. Thus \(|B|=1\) and \(\ell^{*}=0\). Since \(k|Y|=|Y|\) is even and \(G\) has a \(Y\)-cutset, we have \(|Y|\geq 4\) and so \(|Y\cap(U\setminus U_{0})|\geq 3\) (recall \(U_{0}\subseteq X\)). If \(\ell\geq 2\), then \(G\dot{-}B\) has at least two components and so \(\tau_{Y}(G)\leq\frac{1}{2}\). Thus \(\ell=1\), \(D_{1}\) contains all the vertices from \(Y\setminus B\), and \(e_{G}(B,D_{1})=1\). Let \(y\in V(D_{1})\cap Y\) be a vertex such that the vertex from \(N_{G}(B)\cap V(D_{1})\) is adjacent in \(D_{1}\) to \(y\). Then \(G\dot{-}\{y\}\) has at least two components, a contradiction again to \(\tau_{Y}(G)\geq 1\). Therefore, when \(k=1\), \(G\) has a \((2,1)\)-factor.
**Case 2**: \(k=2\).
**Claim 3.2**.: _For every \(x\in A_{1}\) such that \(N_{G}(x)\cap B\neq\emptyset\), we have \(h(x)\geq 3\)._
Proof.: Otherwise, suppose there exists \(x\in A_{1}\) with \(N_{G}(x)\cap B\neq\emptyset\) but \(h(x)\leq 2\). Let \(y\in N_{G}(x)\cap B\), \(A^{\prime}=A\setminus\{x\}\) and \(B^{\prime}=B\setminus\{y\}\). Note that if \(e_{G}(x,B^{\prime})=1\) then \(e_{G}(x,U)=0\); and if \(e_{G}(x,B^{\prime})=0\) then \(e_{G}(x,U)\leq 0\). Thus \(h_{W}(A^{\prime},B^{\prime})\geq h_{W}(A,B)-e_{G}(y,U)\) if \(e_{G}(x,B^{\prime})=1\) and \(h_{W}(A^{\prime},B^{\prime})\geq h_{W}(A,B)-e_{G}(y,U)-1\) if \(e_{G}(x,B^{\prime})=0\). As \((A,B)\) is a biased barrier of \(G\) and \(f(x)=g(y)=2\), we get
\[\delta_{G}(A,B) \leq \delta_{G}(A^{\prime},B^{\prime})\] \[\leq \sum_{u\in A}f(u)-f(x)-\sum_{v\in B}g(v)+g(y)+\sum_{v\in B}d_{G-A }(v)-e_{G}(y,U)+e_{G}(x,B^{\prime})-h_{W}(A^{\prime},B^{\prime})\] \[= \delta_{G}(A,B)+h_{W}(A,B)-e_{G}(y,U)+e_{G}(x,B^{\prime})-h_{W}(A ^{\prime},B^{\prime})\] \[\leq \delta_{G}(A,B)+1.\]
As \(\delta_{G}(A^{\prime},B^{\prime})\) is even, we have \(\delta_{G}(A^{\prime},B^{\prime})=\delta_{G}(A,B)\). Thus \((A^{\prime},B^{\prime})\) is a barrier with \(|B^{\prime}|<|B|\), a contradiction.
We then define a subset \(R_{1}\) of \(B\cup(U\cap Y)\) iteratively as follows. Note that \(U\cap Y=(U\setminus U_{0})\cap Y\). For a vertex \(y\in B\cup(U\cap Y)\), if \(|N_{G}(y)\cap A_{1}|\geq 2\), we place \(y\) in \(R_{1}\). Then we continue this process by updating \(G\) as \(G\dot{-}R_{1}\), \(B\) as \(B\setminus R_{1}\) and \(U\) as \(U\setminus R_{1}\), and \(A_{1}\) as \(A_{1}\setminus N_{G}(R_{1})\). This process will stop as \(G\) is a finite graph. When it stops, we let
\[G^{*}=G\dot{-}R_{1},\quad B_{1}=R_{1}\cap B,\quad U_{1}=R_{1} \cap U,\quad U^{*}=U\setminus(U_{0}\cup U_{1}),\] \[B^{*}=B\setminus B_{1},\quad A_{11}=N_{G}(R_{1})\cap A_{1},\quad A _{1}^{*}=A_{1}\setminus A_{11}.\]
By the definition of \(R_{1}\) and \(G^{*}\), we know that \(|N_{G^{*}}(y)\cap A_{1}^{*}|\leq 1\) for every \(y\in B^{*}\cup U^{*}\) and \(|A_{11}|\geq 2|R_{1}|=2(|B_{1}|+|U_{1}|)\).
For vertices in \(A_{1}^{*}\), we define a subset \(U_{2}\) of \(U^{*}\cap N_{G^{*}}(A_{1}^{*})\) iteratively as follows. For each component \(D\) of \(G-(A\cup B)\), if \(D\) contains at least two vertices from \(N_{G^{*}}(A_{1}^{*})\), we pick exactly one vertex \(y\in V(D)\cap N_{G^{*}}(A_{1}^{*})\) and place \(y\) in \(U_{2}\). Then we continue this process by updating \(G^{*}\) as \(G^{*}\dot{-}U_{2}\), \(U^{*}\) as \(U^{*}\setminus U_{2}\), and \(A_{1}^{*}\) as \(A_{1}^{*}\setminus N_{G^{*}}(U_{2})\). This process will stop as \(G^{*}\) is a finite graph. When it stops, we let
\[G_{1}^{*}=G^{*}\dot{-}U_{2},\quad U_{1}^{*}=U\setminus(U_{0}\cup U _{1}\cup U_{2}),\] \[A_{12}^{\prime}=N_{G^{*}}(U_{2})\cap A_{1}^{*},\quad A_{12}=A_{1 }^{*}\setminus A_{12}^{\prime}.\]
By the choice of \(U_{2}\), we have \(|A_{12}^{\prime}|=|U_{2}|\), \(c(G-(A\cup B\cup U_{2}))\geq c(G-(A\cup B))\), and each component \(D\) of \(G-(A\cup B\cup U_{2})\) contains at most one vertex from \(N_{G_{1}^{*}}(A_{12})\).
We now define a partition of \(A_{12}\) as follows. Let
\[A_{12}^{1} = \{x\in A_{12}:|N_{G_{1}^{*}}(x)\cap B^{*}|\leq 1\},\] \[A_{12}^{2} = \{x\in A_{12}:|N_{G_{1}^{*}}(x)\cap B^{*}|=2\},\] \[A_{1}^{\prime} = \{x\in A_{12}:|N_{G_{1}^{*}}(x)\cap B^{*}|\geq 3\}.\]
By Claim 3.2, we have \(|N_{G_{1}^{*}}(x)\cap U_{1}^{*}|\geq 3-i\) for \(x\in A_{12}^{i}\) where \(i\in[1,2]\). For each \(x\in A_{12}^{2}\cup A_{1}^{\prime}\), we take exactly one vertex from \(N_{G_{1}^{*}}(x)\cap B^{*}\) and let the collection of those \(|A_{12}^{2}\cup A_{1}^{\prime}|\) vertices be \(B_{2}\). Let \(B^{\prime}=B^{*}\setminus B_{2}\). By the definition of \(A_{12}^{2}\), \(A_{1}^{\prime}\) and \(B_{2}\), we have
\[|B_{2}| = |A_{12}^{2}|+|A_{1}^{\prime}|, \tag{5}\] \[|B^{\prime}| \geq |A_{12}^{2}|+2|A_{1}^{\prime}|. \tag{6}\]
By Lemma 2.4(ii), in \(G\), all the edges between \(B^{\prime}\) and \(U\setminus U_{0}\) are exactly the edges between \(B^{\prime}\) and vertices of components from \(\mathcal{C}\setminus\mathcal{C}_{1}\). Without loss of generality, we let \(D_{1},\ldots,D_{\ell_{1}}\) be
all the components from \(\mathcal{C}\setminus\mathcal{C}_{1}\) such that \(e_{G}(B^{\prime},D_{i})\geq 1\) for each \(i\in[1,\ell_{1}]\), where \(0\leq\ell_{1}\leq\ell\). Assume further that \(D_{1},\ldots,D_{\ell_{2}}\) are all the components from \(\mathcal{C}\setminus\mathcal{C}_{1}\) such that \(e_{G}(B^{\prime},D_{i})\geq 2\) for each \(i\in[1,\ell_{2}]\), where \(0\leq\ell_{2}\leq\ell_{1}\).
We let \(A_{12}^{1a}\) be the set of vertices \(x\) of \(A_{12}^{1}\) such that \(N_{G_{1}^{*}}(x)\) contains a vertex from \(V(D_{i})\) for some \(i\in[1,\ell]\) such that \(i\leq\ell_{1}\) or \(V(D_{i})\cap U_{1}\neq\emptyset\), and let \(A_{12}^{1b}=A_{12}^{1}\setminus A_{12}^{1a}\). For each vertex \(x\in A_{12}^{1a}\), we take exactly one vertex \(y\) from \(N_{G_{1}^{*}}(x)\cap U_{1}^{*}\) such that \(y\in V(D_{i})\), where \(i\in[1,\ell_{1}]\) or \(V(D_{i})\cap U_{1}\neq\emptyset\). We let the collection of those \(|A_{12}^{1a}|\) vertices be \(U_{3}^{a}\). For each vertex \(x\in A_{12}^{1b}\), we take exactly one vertex from \(N_{G_{1}^{*}}(x)\cap U_{1}^{*}\) and let the collection of those \(|A_{12}^{1b}|\) vertices be \(U_{3}^{b}\).
For each component \(D_{i}\) with \(i\in[1,\ell_{2}]\), we let \(W_{i}\subseteq V(D_{i})\cap Y\) be a set of size at most \(e_{G}(B^{\prime},D_{i})-1\) such that each but at most one vertex from \(N_{G}(B^{\prime})\cap D_{i}\) is adjacent in \(D_{i}\) to a vertex from \(W_{i}\). This is possible by Claim 3.1. Now we let
\[G_{1}=G\dot{-}\left(A_{2}\cup(B_{1}\cup B_{2})\cup(U_{1}\cup U_{2}\cup U_{3}^{ a}\cup U_{3}^{b})\cup\left(\bigcup_{i=1}^{\ell_{2}}W_{i}\right)\right).\]
**Claim 3.3**.: _We have \(c(G_{1})\geq|B^{\prime}|+\max\{\ell-\ell_{1}-|U_{1}|-|U_{3}^{b}|,|U_{3}^{b}|\}\)._
Proof.: Since \(N_{G}(A_{2}\cup B_{1}\cup B_{2}\cup U_{1}\cup U_{2}\cup U_{3}^{a}\cup U_{3}^{ b})=A_{1}\), for each component \(D_{i}\) with \(i\in[1,\ell_{1}]\), we have \(e_{G_{1}}(D_{i},V(G_{1})\setminus V(D_{i}))=e_{G_{1}}(D_{i},B^{\prime})\leq 1\). Thus \(G_{1}\) has at least \(|B^{\prime}|\) components that contain vertices from \(D_{1},\ldots,D_{\ell_{1}}\). The dot-deletion of each vertex from \(U_{1}\cup U_{3}^{b}\) vanishes at most one component from \(D_{\ell_{1}+1}\) to \(D_{\ell}\). Lastly, by the definition of \(A_{12}^{1b}\), we know that if it is nonempty, then its vertices together are connected in \(G\) to at least \(2|A_{12}^{1b}|\geq 2|U_{3}^{b}|\) components of \(G-(A\cup B)\) that are distinct from the ones containing a vertex from \(\bigcup_{i=1}^{\ell_{1}}V(D_{i})\cup U_{1}\). Thus dot-deleting vertices from \(U_{3}^{b}\) in \(G_{1}^{*}\) still leaves at least \(|U_{3}^{b}|\) components of \(G-(A\cup B)\) that are distinct from the ones containing a vertex from \(\bigcup_{i=1}^{\ell_{1}}V(D_{i})\cup U_{1}\). Thus we have \(c(G_{1})\geq|B^{\prime}|+\max\{\ell-\ell_{1}-|U_{1}|-|U_{3}^{b}|,|U_{3}^{b}|\}\).
Suppose for the moment that \(|B^{\prime}|=0\) and so \(|B_{2}|=0\). Then we get \(2|B|=2|B_{1}|\leq|A_{11}|\), showing a contradiction to (4). Then \(|B^{\prime}|\geq 1\).
Suppose that \(|B^{\prime}|=1\). Assume first that \(B_{2}=\emptyset\). As \(2|B_{1}|\leq|A_{11}|\), by (4), we have \(2|B^{\prime}|>2|B_{1}|+2a_{2}+2|A_{1}^{*}|+e_{G}(U\setminus U_{0},B)-\ell\). Since \(\delta_{G}(A,B)\) is even, we know that \(B=B^{\prime}\), \(A=\emptyset\), and \(e_{G}(U\setminus U_{0},B)=\ell\). Thus we have \(e_{G}(B,D_{i})=1\) for each \(i\in[1,\ell]\). If \(\ell\geq 2\), then \(c(G\dot{-}B)\geq 2\) and so \(\tau_{Y}(G)\leq\frac{1}{2}\). Thus \(\ell=1\). As \(|Y|\geq 3\), \(D_{1}\) contains at least two vertices from \(Y\). Let \(x\in V(D_{1})\) such that \(e_{G}(B,D_{1})=e_{G}(B,x)=1\) and let \(y\in V(D_{1})\cap Y\) such that \(x\) and \(y\) are adjacent in \(D_{1}\). Then again we get \(c(G\dot{-}\{y\})\geq 2\) and so \(\tau_{Y}(G)\leq\frac{1}{2}\).
Assume then that \(|B_{2}|=1\). Then as \(|B_{2}|=|A_{12}^{1}|+|A_{1}^{\prime}|\) and \(|B^{\prime}|\geq|A_{12}^{2}|+2|A_{1}^{\prime}|\) by (5) and (6), we conclude that \(|A_{1}^{\prime}|=0\) and \(|A_{12}^{2}|=1\). As \(2|B_{1}|\leq|A_{11}|\), by (4), we have
\(2|B^{*}|>2|B_{1}|+2a_{2}+2|A_{1}^{*}|+e_{G}(U\setminus U_{0},B)-\ell\). Since \(\delta_{G}(A,B)\) is even, we know that \(B=B^{*}\), \(|A|=|A_{12}^{2}|=1\), and \(e_{G}(U\setminus U_{0},B)=\ell\). Thus we have \(e_{G}(B,D_{i})=1\) for each \(i\in[1,\ell]\). If \(\ell\geq 2\), then \(c(G\dot{-}B)\geq 2\) and so \(\tau_{Y}(G)\leq\frac{2}{2}=1\). Thus \(\ell=1\). As \(|B^{*}|=2\), the only vertex from \(A_{12}^{2}\) is adjacent in \(G\) to both vertices from \(B=B^{*}\) in \(G\). By Claim 3.2, the vertex from \(A_{12}^{2}\) is also adjacent in \(G\) to a vertex from \(D_{1}\). We let \(y\in V(D_{1})\) be a vertex that is adjacent to the vertex from \(A_{12}^{2}\) in \(G\). As \(e_{G}(U\setminus U_{0},B)=1\), we have \(c(G\dot{-}\{y\})\geq|B|=2\). Then again we get \(\tau_{Y}(G)\leq\frac{1}{2}\), a contradiction.
Therefore we assume that \(|B^{\prime}|\geq 2\). Let \(\ell^{*}=\max\{\ell-\ell_{1}-|U_{1}|-|U_{3}^{b}|,|U_{3}^{b}|\}\). Note that \(\ell^{*}\geq\frac{1}{2}(\ell-\ell_{1}-|U_{1}|-|U_{3}^{b}|+|U_{3}^{b}|)=\frac{1 }{2}(\ell-\ell_{1}-|U_{1}|)\). Then by \(\tau_{Y}(G)\geq 2\), we have
\[2(|B^{\prime}|+\ell^{*}) \leq 2c(G_{1}) \tag{7}\] \[\leq \left|A_{2}\cup(B_{1}\cup B_{2})\cup(U_{1}\cup U_{2}\cup U_{3}^{a }\cup U_{3}^{b})\cup\left(\bigcup_{i=1}^{\ell_{2}}W_{i}\right)\right|\] \[\leq a_{2}+|B_{1}|+|B_{2}|+|U_{1}|+|U_{2}|+|U_{3}^{a}|+|U_{3}^{b}|+e_ {G}(U\setminus U_{0},B^{\prime})-\ell_{1}\] \[\leq a_{2}+\frac{1}{2}|A_{11}|+|B_{2}|+|U_{2}|+|U_{3}^{a}|+|U_{3}^{b}|+ e_{G}(U\setminus U_{0},B^{\prime})-\ell_{1},\]
where the last inequality above follows from the fact that \(|B_{1}|+|U_{1}|=|R_{1}|\leq\frac{1}{2}|A_{11}|\). From the inequality above we get
\[2|B^{\prime}| \leq a_{2}+\frac{1}{2}|A_{11}|+|B_{2}|+|U_{2}|+|U_{3}^{a}|+|U_{3}^{b}|+e _{G}(U\setminus U_{0},B^{\prime})-\ell_{1}-2\ell^{*}. \tag{8}\]
On the other hand, by Equation (4) and the fact that \(|A_{11}|\geq 2|R_{1}|\), we have
\[2|B^{\prime}| > 2a_{2}+2a_{1}+e_{G}(B,U\setminus U_{0})-\ell-2|B_{1}|-2|B_{2}|\] \[\geq 2a_{2}+2|A_{1}^{*}|+|A_{11}|+2|U_{1}|+e_{G}(B,U\setminus U_{0}) -\ell-2|B_{2}|\] \[= 2a_{2}+2|A_{1}^{*}|+|A_{11}|+2|U_{1}|+e_{G}(B^{\prime},U\setminus U _{0})+e_{G}(B_{1}\cup B_{2},U\setminus U_{0})-\ell-2|B_{2}|.\]
This, together with (8) implies
\[3|B_{2}|>a_{2}+2|A_{1}^{*}|+\frac{1}{2}|A_{11}|+2|U_{1}|-|U_{2}|-|U_{a}^{3}|-| U_{3}^{b}|+e_{G}(B_{1}\cup B_{2},U\setminus U_{0})-\ell+\ell_{1}+2\ell^{*}.\]
Since \(|B_{2}|=|A_{12}^{2}|+|A_{1}^{\prime}|\) by (5), \(|A_{1}^{*}|=|A_{12}^{\prime}|+|A_{12}^{1a}|+|A_{12}^{1b}|+|A_{12}^{2}|+|A_{1}^{ \prime}|\), \(|U_{2}|=|A_{12}^{\prime}|\), \(|U_{3}^{a}|=|A_{12}^{1a}|\), \(|U_{3}^{b}|=|A_{12}^{1b}|\), and \(\ell^{*}\geq\frac{1}{2}(\ell-\ell_{1}-|U_{1}|)\), we then get
\[|B_{2}| > a_{2}+\frac{1}{2}|A_{11}|+2|U_{1}|+e_{G}(B_{1}\cup B_{2},U \setminus U_{0})|-\ell+\ell_{1}+ \tag{9}\] \[|A_{12}^{\prime}|+|A_{12}^{1a}|+|A_{12}^{1b}|+(\ell-\ell_{1}-|U_{ 1}|)\] \[= a_{2}+\frac{1}{2}|A_{11}|+|U_{1}|+e_{G}(B_{1}\cup B_{2},U \setminus U_{0})+|A_{12}^{\prime}|+|A_{12}^{1a}|+|A_{12}^{1b}|.\]
**Claim 3.4**.: _We have \(|B_{2}|=1\)._
Proof.: By (9) we have \(|B_{2}|\geq 1\). Thus suppose to the contrary that \(|B_{2}|\geq 2\). For each \(x\in A^{\prime}_{1}\), we take exactly one vertex from \(N_{G^{*}}(x)\cap B^{\prime}\) and let the collection of those \(|A^{\prime}_{1}|\) vertices be \(B^{*}_{2}\). Then we have \(|B^{*}_{2}|=|B_{2}|\). For each \(D_{i}\) with \(i\in[1,\ell]\), we let \(W_{i}\subseteq V(D_{i})\cap Y\) be a set of size at most \(|N_{G^{*}}(B_{2})\cap V(D_{i})|\) such that every vertex from \(N_{G^{*}}(B_{2})\cap V(D_{i})\) (if any) is adjacent in \(D_{i}\) to a vertex from \(W_{i}\). Let
\[G_{1}=G\dot{-}\left(A_{2}\cup B_{1}\cup B^{*}_{2}\cup\left(\bigcup_{i=1}^{\ell}W_{i}\right)\right).\]
Then we have \(c(G_{1})\geq|B_{2}|\geq 2\). Thus by \(\tau_{Y}(G)\geq 2\), and by (9) and the fact that \(|B^{*}_{2}|=|B_{2}|\), we have
\[2|B_{2}|\leq\left|A_{2}\cup B_{1}\cup B^{*}_{2}\cup\left(\bigcup_{i=1}^{\ell} W_{i}\right)\right|=|B_{2}|+\left|A_{2}\cup B_{1}\cup\left(\bigcup_{i=1}^{\ell}W_{i} \right)\right|<2|B_{2}|,\]
a contradiction. Therefore \(|B_{2}|=1\).
As \(|B_{2}|=1\), Inequality (9) implies that \(A_{2}=A_{11}=B_{1}=U_{1}=\emptyset\), \(e_{G}(B_{1}\cup B_{2},U\setminus U_{0})=0\), and \(A^{\prime}_{12}=A^{1a}_{12}=A^{1b}_{12}=\emptyset\). Thus \(A=A^{2}_{12}\) and \(B=B^{*}=B_{2}\cup B^{\prime}\) and \(|B^{\prime}|\geq|A^{2}_{12}|+2|A^{\prime}_{1}|\geq|B_{2}|\). Also, \(A^{\prime}_{12}=A^{1a}_{12}=A^{1b}_{12}=\emptyset\) implies that \(U_{2}=U^{a}_{3}=U^{b}_{3}=\emptyset\). From (7), we have \(2|B^{\prime}|\leq 1+e_{G}(B^{\prime},U\setminus U_{0})-\ell_{1}-2\ell^{*}=1+e_{G}(B^{\prime},U\setminus U_{0})-2\ell+\ell_{1}\leq 1+e_{G}(B^{\prime},U\setminus U_{0})-\ell\). On the other hand, by (4), we have \(2|B^{\prime}|>2a_{1}+e_{G}(B,U\setminus U_{0})-\ell-2=2a_{1}+e_{G}(B^{\prime},U\setminus U_{0})-\ell-2\). This inequality combined with \(2|B^{\prime}|\leq 1+e_{G}(B^{\prime},U\setminus U_{0})-\ell\) gives \(a_{1}=0\) and
\[2|B^{\prime}|=e_{G}(B^{\prime},U\setminus U_{0})-\ell,\]
since \(\delta_{G}(A,B)=2a_{1}-2(1+|B^{\prime}|)+e_{G}(B^{\prime},U\setminus U_{0})-\ell\) is even.
Since \(\tau_{Y}(G)\geq 2\) and \(G\) has a \(Y\)-cutset, for any \(y\in Y\) we know that \(y\) is adjacent in \(G\) to at least \(4\) vertices from \(X\), each of which is adjacent in \(G\) to a distinct vertex from \(Y\). As \(A_{1}=\emptyset\) and \(e_{G}(B_{1}\cup B_{2},U\setminus U_{0})=0\), we have \(e_{G}(B,U\setminus U_{0})=e_{G}(B^{\prime},U\setminus U_{0})\geq 4|B|\). This, combined with \(2|B^{\prime}|=e_{G}(B^{\prime},U\setminus U_{0})-\ell\), gives \(\ell\geq 4|B|-2|B^{\prime}|=2|B^{\prime}|+4\). However, \(c(G\dot{-}B)=\ell\geq 2|B^{\prime}|+4>|B|\), indicating \(\tau_{Y}(G)<1\), a contradiction.
**Case 3**: \(k\geq 3\).
The proof basically follows the proof ideas of Theorem 1.1 by Enomoto, Jackson, Katerinis, and Saito. However, the proof is more involved, as vertices of \(X\) can have degree other than \(2\) in \(G\), whereas under the setting in [5] they can only have degree \(2\).
Let \(S_{1}\subseteq B\) be a maximal independent subset such that \(N_{G}(x)\cap B\not\subseteq S_{1}\) for any vertex \(x\in A_{1}\). Let \(T_{1}=B\setminus S_{1}\). Note that by this choice of \(T_{1}\), we have \(N_{G}(x)\cap T_{1}\neq\emptyset\) if \(x\in A_{1}\)
and satisfies \(N_{G}(x)\cap B\neq\emptyset\). Then \(T_{1}\) is a minimal set that dominates all vertices in \(A_{1}\) that have in \(G\) a neighbor from \(B\).
**Claim 3.5**.: \(k|S_{1}|\leq a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell.\)__
Proof.: By Lemma 2.4 (ii), in \(G\), all the edges between \(S_{1}\) and \(U\setminus U_{0}\) are exactly the edges between \(S_{1}\) and vertices of components from \(\mathcal{C}\setminus\mathcal{C}_{1}\). Without loss of generality, we let \(D_{1},\ldots,D_{\ell_{1}}\) be all the components from \(\mathcal{C}\setminus\mathcal{C}_{1}\) such that \(e_{G}(S_{1},D_{i})\geq 1\) for each \(i\in[1,\ell_{1}]\), where \(0\leq\ell_{1}\leq\ell\). Assume further that \(D_{1},\ldots,D_{\ell_{2}}\) are all the components from \(\mathcal{C}\setminus\mathcal{C}_{1}\) such that \(e_{G}(S_{1},D_{i})\geq 2\) for each \(i\in[1,\ell_{2}]\), where \(0\leq\ell_{2}\leq\ell_{1}\). For each component \(D_{i}\) with \(i\in[1,\ell_{2}]\), we let \(W_{i}\subseteq V(D_{i})\cap Y\) be a set of size at most \(e_{G}(S_{1},D_{i})-1\) such that each but at most one vertex from \(N_{G}(S_{1})\cap D_{i}\) is adjacent in \(D_{i}\) to a vertex from \(W_{i}\). This is possible by Claim 3.1.
Suppose without loss of generality, that for some \(\ell_{3}\in[0,\ell_{2}]\) and each \(i\in[1,\ell_{3}]\), \(D_{i}\dot{-}W_{i}\) still contains a vertex from \(Y\). Let
\[U_{1}=\bigcup_{i=1}^{\ell_{3}}V(D_{i})\cup(\bigcup_{i=\ell_{1}+1}^{\ell}V(D_{i }))\quad\text{and}\quad Z=\{x\in A_{1}:N_{G}(x)\subseteq U_{1}\}.\]
By Lemma 2.4 (iv), we have \(h(Z)\geq 2|Z|\). As \(\ell_{3}+\ell-\ell_{1}\geq h(Z)\), we get
\[\ell_{3}+\ell-\ell_{1}\geq 2|Z|.\]
We let \(U_{1}^{*}\subseteq U_{1}\cap Y\) such that \(N_{G}(U_{1}^{*})\cap A_{1}=Z\). Furthermore, we can choose \(U_{1}^{*}\) such that \(|U_{1}^{*}|\leq|Z|\). Now we let
\[G_{1}=G\dot{-}\left(A_{2}\cup T_{1}\cup U_{1}^{*}\cup\left(\bigcup_{i=1}^{ \ell_{2}}W_{i}\right)\right).\]
For each component \(D_{i}\) with \(i\in[1,\ell_{1}]\), we have \(e_{G_{1}}(D_{i},V(G_{1})\setminus V(D_{i}))=e_{G_{1}}(D_{i},S_{1})\leq 1\). Each component \(D_{i}\), with \(i\in[\ell_{1}+1,\ell]\), which still contains a vertex from \(Y\) after deleting vertices in \(U_{1}^{*}\) satisfies \(e_{G_{1}}(D_{i},V(G_{1})\setminus V(D_{i}))=e_{G_{1}}(D_{i},S_{1})=0\). Since \(A_{1}\subseteq N_{G}\left(A_{2}\cup T_{1}\cup U_{1}^{*}\cup\left(\bigcup_{i=1} ^{\ell_{2}}W_{i}\right)\right)\) and \(|U_{1}^{*}|\leq|Z|\leq(\ell_{3}+\ell-\ell_{1})/2\), we have
\[c(G_{1})\geq|S_{1}|+\ell_{3}+(\ell-\ell_{1})-|U_{1}^{*}|\geq|S_{1}|+\frac{1}{ 2}(\ell_{3}+\ell-\ell_{1}).\]
First consider the case \(c(G_{1})\geq 2\). By the toughness of \(G\), we get
\[k(|S_{1}|+\frac{1}{2}(\ell_{3}+\ell-\ell_{1})) \leq \left|A_{2}\cup T_{1}\cup U_{1}^{*}\cup\left(\bigcup_{i=1}^{\ell _{2}}W_{i}\right)\right|\] \[\leq a_{2}+|T_{1}|+|U_{1}^{*}|+e_{G}(U\setminus U_{0},S_{1})-\ell_{1}.\]
Therefore, as \(k\geq 3\) and \(|U_{1}^{*}|\leq(\ell_{3}+\ell-\ell_{1})/2\), we get
\[k|S_{1}| \leq a_{2}+|T_{1}|+|U_{1}^{*}|+e_{G}(U\setminus U_{0},S_{1})-\frac{k}{2}( \ell_{3}+\ell-\ell_{1})-\ell_{1}\] \[= a_{2}+|T_{1}|+|U_{1}^{*}|+e_{G}(U\setminus U_{0},S_{1})-\ell-( \frac{k}{2}-1)(\ell-\ell_{1})-\frac{k}{2}\ell_{3}\] \[= a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell+|U_{1}^{*}|-( \frac{k}{2}-1)(\ell_{3}+\ell-\ell_{1})-\ell_{3}\] \[\leq a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell+|U_{1}^{*}|- \frac{1}{2}(\ell_{3}+\ell-\ell_{1})\] \[\leq a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell,\]
as desired.
Next consider the case \(c(G_{1})=1\). Suppose to the contrary that \(k|S_{1}|>a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell\). Then \(c(G_{1})\geq|S_{1}|+\frac{1}{2}(\ell_{3}+\ell-\ell_{1})\) and \(\ell\geq\ell_{1}\) implies that \(|S_{1}|=0\) and \(2\geq\ell_{3}+\ell-\ell_{1}\geq 1\) or \(|S_{1}|=1\) and \(\ell_{3}+\ell-\ell_{1}=0\). Since \(G\) has a \(Y\)-cutset and \(\tau_{Y}(G)\geq k\), we have \(|Y|\geq k+2\). As \(a_{2}+|B|\leq a_{2}+|T_{1}|+1\leq k\), we conclude that \(\ell\geq 1\).
Define \(Z^{\prime}=\{x\in A_{1}:N_{G}(x)\subseteq U\}\). We let \(U_{1}^{\prime}\subseteq U\cap Y\) such that \(N_{G}(U_{1}^{\prime})\cap A_{1}=Z^{\prime}\). Furthermore, we can choose \(U_{1}^{\prime}\) such that \(|U_{1}^{\prime}|\leq|Z^{\prime}|\). If \(\ell=2\) (so \(|U_{1}^{\prime}|\leq|Z^{\prime}|\leq 1\)) and one of \(|V(D_{1})\cap Y|\) and \(|V(D_{2})\cap Y|\) contains at least two vertices, we can choose \(U_{1}^{\prime}\) so that both \(D_{1}-U_{1}^{\prime}\) and \(D_{2}-U_{1}^{\prime}\) contain a vertex from \(Y\). Now we let
\[G_{1}^{\prime}=G\dot{-}\left(A_{2}\cup B\cup U_{1}^{\prime}\right).\]
Then \(c(G_{1}^{\prime})\geq\ell-|U_{1}^{\prime}|\geq\ell/2\), and \(c(G_{1}^{\prime})=2\) when \(\ell=2\) and one of \(|V(D_{1})\cap Y|\) and \(|V(D_{2})\cap Y|\) contains at least two vertices.
If \(\ell\geq 3\), we have \(c(G_{1}^{\prime})\geq 2\). Thus by the toughness of \(G\), \(|A_{2}\cup B\cup U_{1}^{\prime}|\geq k(\ell-|U_{1}^{\prime}|)\). As \(a_{2}+|T_{1}|<k|S_{1}|\leq k\), we get \(a_{2}+|B|=a_{2}+|T_{1}|+1\leq k\). Therefore, \(|A_{2}\cup B\cup U_{1}^{\prime}|\geq k(\ell-|U_{1}^{\prime}|)\) implies that \(k\ell\leq k+(k+1)|U_{1}^{\prime}|\leq k+(k+1)\lfloor\ell/2\rfloor\), this gives a contradiction as \(k\geq 3\) and \(\ell\geq 3\).
If \(\ell=2\) and one of \(|V(D_{1})\cap Y|\) and \(|V(D_{2})\cap Y|\) contains at least two vertices, then we have \(c(G_{1}^{\prime})=2\). However, \(|A_{2}\cup B\cup U_{1}^{\prime}|\leq k+1<2k\), a contradiction.
Thus we assume that \(\ell\leq 2\), and when \(\ell=2\) we have that both \(|V(D_{1})\cap Y|\) and \(|V(D_{2})\cap Y|\) contain exactly one vertex. Note that when \(|S_{1}|=1\), we have \(e_{G}(U\setminus U_{0},S_{1})\geq\ell_{1}=\ell\). Thus we have \(|A_{2}\cup B|\leq k\).
If \(\ell=2\), then we have \(|Y|=|A_{2}\cup B|+2\leq k+2\). This implies \(|Y|=k+2\) by \(|Y|\geq k+2\) from our previous assumption. As \(G\) has a \(Y\)-cutset, there exist distinct \(y_{1},y_{2}\in Y\) such that no vertex \(x\in X\) satisfying \(N_{G}(x)=\{y_{1},y_{2}\}\). Then \(c(G\dot{-}(Y\setminus\{y_{1},y_{2}\}))=2\) but \(|Y\setminus\{y_{1},y_{2}\}|=k<2k\), a contradiction.
Thus we assume \(\ell=1\). This also implies that \(Z^{\prime}=U_{1}^{\prime}=\emptyset\).
If \(|S_{1}|=1\), we claim that for the vertex \(y\in S_{1}\) and any \(z\in V(D_{1})\cap Y\), there exists \(x\in V(D_{1})\cap X\) such that \(xy,xz\in E(G)\). For otherwise, we let \(W_{1}\subseteq V(D_{1})\cap Y\) with size at most \(e_{G}(y,D_{1})\) such that every vertex from \(N_{G}(y)\cap V(D_{1})\) has in \(D_{1}\) a neighbor from \(W_{1}\). Then \(c(G\dot{-}(A_{2}\cup(B\setminus\{y\})\cup W_{1}))\geq 2\), where one component is the vertex \(y\) and the other contains the vertex \(z\). Thus \(|A_{2}\cup T_{1}\cup W_{1}|=a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})<k|S_{1}|+1\leq k+1<2k\), a contradiction. Thus \(e_{G}(U\setminus U_{0},S_{1})=|V(D_{1})\cap Y|\). As \(|Y|\geq k+2\), we get \(|V(D_{1})\cap Y|\geq k+2-(a_{2}+|B|)=k+2-(a_{2}+|T_{1}|+1)\). Thus we get \(k|S_{1}|=k>a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell\geq a_{2}+|T_{1}|+k+2-(a_{2}+|T_{1}|+1)=k+1\), a contradiction.
Thus we assume \(|S_{1}|=0\). Then we get \(1=\ell>a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})\) from \(k|S_{1}|>a_{2}+|T_{1}|+e_{G}(U\setminus U_{0},S_{1})-\ell\). Thus we have \(a_{2}=|T_{1}|=0\). Therefore \(|B|=0\), a contradiction to (4).
For \(2\leq i\leq k-2\), let \(S_{i}\subseteq T_{i-1}\) be a maximal subset such that for any vertex \(x\in A_{1}\), \(N_{G}(x)\cap B\not\subseteq S_{i}\), and let \(T_{i}=T_{i-1}\setminus S_{i}\). Let \(T_{k-1}\subseteq T_{k-2}\) be smallest such that for any \(x\in A_{1}\) we have \(N_{G}(x)\cap B\not\subseteq T_{k-2}\setminus T_{k-1}\). If \(T_{k-1}\neq\emptyset\), by its minimality and the definitions of \(S_{i}\) for \(i\in[1,k-2]\), we have the following property: for any \(v\in T_{k-1}\), there exist \(x_{0},x_{1},\ldots,x_{k-2}\in A_{1}\) such that \(N_{G}(x_{0})\cap B\subseteq(T_{k-2}\setminus T_{k-1})\cup\{v\}\) and \(N_{G}(x_{i})\cap B\subseteq S_{i}\cup\{v\}\) for each \(i\in[1,k-2]\). Let
\[A_{11}=N_{G}(T_{k-1})\cap A_{1}.\]
Then \(|A_{11}|\geq(k-1)|T_{k-1}|\). Let \(S_{k-1}=T_{k-2}\setminus T_{k-1}\).
For each \(i\in[2,k-1]\), we let \(e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})\) denote the number of vertices \(x\in A_{1}\) such that \(N_{G}(x)\cap B\subseteq(B\setminus T_{i-1})\cup S_{i}\) and \(N_{G}(x)\cap(B\setminus T_{i-1})\neq\emptyset\) and \(N_{G}(x)\cap S_{i}\neq\emptyset\). Note that by the choice of \(S_{i}\), we have \(N_{G}(x)\cap B\not\subseteq S_{i}\) for any vertex \(x\in A_{1}\).
**Claim 3.6**.: _For each \(i\in[2,k-1]\), we have_
\[k|S_{i}|\leq a_{2}+|T_{i}|+e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})+e_{G}(S_{i},U\setminus U_{0}).\]
Proof.: If \(S_{i}=\emptyset\), the claim holds. So we assume \(S_{i}\neq\emptyset\). Let \(B_{i}\subseteq B\setminus T_{i-1}\) with size at most \(e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})\) such that \(e_{[G\dot{-}B_{i},A_{1}]}(B\setminus(T_{i-1}\cup B_{i}),S_{i})=0\). We assume, without loss of generality, that for some \(\ell_{1}\in[0,\ell]\), \(D_{1},\ldots,D_{\ell_{1}}\) are all the components among \(D_{1},\ldots,D_{\ell}\) that each is connected to some vertex of \(S_{i}\) by an edge of \(G\). For each \(j\in[1,\ell_{1}]\), we let \(W_{j}\subseteq V(D_{j})\cap Y\) of size at most \(e_{G}(S_{i},D_{j})\) such that each vertex from \(N_{G}(S_{i})\cap V(D_{j})\) is adjacent in \(D_{j}\) to a vertex from \(W_{j}\). Let \(W^{*}=A_{2}\cup T_{i}\cup B_{i}\cup(\bigcup_{j=1}^{\ell_{1}}W_{j})\). Then we have \(c(G\dot{-}W^{*})\geq|S_{i}|\). If \(|S_{i}|\geq 2\), we get the desired inequality by \(\tau_{Y}(G)\geq k\). Thus we have \(|S_{i}|\leq 1\). As \(S_{i}\neq\emptyset\), we have \(|S_{i}|=1\) and \(N_{G}(N_{G^{*}}(S_{i}))=Y\setminus S_{i}\) by the choice of \(W^{*}\). Assume
that the claim does not hold; then \(k=k|S_{i}|>|Y|-1\). This contradicts the assumption that \(|Y|\geq k+1\).
Let \(A_{1}^{*}=A_{1}\setminus A_{11}\). Since \(T_{k-1}\cap(S_{i}\cup(B\setminus T_{i-1}))=\emptyset\) for any \(i\in[2,k-2]\), we have
\[\sum_{i=2}^{k-2}e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})=\sum_{i=2}^{k-2}e_{[G,A _{1}\setminus A_{11}]}(B\setminus T_{i-1},S_{i}).\]
Thus \(\sum\limits_{i=2}^{k-2}e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})\leq|A_{1}^{*}|\). By Claim 3.5 and Claim 3.6, we get
\[k(b-|T_{k-1}|) \leq (k-1)a_{2}+\sum_{i=1}^{k-1}|T_{i}|+\sum_{i=2}^{k-2}e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})+\sum_{i=1}^{k-1}e_{G}(U\setminus U_{0},S_{i})-\ell\] \[\leq (k-1)a_{2}+\sum_{i=1}^{k-1}|T_{i}|+\sum_{i=2}^{k-2}e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})+e_{G}(U\setminus U_{0},B\setminus T_{k-1})-\ell\] \[\leq (k-1)a_{2}+\sum_{i=1}^{k-1}|T_{i}|+|A_{1}^{*}|+e_{G}(U\setminus U_{0},B\setminus T_{k-1})-\ell.\]
Recall that \(A_{11}=N_{G}(T_{k-1})\cap A_{1}\) and \(|A_{11}|\geq(k-1)|T_{k-1}|\). By (4) we have
\[k(b-|T_{k-1}|) > ka_{2}+2a_{1}+e_{G}(U\setminus U_{0},B)-\ell-k|T_{k-1}|\] \[\geq ka_{2}+2|A_{1}^{*}|+2(a_{1}-|A_{1}^{*}|)+e_{G}(U\setminus U_{0}, B\setminus T_{k-1})-\ell-k|T_{k-1}|\] \[\geq ka_{2}+2|A_{1}^{*}|+2|A_{11}|+e_{G}(U\setminus U_{0},B\setminus T _{k-1})-\ell-k|T_{k-1}|\] \[\geq ka_{2}+2|A_{1}^{*}|+(k-2)|T_{k-1}|+e_{G}(U\setminus U_{0},B \setminus T_{k-1})-\ell.\]
Therefore, \(ka_{2}+2|A_{1}^{*}|+(k-2)|T_{k-1}|+e_{G}(U\setminus U_{0},B\setminus T_{k-1}) -\ell<(k-1)a_{2}+\sum\limits_{i=1}^{k-1}|T_{i}|+|A_{1}^{*}|+e_{G}(U\setminus U _{0},B\setminus T_{k-1})-\ell\), and so \(a_{2}+|A_{1}^{*}|+(k-2)|T_{k-1}|<\sum\limits_{i=1}^{k-1}|T_{i}|\). Hence as \(k\geq 3\) we get
\[|A_{1}^{*}|<\sum_{i=1}^{k-2}|T_{i}|. \tag{10}\]
On the other hand,
\[|A_{1}^{*}| \geq \sum_{i=2}^{k-2}e_{[G,A_{1}]}(B\setminus T_{i-1},S_{i})\] \[= \sum_{i=2}^{k-2}e_{[G,A_{1}]}(S_{i},\bigcup_{j=i+1}^{k-1}S_{j})\] \[= \sum_{j=1}^{k-2}e_{[G,A_{1}]}(S_{j},T_{j})\] \[\geq \sum_{j=1}^{k-2}|T_{j}|,\]
where the last inequality above follows by the following argument: let \(T_{0}=B\). For each \(j\in[1,k-1]\), \(S_{j}\) is a maximal subset of \(T_{j-1}\) such that for any \(x\in A_{1}\) we have \(N_{G}(x)\cap B\not\subseteq S_{j}\). Thus for each \(y\in T_{j}\), there exists \(x_{y}\in A_{1}\) such that \(N_{G}(x_{y})\cap B\subseteq S_{j}\cup\{y\}\), and these vertices \(x_{y}\) are distinct for distinct \(y\). Hence \(e_{[G,A_{1}]}(S_{j},T_{j})\geq|T_{j}|\), and so \(\sum\limits_{j=1}^{k-2}e_{[G,A_{1}]}(S_{j},T_{j})\geq\sum\limits_{j=1}^{k-2}|T_{j}|\). However, this contradicts (10). The proof is now complete.
|
2301.00901 | Towards Modeling and Influencing the Dynamics of Human Learning | Humans have internal models of robots (like their physical capabilities), the
world (like what will happen next), and their tasks (like a preferred goal).
However, human internal models are not always perfect: for example, it is easy
to underestimate a robot's inertia. Nevertheless, these models change and
improve over time as humans gather more experience. Interestingly, robot
actions influence what this experience is, and therefore influence how people's
internal models change. In this work we take a step towards enabling robots to
understand the influence they have, leverage it to better assist people, and
help human models more quickly align with reality. Our key idea is to model the
human's learning as a nonlinear dynamical system which evolves the human's
internal model given new observations. We formulate a novel optimization
problem to infer the human's learning dynamics from demonstrations that
naturally exhibit human learning. We then formalize how robots can influence
human learning by embedding the human's learning dynamics model into the robot
planning problem. Although our formulations provide concrete problem
statements, they are intractable to solve in full generality. We contribute an
approximation that sacrifices the complexity of the human internal models we
can represent, but enables robots to learn the nonlinear dynamics of these
internal models. We evaluate our inference and planning methods in a suite of
simulated environments and an in-person user study, where a 7DOF robotic arm
teaches participants to be better teleoperators. While influencing human
learning remains an open problem, our results demonstrate that this influence
is possible and can be helpful in real human-robot interaction. | Ran Tian, Masayoshi Tomizuka, Anca Dragan, Andrea Bajcsy | 2023-01-02T23:59:45Z | http://arxiv.org/abs/2301.00901v1 | # Towards Modeling and Influencing the Dynamics of Human Learning
###### Abstract.
Humans have _internal models_ of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions _influence_ what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
robot influence, human internal model, dynamics of human learning
understanding of the robot or the world changes as a function of what they observe. For example, at first you may mistakenly believe that the robot doesn't experience any inertia. However, as soon as you gesture to move the robot forward, you see the robot lagging behind. This observation controls the evolution of your internal robot physics model. The same holds true for your internal model of the world and personal preferences. In other words, we can model _human learning as a dynamical system where the human's internal model is the state, and the observations--which the robot can influence-- evolve the internal model._
Of course, this does not prescribe the functional form of the dynamical system. One idea is to draw on computational cognitive science work to define this function. A predominant lens is that of probabilistic models (Grover and Leskovec, 2010), which posits that humans perform some form of approximate Bayesian inference based on the observations they receive. In reality, people have been shown to have a plethora of cognitive biases which deviate from perfect Bayesian inference: they might use gradient information (Srivastava et al., 2014), might not process the entire observation due to sensory overload (Srivastava et al., 2014), or exhibit systematic bias like over- or under- estimation (Srivastava et al., 2014). Instead of committing to a specific model, in our work we treat this as a general dynamics learning problem, which has roots in controls and robotics (Grover and Leskovec, 2010). We leverage demonstrations which _naturally_ exhibit human learning (e.g., humans teleoperating a robot they have never interacted with before), to fit a human learning model under the assumption that observed human actions are approximately optimal given their current internal model. This enables flexibility of capturing different possible learning updates, at the cost of being domain-specific.
Although the most general model learning problem remains computationally intractable, we introduce a tractable approximation that is readily solvable via gradient-based optimization, and is compatible with neural network representations of the human learning dynamics. Leveraging our approximate dynamics model of human learning, we formalize robot influence over the human's internal model as a Markov Decision Process (MDP) where the human's internal model is part of the state and the human's learning dynamics are part of the transition function. The solution yields robot actions that change the human's internal model by changing the human's observations in a way that rewards the robot.
We run experiments with simulated humans to study the fidelity of the inferred human learning dynamics and investigate robot teaching and assistance in settings where the human's understanding of robot physics, motion preferences, or goals can be influenced. Finally, we conduct a user study with a Kinova Jaco 7DOF robot arm and find that our method can help teach humans to be better teleoperators. Overall, while influencing human learning remains an open problem, we are excited to have taken a step in this domain via a principled yet tractable learning and planning method.
## 2. Related Work
**Inferring human preferences and beliefs.** A large body of work has focused on learning human reward functions via inverse reinforcement learning (IRL) (Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010). This includes inferring human driving preferences (Grover and Leskovec, 2010; Grover and Leskovec, 2010), desired exoskeleton gaits (Grover and Leskovec, 2010), intended goals (Grover and Leskovec, 2010), motion preferences (Srivastava et al., 2014), and human understanding about physics (Srivastava et al., 2014). A key assumption in these works is that people have _static_ internal models of preferences or physics. Instead, we are interested in learning a _dynamic_ model of how humans change their preferences, goals, and understanding of physics.
**Models of human learning for robot decision-making.** Prior works in robotics model human learning as Bayesian inference when updating goals or preferences (Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010), a linear Gaussian system when updating trust (Grover and Leskovec, 2010), gradient-based IRL when learning rewards (Grover and Leskovec, 2010), or as a multi-armed bandit algorithm when updating preferences (Grover and Leskovec, 2010). Instead of assuming a known model of how people learn, in this work we seek to _learn_ a model of how humans learn. Most related to our work is (Srivastava et al., 2014) which learns a model of how people estimate the state of the world. In this work, we propose a generalization where the human is not estimating world state, but updating their preferences, goals, and internal physics model. This induces a significantly harder model learning problem, for which we propose a tractable approximation.
**Cognitive theories of human learning.** Models of human inference have been extensively studied in both computational cognitive science (Grover and Leskovec, 2010; Grover and Leskovec, 2010) and psychology (Srivastava et al., 2014; Srivastava et al., 2014). While human cognition can be broadly modeled at three levels (computational, algorithmic, and hardware) (Srivastava et al., 2014), most relevant to us are the algorithmic works. (Grover and Leskovec, 2010) posits that modeling human reasoning as "implementing" an exact Bayesian posterior or a gradient-based point estimate are both compatible with probabilistic models of human cognition, and are a potential source of rational process models (Srivastava et al., 2014). Further, (Srivastava et al., 2014) finds evidence that humans may update their forward models using the models' prediction error as loss functions. Inspired by these works, our simulated human experiments leverage exact and approximate probabilistic inference models, and we study if our flexible, learning-based method can effectively recover such models.
**Robot influencing human behavior.** While there are many ways a robot can influence humans (e.g., through nonverbal cues, appearance, visuals, or curriculum design (Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010)), we focus on robot influence through physical action (Srivastava et al., 2014). A common approach towards this models human-robot interaction as a game (Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010; Grover and Leskovec, 2010). While these approaches can capture reactions from the human, they do not address the internal learning problem: over repeated interactions, the human may not have learned anything and is only reacting. Alternatively, model-free methods learn a latent representation of the human's policy and then leverage the latent dynamics to influence the human (Srivastava et al., 2014; Srivastava et al., 2014). Here the human's internal model is _implicitly_ captured by the latent representation, and the internal model evolves between interaction episodes. In contrast, in our work the human's internal model is an _explicit_ parameterization (e.g., high-dimensional parameterization like dynamics) and the human internal model can evolve continuously during an interaction episode. This enables robot behaviors like teaching the human the correct internal model, which would otherwise not be possible with implicit, latent representations.
## 3. Modeling How Humans Learn & Act
We begin by mathematically modelling the dynamics of human learning, before diving into how the robot can infer this dynamics model and use it influence the human's internal model evolution.
**Notation.** Let \(x\in\mathbb{R}^{n}\) be the state of the world including the robot (e.g., robot end-effector position, objects, etc.). Both the human and robot can take actions, \(u_{\text{H}}\in\mathbb{R}^{m}\) and \(u_{\text{R}}\in\mathbb{R}^{m}\) respectively, that affect the next state. Let the deterministic world dynamics be
\[x^{t+1}=f(x^{t},u_{\text{H}}^{t},u_{\text{R}}^{t}). \tag{1}\]
**Human internal model.** We model the human as having an internal parameter vector, \(\theta_{\text{H}}\), which captures a latent aspect of the task that the human is uncertain about but _continuously learns about_. Going back to our motivating example where the human teleoperates a robot, \(\theta_{\text{H}}\) can model the human's current estimate of the robot's physical properties, like its inertia. Or, \(\theta_{\text{H}}\) could model the human's current preferences for teleoperation: they start off wanting to move the robot to one goal, but then change their mind to a new goal after realizing it is easier to reach. Regardless of what \(\theta_{\text{H}}\) represents, it is important to remember that it is _time-varying_ and that it _evolves as a function of what the human observes_.
**Human policy: acting under the internal model.** In our work, we model the human actions as driven by some reward function, \(R_{\text{H}}(x,u_{\text{H}};\theta_{\text{H}})\), which depends on the current state, the human's action, and their internal parameter \(\theta_{\text{H}}\). Following prior works (Boges and Voss, 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), we treat the human as a noisily-optimal actor:
\[\mathbb{P}(u_{\text{H}}\mid x;\theta_{\text{H}})=e^{Q_{\text{H}}(x,u_{\text{H}};\theta_{\text{H}})}\Big{(}\int_{\tilde{u}}e^{Q_{\text{H}}(x,\tilde{u};\theta_{\text{H}})}d\tilde{u}\Big{)}^{-1}, \tag{2}\]
where the optimal state-action value is denoted by \(Q_{\text{H}}(x,u_{\text{H}};\theta_{\text{H}})\) and \(x\) is the current state, \(u_{\text{H}}\) is the human action, and \(\theta_{\text{H}}\) the human's current parameter estimate.
We make two simplifying assumptions in this model. First, the human does not explicitly account for the actions \(u_{\text{R}}\) the robot could take. Instead, the human reacts to the current state \(x\), which _implicitly_ captures the effect of any robot actions that change the state. This models scenarios where the human is doing the task on their own, or where the human is not aware of how the robot is providing guidance. Second, when the human plans their action, we assume that they separate the estimation of \(\theta_{\text{H}}\) from policy generation and they plan with their current estimate.
**Dynamics of human learning: updating the internal model.** As the human acts in the environment, they receive new observations: they may see the next state, including that of the robot's, or experience how much they enjoy something (i.e. observe "reward signal"). This naturally lets the human update their understanding of the robot, physical aspects of the world, or their preferences.
Leveraging our core idea, we model the human's learning process as a nonlinear dynamical system over the human's internal model parameter. Let \(\theta_{\text{H}}^{0}\) be the human's initial internal model, and \(x^{0,t}\) and \(u_{\text{H}}^{0,t}\) be the state and action history until timestep \(t\) and \(x^{t+1}\) be the resulting state at the next timestep, possibly including the influence of robot actions. Given the initial parameter estimate, the state and action history, and next state data, the human evolves their internal model to the next estimate, \(\theta_{\text{H}}^{t+1}\). Let the true dynamics of the human's learning process be:
\[\theta_{\text{H}}^{t+1}=f_{L}(\theta_{\text{H}}^{0},x^{0,t+1},u_{\text{H}}^{0, t}). \tag{3}\]
Here we are faced with the question _"What f?"_ _models how the human learns?"_ Instead of committing to a specific model, here we take a robotics perspective and view this question as an instance of a dynamics learning problem. By looking to human data, we aim to _learn_ an approximate \(f_{L}\) model that is domain-specific.
## 4. Inferring the dynamics of human learning
In this section we focus on inferring the dynamics of human learning by leveraging demonstrations which _naturally_ exhibit human learning: for example, initial trials of a human teleoperating a robot they have never interacted with before. We assume these demonstrations contain only the state and action histories and do not contain ground-truth human internal model data (since this is not possible in practice). However, we do assume that the observed actions are coupled with the human's internal model, allowing us to leverage demonstrations to infer the dynamics of the human's internal model. Given this dataset, we seek to fit a nonlinear model to represent the dynamics of human learning,
\[f_{L}^{\phi}\approx f_{L}, \tag{4}\]
where \(\phi\) are the parameters of the approximate model. In the following sections, we formalize inferring \(f_{L}^{\phi}\) as a maximum likelihood estimation (MLE) problem and propose a tractable approximation.
### Formalizing the Inference Problem
Let \(\mathcal{D}_{demo}\coloneqq\{(\mathbf{x},u_{\text{H}})_{i}\}_{i=0}^{N}\) be a collection of \(N\) demonstrations containing state and human action trajectories of length \(T\) time steps. We want to infer the parameter of the human's learning dynamics, \(\phi\), and the initial human parameter estimate, \(\theta_{\text{H}}^{0}\), which maximizes the likelihood of the observed demonstrations. We formulate this inference via the constrained optimization problem:
\[\max_{\phi,\theta_{\text{H}}^{0}}\quad\sum_{(\mathbf{x},u_{\text{H}})\in\mathcal{D}_{demo}}\sum_{t=0}^{T-1}\log\Big{[}\mathbb{P}(u_{\text{H}}^{t}\mid x^{t};\theta_{\text{H}}^{t})\Big{]}, \tag{5}\] \[\text{s.t.}\quad\theta_{\text{H}}^{t+1}=f_{L}^{\phi}(\theta_{\text{H}}^{0},x^{0,t+1},u_{\text{H}}^{0,t}), \tag{6}\]
where \(\mathbb{P}(u_{\text{H}}^{t}\mid x^{t};\theta_{\text{H}}^{t})\) is the human action likelihood from Equation (2) and the constraint ensures that the human's internal parameter evolves according to the human's learning dynamics model.
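To make the unrolled structure of this objective concrete, below is a minimal illustrative sketch of evaluating the negative log-likelihood for a single demonstration. The callables `learning_dynamics` (standing in for \(f_{L}^{\phi}\)) and `action_log_likelihood` (standing in for \(\log\mathbb{P}(u_{\text{H}}\mid x;\theta_{\text{H}})\) from Equation (2)) are placeholders rather than the actual implementation; the constraint in Equation (6) is enforced by construction, by rolling the internal model forward along the demonstration.

```python
def demo_negative_log_likelihood(demo, theta_0, learning_dynamics, action_log_likelihood):
    """Unrolled MLE objective for one demonstration (sketch of Eqs. (5)-(6)).

    demo: list of (x_t, u_t, x_next) observation tuples.
    theta_0: candidate initial internal-model parameter of the human.
    learning_dynamics: stand-in for f_L^phi, maps (theta_0, state_hist, action_hist) -> theta_{t+1}.
    action_log_likelihood: stand-in for log P(u_H | x; theta_H) from Eq. (2).
    """
    theta_t = theta_0
    state_hist, action_hist = [], []
    nll = 0.0
    for x_t, u_t, x_next in demo:
        # Score the observed human action under the current internal model estimate.
        nll -= action_log_likelihood(u_t, x_t, theta_t)
        # Enforce the constraint: evolve theta with the newly observed transition.
        state_hist += [x_t]
        action_hist += [u_t]
        theta_t = learning_dynamics(theta_0, state_hist + [x_next], action_hist)
    return nll
```

Summing this quantity over all demonstrations and minimizing it over \(\phi\) and \(\theta_{\text{H}}^{0}\) recovers the maximization in Equation (5).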
### Solving the Inference Problem
Unfortunately, the inference problem in Equation (5) is intractable to solve directly for two main reasons. First, recall that the human's internal model \(\theta_{\text{H}}\) of their preferences, dynamics, or goals, changes over time. This means that at each timestep the human is generating data \(u_{\text{H}}\) under a possibly different \(\theta_{\text{H}}\). In other words, the human acts under a _new_ action policy \(\mathbb{P}(u_{\text{H}}^{t}\mid x^{t};\theta_{\text{H}}^{t})\) at each \(t\), requiring us to solve an entirely _new_ reinforcement learning problem to obtain the action policy at each time step along the inference horizon. In the case where \(\theta_{\text{H}}\) is a continuous, high-dimensional parameter (e.g., physical properties of the robot dynamics), this is intractable to compute per-timestep. Second, even if we could obtain the human's policy infinitely fast, our optimization problem still requires searching over the high-dimensional space of \(\phi\) and \(\theta_{\text{H}}\). Gradient-based optimization is a natural choice, but we need to be able to compute the gradient of the MLE objective and, therefore, differentiate through \(Q_{\text{H}}\) with respect to \(\theta_{\text{H}}\).
In the following subsections, we introduce several approximations to arrive at a tractable solution to the inference problem. Our key idea is to use a linear-quadratic (LQ) approximation of the physical dynamics and the human reward. This enables us to derive a closed-form expression of the human policy as a function of \(\theta_{\mathrm{H}}^{t}\) at any time and yields a differentiable inference objective.
#### 4.2.1. Linear-Quadratic approximation
We take inspiration from infinite-horizon linear-quadratic (LQ) control (Kang et al., 2017) and assume that the human's reward is quadratic and their model of the physical dynamics is linear. Let the linear physical dynamics be:
\[x^{t+1}=f(x^{t},u_{\mathrm{H}}^{t},u_{\mathrm{R}}^{t}\equiv 0)\approx Ax^{t} +Bu_{\mathrm{H}}^{t} \tag{7}\]
where \(A\in\mathbb{R}^{n\times n},B\in\mathbb{R}^{n\times m}\) are matrices governing the physical dynamics. Note that in the human's mind, the robot is not exerting any control effort, and hence \(u_{\mathrm{R}}\equiv 0\). Let the human's reward be approximated by a quadratic function:
\[r_{\mathrm{H}}(x,u_{\mathrm{H}};\theta_{\mathrm{H}})\approx-x^{ T}Qx-u_{\mathrm{H}}^{T}Ru_{\mathrm{H}}, \tag{8}\]
where the matrices \(Q\in\mathbb{R}^{n\times n}\) and \(R\in\mathbb{R}^{m\times m}\) trade off the state reward (e.g., how much reward the human gets for reaching a state) and the action reward (e.g., how much effort the human wants to exert), respectively. Note that \(\theta_{\mathrm{H}}\) enters in different ways depending on what the human is learning about. For example, if \(\theta_{\mathrm{H}}\) encodes **reward weights** (i.e., the human's preferences about how to do a task), then \(\theta_{\mathrm{H}}\coloneqq(Q,R)\). If the parameter encodes a human's **goal state**, then \(\theta_{\mathrm{H}}\in\Theta\subset\mathbb{R}^{n}\) and the human's reward function regulates the human towards their desired goal: \(r_{\mathrm{H}}(x,u_{\mathrm{H}};\theta_{\mathrm{H}})\approx-(x-\theta_{\mathrm{H}})^{T}Q(x-\theta_{\mathrm{H}})-u_{\mathrm{H}}^{T}Ru_{\mathrm{H}}\). Finally, if \(\theta_{\mathrm{H}}\) encodes aspects of the **physical dynamics** that the human is estimating, then \(\theta_{\mathrm{H}}\coloneqq(A,B)\) from the dynamics in Equation (7), and governs how the human imagines the physical dynamics evolving.
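For concreteness, a small sketch of how the quadratic reward in Equation (8) can be parameterized by \(\theta_{\mathrm{H}}\) in the reward-weight and goal-state cases is given below; the function name and `kind` argument are our own notational choices, not part of the method itself.

```python
import numpy as np

def human_reward(x, u_H, theta_H, kind, Q=None, R=None):
    """Quadratic human reward (Eq. (8)) under two illustrative theta_H parameterizations."""
    if kind == "weights":          # theta_H := (Q, R): the human's tradeoff preferences
        Q, R = theta_H
        return -(x @ Q @ x) - (u_H @ R @ u_H)
    if kind == "goal":             # theta_H is a desired goal state in R^n
        err = x - theta_H
        return -(err @ Q @ err) - (u_H @ R @ u_H)
    raise ValueError(f"unknown parameterization: {kind}")
```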
#### 4.2.2. Closed-form \(Q_{\mathrm{H}}\)
Recall that the human plans a policy using their current estimate \(\theta_{\mathrm{H}}\); at every step, \(\theta_{\mathrm{H}}\) changes, resulting in a new policy. In general, obtaining the exact \(Q_{\mathrm{H}}\)-value via dynamic programming in continuous state, action, and \(\theta_{\mathrm{H}}\)-spaces is computationally demanding. However, under our infinite-horizon LQ-approximation the human's \(Q_{\mathrm{H}}\)-value is:
\[Q_{\mathrm{H}}(x,u_{\mathrm{H}};\theta_{\mathrm{H}})=r_{\mathrm{H}}(x,u_{ \mathrm{H}};\theta_{\mathrm{H}})-(x^{\prime})^{\top}P_{\theta_{\mathrm{H}}}(x ^{\prime}) \tag{9}\]
where the instantaneous reward is quadratic from Equation (8) and \(x^{\prime}\) is the next physical state as a result of applying \(u_{\mathrm{H}}\) from state \(x\). Note that \(-(x^{\prime})^{\top}P_{\theta_{\mathrm{H}}}(x^{\prime})\) is the infinite-horizon optimal value where \(P_{\theta_{\mathrm{H}}}\) is the well-known positive-definite fixed point of the discrete-time algebraic Riccati equation (DARE) (Bertson, 2010):
\[P=A^{\top}PA-A^{\top}PB(R+B^{\top}PB)^{-1}B^{\top}PA+Q. \tag{10}\]
Obtaining \(P_{\theta_{\mathrm{H}}}\) also yields the optimal human action: \(u_{\mathrm{H}}^{*}(x;\theta_{\mathrm{H}})=-K_{\theta_{\mathrm{H}}}x\) where \(K_{\theta_{\mathrm{H}}}=(R+B^{\top}P_{\theta_{\mathrm{H}}}B)^{-1}B^{\top}P_{ \theta_{\mathrm{H}}}A\). Note that in all of the equations above, \(\theta_{\mathrm{H}}\) enters differently depending on what the human's internal model represents.
#### 4.2.3. Closed-form human policy
In general, obtaining the human policy in Equation (2) is computationally intractable in continuous action spaces due to the integral over \(u_{\mathrm{H}}\). However, plugging in our closed-form \(Q_{\mathrm{H}}\), we see that the exponent is quadratic in \(u\), allowing us to take a Gaussian integral (Kang et al., 2017). Overall, this yields a closed-form human policy (see full derivation in Appendix A.1.):
\[\mathbb{P}(u_{\mathrm{H}}\mid x;\theta_{\mathrm{H}})=|\mathbf{H}|^{1/2}(2\pi)^{-m/2}e^{Q_{\mathrm{H}}(x,u_{\mathrm{H}};\theta_{\mathrm{H}})-Q_{\mathrm{H}}(x,u_{\mathrm{H}}^{*};\theta_{\mathrm{H}})}. \tag{11}\]
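Putting Sections 4.2.2 and 4.2.3 together, a minimal numerical sketch of the closed-form policy follows. It solves the DARE in Equation (10) with SciPy, forms the gain \(K_{\theta_{\mathrm{H}}}\), and evaluates the Gaussian action likelihood of Equation (11). The precision matrix \(\mathbf{H}=2(R+B^{\top}P_{\theta_{\mathrm{H}}}B)\) is our assumption from completing the square in \(u\) (the exact derivation is in Appendix A.1), and the example system and cost matrices are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lq_human_action_loglik(x, u_H, A, B, Q, R):
    """Log P(u_H | x; theta_H) for the LQ noisily-optimal human (sketch of Eqs. (9)-(11))."""
    P = solve_discrete_are(A, B, Q, R)                    # fixed point of the DARE, Eq. (10)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal feedback gain
    u_star = -K @ x                                       # optimal action u_H^*(x; theta_H)

    def Q_H(u):                                           # state-action value, Eq. (9)
        x_next = A @ x + B @ u
        return -(x @ Q @ x) - (u @ R @ u) - (x_next @ P @ x_next)

    H = 2.0 * (R + B.T @ P @ B)                           # assumed precision of the Gaussian policy
    _, logdet_H = np.linalg.slogdet(H)
    m = u_H.shape[0]
    return 0.5 * logdet_H - 0.5 * m * np.log(2.0 * np.pi) + Q_H(u_H) - Q_H(u_star)

# Illustrative two-state, one-input system with unit cost weights.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.5]])
loglik = lq_human_action_loglik(np.array([0.3, 0.0]), np.array([0.1]),
                                A, B, Q=np.eye(2), R=np.eye(1))
```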
#### 4.2.4. Representing the dynamics of human learning
Finally, we are faced with the question of how to functionally represent the dynamics of human learning; for example, we could take inspiration from computational cognitive science and model \(f_{L}^{\phi}\) as Bayesian inference (Kang et al., 2017). Instead of committing to a specific functional form, in this work we seek a model that has the potential to capture a broad range of "learning algorithms" that the human could use to update their internal parameter. Recently, self-attention based transformer models have shown success in high-dimensional sequential prediction tasks (Kang et al., 2018). Inspired by this, we represent \(f_{L}^{\phi}\) as a transformer encoder where \(\phi\) are the weights of the neural network. At each time step \(t\), a collection of the state \(x^{t}\), the human's action \(u_{\mathrm{H}}^{t}\), and the next state \(x^{t+1}\) are fed into an encoder to extract embeddings which are fed into a transformer encoder that predicts the human's next internal model. Training details are in Appendix A.3.
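A minimal PyTorch sketch of such a transformer-based representation of \(f_{L}^{\phi}\) is shown below; the layer sizes and the way tokens are formed from \((x^{t},u_{\mathrm{H}}^{t},x^{t+1})\) tuples are illustrative placeholders on our part, not the exact architecture described in Appendix A.3.

```python
import torch
import torch.nn as nn

class LearningDynamicsModel(nn.Module):
    """Illustrative transformer encoder for f_L^phi (hyperparameters are placeholders)."""

    def __init__(self, state_dim, action_dim, theta_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Each (x_t, u_t, x_{t+1}) observation tuple becomes one token.
        self.obs_embed = nn.Linear(2 * state_dim + action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # The last token's encoding is decoded into the human's next internal model.
        self.head = nn.Linear(d_model, theta_dim)

    def forward(self, states, actions, next_states):
        # states, next_states: (batch, T, state_dim); actions: (batch, T, action_dim)
        tokens = self.obs_embed(torch.cat([states, actions, next_states], dim=-1))
        return self.head(self.encoder(tokens)[:, -1])   # predicted theta_H^{t+1}

# Example: theta_dim=12 could hold a flattened (B, w) estimate for a 3D end-effector model.
model = LearningDynamicsModel(state_dim=3, action_dim=3, theta_dim=12)
theta_next = model(torch.zeros(1, 5, 3), torch.zeros(1, 5, 3), torch.zeros(1, 5, 3))
```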
#### 4.2.5. Deriving an efficient, gradient-based solution
To optimize the transformer-based model of human learning dynamics, we need the gradient of our inference objective with respect to the neural network parameters. Here a key challenge lies in the human's policy gradient because it requires differentiating through the DARE function, which is non-obvious. However, we leverage recent work (Kang et al., 2017) to obtain the relevant closed-form Jacobians, enabling us to efficiently infer the parameters of \(f_{L}^{\phi}\) via gradient-based optimization. More details on this approach are in Appendix A.2.
## 5. Influencing Human Learning with Robot Actions
Inferring how humans learn presents an opportunity for human-robot interaction. For example, when a human teleoperator is mistaken about the robot's inertia, it may take them many interactions to learn and become better. Instead, could the robot _influence_ the human so that their understanding improves faster? Here, we mathematically formalize this influence by embedding the approximate dynamics model of human learning into robot planning.
**Formalizing the Influence Problem.** We formalize the robot influence problem as a Markov Decision Process (MDP) where the human's internal model parameter is part of the state. Our MDP is a tuple \(\langle S,U_{\mathrm{R}},T,r_{\mathrm{R}}\rangle\) where the state \(s=(x,\theta_{\mathrm{H}})\in S\) is the joint physical state and human internal model parameter and the robot's actions are \(u_{\mathrm{R}}\in U_{\mathrm{R}}\). The stochastic state transition function is defined as \(T(s^{t+1}\mid s^{t},u_{\mathrm{R}}^{t})\coloneqq\sum_{u_{\mathrm{H}}}\mathbb{P}(u_{\mathrm{H}}\mid s^{t})\,\mathbb{1}\big[s^{t+1}=\tilde{f}(s^{t},u_{\mathrm{R}}^{t},u_{\mathrm{H}})\big]\), which accounts for the human policy from Equation (2). Importantly, \(\tilde{f}(s^{t},u_{\mathrm{R}}^{t},u_{\mathrm{H}}^{t})\) is a deterministic function that evolves \(x^{t}\) via the physical dynamics \(f\) from Equation (1) and the human's internal model parameter \(\theta_{\mathrm{H}}^{t}\) via the human learning dynamics \(f_{L}^{\phi}\) from Equation (6). Finally, the robot optimizes its reward function \(r_{\mathrm{R}}(s,u_{\mathrm{R}},u_{\mathrm{H}};\theta^{*})\) where \(\theta^{*}\) is the robot's _true_ internal model parameters (e.g., the robot's true physical dynamics). Note that
because \(s=(x,\theta_{\rm H})\), the robot's reward depends on the human's time-varying internal model, \(\theta_{\rm H}\), at each timestep.
The robot seeks an optimal policy \(\pi_{\rm R}^{*}\) which maximizes it's reward in expectation over the human's action sequence, \(\mathbf{u}_{\rm H}\):
\[\pi_{\rm R}^{*}=\arg\max_{\pi_{\rm R}}\mathbb{E}_{\mathbf{u}_{\rm H}}\left[\sum_{t=0}^{\infty}r_{\rm R}(s^{t},u_{\rm R}^{t},u_{\rm H}^{t};\theta^{*})\right]\;\;\text{s.t.}\;\;s^{t+1}\sim T(s^{t+1}\mid s^{t},u_{\rm R}^{t}), \tag{12}\]
Because the human's internal model parameter \(\theta_{\rm H}^{t}\) is part of the state and the state transition function \(T(s^{t+1}\mid s^{t},u_{\rm R}^{t})\) includes the inferred dynamics model of human learning, \(\pi_{\rm R}^{*}\) should automatically influence the human's internal model if it yields higher reward.
**Computing Solutions to the Influence Problem** The presence of the human's nonlinear learning dynamics \(f_{L}^{\phi}\) in the transition function results in a nonconvex optimization problem. To obtain the optimal robot policy, we would have to solve the MDP either exactly with dynamic programming (which suffers from the curse of dimensionality) (Bishop, 2016) or approximately via receding-horizon control (which requires trading off optimality with computational efficiency) (Bishop, 2016). To achieve both long-horizon reasoning and efficient runtime performance, we use a Dyna-style algorithm (Dyna, 2017) that uses the samples generated by the transition \(T(s^{t+1}\mid s^{t},u_{\rm R}^{t})\) to train \(\pi_{\rm R}^{*}\) using model-free learning (Proximal Policy Optimization (Zhu et al., 2017)).
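A sketch of how the inferred dynamics of human learning can be wrapped into an environment whose rollouts train the robot policy is given below. Here `world_dynamics`, `learning_dynamics`, `human_policy`, and `robot_reward` are hypothetical stand-in callables, and the gym-style `reset`/`step` interface is our own simplification rather than the actual implementation.

```python
import numpy as np

class InfluenceMDP:
    """Augmented-state MDP from Eq. (12): s = (x, theta_H); transitions use f and f_L^phi."""

    def __init__(self, world_dynamics, learning_dynamics, human_policy, robot_reward, x0, theta0):
        self.f = world_dynamics           # x' = f(x, u_H, u_R), Eq. (1)
        self.f_L = learning_dynamics      # theta' = f_L^phi(theta0, state_hist, action_hist), Eq. (6)
        self.human_policy = human_policy  # samples u_H ~ P(u_H | x; theta_H), Eq. (2)
        self.r_R = robot_reward           # r_R(s, u_R, u_H; theta*)
        self.x0, self.theta0 = np.asarray(x0), np.asarray(theta0)

    def reset(self):
        self.x, self.theta = self.x0.copy(), self.theta0.copy()
        self.state_hist, self.action_hist = [self.x.copy()], []
        return np.concatenate([self.x, self.theta])

    def step(self, u_R):
        s = np.concatenate([self.x, self.theta])
        u_H = self.human_policy(self.x, self.theta)
        x_next = self.f(self.x, u_H, u_R)
        self.state_hist.append(x_next)
        self.action_hist.append(u_H)
        theta_next = self.f_L(self.theta0, self.state_hist, self.action_hist)
        reward = self.r_R(s, u_R, u_H)
        self.x, self.theta = x_next, theta_next
        return np.concatenate([self.x, self.theta]), reward, False, {}
```

Rollouts collected through `reset()` and `step()` can then be fed to an off-the-shelf PPO implementation to train \(\pi_{\rm R}^{*}\).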
## 6. Simulated Human Experiments
We want to test two aspects of our approach: our ability to infer the dynamics of human learning and the effectiveness of our robot influencing algorithm. To fully validate both, we need access to the ground-truth human learning dynamics (\(f_{L}\)). For this reason, we first perform a series of simulation experiments with simulated humans. We explore two shared autonomy contexts: a robot teaching a human about physics-based robot dynamics (Section 6.1) and a robot that implicitly influences human objectives, like their goal or motion preferences (Section 6.2).
Similar to prior work in shared autonomy (Han et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), the robot combines the human's commanded action, \(u_{\rm H}\), with the robot's planned guidance, \(u_{\rm R}\), and executes the action:
\[u=\alpha\cdot u_{\rm R}+(1-\alpha)\cdot u_{\rm H} \tag{13}\]
where \(\alpha\in[0,1]\) trades off how much guidance the robot can exert. In all experiments, we use \(\alpha=0.5\). To generate human demonstrations and infer the human learning dynamics, we simulate a suite of human learners (see 6.1.1 and 6.2.1). In each experimental environment we collect 50 demonstrations for model learning. We randomize the initial state of the robot for each demonstration, and randomize the robot actions during each interaction.1
Footnote 1: We randomize \(u_{\rm R}\) to diversely cover how the human’s internal model changes.
### Teaching Physical Dynamics
We focus on shared autonomy settings where the human knows the task objective (e.g., control a robot arm to follow a path), but they learn about the true robot dynamics (e.g., inertia). We want to understand how the human learns about the physical robot dynamics, and if a robot that actively _teaches_ the human about its physics can help the human quickly improve their task performance.
#### 6.1.1. Dynamics of human learning
Motivated by computational cognitive science models (Krause et al., 2017), we simulated two types of human learners: **gradient-based** learners and **threshold** learners. All humans update their internal model via Equation (3), but the structure of \(f_{L}\) takes various forms. After observing a new state-action pair \((x^{t},u^{t})\), the **gradient-based** learner updates their parameter \(\theta_{\rm H}^{t}\) according to a gradient-ascent update rule: \(f_{L}^{\rm grad}\coloneqq\theta_{\rm H}^{t}+\eta\nabla_{\theta_{\rm H}}p(u^{t }\mid x^{t};\theta_{\rm H}^{t})\) where \(\eta\in\mathbb{R}_{+}\) is the step size. Note that \(u^{t}\) is the observed, total executed control, possibly combining \(u_{\rm R}\) and \(u_{\rm H}\). Intuitively, this learner can be viewed as doing gradient-based maximum likelihood estimation of their latent parameter, similarly to prior IRL methods (Bishop, 2016). The **threshold** learner also uses a gradient-based learning rule, but only updates their internal parameters if they observe a "large enough" change: \(f_{L}^{\rm thresh}\coloneqq\theta_{\rm H}^{t}+\eta\mathbb{1}_{|\,\nabla p(u^{t }\mid x^{t},\theta_{\rm H}^{t})|>e}\big{[}\nabla_{\theta_{\rm H}}p(u^{t}\mid x ^{t},\theta_{\rm H}^{t})\big{]}\) where \(\mathbb{1}\) is an indicator determining if the magnitude of the gradient is deemed large enough to induce a learning update and \(e\) is a threshold parameter.
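As an illustration, the two simulated update rules can be written compactly as follows; `grad_likelihood` is a hypothetical callable returning \(\nabla_{\theta_{\rm H}}p(u^{t}\mid x^{t};\theta_{\rm H}^{t})\), and the step-size and threshold values are placeholders.

```python
import numpy as np

def gradient_learner_step(theta, x, u, grad_likelihood, eta=0.1):
    """One update of the gradient-based learner: theta + eta * grad_theta p(u | x; theta)."""
    return theta + eta * grad_likelihood(u, x, theta)

def threshold_learner_step(theta, x, u, grad_likelihood, eta=0.1, eps=1e-2):
    """Same update, applied only when the gradient magnitude exceeds the threshold eps."""
    g = grad_likelihood(u, x, theta)
    return theta + eta * g if np.linalg.norm(g) > eps else theta
```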
#### 6.1.2. Human internal model
In all experiments, the simulated humans are learning about the robot's physical dynamics and thus \(\theta_{\rm H}\) encodes various aspects from Equation (7).
#### 6.1.3. Simulated environments
Figure 3 shows our simulated environments, all of which have continuous state and action spaces.
**(1) Lunar Lander.** The human controls the Lunar Lander's engines to change its tilt. The human wants to keep the lander upright during its descent. Let the state be the tilt angle with respect to the ground and the tilt angular velocity, \(x=(\psi,\omega)\), and let \(u\) be the engine force. The dynamics are \(x^{t+1}=Ax^{t}+Bu^{t}\) where the ground-truth dynamics are \(A^{*}=[1,0.2;0,1],B^{*}=[0;0.5]\). Here, the human's internal model represents the control matrix \(\theta_{\rm H}\coloneqq B\), which depends on the human's inertia estimate.
**(2) Robot Arm Teleoperation.** The human controls the end-effector of a 7DOF robot arm via hand gestures (see Figure 3). They want to control the robot to reach a series of known goals, \(x_{g}\). However, one of the robot motors is slightly defective, causing the robot to consistently lag in one direction. Let the state be the robot end-effector position \(x=(p^{x},p^{y},p^{z})\) and the control \(u\) be linear velocity. The robot's end-effector dynamics can be described by the goal-dependent system2: \(x^{t+1}=Ax^{t}+B\big{[}u^{t}-\mathrm{sign}(x^{t}-x_{g})\odot w\big{]}\) where \(w\) is the bias induced by the defective robot motor and \(\odot\) is the Hadamard product. Intuitively, this describes a dynamical system that consistently experiences lag in the \(x\)-direction. The ground-truth dynamics are \(A^{*}=I^{3\times 3}\), \(B^{*}=\mathrm{diag}(0.4,0.4,0.4)\), and \(w^{*}=[-0.15,0,0]^{\top}\). The human's internal model is \(\theta_{\rm H}\coloneqq(B,w)\), which captures their system responsiveness and bias estimates.
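The biased end-effector dynamics described above can be simulated directly; a short sketch with the stated ground-truth values follows (the initial state, command, and goal below are arbitrary examples).

```python
import numpy as np

def arm_step(x, u, x_g, A, B, w):
    """End-effector update with the defective-motor bias: x' = Ax + B(u - sign(x - x_g) * w)."""
    return A @ x + B @ (u - np.sign(x - x_g) * w)

A_true = np.eye(3)
B_true = np.diag([0.4, 0.4, 0.4])
w_true = np.array([-0.15, 0.0, 0.0])
x_next = arm_step(x=np.zeros(3), u=np.array([0.1, 0.0, 0.0]),
                  x_g=np.array([0.3, 0.3, 0.0]), A=A_true, B=B_true, w=w_true)
```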
Figure 2. Our inference problem lets us learn to predict \(\theta_{\rm H}^{t}\).
#### 6.1.4. Human objective
We assume the human always knows the objective, and their reward function is quadratic as in (8). For **Lunar Lander** the human was rewarded for keeping the lander upright and stable (\(\psi=0\), \(\omega=0\)), and for **Robot Arm** they were rewarded for reaching all the goals and tracking the path shown in Figure 3.
#### 6.1.5. Robot objective
The robot objective is to align human's internal model with the true robot dynamics model while minimally intervening. Mathematically, the robot's reward function is:
\[r_{\text{R}}(s,u_{\text{R}},u_{\text{H}};\theta^{s})=-\|\theta_{\text{H}}- \theta^{s}\|_{2}^{2}-\|u-u_{\text{H}}\|_{2}^{2}, \tag{14}\]
where the true dynamics are \(\theta^{s}\coloneqq B^{s}\) in the **Lunar Lander** environment and \(\theta^{s}\coloneqq(B^{s},w^{s})\) in the **Robot Arm** setting.
#### 6.1.6. Baselines
We compare our method where the robot actively teaches by planning with the inferred learning dynamics \(f_{L}^{\phi}\) (**Active Teach**) to a robot that teaches with the true learning dynamics \(f_{L}\) (**Oracle**), no robot intervention (**Passive Learn**), and a robot that randomly perturbs the human actions (**Random**).
#### 6.1.7. Hypotheses
**H1:** _We can learn to predict \(\theta_{\text{H}}^{t}\) well by maximizing the MLE objective._ **H2:** _Active Teach outperforms Passive Learn and Random in aligning the human's internal model._ **H3:** _The robot stops intervening when the human's internal model is well-aligned._
#### 6.1.8. Results
For **H1**, we study the relationship between the MLE objective in (5) and our inferred model's \((f_{L}^{\phi})\) ability to predict \(\theta_{\text{H}}\). Figure 2 shows these curves for both the **Robot Arm** and **Lunar Lander** environments over 50 epochs. We see that across both **gradient** and **threshold** human learners, the log likelihood of the human's actions increases (shown in pink) while the \(\theta_{\text{H}}\) prediction error decreases (shown in blue), supporting **H1**.
Figure 3 shows the human's internal model error, the robot's effort, and the difference between the human's action and the optimal action in the **Robot Arm Teleoperation** and the **Lunar Lander** environment for both types of human learners. We see that across all environments, our method performs comparably to the **Oracle** model, and is able to align the human's internal model of the robot's dynamics with the true dynamics significantly faster than **Passive Learn** or **Random** (supporting **H2**). Interestingly, in all but one setting the robot automatically stops teaching once the human's internal model is sufficiently correct (supporting **H3**). The one exception is in the **Robot Arm Teleoperation** environment with the **threshold** human. Since this human doesn't learn when the gradient is too small, the robot must continue to exert effort to maximize its reward.
### Implicitly Influencing Human Objectives
We now turn to scenarios where the human has an accurate understanding of the robot's dynamics, but their objective (i.e., their reward function \(r_{\text{H}}\)) can be changed by the robot. Specifically, we study how assistive robots can implicitly influence human motion preferences and desired goals. Importantly, in this setting influencing or teaching the human is not explicitly in the robot's objective: the robot simply wants to perform the desired task with minimal assistance. Thus, getting the human to want to reach a goal or change their preferences should be an emergent behavior of robots planning with the dynamics of human learning.
#### 6.2.1. Dynamics of human learning
We simulate3 the **gradient** human learner from 6.1.1 and introduce a new human, the **Bayesian** learner4 which is inspired by probabilistic models of cognition (Groves et al., 2018; Goyal et al., 2019). This human's learning produces a full posterior, \(b^{t+1}(\theta_{\text{H}})\), over the model parameters given a state-action observation, and the dynamics of learning are: \(f_{L}^{\text{Bayes}}\propto P(u^{t}\mid x^{t},\theta_{\text{H}})b^{t}(\theta_{ \text{H}})\).
Footnote 3: While we simulate the human as changing their reward, the human’s reward could instead be viewed as static while their subgoals change. Nonetheless, it will be common for a robot to not fully represent this hierarchy.
Footnote 4: **Bayesian** humans act under their belief \(\mathbb{P}(u_{\text{H}}\mid x)=\sum_{\theta_{\text{H}}}b(\theta_{\text{H}}) \mathbb{P}(u_{\text{H}}\mid x,\theta_{\text{H}})\).
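A minimal sketch of this Bayesian update over a discrete grid of candidate internal models is given below; the discretization and the `action_likelihood` stand-in for \(\mathbb{P}(u\mid x,\theta_{\text{H}})\) are our simplifications.

```python
import numpy as np

def bayesian_learner_step(belief, thetas, x, u, action_likelihood):
    """Posterior update b^{t+1}(theta) proportional to P(u | x, theta) * b^t(theta)."""
    likelihoods = np.array([action_likelihood(u, x, theta) for theta in thetas])
    posterior = likelihoods * belief
    return posterior / posterior.sum()
```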
#### 6.2.2. Human internal model
Since the human's objectives are influenceable, we model \(\theta_{\text{H}}\) as a reward parameter encoding the motion preferences \(\theta_{\text{H}}\coloneqq(Q,R)\) or a desired goal state \(\theta_{\text{H}}\in\Theta\).
#### 6.2.3. Simulated environments
We assume the human knows the physical robot dynamics (the bias-free **RobotArm** dynamics from 6.1.3), but can have their reward influenced by new observations.
**(1) Goal Influence.** The human wants to teleoperate the robot to put an object in one of the three trays (upper left Figure 4). However, the human doesn't notice that only one of the trays is empty enough. Unlike the human, the robot's sensors detect that only one of the trays is empty. We investigate if the robot can influence the human to change their preferences about which tray (i.e., goal location) to place their object in.

Figure 3. (left) Visualization of both simulation environments. (right) Mean and standard deviation of human internal model error, robot effort, and human action optimality for both dynamics teaching environments, and both simulated humans.
**(2) Preference Influence.** The human wants to teleoperate the robot to pick up a cup on the table. Their initial preference is to move the robot's end-effector in a straight line from start to the cup (lower left Figure 4). However, the robot knows that grasps tend to fail with this kind of motion. Instead, the robot knows that first moving directly above the can and then straight down to grasp has a higher chance of success. We investigate if the robot can influence the human to change their preferences about how to reach the cup.
#### 6.2.4. Human objective
In all simulations the human has a quadratic cost function (from (8)). In **Goal Influence** the simulated human receives reward for moving the robot end-effector to their desired tray, and in **Preference Influence** the human receives reward according to their current preference matrices, \((Q,R)\).
#### 6.2.5. Robot objective
We implement an _assistive robot_ that wants to help the human perform the task while minimally intervening. However, we assume that the robot knows best: the robot knows which goal or reward weights lead to success. Let \(\theta^{*}\) capture this aspect of the robot's reward. In the **Goal** setting the robot's reward \(r_{\text{R}}(s,u_{\text{R}},u_{\text{H}};\theta^{*})=-(x-\theta^{*})^{\top}Q( x-\theta^{*})-u^{\top}Ru-||u-u_{\text{H}}||_{2}^{2}\) and in **Preference** the robot's reward parameter is \(\theta^{*}=(Q^{*},R^{*})\), yielding \(r_{\text{R}}(s,u_{\text{R}},u_{\text{H}};\theta^{*})=-x^{\top}Q^{*}x-u^{\top} R^{*}u-||u-u_{\text{H}}||_{2}^{2}\) where \(u\) is the combined human and robot action from Equation (13).
#### 6.2.6. Baselines
We implement our method where the robot assists the human and plans with the inferred dynamics of human learning (**Learning Assist**). We compare to a robot assisting with the ground-truth dynamics of human learning (**Oracle**), robot assistance that is unaware that humans learn (**Static Assist**), and a robot that randomly perturbs the human actions (**Random**).
#### 6.2.7. Hypotheses
**H4:** **Learning Assist** aligns the human's mental model faster. **H5:** Assistance that accounts for human learning enables the human-robot team to achieve higher reward under the true \(\theta^{*}\).
#### 6.2.8. Results
Figure 4 shows the human's internal model error, robot effort, and task cost (i.e., just the task-component of \(r_{\text{R}}\), negated) for both environments. Because the **Learning Assist** robot knows that the human's internal model can be changed, it _automatically_ exerts higher effort early on to align the human's internal model with its own, resulting in less long-term assistance and lower task cost (supporting **H4** and **H5**). In contrast, the **Static Assist** robot is not aware that the human can change their mind, and thus does not exert enough effort to influence the human's internal model. After repeatedly incurring task cost because the two agents are at odds with each other, the **Static Assist** robot "gives up" and starts executing the human's control directly: in other words, \(u=u_{\text{H}}\).
## 7. User Study: Teaching to Teleoperate
So far we conducted experiments with simulated human behavior, allowing us to analyze the quality of our inferred human learning dynamics model, and the robot's ability to influence simulated humans. Here we investigate if we can infer the dynamics of _real_ human learning, and enable robots to influence real users.
We focus on scenarios where the robot's physical dynamics are different from what the human is used to; for example, perhaps the human was used to teleoperating a robotic wheelchair, but is now teleoperating a robotic arm. As they interact with the robotic arm, they will naturally learn about the new robot dynamics. In our IRB-approved user study, we investigate if a robot can actively teach a human the physical dynamics and improve their teleoperation performance faster than if the human does the task on their own. In other words, we aim to understand if a robot can _align_ the human's internal model with the robot's.
**Experimental Setup.** We designed a teleoperation task where the human controls a 7DOF Jaco robot arm through a webcam-based gesture interface (Figure 1). The participant uses their index finger to indicate how the end-effector should move parallel to the tabletop. The task is to move the end-effector to reach four goals on the table in a counter-clockwise pattern, tracing out a diamond pattern. All participants experience a familiarization task where they perform the task unassisted with the default robot dynamics, in order to understand the gesture interface. In software, we then simulate two "new" robots, each with different physical properties.
**Independent Variables.** We manipulated the _robot strategy_ with two levels: _no-teaching_ and _active-teaching_. The robot either let the human do the task on their own, or it modified the human's input to teach them about the physical robot dynamics via Equation (12). We also manipulate the _robot physical dynamics_ with two levels: end-effector dynamics _bias in x-direction_ and _bias in y-direction_.
**Dependent Measures.** A challenge in evaluating our experiment is that we do not have access to the human's ground-truth internal model. As a proxy, we measure _human action optimality distance_: \(||\hat{u}_{H}-u^{*}||_{2}^{2}\). Intuitively, the better the human understands the robot, the more optimally they should be able to control it to reach the goals. Since we cannot directly measure a human's internal understanding, we instead look at their actions to measure their deviation from the optimal action under the robot's true physics. We also measured subjective measures via a Likert scale survey.
**Hypotheses.****H6:**_Participants in the active teaching condition become optimal teleoperators faster than passively learning on their own._
Figure 4. (left) Environments for influencing human objectives. (right) Internal model error, robot effort, and task cost.
**H7**: _Participants feel they learned to teleoperate faster and understood the robot dynamics better in the active teaching condition._
**Participants.** We recruited two groups of participants from the campus community: the first for providing data for inferring the dynamics of human learning (12 participants; 2 female, 10 male, age 18-34, all with technical backgrounds), and the second for the user study (10 participants; 1 female, 8 male, 1 non-binary, age 18-34, all with technical backgrounds). For inferring the human learning dynamics, all participants learned to teleoperate the robot unassisted and we counterbalanced the _robot physical dynamics_.
**Procedure.** A within-subjects design is challenging, since humans who experience one condition will learn about the robots and then carry over that experience to the next condition. To study the effect of this confound, each participant experienced a combination of _robot strategy_ and _physical dynamics_ conditions, but in a random order. For example, one group of participants would interact with the _(active-teaching, bias-x)_ condition and then _(no-teaching, bias-y)_ condition. Thus, each participant experiences both robot strategies and biases. We counterbalance the order in which the participants experience the combination. All participants experienced a familiarization round at the start and between each experimental condition, to "reset" their mental model of the robot. Each participant gave 3 demonstrations per condition, each lasting \(\sim\)1 minute.
**Quantitative Results.** Figure 5 shows how _human action optimality distance_ varies over time with each robot strategy. We conducted an ANOVA with _robot strategy_ and stage (first or second half of interaction) as factors and _robot physical dynamics_ as random effect. We found a significant main effect of the robot strategy (\(F(1,19)=12.943,p=0.001\)) and a marginal interaction effect between the robot strategy and the interaction stage (\(p=0.098\)), so we did not run a post-hoc analysis. However, we hypothesize that this marginal interaction effect comes from the fact that early-stage changes in robot behavior (induced by either robot strategy) influences the human's later-stage action optimality. Ultimately, the quantitative results indicate a significant improvement in the human's action optimality when the robot actively teaches them compared to when the human passively learns (supporting **H6**).
**Qualitative & Subjective Results.** On the right of Figure 5 we visualize the executed trajectories from all participants in the active-teaching (orange) and no-teaching (grey) conditions. The highlighted trajectories are two representative examples, the color gradient indicates time along the trajectory, and the dashed line is the desired path. When participants passively learn on their own, their trajectories are consistently suboptimal, weaving around the optimal path. In contrast, in the active teaching condition, the initial portion of the trajectory exhibits the robot's teaching behavior: the robot intentionally _exaggerates_ the dynamics bias to change the human's internal model faster. After this initial exaggerated deviation, the human trajectory is closer to optimal compared to the passive learning trajectory at comparable timesteps (see Appendix A.4 for a detailed visualization of human and robot actions).
We also ran an ANOVA on the Likert survey questions. Survey questions investigated perceived performance improvements (e.g., "By the end of the interaction, it was easy to control the robot to do the task.") and robot understanding (e.g., "By the end of the interaction, I understood the robot's physical properties."). Across all questions, we did not find a significant effect of the robot strategy (rejecting **H7**). What we found surprising was that even though participants were _quantitatively_ performing better in the teaching condition, they did not _perceive_ an improvement in performance (\(p=0.689\)) nor in their understanding of the robot physics (\(p=0.299\)). We hypothesize that this could be because participants only interacted with each robot strategy for one minute, making the differences hard to notice. In the future, investigating longer-term interactions with the robot would shed light on the disconnect.
## 8. Conclusion
In this work we took a step towards enabling robots to understand the influence that they have over human internal models. We do this by modeling human learning as a nonlinear dynamical system that evolves as a function of new observations that the robot can influence. We propose a tractable method for inferring approximate human learning dynamics from demonstrations that naturally exhibit human learning, and propose how robots can influence human learning by embedding the approximate dynamics into robot planning. Our experimental results indicate that robot influence is possible and can help humans learn better internal models.
**Limitations & Future Work.** A strength and limitation of our approach is representing the dynamics of human learning via a transformer. As a general function approximator, it poses no assumptions on the structure of the human's learning dynamics; in fact, we are excited that our results indicate that it is possible to infer a useful model of human learning from real data, without prior assumptions. However, since neural networks require abundant human data, they are not appropriate for low-data settings and may fail when encountering humans that are out of distribution. A further limitation is that if the person is not noisily-optimal as in (2) and has a specific bias (e.g., myopia), then the transformer will learn parameters that compensate for this; in turn, this could lead the robot to influence the human in unintended ways. In the future we are excited to combine the strengths of data-driven models and cognitive science models of human learning. While our user study relies on an "average" dynamics model of human learning trained
Figure 5. (left) Avg. human action optimality distance and 95% confidence interval. (right) Dashed line is desired path. Participant trajectories reveal that an active teaching robot initially _exaggerates_ the dynamics bias to teach the human.
from all participants' data, humans may exhibit unique ways of learning. Inferring personalized learning dynamics is an exciting future direction, and pre-trained models of humans could serve as a useful starting point for adapting to new humans. Finally, while the LQ approximation enables tractable inference, extensions into non-LQ settings will unlock more settings (e.g., autonomous cars).
|
2302.06820 | Quark clusters, QCD vacuum and the cosmological 7Li, Dark Matter and
Dark Energy problems | We propose a non-exotic electromagnetic solution (within the standard model
of particle physics) to the cosmological 7Li problem based upon a narrow 2 MeV
photo-emission line from the decay of light Glueballs (LGBs). These LGBs form
within color superconducting, tens of Fermi in size, quark clusters (SQCs) in
the radiation-dominated post-BBN epoch. The mono-chromatic line from the LGB ->
gamma+gamma decay reduces Big-Bang nucleosynthesis (BBN) 7Be by 2/3 without
affecting other abundances or CMB physics, provided the combined mass of the
SQCs is greater than the total baryonic mass in the Universe. Following the LGB
emission, the in-SQC Quantum-ChromoDynamics (QCD) vacuum becomes unstable and
"leaks" (via quantum tunnelling) into the external space-time (trivial) vacuum
inducing a decoupling of SQCs from hadrons. In seeking a solution to the 7Li
problem, we uncovered a solution which also addresses the dark energy (DE) and
dark matter (DM) problem making these critical problems intertwined in our
model. Being colorless, charge neutral, optically thin and transparent to
hadrons, SQCs interact only gravitationally making them a viable CDM candidate.
The quantum tunnelling of the in-SQC QCD vacuum to the trivial vacuum offers an
explanation of DE in our model and allows for a cosmology which evolves into a
LambdaCDM universe at low redshift with a possible resolution of the Hubble
tension. Our model distinguishes itself by proposing that the QCD vacuum within
SQCs possesses the ability to tunnel into the exterior trivial vacuum,
resulting in the generation of DE. This implies the possibility that DM and
hadrons might represent distinct phases of quark matter within QCD,
characterized by different vacuum properties. We discuss SQC formation in
heavy-ion collision experiments at moderate temperatures and the possibility of
detection of MeV photons from LGB -> gamma+gamma. | Rachid Ouyed, Denis Leahy, Nico Koning, Prashanth Jaikumar | 2023-02-14T04:22:03Z | http://arxiv.org/abs/2302.06820v3 | # Quark nuggets, QCD vacuum and the cosmological \({}^{7}\)Li, Dark Matter and Dark Energy problems
###### Abstract
We propose a non-exotic electromagnetic solution to the cosmological \({}^{7}\)Li problem based upon a 2 MeV photoemission line from color superconducting quark nuggets (CSCQN) that destroys Big-Bang nucleosynthesis (BBN) \({}^{7}\)Be without affecting other abundances or Cosmic Microwave Background (CMB) physics. Conversion of CSCQN gluonic (rest-mass) energy to 2 MeV photons in the radiation-dominated post-BBN epoch reduces the primordial abundance of \({}^{7}\)Be (and thus cosmological \({}^{7}\)Li) by 2/3, provided the combined mass of the nuggets is greater than the total baryonic mass in the Universe. CSCQNs in our model are colorless, charge neutral, optically thin and decouple from the strong interaction (i.e. have minimal interaction with baryons and stars) making them a viable cold dark matter (CDM) candidate. The drainage (i.e. quantum tunnelling) of CSCQN Quantum-ChromoDynamics (QCD) vacuum to the external space-time (trivial QCD) vacuum offers a natural explanation of dark energy in our model and allows for a cosmology which evolves into a \(\Lambda\)CDM universe at low redshift with a possible resolution of the Hubble tension. The connection between CDM and DE in our model supports the notion that in-hadron confinement contains QCD condensates and disagrees with the conventional view of (space-filling) QCD condensates.
## 1 Introduction
The primordial abundances of the light elements produced in the first few minutes of the universe predicted by standard hot Big-Bang cosmology (Hoyle & Tayler, 1964; Peebles, 1966; Wagoner et al., 1967) are in excellent agreement with the abundances inferred from data (e.g. Tytler et al., 2000). BBN starts when the deuteron (D) bottleneck is overcome at1\(k_{\rm B}T\sim 100\) keV and terminates at \(k_{\rm B}T\sim 30\) keV (redshift \(z\sim 4\times 10^{8}\)) due to electrostatic repulsion between nuclei (e.g. Lang, 1999); \(k_{\rm B}\) is the Boltzmann constant. Subsequently, significant amounts of D, \({}^{3}\)H and \({}^{4}\)He build up followed by the production of much less-abundant elements such as \({}^{7}\)Be. With a half-life of \(\sim 53\) days, \({}^{7}\)Be decays into \({}^{7}\)Li via bound electron capture, with emission of a neutrino (e.g. Khatri & Sunyaev, 2011). This cannot occur, however, until recombination at \(z\sim 1100\) when \({}^{7}\)Be becomes singly ionized. The measured \({}^{7}\)Li abundance is \(\sim 1/3\) of what is expected from this process (Spite & Spite, 1982) defining the cosmological \({}^{7}\)Li problem (see Fields, 2011 for a review).
Footnote 1: Dimensionless quantities are defined as \(f_{x}=f/10^{x}\) with quantities in cgs units unless specified otherwise.
The standard theory of electromagnetic cascades onto a photon background predicts a quasi-universal shape for the resulting non-thermal photon spectrum (e.g. Berezinsky et al., 1990). In the case of non-thermal big bang nucleosynthesis (BBN), cosmological constraints using this quasi-universal shape make purely electromagnetic solutions to the \({}^{7}\)Li problem impossible unless injected photon energy falls below the pair-production threshold in which case the spectral shape is very different (see Poulin & Serpico, 2015, and references therein). The effective pair production threshold is \(E_{\gamma,{\rm pair}}\simeq(m_{e}c^{2})^{2}/22k_{\rm B}T\sim 12~{}{\rm MeV} \times({\rm keV}/T)\) below which the double photon pair-creation process receives a Boltzmann suppression (Kawasaki & Moroi, 1995; see also Protheroe et al., 1995); \(m_{\rm e}\) is the electron mass. If injected when BBN is over, sub-threshold photons act to post-process the abundances computed in the standard scenario. Photons injected with \(1.59~{}{\rm MeV}<E_{\gamma}<2.22\) MeV can destroy \({}^{7}\)Be (suppressing \({}^{7}\)Li production) without affecting other BBN abundances, distorting the CMB background or injecting excess entropy (Poulin & Serpico, 2015).
In this paper, we propose a model for sub-threshold photon injection that relies on the properties of color su
perconducting (CSC) nuggets of quark matter that are tens of Fermis (fm) in size. These quark nuggets (QNs)2 form in the early universe and enter the CSC phase (i.e. become CSCQNs) in the radiation-dominated post-BBN epoch when the universe's temperature is in the keV range. Through conversion of gluonic energy, via condensation, into electromagnetically unstable light glueballs (LGBs), they emit a narrow mono-chromatic \(\sim 2\) MeV line capable of reducing post-BBN \({}^{7}\)Be by 2/3 and thus solving the \({}^{7}\)Li problem. The total mass of CSCQNs required to solve the \({}^{7}\)Li problem exceeds that of baryons in the Universe.
Footnote 2: The word “nugget” here refers to quark-gluon plasma blobs. Their binding energy compared to free baryons is about \(\sim 10\%\); the binding energy per baryon is in the tens of MeVs.
The properties of CSCQNs (colorless, charge-neutral, optically thin and gradually decoupling from the strong interaction) make them a viable CDM candidate. The weakening of the CSCQN-hadron interaction is a consequence of the loss, through quantum tunnelling, of the CSCQN Quantum-ChromoDynamics (QCD) vacuum into the external (trivial QCD) vacuum\({}^{3}\) of spacetime, which hereafter we label "drainage". We hypothesize that the loss of an important fraction of the gluonic content of the nuggets due to LGB decay may destabilize the in-CSCQN QCD condensates and trigger the drainage. The leaked QCD vacuum behaves as Dark Energy (DE) in the exterior space-time in our model and yields a two-epoch (pre- and post-drainage) cosmology which behaves as a \(\Lambda\)CDM universe at low redshift.
Footnote 3: Meaning a vacuum with vanishing QCD (quark and gluon) condensates.
The paper is organized as follows: In §2, we derive the equations for the destruction of \({}^{7}\)Be by a 2 MeV monochromatic line. We show how such a line could be produced by a lukewarm 2-flavor superconducting (2SC-like) quark phase where a percentage (\(\eta_{\rm G}\)) of gluonic energy is converted to 2 MeV photons. The drainage of the CSCQN QCD condensates (vacuum) is analyzed in §3 where we present the resulting cosmology with a plausible resolution of the Hubble tension. We discuss CSCQNs as a CDM candidate in §4 and list our model's limitations and predictions in §5. We conclude in §6.
## 2 Post-Bbn \({}^{7}\)Be Destruction
In our model, the CSC phase produces gluon condensation (i.e. \(M_{\rm LGB}c^{2}\sim 4\) MeV mass LGBs) which decay to \(E_{0}=M_{\rm LGB}c^{2}/2\sim 2\) MeV mono-energetic photons via the \({\rm LGB}\to\gamma+\gamma\) channel. The LGBs form on a hadronic timescale when the quark nugget enters the CSC phase and they decay to photons on timescales \(\tau_{\rm LGB}\sim 10^{-17}\) s, which is instantaneous compared to the Hubble expansion timescale (see §2.1). This occurs in the radiation-dominated post-BBN era at time \(t_{\rm G}\) when the temperature is \(T_{\rm G}\). Hereafter quantities indexed with the letter "G" refer to values associated with the LGB-decay/photon-burst event; the corresponding redshift is \(z_{\rm G}=z(t_{\rm G})\). Subscripts "sG" and "eG" refer to the start and end, respectively, of this phase with \(t_{\rm eG}-t_{\rm sG}\sim\tau_{\rm LGB}\) such that \(z_{\rm sG}=z_{\rm eG}+\delta z\) with \(\delta z<<z_{\rm G}\); effectively, \(z(t_{\rm sG})=z(t_{\rm eG})=z(t_{\rm G})\).
The photon production is a delta function, \(\delta(t-t_{\rm G})\), in our model at time \(t=t_{\rm G}\). As shown in Appendix A, the destruction rates for \({}^{7}\)Be nuclei due to such a sudden release of mono-energetic \(E_{0}\) photons is
\[\ln\left(\frac{Y_{\rm Be,eG}}{Y_{\rm Be,sG}}\right)\sim-\frac{n_{\gamma}(E_{0},t_{\rm G})}{n_{\rm B}(t_{\rm G})}\times\frac{\sigma_{\rm Be}(E_{0})}{\sigma_ {\rm CS}(E_{0})}\, \tag{1}\]
where \(Y\) is the abundance. Here, \(n_{\rm B}(t_{\rm G})\) is the universe's co-moving baryon number density at \(t_{\rm G}\) while \(n_{\gamma}(E_{0},t_{\rm G})\) is the co-moving number density of photons from the LGB decay. The \({}^{7}\)Be photo-dissociation cross-section is \(\sigma_{\rm Be}(E_{0})\) and \(\sigma_{\rm CS}(E_{0})\) the Compton scattering cross-section. A reduction of \({}^{7}\)Be by 2/3 imposes that the RHS in Eq. (1) be unity.
Let us define \(\eta_{\rm DM,sG}\) as the initial total amount in mass of CSCQNs (the DM in our model) compared to baryons; i.e. before LGB decay. The corresponding CSCQN co-moving number density is
\[n_{\rm cscqn}(t_{\rm G})=\frac{\eta_{\rm DM,sG}n_{\rm B}(t_{\rm G})}{A_{\rm cscqn }}\, \tag{2}\]
with \(A_{\rm cscqn}\) the nugget's baryon number. The emitted photons co-moving number density is then
\[n_{\gamma}(E_{0},t_{\rm G})=\frac{\eta_{\rm DM,sG}n_{\rm B}(t_{\rm G})}{A_{\rm cscqn }}\times N_{E_{0}}\, \tag{3}\]
with \(N_{E_{0}}\) the total number of \(E_{0}\) photons emitted per CSCQN; \(N_{E_{0}}\sim\eta_{\rm G}A_{\rm cscqn}m_{\rm p}c^{2}/E_{0}\). Here, \(\eta_{\rm G}\) is the fraction of the gluonic energy (i.e. of the CSCQN rest-mass energy) converted to the mono-chromatic line at energy \(E_{0}\); \(m_{\rm p}\) is the proton mass.
Eq. (1) becomes
\[\ln\left(\frac{Y_{\rm Be,eG}}{Y_{\rm Be,sG}}\right) \sim-\eta_{\rm DM,sG}\frac{\eta_{\rm G}m_{\rm p}c^{2}}{E_{0}}\times \frac{\sigma_{\rm Be}(E_{0})}{\sigma_{\rm CS}(E_{0})} \tag{4}\] \[\sim-\frac{\eta_{\rm G}}{1-\eta_{\rm G}}\times\frac{\eta_{\rm DM, eG}m_{\rm p}c^{2}}{E_{0}}\times\frac{\sigma_{\rm Be}(E_{0})}{\sigma_{\rm CS}(E_{0})}\,\]
where, in order to get the last expression, we made use of the fact that the CSCQN total mass after conversion of gluonic energy to photons is \(\eta_{\rm DM,eG}=(1-\eta_{\rm G})\eta_{\rm DM,sG}\) or \(\eta_{\rm DM,sG}=\eta_{\rm DM,eG}/(1-\eta_{\rm G})\).
The \({}^{7}\)Be photo-dissociation cross-section, \(\sigma_{\rm Be}(E_{0})\), is given by Eq. (III.8) in Ishida et al. (2014). The ratio of \(\sigma_{\rm Be}(E_{0})\) to Compton scattering cross-section, \(\sigma_{\rm Be}(E_{0})/\sigma_{\rm CS}(E_{0})\), varies widely as shown in Figure 1. We see that for 2 MeV \(<E_{0}<2.2\) MeV on average \(\sigma_{\rm Be}(E_{0})/\sigma_{\rm CS}(E_{0})\sim 5\times 10^{-4}\). If the CSCQN total mass after conversion of gluonic energy to photons is the observed CDM amount, \(\eta_{\rm DM,eG}\sim 5\), then \(5m_{\rm p}c^{2}/E_{0}\sim 2.5\times 10^{3}\). In this case, the \({}^{7}\)Li problem is solved if
\[\frac{\eta_{\rm G}}{1-\eta_{\rm G}}\sim 1\, \tag{5}\]
which requires that \(\sim 50\%\) (i.e. \(\eta_{\rm G}\sim 0.5\)) of the CSCQNs gluonic energy is converted to \(\sim 2\) MeV photons.
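As a quick numeric sanity check of Eqs. (4)-(5), the snippet below plugs in the representative numbers quoted above (\(\sigma_{\rm Be}/\sigma_{\rm CS}\sim 5\times 10^{-4}\) averaged over 2-2.2 MeV, \(E_{0}=2\) MeV, \(\eta_{\rm DM,eG}\sim 5\)); this is an order-of-magnitude sketch only, not a replacement for the derivation in Appendix A.

```python
import numpy as np

m_p_c2 = 938.272       # proton rest-mass energy [MeV]
E0 = 2.0               # photon line energy [MeV]
sigma_ratio = 5e-4     # sigma_Be(E0)/sigma_CS(E0), average over 2-2.2 MeV (Figure 1)
eta_DM_eG = 5.0        # DM-to-baryon mass ratio after the LGB decay

def log_depletion(eta_G):
    # Right-hand side of Eq. (4): ln(Y_Be,eG / Y_Be,sG).
    return -(eta_G / (1.0 - eta_G)) * eta_DM_eG * (m_p_c2 / E0) * sigma_ratio

for eta_G in (0.3, 0.5, 0.6):
    print(f"eta_G = {eta_G:.1f} : Y_Be,eG / Y_Be,sG = {np.exp(log_depletion(eta_G)):.2f}")
# eta_G ~ 0.5 leaves roughly 1/3 of the 7Be, i.e. the required ~2/3 destruction.
```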
As explained later in §5, the value of \(\eta_{\rm G}\) can be lower (less gluonic energy shed by CSCQNs) while still solving the \({}^{7}\)Li puzzle. This is because in addition to losing gluonic energy via LGB decay, CSCQNs will lose their QCD condensates so that if \(\eta_{\rm DM,0}\sim 5\) in today's universe\({}^{4}\) then \(\eta_{\rm DM,eG}\) must have been higher due to drainage (see first bullet point in §5).
Footnote 4: The subscript “0” refers to values at redshift \(z=0\).
### The mono-chromatic \(\sim 2\) MeV line
The QCD phase diagram is complex (e.g. Ruster et al., 2004; Alford et al., 2008; Baym et al., 2018) and in principle one cannot exclude the existence of a CSC phase where gluon condensation would yield electromagnetically unstable \(X\) "particles" (here LGBs), as required in our model. Figure 2 shows a hypothetical phase diagram with the dashed line depicting a possible trajectory leading a quark nugget from its state at birth (in an unpaired phase) to the CSC phase. We require an unpaired phase at \(\mu<\mu_{\rm csc}\) which bridges the hadronic phase and the CSC phase. I.e. the CSC phase is accessed, at low-\(T\), from low density unpaired phase with a first-order line separating the two phases. Once in the CSC phase, a nugget becomes a CSCQN and produces LGBs in the radiation-dominated post-BBN epoch at keV temperatures.
In this paper, we will use the neutral 2SC phase as a reference CSC phase. It has the interesting property of converting a percentage (\(\eta_{\rm G}=3/8\)) of its gluonic energy to LGBs (i.e. gluonic condensation) at low temperature with a subsequent decay to a mono-chromatic line via the LGB \(\rightarrow\gamma+\gamma\) channel (see Appendix B for details). In the 2SC phase, an LGB with mass \(M_{\rm LGB}c^{2}\sim 4\) MeV would decay to two \(E_{0}=(M_{\rm LGB}c^{2}/2)\sim 2\) MeV photons on timescales of \(\tau_{\rm LGB}\sim 10^{-17}\) s. We set the CSC chemical potential at \(\mu_{\rm csc}=500\) MeV with a corresponding number density \(n_{\rm csc}=\mu_{\rm csc}^{3}/\pi^{2}\sim 10^{39}\) cm\({}^{-3}\). Thus a CSCQN has a density\({}^{5}\) \(n_{\rm cscqn}=n_{\rm csc}\) of about ten times nuclear saturation density when it enters the CSC phase at time \(t_{\rm G}\).
Footnote 5: Not to be confused with the CSCQN co-moving density given in Eq. (2).
Once the nugget crosses into the CSC phase, the first order transition proceeds on hadronic timescales. During the transition, and because of latent heat released, a CSCQN gets heated to a temperature \(k_{\rm B}T_{\rm cscqn}\sim\Delta_{\rm csc}^{2}/\mu_{\rm csc}\) where \(\Delta_{\rm csc}\) is the CSC superconducting gap; here, \(\Delta_{\rm csc}\) and \(\mu_{\rm csc}\) are in units of MeV (see Eq. B7). The resulting LGB energy is \(E_{\rm LGB}=\sqrt{(M_{\rm LGB}c^{2})^{2}+(k_{\rm B}T_{\rm cscqn})^{2}}\). LGBs cannot form at temperatures exceeding the melting temperature which is of the order of the LGB's rest-mass energy; \(k_{\rm B}T_{\rm LGB,m}\sim M_{\rm LGB}c^{2}\). Thus we must ensure that \(T_{\rm cscqn}<T_{\rm LGB,m}\) in addition to \(E_{0}=E_{\rm LGB}/2<2.2\) MeV where the 2.2 MeV upper limit keeps the line below the Deuteron photo-ionization threshold (see §1). In general for \(\Delta_{\rm csc}^{2}/\mu_{\rm csc}<2\) MeV, or \(\Delta_{\rm csc}<31.6\) MeV \(\times(\mu_{\rm csc}/500\) MeV\()^{1/2}\), these conditions are satisfied while guaranteeing a narrow \(\sim 2\) MeV line. \(E_{\rm LGB}\simeq M_{\rm LGB}c^{2}\sim 4\) MeV is achieved for a reasonable range in \(\mu\) as shown in Figure 3 (see Appendix B for details).
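A small numeric check of these two conditions, assuming the fiducial \(\mu_{\rm csc}=500\) MeV and a few trial gap values (the trial values are illustrative choices, not derived from the phase-diagram analysis of Appendix B):

```python
import numpy as np

def lgb_line(M_lgb_c2, mu_csc, Delta_csc):
    # Latent-heat temperature k_B*T ~ Delta^2/mu and line energy E0 = E_LGB/2 (all in MeV).
    kT = Delta_csc**2 / mu_csc
    E0 = np.sqrt(M_lgb_c2**2 + kT**2) / 2.0
    return kT, E0

M_lgb_c2, mu_csc = 4.0, 500.0            # LGB rest-mass energy and chemical potential [MeV]
for Delta in (10.0, 20.0, 30.0):
    kT, E0 = lgb_line(M_lgb_c2, mu_csc, Delta)
    ok = (kT < M_lgb_c2) and (2.0 <= E0 < 2.2)   # below LGB melting and below the D threshold
    print(f"Delta = {Delta:4.1f} MeV : kT = {kT:4.2f} MeV, E0 = {E0:5.3f} MeV, allowed = {ok}")
```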
### The CSCQN size and baryon number
The CSCQN has to be optically thin to the 2 MeV photons. To first order, the photon mean-free-path is \(\lambda_{\rm csc,\gamma}=1/(n_{\rm csc}\sigma_{\rm q\gamma})\sim 10^{2}\) fm/\(n_{\rm csc,39}\) with \(\sigma_{\rm q\gamma}\sim 10^{-28}\) cm\({}^{2}\) the Thomson cross-section for photon scattering off up and down quarks. We set a typical CSCQN
Figure 1: The \(\sigma_{\rm Be}/\sigma_{\rm CS}\) ratio versus photon energy. The \({}^{7}\)Be photo-dissociation cross-section \(\sigma_{\rm Be}\) is from equation III.8 in Ishida et al. (2014). The Compton scattering cross-section is given in Appendix (IV) in Kawasaki & Moroi (1995).
radius to be \(R_{\rm cscqn,thin}\sim\lambda_{\rm csc,\gamma}\) with a corresponding baryon number \(A_{\rm cscqn}\sim 10^{6}\).
## 3 CSCQNs and Dark Energy (De)
It is not unreasonable to conjecture that the loss of a substantial fraction (\(\eta_{\rm G}\)) of gluonic energy by the CSCQN could modify the potential of the QCD condensates within the CSCQN and trigger drainage of the CSCQN vacuum into the exterior (trivial QCD) vacuum. To investigate the implications of drainage, we adopt the scenario that the ground-state which fills up space-time is empty and that QCD condensates are supported within the CSCQNs as suggested for hadrons (e.g. Brodsky et al. 2012, and references therein). The in-hadron condensate picture has the advantage of removing the extremely large (a factor of \(\sim 10^{46}\)) contribution to the cosmological constant predicted when using the conventional view of (space-filling) QCD condensates. The novel idea introduced in this paper, namely the leakage of the CSCQN QCD condensates into the surrounding space-time vacuum, could offer an explanation of Dark Energy (DE) and yield a cosmology which may help solve the Hubble tension while reducing to a \(\Lambda\)CDM universe at low redshifts.
The total amount of QCD vacuum energy stored in the nuggets is \(\rho_{\rm QCD}^{\rm vac}V_{\rm cscqn}^{\rm tot.}\) with \(\rho_{\rm QCD}^{\rm vac}\) the density of the QCD condensates and \(V_{\rm cscqn}^{\rm tot.}\) the total volume occupied by the CSCQNs which is constant in time. Drainage means that the CSCQN density decreases from
\[\rho_{\rm cscqn,eG}=(1-\eta_{\rm G})\rho_{\rm csc}\, \tag{6}\]
at \(t=t_{\rm G}\) to
\[\rho_{\rm cscqn,0}=(1-\eta_{\rm G})\rho_{\rm csc}-\rho_{\rm QCD}^{\rm vac.}= \rho_{\rm cscqn,eG}\times\left(1-\frac{f_{\rm V}}{1-\eta_{\rm G}}\right)\, \tag{7}\]
at redshift \(z=0\) with the assumption that the drainage timescale is a fraction of the age of the universe (see below). The parameter \(f_{\rm V}=\rho_{\rm QCD}^{\rm vac}/\rho_{\rm csc}\) is a measure of the contribution of the QCD condensates to the CSCQN rest-mass energy.
We can get a rough estimate of the time \(t_{\rm tun.}\) it would take the CSCQN QCD vacuum to leak into the exterior space-time trivial QCD vacuum. A rigorous calculation would follow for example Coleman (1977) and is beyond the scope of this paper. We consider instead a simple tunnelling problem across a square barrier with height given by the expectation value of the QCD vacuum and width \(L\sim R_{\rm cscqn}\). The corresponding tunnelling probability is then \(P_{\rm tun.}\sim e^{-R_{\rm cscqn}/\delta}\) where \(\delta\) is the penetration depth (e.g. Cohen-Tannoudji et al. 2006). The tunnelling timescale is \(t_{\rm tun.}\simeq(R_{\rm cscqn}/c)\times(1/P_{\rm tun.})\) giving us
\[t_{\rm tun.}\sim\frac{\delta}{c}\times xe^{x}\, \tag{8}\]
where \(x=R_{\rm cscqn}/\delta\). With \(\delta\) of the order of a Fermi (which is not unrealistic), solutions with tunnelling timescales of a few billion years (\(1~{\rm Gyr}\leq t_{\rm tun.}\leq 4~{\rm Gyr}\)) require \(R_{\rm cscqn}\sim 88.5~{\rm fm}\) consistent with our model where optically thin CSCQNs with \(R_{\rm cscqn}<10^{2}/n_{\rm csc,39}\) fm are necessary to solve the cosmological \({}^{7}{\rm Li}\) problem. Thus, the drainage occurs on astrophysical timescales
Figure 3: LGB mass versus chemical potential (\(\mu\)) for \(\Delta=0.1\mu\) and for \(\Lambda_{\rm QCD}=245~{\rm MeV}\) (the scale parameter of QCD; see §2.1 and Appendix B for details). The two dashed lines show the \(4.0\leq M_{\rm LGB}c^{2}({\rm MeV})\leq 4.4\) range yielding a mono-chromatic photon line in the \(2.0\leq E_{0}({\rm MeV})\leq 2.2\) range.
Figure 2: A possible QCD phase diagram. The dashed curve (“1 to 2”) depicts the cooling path of the nugget (formed at “1” with \(T_{\rm QCD}\sim 150~{\rm MeV}\)) traversing the unpaired phase and entering the CSC phase, at keV temperature, from the low density phase (“2”). A first-order line separates the unpaired phase from the CSC phase releasing heat (the vertical “2 to 3” arrow). A CSCQN gets heated to a temperature not exceeding the LGB melting temperature \(k_{\rm B}T_{\rm cscqn,m}\sim M_{\rm LGB}c^{2}\) (see §2.1). The \(\sim 4~{\rm MeV}\) LGBs decay to the 2 MeV narrow photon line via \({\rm LGB}\to\gamma+\gamma\). “1” shows the initial state of a NS core making its way to “2” via cooling and compression (see §5 and Appendix B).
so that \(t_{\rm G}<<t_{\rm tun.}\) or in terms of redshift, \(z_{\rm G}>>z_{\rm tun.}\) with \(z_{\rm tun.}\) the characteristic drainage redshift; this also validates our assumption in deriving Eq. (7).
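Inverting Eq. (8) numerically shows which nugget radii give Gyr-scale drainage; a rough sketch, assuming a one-Fermi penetration depth \(\delta\) as above:

```python
import numpy as np

delta_cm = 1e-13          # penetration depth ~ 1 fm, in cm
c = 3e10                  # speed of light, cm/s
Gyr = 3.156e16            # one Gyr in seconds

def radius_for_timescale(t_tun_s, iters=50):
    # Invert Eq. (8), t_tun ~ (delta/c) x e^x with x = R_cscqn/delta, via x + ln(x) = ln(t_tun*c/delta).
    rhs = np.log(t_tun_s * c / delta_cm)
    x = rhs
    for _ in range(iters):
        x = rhs - np.log(x)     # simple fixed-point iteration
    return x                    # in units of delta, i.e. fm here

for t in (1.0, 4.0):
    print(f"t_tun = {t} Gyr  ->  R_cscqn ~ {radius_for_timescale(t * Gyr):.1f} fm")
# Both targets land near 88-89 fm, bracketing the ~88.5 fm quoted above.
```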
The time evolution of the CSCQN density we can then write as
\[\rho_{\rm cscqn}(t) =(1-\eta_{\rm G})\rho_{\rm csc}-\rho_{\rm QCD}^{\rm vac.}(1-e^{-(t -t_{\rm G})/t_{\rm tun.}}) \tag{9}\] \[=\rho_{\rm cscqn,eG}\left(1-\frac{f_{\rm V}}{1-\eta_{\rm G}}(1-e^{ -(t-t_{\rm G})/t_{\rm tun.}})\right)\.\]
The equation above incorporates the key parameters in our cosmology, namely \(\rho_{\rm csc},\eta_{\rm G},f_{\rm V}\) and \(t_{\rm tun.}\), which are all fundamentally related to QCD.
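A direct transcription of Eq. (9) in code, divided through by \(\rho_{\rm csc}\) so that only the fractional depletion is tracked; the parameter values below are illustrative (one of the \((\eta_{\rm G},f_{\rm V})\) combinations discussed in §5, and a drainage timescale within the 1-4 Gyr range above):

```python
import numpy as np

def rho_cscqn_fraction(t, t_G, t_tun, eta_G, f_V):
    # Eq. (9) divided by rho_csc: CSCQN density after LGB decay and gradual drainage.
    leaked = 1.0 - np.exp(-(t - t_G) / t_tun)
    return (1.0 - eta_G) - f_V * leaked

eta_G, f_V = 0.23, 0.7        # one of the (eta_G, f_V) pairs discussed in Sec. 5
t_G, t_tun = 1e-3, 2.5        # Gyr; only t_G << t_tun matters here
for t in (t_G, 1.0, 5.0, 13.8):
    frac = rho_cscqn_fraction(t, t_G, t_tun, eta_G, f_V)
    print(f"t = {t:6.3f} Gyr : rho_cscqn / rho_csc = {frac:.3f}")
# The fraction drops from (1 - eta_G) = 0.77 toward (1 - eta_G) - f_V = 0.07 as drainage completes.
```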
The time evolution of the DM density, with \(V_{\rm univ.}(t)\) being the Hubble volume at time \(t\), is
\[\rho_{\rm DM}(t)=\begin{cases}\frac{\rho_{\rm cscqn}(t)V_{\rm cscqn}^{\rm tot. }}{V_{\rm univ.}(t)}&\text{if}\quad t>t_{\rm G}\ (\text{or}\ z<z_{\rm G})\\ \frac{\rho_{\rm csc}V_{\rm cscqn}^{\rm tot.}}{V_{\rm univ.}(t)}&\text{if}\quad t \leq t_{\rm G}\ (\text{or}\ z\geq z_{\rm G})\,\end{cases} \tag{10}\]
with the resulting cosmology analyzed in Appendix C.
Our model offers a resolution of the Hubble tension (see Kamionkowski and Riess, 2022 for a recent review and references therein) and can be understood as a consequence of a CDM universe converting into a \(\Lambda\)CDM universe at \(z_{\rm tun.}\). Figure 4 shows that \(H_{0}\sim 73\ \rm km\ s^{-1}\) Mpc\({}^{-1}\) can be obtained for a range in \(\eta_{\rm G}\) and \(f_{\rm V}\) values with a drainage characteristic redshift \(2<z_{\rm tun.}<10\) (i.e. \(1<t_{\rm tun.}(\rm Gyr)<4\)). Our cosmology yields a universe which is younger than the \(\Lambda\)CDM universe with \(12.8<\rm age\ (Gyrs)<13.3\) for the parameters used in Figure 4. It remains to be shown whether our cosmology is in agreement with other cosmological data and measurements which the flat \(\Lambda\)CDM model explains extremely well (see e.g. metrics and tests suggested in Schoneberg et al., 2022). Furthermore, we caution that the details of how the QCD vacuum mixes with the space-time vacuum and how it evolves while preserving flatness remains to be understood. Nevertheless, being a vacuum the "DE" component in our model obeys an equation-of-state with parameter \(w=-1\).
## 4 CSCQNS as Cold Dark Matter (CDM)
Evidence for CDM is abundant but its nature remains unknown despite many theoretical investigations and dedicated experiments which have yet to detect any associated particle (e.g. Garrett and Gintaras, 2011; Schumann, 2015; Bertone and Hooper, 2018; Kisslinger and Das, 2019; Oks, 2021; Arbey and Mahmoudi, 2021). In our model, the \({}^{7}\)Li problem is solvable if the total mass in CSCQNs is of the order of the measured CDM value. In addition, CSCQNs are colorless, cold, optically thin, electrically neutral and would make an ideal CDM candidate if they decouple from the strong force (or interact minimally with baryons) following drainage.
The decoupling from the strong interaction may have already set in at \(t_{\rm G}\) once a CSCQN starts losing some of its gluonic energy (a percentage \(\eta_{\rm G}\)) via LGB decay and continued gradually via drainage. The key point is that CSCQNs must interact weakly from early times (i.e. prior to the time of structure formation before recombination) in order to not affect the growth of density perturbations and preserve CDM structure formation; i.e. \(z_{\rm G}>z_{\rm eq.}\) with \(z_{\rm eq.}\) the redshift of matter-radiation equality.
In the pre-LGB and pre-drainage epoch (i.e. at \(z>z_{\rm G}\)), CSCQNs should interact with hadrons. The CSCQN-baryon interaction cross-section is \(\sigma_{\rm cscqn}=\pi R_{\rm cscqn}^{2}\times(\tau_{\rm crossing}/\tau_{\rm conv.})\). Here, \(\tau_{\rm crossing}=R_{\rm cscqn}/v_{\rm cscqn}\) is the baryon CSCQN crossing time and \(\tau_{\rm conv.}\) the conversion timescale of a baryon to the CSC phase; \(v_{\rm cscqn}\) is the nugget's speed. The rate of baryons swept by a CSCQN in the pre-drainage era is \(\sigma_{\rm cscqn}n_{\rm B}v_{\rm cscqn}\sim(A_{\rm cscqn}/\tau_{\rm conv.}) \times(n_{\rm B}/n_{\rm esc})\) with \(A_{\rm cscqn}=\frac{4\pi}{3}R_{\rm cscqn}^{3}n_{\rm esc.}\) Using \(n_{\rm B}(T)=\eta_{\rm B}n_{\gamma}(T)\) and \(n_{\gamma}(T)\sim 6.2\times 10^{32}\ \rm cm^{-3}\times T_{\rm MeV}^{3}\), the rate is \(\sim(3.8\times 10^{-16}\ \rm s^{-1})\times(\eta_{\rm B,-9.2}/n_{\rm csc,39}) \times T_{\rm MeV}^{3}\times(A_{\rm cscqn}/\tau_{\rm conv.})\); \(\eta_{\rm B}\simeq 6.1\times 10^{-10}\) is the baryon-to-photon ratio. For conversion timescales exceeding \(10^{-16}\) s, the nuggets will not grow in size in the era prior to their decoupling from the strong interaction. Although \(\tau_{\rm conv.}\) exceeds the strong interaction timescales, it is not unrealistic because the conversion is a two-step process with the baryon first deconfining and crossing the unpaired quark phase before reaching the CSC phase. In particular, the unpaired phase may act as an energy
Figure 4: The Hubble constant \(H_{0}\) as a function of \(z_{\rm tun.}\) (the drainage characteristic redshift) in our model for different values of \(\eta_{\rm G}\) and \(f_{\rm V}\). The \(H_{0}=73\ \rm km\ s^{-1}\) Mpc\({}^{-1}\) value is shown as the horizontal line. The resulting age of the universe is shown for each case and is younger than the \(\Lambda\)CDM universe.
barrier that requires that hadronic matter is injected with energy in order to be deconfined.
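The quoted sweep rate follows directly from these expressions; a sketch using the same fiducial numbers, with the keV-range temperature and the trial values of \(A_{\rm cscqn}\) and \(\tau_{\rm conv.}\) taken as assumed inputs:

```python
def baryon_sweep_rate(T_MeV, A_cscqn, tau_conv_s, eta_B=6.1e-10, n_csc=1e39):
    # Rate of baryons swept per CSCQN: ~ (A_cscqn/tau_conv) * n_B/n_csc, with n_B = eta_B * n_gamma(T).
    n_gamma = 6.2e32 * T_MeV**3          # photon number density [cm^-3]
    return (A_cscqn / tau_conv_s) * (eta_B * n_gamma / n_csc)

# Assumed keV-range temperature (T = 1 keV), A_cscqn = 1e6 and tau_conv = 1e-16 s:
rate = baryon_sweep_rate(1e-3, 1e6, 1e-16)
print(rate)   # ~4e-3 baryons per second per nugget -- a negligible addition compared to A_cscqn ~ 1e6
```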
CSCQNs could constitute a viable CDM candidate and may be responsible for the formation of mini-halos and of the larger structures in the universe (Navarro, Frenk & White, 1996; Abel et al., 2002). Once halos form and virialize, CSCQNs will interact the strongest with stars and in particular with matter at the highest density, i.e. neutron stars (NSs). However, if \(t_{\rm tun.}\) is a fraction of the age of the universe then, by the time compact stars start to form in the universe, CSCQNs would have lost an important fraction of their QCD condensates thus minimizing their interaction with hadrons (see §3). In other words, a NS is unlikely to convert by CSCQN capture. If the core of NSs can access the CSC phase via cooling and mass accretion this would have interesting implications to astrophysics (see Eq. (13)).
## 5 Discussion
Here we briefly discuss some limitations and distinctive features of our model and leave these as future investigations.
* **The \({}^{7}\)Li problem revisited**: We first recall that \(\eta_{\rm DM,eG}=(1-\eta_{\rm G})\eta_{\rm DM,sG}\) is the ratio of total amount of DM to that of the baryonic matter after the LGB decay to photons just before drainage starts (see §2); \(\eta_{\rm DM,sG}\) is the ratio before LGB decay. From observations, today's ratio of total amount of DM to that of the baryonic matter is \(\eta_{\rm DM,0}\sim 5\) which means \((1-f_{\rm V})\eta_{\rm DM,eG}\sim 5\). Thus \(\eta_{\rm DM,sG}\sim\frac{5}{(1-f_{\rm V})(1-\eta_{\rm G})}\) and Eq. (5) becomes \[\frac{\eta_{\rm G}}{(1-\eta_{\rm G})(1-f_{\rm V})}\sim 1\,\] (11) or equivalently \[\eta_{\rm G}\sim\frac{1-f_{\rm V}}{2-f_{\rm V}}\.\] (12) I.e. a smaller percentage of the gluonic energy of the CSCQNs converted to LGBs could resolve the \({}^{7}\)Li problem. E.g. for \(f_{\rm V}\sim 0.7\) we need \(\eta_{\rm G}\sim 0.23\) which is less than the maximum \(\eta_{\rm G}\sim 3/8\) expected from the 2SC-phase. For \(f_{\rm V}=0\) we recover the \(\eta_{\rm G}=1/2\) value we arrived at in §2. The above reduces the number of free parameters in our model.
* **Drainage and structure formation**: Because the \(\sim\)2 MeV photon co-moving number density is \(n_{\gamma}(E_{0},t_{\rm G})\propto n_{\rm cscqn}(t_{\rm G})\propto n_{\rm B}(t_{\rm G})\) in our model, the \({}^{7}\)Li problem will be solved regardless of when the nuggets enter the CSC phase in the radiation-dominated post-BBN era. Eq. (4) is independent of \(z_{\rm G}\) and depends only on the other model parameters, which can be adjusted so that the RHS of Eq. (1) is unity. This is also valid in the matter-dominated post-BBN era where the \(n_{\gamma}(E_{0},t_{\rm G})\propto n_{\rm cscqn}(t_{\rm G})\propto n_{\rm B}(t_{\rm G})\) relationship also applies. I.e., the model should work as long as the LGB decay occurs before recombination. However, as discussed in §4 and in order to preserve structure formation, DM (i.e. CSCQNs) must decouple from the strong force well before the epoch of recombination in which case \(t_{\rm G}\) is in the radiation-dominated era. Nevertheless, we find that our model and in particular our cosmology does not depend critically on whether the LGB-decay/photon-burst occurs during the radiation-dominated or the matter-dominated post-BBN era (i.e. before recombination) because \(t_{\rm G}<<t_{\rm tun.}\).
* **The CSC phase and Neutron stars**: Figure 2 shows a suggested pathway, starting at point "1", a NS core could take to enter the CSC phase since conversion following CSCQN capture is suppressed (see §4). A NS born with (or which acquires through evolution) a core in the unpaired phase could transition to the CSC phase by a sequence of cooling (to keV temperature via the URCA process; e.g. Paczynski, 1972) and compression (to \(\mu_{\rm csc}=500\) MeV via mass accretion). Take a NS with a core making up a fraction \(\eta_{\rm c}\) of the total mass. The energy released from conversion of gluonic condensation (e.g. LGBs) to photons is \((\eta_{\rm c}M_{\rm NS}/m_{\rm p})\times(\eta_{\rm G}m_{\rm p}c^{2})=\eta_{\rm c}\eta_{\rm G}M_{\rm NS}c^{2}\). Comparing this to the NS binding energy \(\frac{3}{5}GM_{\rm NS}^{2}/R_{\rm NS}\), we conclude that NSs with compactness parameter \[\frac{M_{\rm NS,\odot}}{R_{\rm NS,6}}<0.11\times\frac{\eta_{\rm c}}{0.1}\times \frac{\eta_{\rm G}}{0.1}\,\] (13) may be completely obliterated in the process (a numeric check of this coefficient is sketched after this list); the NS mass and radius are in units of solar mass, \(M_{\odot}\), and \(10^{6}\) cm, respectively. NSs with higher compactness parameter would lose mass leaving behind a pure CSC core. In this latter case, the conversion to a CSC star puts a constraint on \(\rho_{\rm csc}\) due to the black hole limit \(2GM_{\rm NS}/c^{2}<R_{\rm csc}\) with \(R_{\rm csc}\) the radius of the CSC star. With \(\rho_{\rm csc}R_{\rm csc}^{3}\sim\rho_{\rm NS}R_{\rm NS}^{3}\) this gives \(\rho_{\rm csc}<10^{16}\ {\rm g\ cm^{-3}}/M_{\rm NS,\odot}^{2}\) which is consistent with the \(\mu_{\rm csc}\sim 500\) MeV (i.e. a 2SC-like phase) adopted in our model. Thus, if some NSs follow a path as suggested in Figure 2, the resulting photon fireball may have interesting implications to explosive astrophysics.
* **CSCQN-hadron decoupling**: If an important percentage of the quark condensate (\(\bar{q}q\)) is lost during the drainage then it is not unreasonable to assume that the quarks within the CSCQN become undressed and should in principle partially decouple from the strong interaction. They would still rely on gluons to remain bound while exhibiting minimal interaction with baryons. The exact details of this decoupling remain to be worked out.
* **CSCQNs in today's detectors**: The hypothesized decoupling of CSCQNs from the strong interaction (i.e. they would only interact gravitationally) means that CDM in our model will be undetectable by current DM experiments. A possible indirect detection is through its impact on the evolution of galaxies as the DM gradually disappears into the space-time trivial QCD vacuum. It is worth mentioning however that if CSCQNs could be made in laboratories on Earth today, at sub-BBN temperatures, they could be detected via the 2 MeV photon line from the LGBs decay.
* **The stability of the CSC phase**: Our findings seem to hint at the standard neutral 2SC phase (adopted in this proof-of-principle paper) as the unspecified CSC phase. However, the 2SC phase may be unstable at small \(\Delta\) values due to the mismatch in the up and down quark chemical potentials (Huang & Shovkovy, 2004). It suggests that either the 2SC phase is stable in the regime of chemical potential (\(\mu\)) and the superconducting gap (\(\Delta\)) values we used or that another stable 2SC-like phase exists in nature and remains to be identified.
* **Matter-Antimatter annihilation and CSCQN size**: We hypothesize that each nugget is born with an anti-matter deficit of \(\eta_{\rm B}\simeq 6.1\times 10^{-10}\) meaning that there is one extra baryon per \(\eta_{\rm B}^{-1}\) quark-antiquark pairs; \(\eta_{\rm B}\) is the baryon-to-photon ratio. After annihilation (on timescales of \(1/n_{\rm cscq}\sigma_{\rm annih.}\sim 10^{-13}\) s with \(\sigma_{\rm annih.}\sim\) mbarn), a nugget has only baryons left in it; this assumes that annihilation does not destroy the nugget and instead it reduces it into a pure baryon nugget of radius \(R_{\rm cscqn,f}\sim R_{\rm cscqn,thin}\) where "f" stands for final. Here, \(R_{\rm cscqn,thin}\sim 10^{2}\) fm is the typical size of a CSCQN set by the photon mean-free-path in the 2SC phase. In other words, we claim that the maximum size of the "shrapnel" of the annihilated much bigger parent nugget to be of the order of \(R_{\rm cscqn,thin}\) (see SS2.2). In this case, a nugget's birth radius can be obtained from \(n_{\rm csc}R_{\rm cscqn,thin}^{3}/n_{\rm csc}R_{\rm cscqn,b}^{3}=\eta_{\rm B}\) which gives \(R_{\rm cscqn,b}\sim 10^{5}\) fm/\(n_{\rm csc,39}\). Some constraints and implications to consider in the future include: (i) Annihilation should also yield pions. These would decay on weak-interaction timescales and if their mean-free-path turns out to be much smaller than that of photons they may affect the \(\sim 10^{2}\) fm CSCQNs; (ii) While nugget formation (when the universe has aged such that its temperature is in the tens of MeV) is followed rapidly by annihilation, we must avoid re-creation of matter-antimatter pairs. I.e. ensure that pair-creation timescales exceed the Hubble expansion timescale; (iii) In the picture we propose here, CSCQNs would require a formation mechanism which is quite different from how cosmic strange-quark nuggets (Witten, 1984) and Axion quark nuggets (Zhitnitsky, 2003) form.
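As flagged in the neutron-star bullet above, the coefficient in Eq. (13) can be cross-checked by equating the released gluonic energy \(\eta_{\rm c}\eta_{\rm G}M_{\rm NS}c^{2}\) with the binding energy \(\frac{3}{5}GM_{\rm NS}^{2}/R_{\rm NS}\); a quick arithmetic sketch in cgs units:

```python
G, c = 6.674e-8, 2.998e10      # cgs
M_sun, R6 = 1.989e33, 1e6      # g, cm

def max_compactness(eta_c, eta_G):
    # Obliteration condition: eta_c*eta_G*M*c^2 > (3/5) G M^2 / R,
    # i.e. M/R < (5/3)*eta_c*eta_G*c^2/G, expressed in units of M_sun per 10^6 cm.
    return (5.0 / 3.0) * eta_c * eta_G * (c**2 / G) / (M_sun / R6)

print(round(max_compactness(0.1, 0.1), 3))   # ~0.11, the coefficient appearing in Eq. (13)
```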
## 6 Conclusion
In this proof-of-principle paper, we have proposed that a color superconducting (CSC) phase of lukewarm QCD matter could offer a non-exotic solution to the cosmological \({}^{7}\)Li, the CDM and the Dark Energy enigmas. The narrow 2 MeV photon line which destroys \({}^{7}\)Be in the radiation-dominated post-BBN epoch we attribute to gluonic condensation (i.e. light glueballs or LGBs) in the CSC phase and its electro-magnetic decay modes (§2). The detailed properties of the CSC phase (a neutral 2SC-like phase is hinted at) remain to be scrutinized.
CDM, according to our model, consists of colorless, charge neutral, optically thin cosmic quark nuggets in the CSC phase (CSCQNs) with \(R_{\rm cscqn}\sim 100\) fm in size and baryon number \(A_{\rm cscqn}\sim 10^{6}\). They decouple from the strong interaction and interact only gravitationally thus evading detection in current DM experiments\({}^{6}\). The decoupling is due to drainage of the QCD condensates (vacuum) within the CSCQNs into the trivial QCD vacuum of the exterior space-time which yields Dark Energy. As drainage proceeds, our cosmology gradually transitions from a non-DE to a DE (\(\Lambda\)CDM-like) universe at moderate redshift while allowing for a possible resolution of the Hubble tension (§3).
Footnote 6: If made in laboratories on Earth, CSCQNs could be detected via the \(\sim 2\) MeV photon line from the LGBs decay.
Our model does not introduce new physics to solve the \({}^{7}\)Li, DM and DE problems but instead makes use of still uncertain properties of QCD phases and its vacuum properties. We are not the first to propose a connection between QCD in-hadron condensates and cosmology. It has been argued based on empirical properties of
hadrons that confinement is a pre-requisite for retaining condensates inside hadrons, which then largely eliminates the problem of the smallness of the cosmological constant. Our model, while evading rigorous calculation for an efficacious idea, is in line with such arguments, while the concept of drainage of the in-CSCQN vacuum, and the decoupling of CSCQNs from the strong force, is indeed a new idea (see §3) that may turn out to have other useful physical applications.
R.O., D.L. and N.K. acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). P.J. is supported by a grant from the Natural Science Foundation PHY-1913693. We thank R. Rapp for interesting discussions on various aspects of the paper. R.O. thanks P. Serpico for brief comments on Appendix A.
|
2304.12517 | The 2-MAXSAT Problem Can Be Solved in Polynomial Time | By the MAXSAT problem, we are given a set $V$ of $m$ variables and a
collection $C$ of $n$ clauses over $V$. We will seek a truth assignment to
maximize the number of satisfied clauses. This problem is $\textit{NP}$-hard
even for its restricted version, the 2-MAXSAT problem by which every clause
contains at most 2 literals. In this paper, we discuss an efficient algorithm
to solve this problem. Its average time complexity is bounded by O($n^2m^4$).
This shows that the 2-MAXSAT problem can be solved in polynomial time. | Yangjun Chen | 2023-04-25T02:13:01Z | http://arxiv.org/abs/2304.12517v19 | # The 2-MAXSAT Problem Can Be Solved in Polynomial Time
###### Abstract
By the MAXSAT problem, we are given a set \(V\) of \(m\) variables and a collection \(C\) of \(n\) clauses over \(V\). We will seek a truth assignment to maximize the number of satisfied clauses. This problem is _NP_-hard even for its restricted version, the 2-MAXSAT problem by which every clause contains at most 2 literals. In this paper, we discuss an efficient algorithm to solve this problem. Its worst case time complexity is bounded by O(\(n^{2}m^{3}(log_{2}~{}nm)^{log_{2}~{}nm}\)). This shows that the 2-MAXSAT problem can be solved in polynomial time.
satisfiability problem, maximum satisfiability problem, NP-hard, NP-complete, conjunctive normal form, disjunctive normal form.
## I Introduction
The satisfiability problem is perhaps one of the most well-studied problems that arise in many areas of discrete optimization, such as artificial intelligence, mathematical logic, and combinatorial optimization, to just name a few. Given a set \(V\) of Boolean (_true/false_) variables and a collection \(C\) of clauses over \(V\), or say, a logic formula in _CNF_ (Conjunctive Normal Form), the satisfiability problem is to determine if there is a truth assignment that satisfies all clauses in \(C\) [1]. The problem is _NP_-complete even when every clause in \(C\) has at most three literals [3]. The maximum satisfiability (MAXSAT) problem is an optimization version of satisfiability that seeks a truth assignment to maximize the number of satisfied clauses [5]. This problem is _NP_-hard even for its restricted version, the so-called 2-MAXSAT problem, by which every clause in \(C\) has at most two literals [4]. Its application can be seen in an extensive bibliography [16, 10, 11, 2, 2, 4, 7, 12, 13].
Over the past several decades, a lot of research on the MAXSAT problem has been conducted. Almost all of it concerns approximation methods [6, 14, 15], such as the (1-1/\(e\))-approximation and the 3/4-approximation [15], as well as methods based on integer linear programming.
In this paper, we discuss a polynomial time algorithm to solve the 2-MAXSAT problem. Its time complexity is bounded by O(\(n^{2}m^{3}(log_{2}~{}nm)^{log_{2}~{}nm}\)), where \(n\) and \(m\) are the numbers of clauses and the number of variables in \(C\), respectively. Thus, our algorithm is in fact a proof of \(P\) = _NP_.
The main idea behind our algorithm can be summarized as follows.
1. Given a collection \(C\) of \(n\) clauses over a set of variables \(V\), with each clause containing at most 2 literals, construct a formula \(D\) over another set of variables \(U\), but in _DNF_ (Disjunctive Normal Form), containing \(2n\) conjunctions with each of them having at most 2 literals, such that there is a truth assignment for \(V\) that satisfies at least \(n\)* \(\leq n\) clauses in \(C\) if and only if there is a truth assignment for \(U\) that satisfies at least \(n\)* conjunctions in \(D\).
2. For each \(D_{i}\) in \(D\) (\(i\in\{1,\)..., \(2n\}\)), construct a graph, called a \(p\)*-graph to represent all those truth assignments \(\sigma\) of variables such that under \(\sigma\)\(D_{i}\) evaluates to _true_.
3. Organize the \(p\)*-graphs for all \(D_{i}\)'s into a trie-like graph \(G\). Searching \(G\) bottom up, we can find a maximum subset of satisfied conjunctions in polynomial time.
The organization of the rest of this paper is as follow. First, in Section II, we restate the definition of the 2-MAXSAT problem and show how to reduce it to a problem that seeks a truth assignment to maximize the number of satisfied conjunctions in a formula in _DNF_. Then, we discuss our algorithm in Section III. Next, in Section IV, we discuss how the basic algorithm can be improved. Section V is devoted to the analysis of the time complexity of the algorithm. Finally, a short conclusion is set forth in Section VI.
## II 2-MAXSAT Problem
We will deal solely with Boolean variables (that is, those which are either _true_ or _false_), which we will denote by \(c_{1}\), \(c_{2}\), etc. A literal is defined as either a variable or the negation of a variable (e.g., \(c_{7}\), \(\neg c_{11}\) are literals). A literal \(\neg c_{i}\) is _true_ if the variable \(c_{i}\) is _false_. A clause is defined as the OR of some literals, written as (\(l_{1}\lor l_{2}\vee....\lor l_{k}\)) for some \(k\), where each \(l_{i}\) (\(1\leq i\leq k\)) is a literal, as illustrated in \(\neg c_{1}\lor c_{11}\). We say that a Boolean formula is in conjunctive normal form (_CNF_) if it is presented as an AND of clauses: \(C_{1}\wedge...\wedge C_{n}\) (\(n\geq 1\)). For example, (\(\neg c_{1}\lor c_{7}\vee\neg c_{11})\wedge(c_{5}\lor\neg c_{2}\vee\neg c_{3})\) is in _CNF_. In addition, a disjunctive normal form (_DNF_) is an OR of conjunctions: \(D_{1}\lor D_{2}\vee...\lor D_{m}\) (\(m\geq 1\)). For instance, (\(c_{1}\wedge c_{2}\)) \(\vee\) (\(\neg c_{1}\wedge c_{11}\)) is in _DNF_.
Finally, the MAXSAT problem is to find an assignment to the variables of a Boolean formula in _CNF_ such that the maximum number of clauses are set to _true_, or are satisfied. Formally:
2-MAXSAT
* Instance: A finite set \(V\) of variables, a Boolean formula \(C\) = \(C_{1}\wedge...\wedge C_{n}\) in _CNF_ over \(V\) such that each \(C_{i}\) has
0 < |\(C_{i}\)| \(\leq\) 2 literals (\(i\) = 1,..., \(n\)), and a positive integer \(n\)* \(\leq\)\(n\).
* Question: Is there a truth assignment for \(V\) that satisfies at least \(n\)* clauses?
In terms of [4], 2-MAXSAT is _NP_-complete.
To find a truth assignment \(\sigma\) such that the number of clauses set to \(true\) is maximized under \(\sigma\), we can try all the possible assignments, and count the satisfied clauses as discussed in [12]. We may also use a heuristic method to find an approximate solution to the problem as described in [5].
In this paper, we propose a quite different method, by which for \(C\) = \(C_{1}\)\(\wedge\)... \(\wedge\)\(C_{n}\), we will consider another formula \(D\) in _DNF_ constructed as follows.
Let \(C_{i}\) = \(c_{i1}\)\(\vee\)\(c_{i2}\) be a clause in \(C\), where \(c_{i1}\) and \(c_{i2}\) denote either variables in \(V\) or their negations. For \(C_{i}\), define a variable \(x_{i}\). and a pair of conjunctions: \(D_{i1}\), \(D_{i2}\), where
\(D_{i1}\) = \(c_{i1}\)\(\wedge\)\(x_{i}\),
\(D_{i2}\) = \(c_{i2}\)\(\wedge\)\(\neg\)\(x_{i}\).
Let \(D\) = \(D_{11}\)\(\vee\)\(D_{12}\)\(\vee\)\(D_{21}\)\(\vee\)\(D_{22}\)\(\vee\)\(...\)\(\vee\)\(D_{n1}\)\(\vee\)\(D_{n2}\). Then, given an instance of the 2-MAXSAT problem defined over a variable set \(V\) and a collection \(C\) of \(n\) clauses, we can construct a logic formula \(D\) in _DNF_ over the set \(V\)\(\cup\)\(X\) in polynomial time, where \(X\) = {\(x_{i}\)\(|\)\(i\) = 1,..., \(n\)}. \(D\) has \(m\) = 2\(n\) conjunctions.
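The construction of \(D\) from \(C\) is purely mechanical and can be written in a few lines. Below is a minimal sketch using a simple signed-integer encoding of literals (the encoding and function names are our own illustration, not notation from the paper); the instance shown has the same shape as the example used later in this section.

```python
def build_dnf(clauses, num_vars):
    """Given 2-CNF clauses as pairs of non-zero ints (positive = variable,
    negative = negated variable), build D = OR of 2n two-literal conjunctions.
    The auxiliary variable x_i for clause i is encoded as variable num_vars + i."""
    conjunctions = []
    for i, (c1, c2) in enumerate(clauses, start=1):
        x_i = num_vars + i
        conjunctions.append((c1, x_i))     # D_{i1} = c_{i1} AND x_i
        conjunctions.append((c2, -x_i))    # D_{i2} = c_{i2} AND NOT x_i
    return conjunctions

# A small 3-clause instance over c1, c2, c3:
C = [(1, 2), (2, -3), (3, -1)]
D = build_dnf(C, num_vars=3)
print(D)   # [(1, 4), (2, -4), (2, 5), (-3, -5), (3, 6), (-1, -6)]
```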
Concerning the relationship of \(C\) and \(D\), we have the following proposition.
**Proposition 1**.: _Let \(C\) and \(D\) be a formula in CNF and a formula in DNF defined above, respectively. No less than \(n\)* clauses in \(C\) can be satisfied by a truth assignment for \(V\) if and only if no less than n* conjunctions in \(D\) can be satisfied by some truth assignment for \(V\)\(\cup\)\(X\)._
Proof.: Consider every pair of conjunctions in \(D\): \(D_{i1}\) = \(c_{i1}\)\(\wedge\)\(x_{i}\) and \(D_{i2}\) = \(c_{i2}\)\(\wedge\)\(\neg\)\(x_{i}\) (\(i\) \(\in\) {1,..., \(n\)}). Clearly, under any truth assignment for the variables in \(V\)\(\cup\)\(X\), at most one of \(D_{i1}\) and \(D_{i2}\) can be satisfied. If \(x_{i}\) = _true_, we have \(D_{i1}\) = \(c_{i1}\) and \(D_{i2}\) = _false_. If \(x_{i}\) = _false_, we have \(D_{i2}\) = \(c_{i2}\) and \(D_{i1}\) = _false_.
"\(\Rightarrow\)" Suppose there exists a truth assignment \(\sigma\) for \(C\) that satisfies \(p\)\(\geq\)\(n\)* clauses in \(C\). Without loss of generality, assume that the \(p\) clauses are \(C_{1}\), \(C_{2}\),..., \(C_{p}\).
Then, similar to Theorem 1 of [7], we can find a truth assignment \(\tilde{\sigma}\) for \(D\), satisfying the following condition:
For each \(C_{j}\) = \(c_{j1}\)\(\vee\)\(c_{j2}\) (\(j\) = 1,..., \(p\)), if \(c_{j1}\) is _true_ and \(c_{j2}\) is _false_ under \(\sigma\), (1) set both \(c_{j1}\) and \(x_{j}\) to _true_ for \(\tilde{\sigma}\). If \(c_{j1}\) is _false_ and \(c_{j2}\) is _true_ under \(\sigma\), (2) set \(c_{j2}\) to _true_, but \(x_{j}\) to _false_ for \(\tilde{\sigma}\). If both \(c_{j1}\) and \(c_{j2}\) are _true_, do (1) or (2) arbitrarily.
Obviously, we have at least \(n\)* conjunctions in \(D\) satisfied under \(\tilde{\sigma}\).
"\(\Leftarrow\)" We now suppose that a truth assignment \(\tilde{\sigma}\) for \(D\) with \(q\)\(\geq\)\(n\)* conjunctions in \(D\) satisfied. Again, assume that those \(q\) conjunctions are \(D_{1b_{1}}\), \(D_{2b_{2}}\),..., \(D_{q_{b_{q}}}\), where each \(b_{j}\) (\(j\) = 1,..., \(q\)) is 1 or 2.
Then, we can find a truth assignment \(\sigma\) for \(C\), satisfying the following condition:
For each \(D_{jb_{j}}\) (\(j\) = 1,..., \(q\)), if \(b_{j}\) = 1, set \(c_{j1}\) to _true_ for \(\sigma\); if \(b_{j}\) = 2, set \(c_{j2}\) to _true_ for \(\sigma\).
Clearly, under \(\sigma\), we have at least \(n\)* clauses in \(C\) satisfied.
The above discussion shows that the proposition holds.
As an example, consider the following logic formula in _CNF_:
\[C=C_{1}\wedge C_{2}\wedge C_{3}=(c_{1}\lor c_{2})\wedge(c_{2}\vee\neg c_{3})\wedge(c_{3}\vee\neg c_{1}) \tag{1}\]
Under the truth assignment \(\sigma\) = {\(c_{1}\) = 1, \(c_{2}\) = 1, \(c_{3}\) = 1}, \(C\) evaluates to _true_, i.e., \(C_{i}\) = 1 for \(i\) = 1, 2, 3. Thus, \(n\)* = 3.
For \(C\), we will generate another formula \(D\), but in _DNF_, according to the above discussion:
\[\begin{array}{l}D=D_{11}\lor D_{12}\lor D_{21}\lor D_{22}\lor D_{31}\lor D_{32}\\ =(c_{1}\wedge c_{4})\vee(c_{2}\wedge\neg c_{4})\vee(c_{2}\wedge c_{5})\vee(\neg c_{3}\wedge\neg c_{5})\vee(c_{3}\wedge c_{6})\vee(\neg c_{1}\wedge\neg c_{6})\end{array} \tag{2}\]
According to Proposition 1, \(D\) should also have at least \(n\)* = 3 conjunctions which evaluate to _true_ under some truth assignment. Conversely, if \(D\) has at least 3 satisfied conjunctions under a truth assignment, then \(C\) should also have at least three clauses satisfied by some truth assignment. In fact, it can be seen that under the truth assignment \(\sigma\) = {\(c_{1}\) = 1, \(c_{2}\) = 1, \(c_{3}\) = 1, \(c_{4}\) = 1, \(c_{5}\) = 1, \(c_{6}\) = 1}, \(D\) has three satisfied conjunctions: \(D_{11}\), \(D_{21}\), and \(D_{31}\), from which the three satisfied clauses in \(C\) can be immediately determined.
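As a quick sanity check of Proposition 1 on this example (again our own throwaway code, not part of the proposed algorithm), exhaustive enumeration confirms that the best achievable count is the same on both sides:

```
from itertools import product

def satisfied(terms, bits, mode):
    # count satisfied clauses (mode='or') or satisfied conjunctions (mode='and')
    op = any if mode == 'or' else all
    return sum(op((lit > 0) == bits[abs(lit) - 1] for lit in term) for term in terms)

C = [(1, 2), (2, -3), (3, -1)]                              # over c1, c2, c3
D = [(1, 4), (2, -4), (2, 5), (-3, -5), (3, 6), (-1, -6)]   # over c1, ..., c6

best_C = max(satisfied(C, bits, 'or') for bits in product([False, True], repeat=3))
best_D = max(satisfied(D, bits, 'and') for bits in product([False, True], repeat=6))
print(best_C, best_D)   # 3 3
```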
In the following, we will discuss a polynomial time algorithm to find a maximum set of satisfied conjunctions in any logic formula in _DNF_, not restricted to the case where each conjunction contains at most 2 conjuncts.
## III Algorithm description
In this section, we discuss our algorithm. First, we present the main idea in Section III-A. Then, in Section III-B, the basic algorithm for solving the problem will be described in great detail. The further improvement of the basic algorithm will be discussed in the next section.
### _Main idea_
To develop an efficient algorithm to find a truth assignment that maximizes the number of satisfied conjunctions in formula \(D\) = \(D_{1}\)\(\vee\)...\(\vee\)\(D_{n}\), where each \(D_{i}\) (\(i\) = 1,..., \(n\)) is a conjunction, we need to represent each \(D_{i}\) as a variable sequence. For this purpose, we introduce a new notation:
(\(c_{j}\), *) = \(c_{j}\)\(\vee\)\(\neg\)\(c_{j}\) = _true_,
which will be inserted into \(D_{i}\) to represent any missing variable \(c_{j}\) in \(D_{i}\). Obviously, the truth value of each \(D_{i}\) remains unchanged.
In this way, the above \(D\) can be rewritten as a new formula in _DNF_ as follows:
\[\begin{array}{l}D=D_{1}\lor D_{2}\lor D_{3}\lor D_{4}\lor D_{5}\lor D_{6}\\ =(c_{1}\wedge(c_{2},*)\wedge(c_{3},*)\wedge c_{4}\wedge(c_{5},*)\wedge(c_{6},*))\ \vee\\ \ \ \ ((c_{1},*)\wedge c_{2}\wedge(c_{3},*)\wedge\neg c_{4}\wedge(c_{5},*)\wedge(c_{6},*))\ \vee\\ \ \ \ ((c_{1},*)\wedge c_{2}\wedge(c_{3},*)\wedge(c_{4},*)\wedge c_{5}\wedge(c_{6},*))\ \vee\\ \ \ \ ((c_{1},*)\wedge(c_{2},*)\wedge\neg c_{3}\wedge(c_{4},*)\wedge\neg c_{5}\wedge(c_{6},*))\ \vee\\ \ \ \ ((c_{1},*)\wedge(c_{2},*)\wedge c_{3}\wedge(c_{4},*)\wedge(c_{5},*)\wedge c_{6})\ \vee\\ \ \ \ (\neg c_{1}\wedge(c_{2},*)\wedge(c_{3},*)\wedge(c_{4},*)\wedge(c_{5},*)\wedge\neg c_{6})\end{array} \tag{3}\]
Doing this enables us to represent each \(D_{i}\) as a variable sequence, but with all the negative literals removed: if the variable in a negative literal were set to _true_, the corresponding conjunction would become _false_, so such a variable must be set to _false_ and need not appear in the sequence. See Table I for illustration.
First, we pay attention to the variable sequence for \(D_{2}\) (the second sequence in the second column of Table I), in which the negative literal \(\neg c_{4}\) (in \(D_{2}\)) is eliminated. In the same way, you can check all the other variable sequences.
Now it is easy for us to compute the appearance frequencies of different variables in the variable sequences, by which each (\(c\), *) is counted as a single appearance of \(c\) while any negative literals are not considered, as illustrated in Table II, in which we show the appearance frequencies of all the variables in the above \(D\).
According to the variable appearance frequencies, we will impose a global ordering over all variables in \(D\) such that the most frequent variables appear first, but with ties broken arbitrarily. For instance, for the \(D\) shown above, we can specify a global ordering like this: \(c_{2}\to c_{3}\to c_{1}\to c_{4}\to c_{5}\)\(\to c_{6}\).
Following this global ordering, each conjunction \(D_{i}\) in \(D\) can be represented as a sorted variable sequence as illustrated in the third column of Table I, where a start symbol \(\#\) and an end symbol \(\$\) are used for technical convenience. In fact, any ordering of variables works well, based on which a graph representation of assignments can be established. However, ordering variables according to their appearance frequencies can greatly improve the efficiency when searching the trie (to be defined in the next subsection) constructed over all the variable sequences for conjunctions in \(D\).
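The following sketch (our own code; names and tie-breaking are our choices) builds the sorted variable sequences just described: negative literals are dropped, missing variables are padded with (\(c\), *), and the variables are ordered by decreasing appearance frequency. Since ties are broken arbitrarily, the resulting ordering may differ from the ordering \(c_{2}\to c_{3}\to c_{1}\to c_{4}\to c_{5}\to c_{6}\) used in the figures.

```
from collections import Counter

def sorted_sequences(conjunctions, num_vars):
    positives = [set(l for l in conj if l > 0) for conj in conjunctions]
    negatives = [set(-l for l in conj if l < 0) for conj in conjunctions]
    # a variable counts once per conjunction unless it occurs there negatively
    freq = Counter(v for neg in negatives for v in range(1, num_vars + 1) if v not in neg)
    order = sorted(range(1, num_vars + 1), key=lambda v: -freq[v])
    sequences = []
    for pos, neg in zip(positives, negatives):
        seq = ['#']
        for v in order:
            if v in pos:
                seq.append(f'c{v}')        # mandatory variable
            elif v not in neg:
                seq.append(f'(c{v},*)')    # don't-care variable
        seq.append('$')                    # negated variables are simply dropped
        sequences.append(seq)
    return order, sequences

D = [(1, 4), (2, -4), (2, 5), (-3, -5), (3, 6), (-1, -6)]
order, seqs = sorted_sequences(D, 6)
print(order)    # [2, 1, 3, 4, 5, 6]: c2 is most frequent, the tied rest in arbitrary order
print(seqs[0])  # ['#', '(c2,*)', 'c1', '(c3,*)', 'c4', '(c5,*)', '(c6,*)', '$']
```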
Later on, by a variable sequence, we always mean a sorted variable sequence. Also, we will use \(D_{i}\) and the variable sequence for \(D_{i}\) interchangeably without causing any confusion.
In addition, for our algorithm, we need to introduce a graph structure to represent all those truth assignments for each \(D_{i}\) (_i_ = 1,..., \(n\)) (called a \(p\)*-graph), under which \(D_{i}\) evaluates to _true_. In the following, however, we first define a simple concept of \(p\)-graphs for ease of explanation.
**Definition 1.** (\(p\)-graph) Let \(\alpha=c_{0}c_{1}\)... \(c_{k}c_{k+1}\) be a variable sequence representing a \(D_{i}\) in \(D\) as described above (with \(c_{0}\)\(=\)\(\#\) and \(c_{k+1}\)\(=\$\)). A \(p\)-graph over \(\alpha\) is a directed graph, in which there is a node for each \(c_{j}\) (\(j=0\),..., \(k+1\)), and an edge (\(c_{j}\), \(c_{j+1}\)) for each \(j\in\) {0, 1,..., \(k\)}. In addition, there may be an edge from \(c_{j}\) to \(c_{j+2}\) for each \(j\in\) {0,..., \(k\) - 1} if \(c_{j+1}\) is a pair of the form (\(c\), *), where \(c\) is a variable name.
In Fig. 1(a), we show such a \(p\)-graph for \(D_{1}\) = \(\#\).(\(c_{2}\), *).(\(c_{3}\), *).\(c_{1}\).\(c_{4}\).(\(c_{5}\), *).(\(c_{6}\), *).\(\$\). Beside a main path going through all the variables in \(D_{1}\), there are four off-path edges (edges not on the main path), referred to as _spans_, corresponding to (\(c_{2}\), *), (\(c_{3}\), *), (\(c_{5}\), *), and (\(c_{6}\), *), respectively. Each span is represented by the sub-path covered by it. For example, we will use the sub-path \(<\)\(v_{0}\), \(v_{1}\), \(v_{2}\)\(>\) (the sub-path going through three nodes: \(v_{0}\), \(v_{1}\), \(v_{2}\)) to stand for the span connecting \(v_{0}\) and \(v_{2}\); \(<\)\(v_{1}\), \(v_{2}\), \(v_{3}\)\(>\) for the span connecting \(v_{1}\) and \(v_{3}\); \(<\)\(v_{4}\), \(v_{5}\), \(v_{6}\)\(>\) for the span connecting \(v_{4}\) and \(v_{6}\); and \(<\)\(v_{5}\), \(v_{6}\), \(v_{7}\)\(>\) for the span connecting \(v_{5}\) and \(v_{7}\). By using spans, the meaning of *'s (it is either 0 or 1) is appropriately represented since along a span we can bypass the corresponding variable (then its value is set to 0) while along an edge on the main path we go through the corresponding variable (then its value is set to 1).
In fact, what we want is to represent all those truth assignments for each \(D_{i}\) (\(i\) = 1,..., \(n\)) in an efficient way, under which \(D_{i}\) evaluates to _true_. However, \(p\)-graphs fail to do so since when we go through from a node \(v\) to another node \(u\) through a span, \(u\) must be selected. If \(u\) represents a (\(c\), *) for some variable name \(c\), the meaning of this '*' is not properly rendered.
For this reason, we introduce the concept of \(p\)*-graphs, described below.
Let \(s_{1}\) = \(<\)\(v_{1}\),..., \(v_{k}\)\(>\) and \(s_{2}\) = \(<\)\(u_{1}\),..., \(u_{l}\)\(>\) be two spans attached to a same path. We say that \(s_{1}\) and \(s_{2}\) are overlapped if \(u_{1}\) = \(v_{j}\) for some \(j\in\) {1,..., \(k\) - 1}, or if \(v_{1}\) = \(u_{j^{\prime}}\) for some \(j^{\prime}\in\) {1,..., \(l\) - 1}. For example, in Fig. 1(a), \(<\)\(v_{0}\), \(v_{1}\), \(v_{2}\)\(>\) and \(<\)\(v_{1}\), \(v_{2}\), \(v_{3}\)\(>\) are overlapped. \(<\)\(v_{4}\), \(v_{5}\), \(v_{6}\)\(>\) and \(<\)\(v_{5}\), \(v_{6}\), \(v_{7}\)\(>\) are also overlapped.
Figure 1: A \(p\)-path and a \(p\)*-path.
Here, we notice that if we had one more span, \(<\)\(v_{3}\), \(v_{4}\), \(v_{5}\)\(>\), for example, it would be connected to \(<\)\(v_{1}\), \(v_{2}\), \(v_{3}\)\(>\), but not overlapped with \(<\)\(v_{1}\), \(v_{2}\), \(v_{3}\)\(>\). Being aware of this difference is important since the overlapped spans imply consecutive '*'s, just like \(<\)\(v_{0}\), \(v_{1}\), \(v_{2}\)\(>\) and \(<\)\(v_{1}\), \(v_{2}\), \(v_{3}\)\(>\), which correspond to two consecutive '*'s: (\(c_{2}\), *) and (\(c_{3}\), *). Therefore, the overlapped spans exhibit some kind of _transitivity_. That is, if \(s_{1}\) and \(s_{2}\) are two overlapped spans, then \(s_{1}\)\(\cup\)\(s_{2}\) must be a new, but bigger span. Applying this operation to all the spans over a \(p\)-path, we will get a '_transitive closure_' of overlapped spans. Based on this observation, we give the following definition.
**Definition 2**.: (\(p\)*-graph) Let \(P\) be a \(p\)-graph. Let \(p\) be its main path and \(S\) be the set of all spans over \(p\). Denote by \(S\)* the 'transitive closure' of \(S\). Then, the \(p\)*-graph with respect to \(P\) is the union of \(p\) and \(S\)*, denoted as \(P\)* = \(p\)\(\cup\)\(S\)*.
In Fig. 1(b), we show the \(p\)*-graph with respect to the \(p\)-graph shown in Fig. 1(a). Concerning \(p\)*-graphs, we have the following lemma.
**Lemma 1**.: _Let \(P\)* be a \(p\)*-graph for a conjunction \(D_{i}\) (represented as a variable sequence) in \(D\). Then, each path from \(\#\) to \(\$\) in \(P\)* represents a truth assignment, under which \(D_{i}\) evaluates to true._
Proof.: (1) Corresponding to any truth assignment \(\sigma\), under which \(D_{i}\) evaluates to \(true\), there is definitely a path from \(\#\) to \(\$\) in the \(p\)*-graph. First, we note that under such a truth assignment each variable in a positive literal must be set to 1, but with some '*'s set to 1 or 0. Especially, we may have more than one consecutive '*'s that are set to 0, which are represented by a span that is the union of the corresponding overlapped spans. Therefore, for \(\sigma\) we must have a path representing it.
(2) Each path from \(\#\) to \(\$\) represents a truth assignment, under which \(D_{i}\) evaluates to _true_. To see this, we observe that each path consists of several edges on the main path and several spans. Especially, any such path must go through every variable in a positive literal since for each of them there is no span covering it. But each span stands for a '*' or more than one successive '*'s.
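Lemma 1 can be made concrete with a tiny enumeration (our own sketch, which lists the encoded assignments explicitly rather than building the graph): every \(\#\)-to-\(\$\) path fixes the mandatory variables to 1, chooses 0 or 1 for each (\(c\), *) node (span vs. main-path edge), and leaves the dropped (negated) variables at 0, so the conjunction is true on every generated assignment.

```
from itertools import product

def assignments_of_sequence(seq, num_vars):
    mandatory = [int(t[1:]) for t in seq if t not in ('#', '$') and not t.startswith('(')]
    optional  = [int(t[2:].split(',')[0]) for t in seq if t.startswith('(')]
    for choice in product([0, 1], repeat=len(optional)):
        assign = {v: 0 for v in range(1, num_vars + 1)}   # dropped variables stay 0
        assign.update({v: 1 for v in mandatory})          # positive literals forced to 1
        assign.update(dict(zip(optional, choice)))        # one choice per (c,*) node
        yield assign

seq_D1 = ['#', '(c2,*)', '(c3,*)', 'c1', 'c4', '(c5,*)', '(c6,*)', '$']
print(sum(1 for _ in assignments_of_sequence(seq_D1, 6)))  # 16 paths; all satisfy D1 = c1 AND c4
```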
### _Algorithm_
To find a truth assignment to maximize the number of satisfied \(D_{j}\)'s in \(D\), we will first construct a _trie-like_ structure \(G\) over \(D\), and then search \(G\) bottom-up to find answers.
Let \(P_{1}\)*, \(P_{2}\)*,..., \(P_{n}\)* be all the \(p\)*-graphs constructed for all \(D_{j}\)'s in \(D\), respectively. Let \(p_{j}\) and \(S_{j}\)* (\(j\) = 1,..., \(n\)) be the main path of \(P_{j}\)* and the transitive closure over its spans, respectively. We will construct \(G\) in two steps. In the first step, we will establish a _trie_[9], denoted as \(T\) = \({\it trie}(R)\) over \(R\) = \(\{\it p_{1}\),..., \(p_{n}\}\) as follows.
If \(|R|\) = 0, \({\it trie}(R)\) is, of course, empty. For \(|R|\) = 1, \({\it trie}(R)\) is a single node. If \(|R|\) \(>\) 1, \(R\) is split into \(m\) (possibly empty) subsets \(R_{1}\), \(R_{2}\),..., \(R_{m}\) so that each \(R_{i}\) (\(i\) = 1,..., \(m\)) contains all those sequences with the same first variable name. The tries: \({\it trie}(R_{1})\), \({\it trie}(R_{2})\),..., \({\it trie}(R_{m})\) are constructed in the same way except that at the \(k\)th step, the splitting of sets is based on the \(k\)th variable name (along the global ordering of variables). They are then connected from their respective roots to a single node to create \({\it trie}(R)\).
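A compact way to realize this construction (our own sketch over nested dictionaries, not the paper's data structure) is to insert the main-path labels of every sequence into a shared dictionary tree. Node labels are bare variable names, since the '*' information is carried by the spans, so sequences whose main paths coincide end at the same \(\$\) leaf:

```
def build_trie(sequences):
    def label(token):                    # '(c2,*)' -> 'c2', 'c1' -> 'c1', '$' -> '$'
        return 'c' + token[2:].split(',')[0] if token.startswith('(') else token
    trie = {}
    for idx, seq in enumerate(sequences, start=1):
        node = trie
        for token in seq[1:]:            # the shared start symbol '#' is the root itself
            node = node.setdefault(label(token), {})
        node.setdefault('conjunctions', []).append(idx)   # record D_idx at its '$' leaf
    return trie

trie = build_trie(seqs)                  # seqs from the previous sketch
# in the running example the main paths of D1, D3 and D5 coincide and share one '$' leaf
```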
In Fig. 2, we show the trie constructed for the variable sequences shown in the third column of Table I. In such a trie, special attention should be paid to all the leaf nodes, each labeled with \(\$\) and representing a conjunction (or a subset of conjunctions) which can be satisfied under the truth assignment represented by the corresponding main path.
Each edge in the trie is referred to as a tree edge. In addition, the variable \(c\) associated with a node \(v\) is referred to as the label of \(v\), denoted as \(l(v)\) = \(c\). Also, we will associate each node \(v\) in the trie \(T\) a pair of numbers (_pre_, _post_) to speed up recognizing ancestor/descendant relationships of nodes in \(T\), where _pre_ is the order number of \(v\) when searching \(T\) in preorder and _post_ is the order number of \(v\) when searching \(T\) in postorder.
These two numbers can be used to characterize the ancestor-descendant relationships in \(T\) as follows.
* Let \(v\) and \(v^{\prime}\) be two nodes in \(T\). Then, \(v^{\prime}\) is a descendant of \(v\) iff _pre\((v^{\prime})\)_ > _pre\((v)\)_ and _post\((v^{\prime})\)_ < _post\((v)\)_.
For the proof of this property of any tree, see Exercise 2.3.2-20 in [8].
For instance, by checking the pair associated with \(v_{2}\) against the pair for \(v_{9}\) in Fig. 2, we see that \(v_{2}\) is an ancestor of \(v_{9}\) in terms of this property. We note that \(v_{2}\)'s pair is (3, 12) and \(v_{9}\)'s pair is (10, 6), and we have 3 < 10 and 12 > 6. We also see that since the pairs associated with \(v_{14}\) and \(v_{6}\) do not satisfy the property, \(v_{14}\) must not be an ancestor of \(v_{6}\) and _vice versa_.
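A sketch of this encoding (our own helper over the nested-dictionary trie from the previous snippet) assigns the (_pre_, _post_) pair in one traversal and then answers ancestor queries by two comparisons:

```
def number_tree(trie):
    numbers, counter = {}, {'pre': 0, 'post': 0}
    def visit(node, name):
        counter['pre'] += 1
        pre = counter['pre']                      # preorder number of this node
        for child_name, child in node.items():
            if child_name != 'conjunctions':      # skip the leaf payload
                visit(child, name + '.' + child_name)
        counter['post'] += 1
        numbers[name] = (pre, counter['post'])    # postorder number of this node
    visit(trie, 'root')
    return numbers

def is_ancestor(numbers, v, v_prime):
    (pre_v, post_v), (pre_u, post_u) = numbers[v], numbers[v_prime]
    return pre_u > pre_v and post_u < post_v

nums = number_tree(trie)                          # trie from the previous sketch
print(is_ancestor(nums, 'root', 'root.c2'))       # True: the root is an ancestor of every node
```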
In the second step, we will add all \(S_{i}\)* (\(i\) = 1,..., \(n\)) to the trie \(T\) to construct a trie-like graph \(G\), as illustrated in Fig. 3. This trie-like graph is constructed for all the variable sequences
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline variables & \(c_{1}\) & \(c_{2}\) & \(c_{3}\) & \(c_{4}\) & \(c_{5}\) & \(c_{6}\) \\ \hline \hline appearance frequencies & 5/6 & 6/6 & 5/6 & 5/6 & 5/6 & 5/6 \\ \hline \end{tabular}
\end{table} TABLE II: Appearance frequencies of variables.
\begin{table}
\begin{tabular}{|l|l|l|} \hline conjunction & variable sequences & sorted variable sequences \\ \hline \hline \(D_{1}\) & \(c_{1}\).(\(c_{2}\), *).(\(c_{3}\), *).\(c_{4}\).(\(c_{5}\), *).(\(c_{6}\), *) & \#.(\(c_{2}\), *).(\(c_{3}\), *).\(c_{1}\).\(c_{4}\).(\(c_{5}\), *).(\(c_{6}\), *).\$ \\ \(D_{2}\) & (\(c_{1}\), *).\(c_{2}\).(\(c_{3}\), *).(\(c_{5}\), *).(\(c_{6}\), *) & \#.\(c_{2}\).(\(c_{3}\), *).(\(c_{1}\), *).(\(c_{5}\), *).(\(c_{6}\), *).\$ \\ \(D_{3}\) & (\(c_{1}\), *).\(c_{2}\).(\(c_{3}\), *).(\(c_{4}\), *).\(c_{5}\).(\(c_{6}\), *) & \#.\(c_{2}\).(\(c_{3}\), *).(\(c_{1}\), *).(\(c_{4}\), *).\(c_{5}\).(\(c_{6}\), *).\$ \\ \(D_{4}\) & (\(c_{1}\), *).(\(c_{2}\), *).(\(c_{4}\), *).(\(c_{6}\), *) & \#.(\(c_{2}\), *).(\(c_{1}\), *).(\(c_{4}\), *).(\(c_{6}\), *).\$ \\ \(D_{5}\) & (\(c_{1}\), *).(\(c_{2}\), *).\(c_{3}\).(\(c_{4}\), *).(\(c_{5}\), *).\(c_{6}\) & \#.(\(c_{2}\), *).\(c_{3}\).(\(c_{1}\), *).(\(c_{4}\), *).(\(c_{5}\), *).\(c_{6}\).\$ \\ \(D_{6}\) & (\(c_{2}\), *).(\(c_{3}\), *).(\(c_{4}\), *).(\(c_{5}\), *) & \#.(\(c_{2}\), *).(\(c_{3}\), *).(\(c_{4}\), *).(\(c_{5}\), *).\$ \\ \hline \end{tabular}
\end{table} TABLE I: Variable sequences for the conjunctions in \(D\).
given in Table I, in which each span is associated with a set of numbers used to indicate what variable sequences the span belongs to. For example, the span \(<\)\(v_{0}\), \(v_{1}\), \(v_{2}\)\(>\) (in Fig. 3) is associated with three numbers: 1, 5, 6, indicating that the span belongs to 3 conjunctions: \(D_{1}\), \(D_{5}\), and \(D_{6}\). But no numbers are associated with any tree edges. In addition, each \(p\)*-graph itself is considered to be a simple trie-like graph.
From Fig. 3, we can see that although the number of truth assignments for \(D\) is exponential, they can be represented by a graph with polynomial numbers of nodes and edges.
In a next step, to find the answer, we will search \(G\) bottom-up level by level. First of all, for each leaf node, we will figure out all its parents. Then, all such parent nodes will be categorized into different groups such that the nodes in the same group have the same label (variable name), which enables us to recognize efficiently all those conjunctions which can be satisfied by a same assignment. All the groups containing only a single node will not be further explored. (That is, if a group contains only one node \(v\), the parent of \(v\) will not be checked.) Next, all the groups with more than one node will be explored. We repeat this process until we reach a level at which each group contains only one node. In this way, we will find a set of subgraphs, each rooted at a certain node \(v\), in which the nodes at the same level must be labeled with the same variable name. Then, the path in the trie from the \(root\) to \(v\) and any path from \(v\) to a leaf node in the subgraph correspond to an assignment satisfying all the conjunctions labeling a leaf node in it.
See Fig. 4 for illustration.
In Fig. 4, we show part of the bottom-up process of searching the trie-like graph \(G\) shown in Fig. 3.
* step 1: The leaf nodes of \(G\) are \(v_{7}\), \(v_{10}\), \(v_{13}\), \(v_{17}\) (see level 1), representing the 6 variable sequences in \(D\) shown in Table I. (Especially, node \(v_{7}\) alone represents three of them: \(D_{1}\), \(D_{3}\), \(D_{5}\).) Their parents are all the remaining nodes in \(G\) (see level 2 in Fig. 4). Among them, \(v_{6}\), \(v_{9}\), \(v_{16}\) are all labeled with the same variable name '\(c_{6}\)' and will be put in a group \(g_{1}\). The nodes \(v_{5}\), \(v_{8}\), and \(v_{15}\) are labeled with '\(c_{5}\)' and will be put in a second group \(g_{2}\). The nodes \(v_{4}\), \(v_{11}\), and \(v_{15}\) are labeled with '\(c_{4}\)' and will be put in the third group \(g_{3}\). Finally, the nodes \(v_{3}\) and \(v_{14}\) are labeled with '\(c_{1}\)' and are put in group \(g_{4}\). All the other nodes: \(v_{0}\), \(v_{1}\), \(v_{2}\) each are differently labeled and therefore will not be further explored.
* step 2: The parents of the nodes in all groups \(g_{1}\), \(g_{2}\), \(g_{3}\), and \(g_{4}\) will be explored. We first check \(g_{1}\). The parents of the nodes in \(g_{1}\) are shown at level 3 in Fig. 4. Among them, the nodes \(v_{5}\) and \(v_{8}\) are labeled with '\(c_{5}\)' and will be put in the same group \(g_{11}\); the nodes \(v_{4}\) and \(v_{15}\) are labeled with '\(c_{4}\)' and put in another group \(g_{12}\); the nodes \(v_{3}\) and \(v_{14}\) are labeled with '\(c_{1}\)' and put in group \(g_{13}\). Again, all the remaining nodes are differently labeled and will not be further considered. The parents of \(g_{2}\), \(g_{3}\), and \(g_{4}\) will be handled in a similar way.
* step 3: The parents of the nodes in \(g_{11}\), \(g_{12}\), \(g_{13}\), as well as the parents of the nodes in any other group whose size is larger than 1, will be checked. The parents of the nodes in \(g_{11}\) are \(v_{2}\), \(v_{3}\), and \(v_{4}\). They are differently labeled and will not be further explored. However, among the parents of the nodes in group \(g_{12}\), \(v_{4}\) and \(v_{15}\) are labeled with '\(c_{1}\)' and will be put in a group \(g_{121}\). The parents of the nodes in \(g_{13}\) are also differently labeled and will not be searched. Again, the parents of all the other groups at level 3 in Fig. 4 will be checked similarly.
* step 4: The parents of the nodes in \(g_{121}\) and in any other
Figure 3: A trie-like graph \(G\).
Figure 2: A trie and tree encoding.
Figure 4: Illustration for the layered representation \(G^{\prime}\) of \(G\).
groups at level 4 in Fig. 4 will be explored. Since the parents of the nodes in \(g_{121}\) are differently labeled, the whole working process terminates if the parents of the nodes in any other groups at this level are also differently labeled.
We call the graph illustrated in Fig. 4 a layered representation \(G^{\prime}\) of \(G\). From this, a maximum subset of conjunctions satisfied by a certain truth assignment, represented by a subset of variables that are set to 1 (while all the remaining variables are set to 0), can be efficiently calculated. As mentioned above, each node which is the unique node in its group will not have its parents explored, and thus has no parent in \(G^{\prime}\). We refer to such a node as an _s-root_, and the subgraph made up of all nodes reachable from the _s-root_ as a rooted subgraph. For example, the subgraph made up of the blackened nodes in Fig. 4 is one of such subgraphs.
Denote a rooted subgraph rooted at node \(v\) by \(G_{v}\). In \(G_{v}\), the path labels from \(v\) to a leaf node are all the same. Then, any conjunction \(D_{i}\) associated with a leaf node \(u\) is satisfied by a same truth assignment \(\sigma\):
\(\sigma\) = {the labels on the path \(P\) from \(v\) to \(u\)} \(\cup\) {the labels on the path from the root of the whole trie to \(v\)},
if each edge on \(P\) is either a tree edge or a span whose associated set of numbers contains \(i\). We call this condition the _assignment condition_.
For instance, in the rooted subgraph mentioned above (represented by the blackened nodes in Fig. 4), we have two root-to-leaf paths: \(v_{1}\stackrel{{ 6}}{{\rightarrow}}v_{11}\stackrel{{ 6}}{{\rightarrow}}v_{13}\), \(v_{1}\stackrel{{ 4}}{{\rightarrow}}v_{15}\stackrel{{ 4}}{{\rightarrow}}v_{17}\), with the same path label; and both satisfy the _assignment condition_. Then, this rooted subgraph represents a subset: {\(D_{4}\), \(D_{6}\)}, which are satisfied by a truth assignment: {\(c_{2}\), \(c_{4}\)} \(\cup\) {\(c_{2}\)} = {\(c_{2}\), \(c_{4}\)} (i.e., {\(c_{2}\) = 1, \(c_{4}\) = 1, \(c_{1}\) = 0, \(c_{3}\) = 0, \(c_{5}\) = 0, \(c_{6}\) = 0})
Now we consider the node \(v_{4}\) at level 4 in Fig. 4. The rooted subgraph rooted at it contains only one path \(v_{4}\to v_{5}\)\(\rightarrow\)\(v_{6}\), where each edge is a tree edge and \(v_{6}\) represents {\(D_{1}\), \(D_{3}\), \(D_{5}\)}. This path corresponds to a truth assignment \(\sigma\) = {\(c_{4}\), \(c_{5}\), \(c_{6}\)} \(\cup\) {\(c_{1}\), \(c_{2}\), \(c_{3}\)} = {\(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\), \(c_{6}\)} (i.e., \(\sigma\) = { \(c_{1}\) = 1, \(c_{2}\) = 1, \(c_{3}\) = 1, \(c_{4}\) = 1, \(c_{5}\) = 1, \(c_{6}\) = 1}), showing that under \(\sigma\): \(D_{1}\), \(D_{3}\), \(D_{5}\) evaluate to _true_, which are in fact a maximum subset of satisfied conjunctions in \(D\). From this, we can deduce that in the formula \(C\) we must also have a maximum set of three satisfied clauses. Also, according to \(\sigma\), we can quickly find those three satisfied clauses in \(C\).
In terms of the above discussion, we give the following algorithm. In the algorithm, a stack \(S\) is used to explore \(G\) to form the layered graph \(G^{\prime}\). In \(S\), each entry is a subset of nodes labeled with a same variable name.
```
Input : a trie-like graph \(G\). Output : a largest subset of conjunctions satisfying a certain truth assignment.
1\(G^{\prime}\) :={all leaf nodes of \(G\)}; \(g\) := {all leaf nodes of \(G\)};
2 push(\(S\), \(g\)); (* find the layered graph \(G^{\prime}\) of \(G\) *)
3while\(S\) is not emptydo
4\(g\) := pop(\(S\));
5 find the parents of each node in \(g\); add them to \(G^{\prime}\);
6 divide all such parent nodes into several groups: \(g_{1}\), \(g_{2}\),..., \(g_{k}\) such that all the nodes in a group with the same label;
7for each\(j\in\) {1,..., \(k\)}do
8if\(|g_{j}|>1\)then
9 push(\(S\), \(g_{j}\));
10 return \(findSubset\)(\(G^{\prime}\));
```
**Algorithm 1**_SEARCH_(\(G\))
The algorithm can be divided into two parts. In the first part (lines 1 - 9), we will find the layered representation \(G^{\prime}\) of \(G\). In the second part (line 10), we call subprocedure _findSubset_( ), by which we check all the rooted subgraphs to find a truth assignment such that the satisfied conjunctions are maximized. This is represented by a triplet (\(u\), \(s\), \(f\)), corresponding to a rooted subgraph \(G_{u}\) rooted at \(u\) in \(G^{\prime}\). Then, the variable names represented by the path from the root of the whole trie to \(u\) and the variable names represented by any path in \(G_{u}\) make up a truth assignment that satisfies a largest subset of conjunctions stored in \(f\), whose size is \(s\).
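The layering part of the algorithm can be paraphrased in Python as follows. This is only a structural sketch of our own over a toy node table (node name -> (label, parents)); it does not implement _findSubset_( ) or the bookkeeping of spans:

```
from collections import defaultdict

def layered_groups(nodes, leaves):
    # return every multi-node group of same-labelled parents met during the bottom-up pass
    stack, explored = [list(leaves)], []
    while stack:
        group = stack.pop()
        by_label = defaultdict(set)
        for v in group:
            for parent in nodes[v][1]:
                by_label[nodes[parent][0]].add(parent)
        for lab, members in by_label.items():
            if len(members) > 1:                  # singleton groups are not explored further
                members = sorted(members)
                explored.append((lab, members))
                stack.append(members)
    return explored

# toy graph: two '$' leaves whose parents share the label 'c5'
toy = {'u1': ('c5', []), 'u2': ('c5', []), 'leaf1': ('$', ['u1']), 'leaf2': ('$', ['u2'])}
print(layered_groups(toy, ['leaf1', 'leaf2']))    # [('c5', ['u1', 'u2'])]
```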
Concerning the correctness of the algorithm, we have the following proposition.
**Proposition 2**.: _Let \(D\) be a formula in DNF. Let \(G\) be a trie-like graph created for \(D\). Then, the result produced by \(SEARCH(G)\) must be a truth assignment satisfying a maximum subset of conjunctions in \(D\)._
Proof.: By the execution of \(SEARCH\)(\(G\)), we will first generate the layered representation \(G^{\prime}\) of \(G\). Then, all the rooted subgraphs in \(G^{\prime}\) will be checked. By each of them, we will find a truth assignment satisfying a subset of conjunctions, which will be compared with the largest subset of conjunctions found up to now. Only the larger between them is kept. Therefore, the result produced by \(SEARCH\)(\(G\)) must be correct.
## IV Improvements
### _Redundancy analysis_
The working process of constructing the layered representation \(G^{\prime}\) of \(G\) involves a lot of redundant work, which can however be effectively removed by interleaving the processes of _SEARCH_ and _findSubset_ in some way. We will recognize any rooted subgraph as early as possible, and remove the relevant nodes to avoid any possible redundancy. To see this, let us have a look at Fig. 5, in which we illustrate part of a possible layered graph, and assume that from group \(g_{1}\) we generate another two groups \(g_{2}\) and \(g_{3}\). From them a same node \(v_{3}\) will be accessed. This shows that the number of the nodes at a layer in \(G^{\prime}\) can be larger than O(\(nm\)) (since a node may appear more than once.)
Fortunately, such kind of repeated appearance of a node can be avoided by applying the _findSubset_ procedure multiple times during the execution of _SEARCH_( ) with each time applied to a subgraph of \(G^{\prime}\), which represents a certain truth assignment satisfying a subset of conjunctions that cannot be involved in any larger subset of satisfiable conjunctions.
For this purpose, we need first to recognize what kinds of subgraphs in a trie-like graph \(G\) will lead to the repeated appearances of a node at a layer in \(G^{\prime}\).
In general, we distinguish among three cases, by which we assume two nodes \(u\) and \(v\) respectively appearing in \(g_{2}\) and \(g_{3}\) (in Fig. 5), with \(v_{3}\)\(\rightarrow\)\(u\), \(v_{3}\)\(\rightarrow\)\(v\)\(\in\)\(G\).
* Case 1: \(u\) and \(v\) appear on different paths in \(G\), as illustrated in Fig. 6(a), in which nodes \(v_{1}\) and \(v_{2}\) are differently labelled. Thus, when we create the corresponding layered representation, they will belong to different groups, as shown in Fig. 6(b), matching the pattern shown in Fig. 5.
* Case 2: \(u\) and \(v\) appear on a same path in \(G\), as illustrated in Fig. 6(c), in which two nodes \(v_{1}\) and \(v_{2}\) appear on a same path (and then must be differently labelled.) Hence, when we create the corresponding layered representation, they definitely belong to different groups, as illustrated in Fig. 6(d), also matching the pattern shown in Fig. 5.
* Case 3: The combination of Case 1 and Case 2. To know what it means, assume that in \(g_{1}\) (in Fig. 5) we have two nodes \(u\) and \(u^{\prime}\) with \(u\)\(\rightarrow\)\(v_{3}\) and \(u^{\prime}\)\(\rightarrow\)\(v_{3}\). Thus, if \(u\) and \(v\) appear on different paths, but \(u^{\prime}\) and \(v\) on a same path in \(T\), then we have Case 3, by which Case 1 and Case 2 occur simultaneously by a repeated node at a certain layer in \(G^{\prime}\).
Case 1 and Case 2 can be efficiently differentiated by using the tree encoding illustrated in Fig. 2.
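Concretely, whether two same-labelled occurrences fall under Case 1 or Case 2 reduces to an ancestor/descendant test on the (_pre_, _post_) pairs. Below is a small sketch of our own; the pair for \(v_{2}\) and \(v_{9}\) is the one quoted above, while the two nodes \(a\) and \(b\) and their pairs are made up purely for illustration:

```
def on_same_path(enc, x, y):
    # Case 2 exactly when one node is an ancestor of the other under the (pre, post) encoding
    (px, qx), (py, qy) = enc[x], enc[y]
    return (px < py and qx > qy) or (py < px and qy > qx)

enc = {'v2': (3, 12), 'v9': (10, 6), 'a': (7, 3), 'b': (15, 10)}
print(on_same_path(enc, 'v2', 'v9'))   # True  -> same path, i.e. a Case 2 situation
print(on_same_path(enc, 'a', 'b'))     # False -> different paths, i.e. a Case 1 situation
```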
In Case 1 (as illustrated in Fig. 6(a) and (b)), a node \(v\) which appears more than once at a level in \(G^{\prime}\) must be a branching node (i.e., a node with more than one child) in \(T\). Thus, each subset of conjunctions represented by all those subtrees respectively rooted at the same-labelled children of \(v\) must be a largest subset of conjunctions that can be satisfied by a truth assignment with \(v\) (or say, the variable represented by \(v\)) being set to _true_. Therefore, we can merge all the repeated nodes to a single one and call _findSubset_( ) immediately to find all such subsets for all the children of \(v\).
In Case 2 (as illustrated in Fig. 6(c) and (d)), some more effort should be made. In this case, the multiple appearances of a node \(v\) at a level in \(G^{\prime}\) correspond to more than one descendant of \(v\) on a same path in \(T\): \(v_{1}\), \(v_{2}\),..., \(v_{k}\) for some \(k\) > 1. (As demonstrated in Fig. 6(c), both \(v_{1}\) and \(v_{2}\) are \(v_{3}\)'s descendants.) Without loss of generality, assume that \(v_{1}\)\(\Leftarrow\)\(v_{2}\)\(\Leftarrow\)\(...\Leftarrow\)\(v_{k}\), where \(v_{i}\)\(\Leftarrow\)\(v_{i+1}\) represents that \(v_{i}\) is a descendant of \(v_{i+1}\) (1 \(\leq\)\(i\)\(\leq\)\(k\) - 1).
In this case, we will merge the multiple appearances of \(v\) to a single appearance of \(v\) and connect \(v_{k}\) to \(v\). Any other \(v_{i}\) (\(i\)\(\in\) {1, 2,..., \(k\)- 1}) will be simply connected to \(v\) if the following condition is satisfied.
* \(v_{i}\) appears in a group which contains at least another node \(u\) such that \(u\)'s parent is different from \(v\), but with the same label as \(v\).
Otherwise, \(v_{i}\) (\(i\)\(\in\) {1, 2,..., \(k\)- 1}) will not be connected to \(v\). It is because if the condition is not met the truth assignment
Fig. 5: A possible part in a layered graph \(G^{\prime}\).
Fig. 6: Two reasons for repeated appearances of nodes at a level in \(G^{\prime}\).
represented by the path in \(G\) containing the span (\(v\), \(v_{i}\)) cannot satisfy two or more conjunctions. But the single satisfied conjunction is already figured out when we create the trie at the very beginning. However, we should know that this checking is only for efficiency. Whether doing this or not will not impact the correctness of the algorithm or the worst-case running time analysis.
Based on Case 1 and Case 2, Case 3 is easy to handle. We only need to check all the children of the repeated nodes and carefully distinguish between Case 1 and Case 2 and handle them differently.
See Fig. 4 and 7 for illustration.
First, we pay attention to \(g_{1}\) and \(g_{2}\) at level 2 in Fig. 4, especially nodes \(v_{6}\) and \(v_{9}\) in \(g_{1}\), and \(v_{8}\) in \(g_{2}\), which match the pattern shown in Fig. 5. As we can see, \(v_{6}\) and \(v_{8}\) are on different paths in \(T\) and then we have Case 1. But \(v_{9}\) and \(v_{8}\) are on a same path, which is Case 2. To handle Case 1, we will search along two paths in \(G^{\prime}\): \(v_{3}\xrightarrow{1,5}v_{6}\to v_{7}\) (labeled with {\(D_{1}\), \(D_{3}\), \(D_{5}\)}), \(v_{3}\to v_{8}\to v_{10}\) (labeled with {\(D_{2}\)}), and find a subset of three conjunctions {\(D_{1}\), \(D_{5}\), \(D_{2}\)}, satisfied by a truth assignment: {\(c_{1}\) = 1, \(c_{2}\) = 1, \(c_{3}\) = 1, \(c_{4}\) = 0, \(c_{5}\) = 0, \(c_{6}\) = 1}. To handle Case 2, we simply connect \(v_{8}\) to the first appearance of \(v_{3}\) as illustrated in Fig. 7 and then eliminate second appearance of \(v_{3}\) from \(G^{\prime}\).
### _Improved algorithm_
In terms of the above discussion, the method to generate \(G^{\prime}\) should be changed. We will now generate \(G^{\prime}\) level by level. After a level is created, the repeated appearances of nodes will be checked and then eliminated. In this way, the number of nodes at each layer can be kept \(\leq\) O(\(nm\)).
However, to facilitate the recognition of truth assignments for the corresponding satisfied conjunctions, we need a new concept, the so-called _reachable subsets_ of a node \(v\) through spans, denoted as _RS\({}_{v}\)_.
**Definition 3**.: (reachable subsets through spans) Let \(v\) be a repeated node of Case 1. Let \(u\) be a node on the tree path from _root_ to \(v\) in \(G\) (not including \(v\) itself). A reachable subset of \(u\) through spans is the set of all those nodes with a same label \(c\) in different subgraphs in \(G[v]\) and reachable from \(u\) through a span, denoted as _RS\({}_{u}[c]\)_.
For instance, for node \(v_{2}\) in Fig. 3, which is on the tree path from _root_ to \(v_{3}\) (a repeated node of Case 1), we have two _RS_s:
* _RS\({}_{v_{2}}[c_{5}]\)_ = {\(v_{5}\), \(v_{8}\)},
* _RS\({}_{v_{2}}[c_{6}]\)_ = {\(v_{6}\), \(v_{9}\)}.
We have _RS\({}_{v_{2}}[c_{5}]\)_ due to two spans \(v_{2}\xrightarrow{5}v_{5}\) and \(v_{2}\xrightarrow{2}v_{8}\) going out of \(v_{2}\), respectively reaching \(v_{5}\) and \(v_{8}\) on two different _p*_-graphs in \(G[v_{3}]\) with _l_(\(v_{5}\)) = _l_(\(v_{8}\)) = '\(c_{5}\)'. We have _RS\({}_{v_{2}}[c_{6}]\)_ due to another two spans going out of \(v_{2}\): \(v_{2}\xrightarrow{5}v_{6}\) and \(v_{2}\xrightarrow{2}v_{9}\) with _l_(\(v_{6}\)) = _l_(\(v_{9}\)) = '\(c_{6}\)'.
In general, we are interested only in _RS\({}_{v}\)_'s with \(|\)_RS\({}_{v}\)\(|\)\(\geq\) 2. So, in the subsequent discussion, by an _RS\({}_{v}\)_, we mean an _RS\({}_{v}\)_ with \(|\)_RS\({}_{v}\)\(|\)\(\geq\) 2.
The definition of this concept for a repeated node \(v\) of Case 1 is a little bit different from any other node on the tree path (from _root_ to \(v\)). Specifically, each of its _RS_s is defined to be a subset of nodes reachable from a span or from a tree edge. So for \(v_{3}\) we have:
* _RS\({}_{v_{3}}[c_{5}]\)_ = {\(v_{5}\), \(v_{8}\)},
* _RS\({}_{v_{3}}[c_{6}]\)_ = {\(v_{6}\), \(v_{9}\)},
respectively due to \(v_{3}\xrightarrow{5}v_{5}\) and \(v_{3}\to v_{8}\) going out of \(v_{3}\) with _l_(\(v_{6}\)) = _l(\(v_{8}\)_) = '\(c_{5}\)'; and \(v_{3}\xrightarrow{5}v_{6}\) and \(v_{3}\xrightarrow{2}v_{9}\) going out of \(v_{3}\) with _l_(\(v_{6}\)) = _l(\(v_{8}\)_) = '\(c_{6}\)'.
Based on the concept of reachable subsets through spans, we are able to define another more important concept, upper boundaries (denoted as _upBounds_), given below.
**Definition 4**.: (upper boundaries) Let \(v\) be a repeated node of Case 1. Let \(G_{1}\), \(G_{2}\),..., \(G_{k}\) be all the subgraphs rooted at a child of \(v\) in \(G\). Let _RS\({}_{v_{i}}[c_{j}]\)_ (\(i\) = 1,..., \(l\); \(j\) = 1,..., \(q\), for some \(l\), \(q\)) be all the reachable subsets through spans. An upBound with respect to \(v\) is a subset of nodes {\(u_{1}\), \(u_{2}\),..., \(u_{f}\)} with the following properties:
1. Each \(u_{i}\) (1 \(\leq i\leq f\)) appears in some \(G_{j}\) (1 \(\leq j\leq k\)).
2. For each pair \(u_{i}\), \(u_{j}\) (\(i\neq j\)), they are not related by the ancestor/descendant relationship.
3. Each \(u_{i}\) (\(i\) = 1,..., \(f\)) is in some _RS\({}_{q}\)_[\(c_{r}\)]. But there is no other _RS\({}_{q^{\prime}}\)_[\(c_{r^{\prime}}\)] containing a node at a higher position than \(u_{i}\) (or say, a node closer to the _root_ than \(u_{i}\)) in the same subgraph.
Fig. 8 gives an intuitive illustration of this concept.
As a concrete example, consider \(v_{5}\) and \(v_{8}\) in Fig. 3. They make up an upBound with respect to \(v_{3}\) (a repeated node of Case 1). Then, we will construct a trie-like graph over two subgraphs, rooted at \(v_{5}\) and \(v_{8}\), respectively. This can be done by a recursive call of the algorithm itself. Here, however, \(v_{4}\) is not included since the truth assignment with \(v_{4}\) being set to _true_ satisfies only the conjunctions associated with leaf node \(v_{10}\). This has already been determined when the initial trie is built up. In fact, the purpose of upper boundaries is to take away the nodes like \(v_{4}\) from the subsequent computation.
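The computation of the reachable subsets themselves is straightforward; the sketch below (our own code, with spans given explicitly as (source, target) pairs and labels as a plain dictionary) reproduces the two _RS_s of \(v_{2}\) listed above. Selecting the upBound from these sets additionally needs the ancestor/descendant test and is omitted here.

```
from collections import defaultdict

def reachable_subsets(spans, labels, u):
    # group the targets of u's outgoing spans by label; keep groups with at least two nodes
    rs = defaultdict(set)
    for source, target in spans:
        if source == u:
            rs[labels[target]].add(target)
    return {c: nodes for c, nodes in rs.items() if len(nodes) >= 2}

spans  = [('v2', 'v5'), ('v2', 'v8'), ('v2', 'v6'), ('v2', 'v9')]   # spans leaving v2 in Fig. 3
labels = {'v5': 'c5', 'v8': 'c5', 'v6': 'c6', 'v9': 'c6'}
print(reachable_subsets(spans, labels, 'v2'))   # RS_v2[c5] = {v5, v8}, RS_v2[c6] = {v6, v9}
```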
Specifically, the following operations will be carried out when we meet a repeated node \(v\) of Case 1.
* Calculate all _RS_s with respect to \(v\).
* Calculate the upBound in terms of _RS_s.
* Make a recursive call of the algorithm over all the _p*_-subgraphs each starting from a node on the corresponding upBound.
* Merge the repeated nodes of Case 1 to a single one at the corresponding layer in \(G^{\prime}\).
See the following example for illustration.
**Example 1**.: _When checking the repeated node \(v_{3}\) in the bottom-up search process, we will calculate all the reachable subsets through spans with respect to \(v_{3}\) as described above: RS\({}_{v_{2}}\)[\(c_{5}\)], RS\({}_{v_{2}}\)[\(c_{6}\)], RS\({}_{v_{3}}\)[\(c_{5}\)], and RS\({}_{v_{3}}\)[\(c_{6}\)]. In terms of these reachable subsets through spans, we will get the corresponding upBound {\(v_{5}\), \(v_{8}\)}. Node \(v_{4}\) (above the upBound) will not be involved by the recursive execution of the algorithm.
Concretely, when we make a recursive call of the algorithm, applied to two subgraphs: \(G_{1}\) - rooted at \(v_{5}\) and \(G_{2}\) - rooted at \(v_{8}\) (see Fig. 9(a)), we will first construct a trie-like graph as shown in Fig. 9(b). Here, we notice that the subset associated with its unique leaf node is {\(D_{2}\), \(D_{5}\)}, instead of {\(D_{1}\), \(D_{2}\), \(D_{3}\), \(D_{5}\)}. It is because the number associated with span \(v_{2}\xrightarrow{5}v_{5}\) is 5 while the number associated with span \(v_{2}\xrightarrow{2}v_{8}\) is 2.
By searching the trie-like graph shown in Fig. 9(b), we will find the truth assignment satisfying {\(D_{2}\), \(D_{5}\)}. This truth assignment is represented by a path consisting of three parts: the tree path from _root_ to \(v_{2}\), the span \(v_{2}\xrightarrow{5}v_{5}\), and the sub-path from \(v_{5}\) to \(v_{7}\). So the truth assignment is {\(c_{1}\) = 0, \(c_{2}\) = 1, \(c_{3}\) = 1, \(c_{4}\) = 0, \(c_{5}\) = 1, \(c_{6}\) = 1}.
We remember that when generating the trie \(T\) over the main paths of the \(p^{*}\)-graphs over the variable sequences shown in Table I, we have already found a subset of conjunctions {\(D_{1}\), \(D_{3}\), \(D_{5}\)}, which can be satisfied by a truth assignment represented by the corresponding main path. This is larger than {\(D_{2}\), \(D_{5}\)}. Therefore, {\(D_{2}\), \(D_{5}\)} should not be kept around and this part of computation is futile. However, this kind of useless work can be avoided by performing a pre-checking: if the number of \(p^{*}\)-subgraphs, over which the recursive call of the algorithm will be invoked, is smaller than the size of the partial answer already obtained, the recursive call of the algorithm should not be conducted.
In terms of the above discussion, we change _SEARCH_ ( ) to a recursive algorithm shown below.
The improved algorithm (Algorithm 3) works in a quite different way from Algorithm 1. Concretely, \(G^{\prime}\) will be created level by level (see line 6), and for each created level all the multiple appearances of nodes will be recognized and handled according to the three cases described in the previous
Figure 8: Illustration for upBounds’.
Figure 7: Illustration for removing repeated nodes.
Figure 9: Illustration for recursive call of the algorithm.
subsection (see lines 7 - 10). Especially, in Case 1, a recursive call to the algorithm itself will be invoked.
The sample trace given in the following example helps for illustration.
**Example 2**.: _When applying SEARCH( ) to the \(p\)*-graphs shown in Fig. 3, we will meet three repeated nodes of Case 1: \(v_{3}\), \(v_{2}\), and \(v_{1}\)._
* _Initially, when creating_ \(T\)_, a subset of conjunctions_ {\(D_{1}\), \(D_{3}\), \(D_{5}\)} _is found (see Fig. 2), which can be satisfied by a same truth assignment:_ \(c_{1}\) _= 1,_ \(c_{2}\) _= 1,_ \(c_{3}\) _= 1,_ \(c_{4}\) _= 1,_ \(c_{5}\) _= 1,_ \(c_{6}\) _= 1._
* _Checking_ \(v_{3}\)_. As shown in Example 1, by this checking, we will find a subset of conjunctions_ {\(D_{2}\), \(D_{5}\)} _satisfied by a truth assignment_ {\(c_{1}\) = 0, \(c_{2}\) = 1, \(c_{3}\) = 1, \(c_{4}\) = 0, \(c_{5}\) = 1, \(c_{6}\) = 1}_, smaller than_ {\(D_{1}\), \(D_{3}\), \(D_{5}\)}_. Thus, this result will not be kept around._
* _Checking_ \(v_{2}\)_. When we meet this repeated node of Case 1 during the generation of_ \(G^{\prime}\)_, we have two subgraphs in_ \(G^{\prime}[v_{2}]\)_, as shown in Fig. 10._
With respect to \(v_{2}\), we will calculate all the relevant reachable subsets through spans for all the nodes on the tree path from _root_ to \(v_{2}\) in \(G\). Altogether we have five reachable subsets through spans. Among them, associated with \(v_{1}\) (on the tree path from _root_ to \(v_{2}\) in Fig. 3), we have
* _RS\({}_{v_{1}}[c_{4}]\) = {\(v_{4}\),_ \(v_{11}\)_}_
due to the following two spans (see Fig. 3):
* _{_\(v_{1}\xrightarrow{3}v_{4}\)_,_ \(v_{1}\xrightarrow{6}v_{11}\)_}_
Associated with \(v_{2}\) (the repeated node itself), we have the following four reachable subsets through spans:
* _RS\({}_{v_{2}}[c_{4}]\) = {\(v_{4}\),_ \(v_{11}\)_}_
* _RS\({}_{v_{2}}[c_{5}]\) = {\(v_{5}\),_ \(v_{8}\)_,_ \(v_{12}\)_}_
* _RS\({}_{v_{2}}[c_{6}]\) = {\(v_{6}\),_ \(v_{9}\)_}_
* _RS\({}_{v_{2}}[\$]\) = {\(v_{10}\),_ \(v_{13}\)_}_
due to four groups of spans shown below:
* _{_\(v_{2}\xrightarrow{3,5}v_{4}\)_,_ \(v_{2}\xrightarrow{6}v_{11}\)_}_._
* _{_\(v_{2}\xrightarrow{5}v_{5}\)_,_ \(v_{2}\xrightarrow{2}v_{8}\)_,_ \(v_{2}\xrightarrow{6}v_{12}\)_}_._
* _{_\(v_{2}\xrightarrow{5}v_{6}\)_,_ \(v_{2}\xrightarrow{2}v_{9}\)_}_._
* _{_\(v_{2}\xrightarrow{2}v_{10}\)_,_ \(v_{2}\xrightarrow{6}v_{13}\)_}_._
In terms of the reachable subsets through spans, we can establish the corresponding upper boundary {\(v_{4}\), \(v_{8}\), \(v_{11}\)} (which is illustrated as a thick line in Fig. 10). Then, we can determine what subgraphs will be utilized to establish a trie-like graph, over which the algorithm is recursively executed.
In Fig. 11(a), we show the trie-like graph built over the three \(p\)*-subgraphs (starting respectively from \(v_{4}\), \(v_{8}\), \(v_{11}\) on the upBound shown in Fig. 10), in which \(v_{4-11}\) stands for the merging of \(v_{4}\) and \(v_{11}\), and \(v_{5-12}\) for the merging of \(v_{5}\) and \(v_{12}\). Especially, \(v_{2}\) should be involved, working as a bridge between the newly contracted trie-like graph and the rest part of \(G\). However, this part of the operation is not specified in Algorithm _SEARCH_( ) for ease of explanation. But it can be easily extended with this operation included.
By a recursive call of _SEARCH_( ), we will construct this graph and then search this graph bottom up, by which we will create a layered graph as shown in Fig. 12. At level 2 in Fig.
Fig. 10: Two subgraphs in \(G^{\prime}[v_{2}]\) and a upBound.
12, we can see a repeated node of Case 1: node \(v_{5-12}\). Then, we will make a recursive call of the algorithm, generating an upBound \(\{v_{7}\), \(v_{13}\}\). Accordingly, we will find a single path as shown in Fig. 11(b), by which we will find a largest subset of conjunctions \(\{D_{3}\), \(D_{6}\}\), which can be satisfied by a certain truth assignment. We notice that the subset associated with this path is \(\{D_{3}\), \(D_{6}\}\), instead of \(\{D_{3}\), \(D_{5}\), \(D_{6}\}\). It is because the span from \(v_{5-12}\) to \(v_{7}\) (in Fig. 11(a)) is labeled with 3 and \(D_{5}\) should be removed.
When we meet the second repeated node \(v_{2}\), we will create the following _RS_'s:
* _RS\({}_{v_{1}}\)_ = \(\Phi\). (Note that any _RS_ with \(|\)_RS_\(|\) < 2 will not be considered.)
* _RS\({}_{v_{2}}\)[\(c_{5}\)]_ = \(\{v_{5-12}\), \(v_{8}\}\). (due to the span \(v_{2}\rightarrow v_{5-12}\) and the tree edge \(v_{2}\to v_{8}\).)
* _RS\({}_{v_{2}}\)[\(c_{6}\)]_ = \(\{v_{6}\), \(v_{9}\}\). (due to the spans \(v_{2}\xrightarrow{5}v_{6}\) and \(v_{2}\xrightarrow{2}v_{9}\).)
Accordingly, the corresponding upBound is \(\{v_{5-12}\), \(v_{8}\}\). Then, by the recursive execution of the algorithm, we will create a trie-like graph as shown in Fig. 13(a). The only branching node is \(v_{5-12-8}\). Checking this node, we will finally get a single path as shown in Fig. 13(b), showing a largest subset of conjunctions which can be satisfied by a certain truth assignment.
In the whole working process, a simple but very powerful heuristic can be used to improve the efficiency. Let \(\alpha\) be the size of the largest subset of conjunctions found up to now, which can be satisfied by a certain truth assignment. Then, any recursive call of the algorithm over a set of fewer than \(\alpha\) \(p\)*-subgraphs will be suppressed.
After \(v_{2}\) is removed from the corresponding levels in \(G^{\prime}\), the next repeated node \(v_{1}\) of Case 1 will be checked in a way similar to \(v_{3}\) and \(v_{2}\).
## V Time complexity analysis
The total running time of the algorithm consists of three parts.
The first part \(\tau_{1}\) is the time for computing the frequencies of variable appearances in \(D\). Since in this process each variable in a \(D_{i}\) is accessed only once, \(\tau_{1}\) = O(\(nm\)).
The second part \(\tau_{2}\) is the time for constructing a trie-like graph \(G\) for \(D\). This part of time can be further partitioned into three portions.
* \(\tau_{21}\): The time for sorting variable sequences for \(D_{i}\)'s. It is obviously bounded by O(\(nm\log_{2}m\)).
* \(\tau_{22}\): The time for constructing \(p\)*-graphs for each \(D_{i}\) (\(i\) = 1,..., \(n\)). Since for each variable sequence a transitive closure over its spans should be first created and needs O(\(m^{2}\)) time, this part of cost is bounded by O(\(nm^{2}\)).
* \(\tau_{23}\): The time for merging all \(p\)*-graphs to form a trie-like graph \(G\), which is also bounded by O(\(nm^{2}\)).
The third part \(\tau_{3}\) is the time for searching \(G\) to find a maximum subset of conjunctions satisfied by a certain truth assignment. It is a recursive procedure. To analyze its running time, therefore, a recursive equation should be established. Let \(l\) = \(nm\) (the upper bound on the number of nodes in \(T\)). Assume that the average outdegree of a node in \(T\) is \(d\). Then the average time complexity of \(\tau_{3}\) can be characterized by the following recurrence, based on the observation that for each branching node a recursive call of the algorithm will be performed:
Fig. 11: A trie-like graph and a path.
Fig. 12: A naive bottom-up search of \(G\).
Fig. 13: Illustration for the recursive execution of the algorithm.
\[\Gamma(l)=\left\{\begin{array}{ll}O(1),&\mbox{if $l\leq$ a constant,}\\ \\ \sum_{i=1}^{\lceil log_{d}\;l\rceil}d^{i}\Gamma(\frac{l}{d^{i}})+O(l^{2}m),& \mbox{otherwise.}\end{array}\right. \tag{4}\]
Here, in the above recursive equation, O(\(l^{2}m\)) is the cost for generating all the reachable subsets of a node through spans and upper boundaries, together with the cost for generating local trie-like subgraphs for each recursive call of the algorithm. We notice that the size of all the _RS_s together is bounded by the number of spans in \(G\), which is O(\(lm\)).
From (4), we can get the following inequality:
\[\Gamma(l)\leq d\cdot log_{d}\;l\cdot\Gamma(\frac{l}{d})+O(l^{2}m). \tag{5}\]
Solving this inequality, we will get
\[\begin{array}{ll}\Gamma(l)&\leq d\cdot\log_{d}l\cdot\Gamma(\tfrac{l}{d})+O(l^{2}m)\\ &\leq d^{2}(\log_{d}l)(\log_{d}\tfrac{l}{d})\Gamma(\tfrac{l}{d^{2}})+(\log_{d}l)\,l^{2}m+l^{2}m\\ &\leq\;\cdots\\ &\leq d^{\lceil\log_{d}l\rceil}(\log_{d}l)(\log_{d}\tfrac{l}{d})\cdots(\log_{d}\tfrac{l}{d^{\lceil\log_{d}l\rceil}})\\ &\quad+\,l^{2}m\big((\log_{d}l)(\log_{d}\tfrac{l}{d})\cdots(\log_{d}\tfrac{l}{d^{\lceil\log_{d}l\rceil}})+\cdots+\log_{d}l+1\big)\\ &\leq O(l(\log_{d}l)^{\log_{d}l})+O(l^{2}m(\log_{d}l)^{\log_{d}l})\\ &\sim O(l^{2}m\,(\log_{d}l)^{\log_{d}l}).\end{array} \tag{6}\]
Thus, the value for \(\tau_{3}\) is \(\Gamma(l)\sim\mbox{O($l^{2}m\;(log_{d}\;l)^{log_{d}\;l}$)}\).
From the above analysis, we have the following proposition.
**Proposition 3**.: _The average running time of our algorithm is bounded by_
\[\begin{array}{ll}\sum_{i=1}^{3}\tau_{i}&=O(nm)+O(nm\log_{2}m)+O(nm^{2})+O(l^{2}m\,(\log_{d}l)^{\log_{d}l})\\ &=O(n^{2}m^{3}(\log_{d}nm)^{\log_{d}nm}).\end{array} \tag{7}\]
But we remark that if the average outdegree of a node in \(T\) is < 2, we can use a brute-force method to find the answer in polynomial time. Hence, we claim that the worst case time complexity is bounded by O(\(l^{2}m(log_{2}\;l)^{log_{2}\;l}\)) since \((log_{d}\;l)^{log_{d}\;l}\) decreases as \(d\) increases.
## VI Conclusions
In this paper, we have presented a new method to solve the 2-MAXSAT problem. The time complexity of the algorithm is bounded by O(\(n^{2}m^{3}(log_{2}\;nm)^{log_{2}\;nm}\)), where \(n\) and \(m\) are respectively the numbers of clauses and variables of a logic formula \(C\) (over a set \(V\) of variables) in _CNF_, and \(d\) is the average outdegree of a node in a trie established over a set of conjunctions that are generated from the clauses in \(C\). The main idea behind this is to construct a different formula \(D\) (over a set \(U\) of variables) in _DNF_, according to \(C\), with the property that for a given integer \(n^{*}\leq n\), \(C\) has at least \(n^{*}\) clauses satisfied by a truth assignment for \(V\) if and only if \(D\) has at least \(n^{*}\) conjunctions satisfied by a truth assignment for \(U\). To find a truth assignment that maximizes the number of satisfied conjunctions in \(D\), a graph structure, called a \(p^{\mbox{\scriptsize\sf*}}\)-graph, is introduced to represent each conjunction in \(D\). In this way, all the conjunctions in \(D\) can be represented as a trie-like graph \(G\). Searching \(G\) bottom up, we can find the answer efficiently.
# Predicting extreme events in a data-driven model of turbulent shear flow using an atlas of charts
###### Abstract
Dynamical systems with extreme events are difficult to capture with data-driven modeling, due to the relative scarcity of data within extreme events compared to the typical dynamics of the system, and the strong dependence of the long-time occurrence of extreme events on short-time conditions. A recently developed technique [Floryan, D. & Graham, M. D. Data-driven discovery of intrinsic dynamics. Nat Mach Intell **4**, 1113-1120 (2022)], here denoted as _Charts and Atlases for Nonlinear Data-Driven Dynamics on Manifolds_, or CANDyMan, overcomes these difficulties by decomposing the time series into separate charts based on data similarity, learning dynamical models on each chart via individual time-mapping neural networks, then stitching the charts together to create a single atlas to yield a global dynamical model. We apply CANDyMan to a nine-dimensional model of turbulent shear flow between infinite parallel free-slip walls under a sinusoidal body force [Moehlis, J., Faisst, H. & Eckhardt, B. A low-dimensional model for turbulent shear flows. New J Phys **6**, 56 (2004)], which undergoes extreme events in the form of intermittent quasi-laminarization and long-time full laminarization. We demonstrate that the CANDyMan method allows the trained dynamical models to more accurately forecast the evolution of the model coefficients, reducing the error in the predictions as the model evolves forward in time. The technique exhibits more accurate predictions of extreme events, capturing the frequency of quasi-laminarization events and predicting the time until full laminarization more accurately than a single neural network.
## I Introduction
Real world dynamical systems often produce unusual behaviors in the form of extreme events. These extreme events are characterized by a dissimilarity to the typical dynamics of the system, usually greater in scope or scale, and occur relatively infrequently compared to the typical dynamics. Common examples include rogue waves in the ocean [1], extreme weather patterns such as hurricanes and tornadoes [2; 3], and intermittency in turbulent flows [4]. While extreme events are a consequence of the same dynamical system that governs the non-extreme state, they are often difficult to forecast using data-driven modeling. The relative scarcity of data within extreme events both limits the overall observations of the extreme events on which to train the model and reduces the relative influence of extreme event behavior on data-driven model training. Thus, creating a data-driven model that can accurately capture extreme events remains an active challenge.
Recent studies have proposed various techniques for analyzing and forecasting the occurrence of extreme events. Guth and Sapsis [5] developed a probabilistic framework for the use of indicator observables as predictors of the extreme events. Ragone and Bouchet [6] supplemented climate model simulations with a rare event algorithm to examine and more accurately capture the increasing frequency of extreme heatwaves in Europe. Blanchard _et al._[7] built a machine learning framework to correct a biased climate model to produce better forecasts of extreme events. Mendez and Farazmand [8] applied probabilistic models toward predicting indirect spreading of wildfires by wind to improve forecasts of new wildfire locations. Gomez _et al._[9] applied a rare-event algorithm to analyze the transition between states in turbulent pressure-driven flow and more efficiently predict passage time between states. While these studies improved predictions of extreme events, they primarily corrected and supplemented the forecasts of existing models; we will instead aim to develop an improved model.
One attractive test case of a dynamical system with extreme events is the nine-dimensional model for turbulent flow developed by Moehlis, Faisst, and Eckhardt (MFE) [10]. The MFE model, an extension of a model by Waleffe [11], governs the evolution of nine amplitudes of spatial Fourier modes describing a turbulent shear flow between walls. These nine modes provide a minimal description of the mechanisms for self-sustenance in turbulence, allowing the resulting flow field to display realistic turbulent dynamics. In particular, the model displays features consistent with turbulence in the transition region, namely long periods of turbulent behavior with infrequent quasi-laminarization events (also called quiescent [12] or hibernating [13] intervals) and ultimately full laminarization [12; 13; 14]. These quasi- and complete relaminarizations will be the extreme events considered in the present work, in which we use time series from the MFE model as "data" with which to develop a data-driven model.
In recent years, several attempts have been made to reproduce the dynamics of the MFE model (and other flow systems) through data-driven techniques based on
neural networks (NNs). Neural networks are a powerful data-driven modeling technique that has been shown to accurately recreate the dynamics of systems such as the viscous Burgers equation[15], the Kuramoto-Sivashinsky equation[16; 17], and Kolmogorov flow[18]. Srinivasan _et al._[19] developed both feedforward neural networks (FNNs) and long short-term memory (LSTM) networks to recreate the MFE model as discrete-time maps. While the FNNs were unable to reproduce the model, LSTMs were able to accurately reconstruct long-time behaviors of the full-field velocity statistics. This problem was revisited by Eivazi _et al._[20], where the reconstruction via an LSTM network was compared to predictions generated via a Koopman-based framework with nonlinear forcing. Their work demonstrated that the Koopman framework could reproduce short-time and long-time statistics as well or better than the LSTM networks. Pandey _et al._[21] introduced the use of reservoir computing in the form of an echo state network (ESN) to reproduce the MFE model as a discrete-time map, and provided comparisons to both an FNN and an LSTM network. The LSTM network and the ESN were shown to perform similarly, with both adequately capturing the full-field velocity statistics, while again the FNN was shown to perform appreciably worse. Racca and Magri [22] specifically examined the ability of an ESN to forecast the occurrence of an extreme event within a future time window. They determined that their data-driven model could accurately forecast extreme event episodes far into the future without incorrectly predicting false quasi-laminarization events. Pershin _et al._[23] assessed the ability of an ESN to forecast time until full laminarization. They showed that their model could adequately reproduce the lifetime distribution of the MFE data, correctly predicting the probability of an arbitrary MFE time series remaining in the turbulent state some time in the future. These studies only successfully modeled the MFE equations through the use of non-Markovian models, which forecast the future state through input of the current and past states. As the MFE model is itself Markovian, we will instead endeavor to model the MFE data with a Markovian dynamical system.
Specifically, we will use a recently developed method that will be denoted here as _Charts and Atlases for Nonlinear Data-Driven Dynamics on Manifolds_ (CANDyMan) [24; 25]. CANDyMan operates by decomposing the data distribution in state space into separate regions called charts with a clustering algorithm, learning local dynamical models in each chart using FNNs, then stitching together the charts to create a single atlas containing the global dynamical model. This technique has been previously applied to dimensional reduction problems, accurately learning reduced order dynamical models whose dimension is equal to the intrinsic dimensionality of the system. The use of multiple charts allows low-dimensional manifolds embedded in high dimensional space to be broken down into locally low dimensional structures, capturing the dynamics of a system with the minimal number of dimensions, in a way that single chart methods cannot. Here, we do not perform dimension reduction, but rather utilize the clustering of data to break down the dynamical system into separate regions representing extreme and non-extreme states. By learning the dynamics in the extreme region separately and independently from the non-extreme regions, CANDyMan inherently overcomes the imbalance of extreme vs non-extreme information and thus the limited influence of extreme events in data driven model training.
Here, we will use CANDyMan to reconstruct the dynamics of the MFE model. A data set containing time series of the MFE amplitudes will be decomposed using \(k\)-means clustering into atlases containing between one and six charts. We will train deep neural networks to reconstruct the time evolution of the MFE amplitudes within each of the charts, then stitch them together to create six global models. To assess the accuracy of the models, we will first consider their ability to reconstruct the turbulent flow field. Next, we will analyze their performance in reproducing short-time and long-time statistics. Finally, we will assess the extreme event forecasting of the data-driven models by determining the statistical accuracy of forecasting extreme event occurrences and comparing predicted laminarization lifetime distribution to the true data.
## II Formulation
The MFE model is a severely truncated Fourier Galerkin approximation to the Navier-Stokes equations (NSE) for flow between two free-slip walls and driven by a spatially sinusoidal body force. The flow is composed of nine spatial Fourier modes \(\mathbf{u}_{i}(\mathbf{x})\), describing the basic profile, streaks, and vortices, as well as interactions between them. The velocity field at position \(\mathbf{x}\) and time \(t\) is given by a superposition of the nine modes as \(\mathbf{u}(\mathbf{x},t)=\sum\limits_{i=1}^{9}a_{i}(t)\mathbf{u}_{i}(\mathbf{x})\). The mode amplitudes \(a_{i}(t)\) satisfy a system of nine ordinary differential equations (ODEs), generated through Galerkin projection, whose explicit form is given in Moehlis _et al._[10]. Our study considers a domain of size \(L_{x}\times L_{y}\times L_{z}\), with infinite, parallel walls at \(y=-L_{y}/2\) and \(y=L_{y}/2\) and periodic boundaries \(x=0\), \(x=L_{x}\), \(z=0\), and \(z=L_{z}\); \(x\), \(y\), and \(z\) are the streamwise, wall-normal, and spanwise coordinates, respectively. The domain size of \(L_{x}=4\pi\), \(L_{y}=2\), \(L_{z}=2\pi\) was used, with a channel Reynolds number of 400; these parameters produce turbulent behavior of suitable length for data-driven model development [19].
As training data, we generated 100 unique time series from a fourth-order Runge-Kutta integration of the MFE equation. Each time series encompasses the transient turbulent state, consisting of turbulent intervals interspersed with quasi-laminarization events, with terminal laminarization occurring at long time. We will
often characterize the flow using the _total_ kinetic energy (KE), given by \(KE=\frac{1}{2}\sum_{i=1}^{9}a_{i}^{2}\). Therefore, the turbulent state is _low_ energy while the laminar is _high_ energy. Every time series collapses to the known laminar fixed point \(a_{i}=\delta_{i1}\). To generate the time series, initial conditions of eight of the amplitudes were given as follows: \((a_{1},a_{2},a_{3},a_{5},a_{6},a_{7},a_{8},a_{9})=(1,0.07066,-0.07076,0,0,0,0,0)\). The initial value of \(a_{4}\) was randomly generated in the range \([-0.1,0.1]\). These initial conditions were previously demonstrated to generate chaotic dynamical data with quasi-laminarization events [19]. Amplitudes and \(KE\) from a randomly chosen time series are shown in Fig. 1. We will report all results in units \(\tilde{t}=t/\tau_{L}\), where \(\tau_{L}\) is the Lyapunov time for the system; in the original nondimensionalization \(\tau_{L}\approx 41\)[26].
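The data-generation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a user-supplied callable `mfe_rhs(t, a)` implementing the nine MFE ODEs (their explicit form is given in Moehlis et al. [10] and is not reproduced here), uses SciPy's adaptive RK45 integrator in place of the fourth-order Runge-Kutta scheme mentioned in the text, and samples the trajectory every 0.5 time units to match the discrete time step used later.

```python
import numpy as np
from scipy.integrate import solve_ivp

def initial_condition(rng):
    # (a1, a2, a3, a5, ..., a9) fixed as in the text; a4 drawn uniformly from [-0.1, 0.1].
    a0 = np.array([1.0, 0.07066, -0.07076, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    a0[3] = rng.uniform(-0.1, 0.1)
    return a0

def generate_series(mfe_rhs, t_end, dt=0.5, seed=0):
    """Integrate da/dt = mfe_rhs(t, a) from a randomized initial condition and
    sample the nine amplitudes every dt time units."""
    rng = np.random.default_rng(seed)
    t_eval = np.arange(0.0, t_end + dt, dt)
    sol = solve_ivp(mfe_rhs, (0.0, t_end), initial_condition(rng),
                    t_eval=t_eval, method="RK45", rtol=1e-8, atol=1e-10)
    a = sol.y.T                          # shape (n_times, 9)
    ke = 0.5 * np.sum(a ** 2, axis=1)    # total kinetic energy, KE = (1/2) * sum_i a_i^2
    return sol.t, a, ke
```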
In this study, we examined the behavior of multi-chart models with between two and six charts, as well as a standard approach with one global model - the "one-chart" limit of CANDyMan. The dynamical system data is first clustered into \(k\) charts via \(k\)-means clustering, which partitions a data set into \(k\) clusters, minimizing the within-cluster variance [27; 28]. Other clustering techniques, such as \(k\)-nearest neighbors [29] or single-linkage clustering [30], could be used, provided the clustering technique produces charts that encompass contiguous regions of the state space. The clusters are then expanded so that they overlap, by locating the \(k_{NN}\) nearest neighbors to each data point in a cluster by Euclidean distance and adding these to the original cluster. This creates an overlap region between neighboring clusters, providing transition regions in which the dynamics are described by multiple charts and allowing for the movement into and out of the region to be handled by the separate local models.
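A minimal sketch of this charting step, using scikit-learn for both the \(k\)-means partition and the nearest-neighbour expansion, is given below; the value of \(k_{NN}\) is an assumed parameter, not one quoted in the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def build_charts(snapshots, k=3, k_nn=20, seed=0):
    """Partition state-space snapshots (n_samples x 9) into k charts via k-means,
    then expand each chart with the k_nn nearest neighbours of its members so that
    neighbouring charts overlap."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(snapshots)
    nn = NearestNeighbors(n_neighbors=k_nn + 1).fit(snapshots)
    _, neigh = nn.kneighbors(snapshots)      # each row: the point itself plus its k_nn neighbours
    charts = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        expanded = np.unique(neigh[members].ravel())   # indices of the augmented (overlapping) chart
        charts.append(expanded)
    return labels, charts
```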
Then, in each augmented chart, we generated discrete-time models of the form \(a^{(j)}(t+\tau)=F^{(j)}(a^{(j)}(t);\theta^{(j)})\), where \(a^{(j)}(t)\in\mathbb{R}^{9}\) is the representation of the state in chart \(j\), the discrete time step is \(\tau=0.5\), and \(F^{(j)}\) is the corresponding discrete-time map, which takes the form of an FNN. The quantities \(\theta^{(j)}\) are the neural network weights for \(F^{(j)}\), which are learned from the data using a standard stochastic gradient descent method and trained to minimize the loss function \(L^{(j)}=\langle||a^{(j)}(t)-\tilde{a}^{(j)}(t)||_{2}\rangle\), where \(\langle\cdot\rangle\) is the average over the training data. To ensure that the comparison between different numbers of charts was standardized, each global model contains the same number of total neurons, \(N_{T}=1800\); a system of \(k\) charts then uses \(N_{N}=N_{T}/k\) neurons in each local model, with four fully-connected hidden layers containing \(N_{N}/6\), \(N_{N}/3\), \(N_{N}/3\), and \(N_{N}/6\) neurons, respectively. Each neural network was trained using a learning rate scheduler with an initial learning rate of 0.01, decaying at a rate of 0.9 every 2000 steps. Each model was then trained for 100 epochs, which was found to accurately reproduce the training data while avoiding overfitting.
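The per-chart model described above could be set up as in the following PyTorch sketch. The layer widths, total neuron budget \(N_{T}=1800\), SGD optimiser, initial learning rate of 0.01, decay of 0.9 every 2000 steps, mean-\(L_{2}\) loss, and 100 epochs follow the text; the ReLU activation and the batch size are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_chart_model(n_total=1800, k=3, dim=9):
    """FNN for one chart: hidden widths N_N/6, N_N/3, N_N/3, N_N/6 with N_N = N_T / k."""
    n_local = n_total // k
    widths = [n_local // 6, n_local // 3, n_local // 3, n_local // 6]
    layers, d_in = [], dim
    for w in widths:
        layers += [nn.Linear(d_in, w), nn.ReLU()]   # ReLU activation is an assumption
        d_in = w
    layers.append(nn.Linear(d_in, dim))
    return nn.Sequential(*layers)

def train_chart_model(model, a_now, a_next, epochs=100, batch=256):
    """a_now, a_next: float tensors of chart states separated by tau = 0.5."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2000, gamma=0.9)
    loader = DataLoader(TensorDataset(a_now, a_next), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = torch.linalg.norm(model(x) - y, dim=-1).mean()   # mean L2 loss over the batch
            loss.backward()
            opt.step()
            sched.step()
    return model
```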
## III Results and Discussion
### Distribution of data into clusters
Insight into the number of charts necessary for properly reconstructing the MFE data can be gained by observing the clustering of the training data set. Fig. 2 shows how one trajectory from the data set is partitioned when we use different numbers of charts, in terms of (a-f) the time series of \(KE\) and (g-l) state space projections onto amplitudes \(a_{1},a_{6},a_{9}\). With two charts, the data partitions into one cluster covering the low-energy (turbulent) non-extreme states and the second containing the high-energy extreme (quasi-laminar, laminarizing) states. When three charts are used, the clusters are further segmented, with one covering the low-energy turbulent state, the second primarily consisting of the transition into quasi-laminarization and laminarization events, and the third consisting mainly of the high-energy components of these events. Clustering into four charts breaks down the low-energy region into two separate clusters that remain relatively distinct. When the data was clustered into five or six charts, the distinction between the charts in the low-energy turbulent regime decreased and the charts containing the turbulent states were described by increasingly similar centroids.
### Trajectory predictions and time-averaged statistics
The performance of the data-driven models was evaluated on their ability to reconstruct the evolution of the MFE model amplitudes. Two test data sets were generated for comparison between the MFE model and the single- and multi-chart data-driven models. For trajectory predictions, 100 trajectories of MFE amplitudes were generated from randomized initial conditions and time-integrated for 10 Lyapunov times, with the initial conditions separately evolved forward using the generated data-driven models for the same length of time; this will henceforth be denoted as data set A. The purpose of this data set is to determine the short-time precision of the predictions generated by the single- and multi-chart models, regardless of any observed or predicted laminarization.
Figure 1: Evolution of three amplitudes, \(a_{1},a_{6},a_{9}\) and corresponding kinetic energy from one time series of the MFE data set.
For time-averaged statistics, 100 trajectories of MFE amplitudes were generated from randomized initial conditions and time-integrated for 100 Lyapunov times or until a laminarization event occurred, with the initial conditions separately evolved forward using the generated data-driven models and the same ending criteria; this will henceforth be denoted as data set B. The purpose of this data set is to assess the accuracy of the predicted long-time turbulent state statistics, and as such removes any observed or predicted laminarization.
The data-driven models are first evaluated on their ability to reconstruct the velocity statistics of the turbulent regime, the essential function of the MFE model. Using data set B, we project the amplitudes on to the spatial Fourier modes of the MFE model and compare the accuracy of the predicted velocity statistics in the turbulent state to the exact solution. The mean streamwise velocity and Reynolds shear stress were calculated for each data set, shown in Fig. 3. As the figure shows, the single-chart model captures well the form of the velocity statistics, but fails to accurately capture the exact values. The three-chart model creates much better predictions, correctly capturing the flow profile.
Now we turn to the prediction of trajectories. To quantify the performance of the trajectory predictions, we analyzed the data-driven models' ability to accurately forecast the evolution of MFE amplitudes. Using data set A, the error in the predictions, \(E(t)\), was calculated for each time series, averaged, and normalized, such that \(E(t)=\frac{\langle||a(t)-\tilde{a}(t)||_{2}\rangle}{D}\), where \(\tilde{a}(t)\) is the model prediction and \(D\) is the average \(L_{2}\)-norm between randomly chosen time instants in the turbulent state. Fig. 4 shows \(E(t)\) for the single- and three-chart models as a function of time. Both models create accurate predictions for \(\sim 0.5\tau_{L}\), with the error remaining close to 0. After this, the error in the predictions of the single-chart model grows much more rapidly than that of the three-chart model, indicating that the forecasting ability is much stronger in the multi-chart model.
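As a concrete reading of this error metric, the following sketch estimates \(D\) from random pairs of true snapshots and returns the ensemble-averaged, normalized error as a function of time; the number of random pairs is an arbitrary choice made for illustration.

```python
import numpy as np

def normalized_error(a_true, a_pred, n_pairs=10_000, seed=0):
    """a_true, a_pred: arrays of shape (n_traj, n_times, 9).
    Returns the ensemble-averaged error E(t), normalized by D."""
    rng = np.random.default_rng(seed)
    flat = a_true.reshape(-1, a_true.shape[-1])
    i, j = rng.integers(0, len(flat), size=(2, n_pairs))
    D = np.linalg.norm(flat[i] - flat[j], axis=1).mean()   # average distance between random snapshots
    return np.linalg.norm(a_true - a_pred, axis=-1).mean(axis=0) / D
```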
### Prediction of extreme events
Now we examine the ability of the data-driven model to correctly capture the structure of the extreme events. An extreme event can be identified by a growth in the first MFE amplitude, which represents the mean shear, with a corresponding decrease in the remaining eight amplitudes, which capture the turbulent fluctuations. In Fig. 5.a, we show the joint probability density function (PDF) of \(a_{1}\) and \(a_{3}\) for data set B. The extreme events can be seen as the long tail extending to the right toward the laminar state \(a_{1}=1,a_{3}=0\). The prediction of the single-chart model, shown in Fig. 5.b, fails to accurately capture the structure of the extreme events, with the tail almost entirely absent.
Figure 3: Mean streamwise velocity (solid line) and Reynolds shear stress (dashed line) of the full field of the testing data and of the reconstruction of the MFE model by the single- and three-chart model.
Figure 4: Ensemble averaged short-time error tracking of the reconstruction of the MFE model by the single- and three-chart model.
Figure 2: Clustering of a randomly selected trajectory of kinetic energy (a-f) and the projection of the clustering of the first, sixth, and ninth MFE amplitude (g-l) for one to six charts, color coded by cluster.
By contrast, the three-chart model, shown in Fig. 5.c, captures the structure of the extreme events well, accurately reproducing the shape of the joint probability density function.
We now examine the ability of the single- and multi-chart models to forecast an extreme event, defined by the kinetic energy of the time series increasing to \(KE>0.1\). To analyze the ability to predict quasi-laminarization events, each time series in data set A was segmented into time windows of duration \(0.5\tau_{L}\) and analyzed for the presence of an extreme event (i.e., \(KE\) exceeding \(0.1\) in the window). The exact solution and data-driven models were then compared to determine if each predicted whether an extreme event occurred. If an extreme event occurred in both the exact solution and the model predictions, this was labeled as a _true positive_ (\(TP\)). If the exact solution exhibited an extreme event, but the data-driven model failed to forecast one, this was labeled as a _false negative_ (\(FN\)). If the model predicted an extreme event when the exact solution showed none, it was identified as a _false positive_ (\(FP\)). [22] The total number of each identification type in each window was tabulated and the _F-score_, \(F\), was calculated in each window, where \(F=(1+\frac{FP+FN}{2TP})^{-1}\).
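A possible implementation of this window-by-window scoring is sketched below, assuming kinetic-energy arrays sampled on a common time grid; windows containing no observed or predicted event are reported as NaN, a convention chosen here for illustration.

```python
import numpy as np

def extreme_event_f_scores(ke_true, ke_pred, window, threshold=0.1):
    """ke_true, ke_pred: kinetic-energy arrays of shape (n_traj, n_times);
    window: number of samples per forecasting window (0.5 tau_L in the text)."""
    n_win = ke_true.shape[1] // window
    scores = []
    for w in range(n_win):
        sl = slice(w * window, (w + 1) * window)
        true_ev = (ke_true[:, sl] > threshold).any(axis=1)
        pred_ev = (ke_pred[:, sl] > threshold).any(axis=1)
        tp = np.sum(true_ev & pred_ev)
        fp = np.sum(~true_ev & pred_ev)
        fn = np.sum(true_ev & ~pred_ev)
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else np.nan)   # F = (1 + (FP+FN)/(2TP))^(-1)
    return np.array(scores)
```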
Fig. 6 shows the F-score as a function of prediction time for the single- and multi-chart models, as well as a comparison to results from Racca and Magri [22] using an echo state network. The multi-chart model outperforms the single-chart model, more accurately forecasting extreme events at all prediction times. The multi-chart model performs particularly well, with correct extreme event predictions \(1.5~{}\tau_{L}\) out. Our data-driven model compares favorably to the (non-Markovian) echo state network developed by Racca and Magri [22], matching its predictive capabilities at all prediction times.
Finally, we determine the ability of the data-driven models to forecast the lifetime of the turbulence before permanent laminarization. At long times, all time series generated by the MFE model at the given parameters collapse to the laminar fixed point; the lifetime of each time series is dependent on the initial condition, with the probability of remaining in the turbulent state approaching zero at long times. At \(Re\lesssim 320\), the probability that a given time series remains in the turbulent state for a duration \(t\), known as the survival function \(S(t)\), takes the exponential form \(S(t)=\exp\left(-(t-t_{0})/\tau_{S}(Re)\right)\) [10; 23], where \(t_{0}\) is the time delay caused by the approach to the attractor and \(1/\tau_{S}(Re)\) is the \(Re\)-dependent decay rate. At \(Re\gtrsim 320\), the distribution, particularly at long lifetimes, is known to deviate from an exponential decay, requiring increased time to laminarize.
Here, we define a laminarization event as a high-energy state (\(KE>0.1\)) for which the kinetic energy over \(1~{}\tau_{L}\) levels off. The survival function, \(S(t)\), is shown in Fig. 7 for the full system and the one- and three-chart models. The full system has a mean lifetime of \(251~{}\tau_{L}\). The one-chart model produces poor predictions of the lifetime distribution, vastly underestimating the lifetimes of the turbulent state, with a mean lifetime of \(29\tau_{L}\). The three-chart model produces a much more accurate representation of the lifetime distribution. The predicted distribution closely matches the exact solution for \(\tilde{t}\) up to about \(250\), while overestimating the lifetimes at longer times, and predicts an average lifetime of \(298~{}\tau_{L}\), overestimating the true result by less than \(20\%\). It should be emphasized that we are measuring time here in units of Lyapunov time, so the inaccuracy of \(S(t)\) in the three-chart model only arises at extremely long times.
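The lifetime statistics could be extracted as in the sketch below. The "levels off" criterion is implemented here as the kinetic energy staying within an assumed tolerance over one Lyapunov time; the tolerance value is illustrative, not taken from the text.

```python
import numpy as np

def laminarization_time(ke, dt, tau_L=41.0, threshold=0.1, tol=1e-3):
    """First time at which KE exceeds `threshold` and then stays nearly constant
    (spread below the assumed tolerance `tol`) over one Lyapunov time."""
    n_win = int(round(tau_L / dt))
    for i in range(len(ke) - n_win):
        seg = ke[i:i + n_win]
        if seg[0] > threshold and (seg.max() - seg.min()) < tol:
            return i * dt
    return np.inf   # no laminarization observed within this series

def survival_function(lifetimes, t_grid):
    """Empirical probability S(t) of remaining turbulent beyond each time in t_grid."""
    lifetimes = np.asarray(lifetimes, dtype=float)
    return np.array([(lifetimes > t).mean() for t in t_grid])
```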
## IV Conclusion
In this paper, we have applied the CANDyMan [24] technique towards data-driven modeling of a dynamical system with extreme events: the MFE model [10] for turbulent shear flow. We have shown that clustering data sets and training multiple local data-driven models allows unique features of distinct data regimes (e.g. extreme events) to be separately and more accurately captured by a multi-chart global model than in a conventional data-driven model.
Figure 5: (a) Joint probability density function of \(a_{1}\) and \(a_{3}\); (b) and (c), predictions of the MFE model by the single- and multi-chart model, respectively. Note the logarithmic scale.
Figure 6: F-score for extreme-event forecasting as a function of prediction time for the single- and multi-chart models, compared with the echo state network results of Racca and Magri [22].
Thus, multi-chart models were able to more accurately reproduce the evolution of this system, reducing forecasting error and improving reconstruction of the structure and frequency of extreme events. Importantly, multi-chart models dramatically improved predictions of extreme event occurrences compared to the single-chart models used previously. Finally, we demonstrated the ability of multi-chart models to accurately reconstruct the lifetime distribution of turbulent states, producing accurate results hundreds of Lyapunov times in the future.
Now that we have seen that CANDyMan has improved the performance of data-driven models forecasting a low-dimensional dynamical system with extreme events, future investigations should determine its applicability to higher-dimensional systems. As has been previously shown, the use of a charting technique such as CANDyMan allows improved dimension reduction through the use of autoencoder neural networks, capturing the intrinsic dimensionality of dynamical systems [24]. For high-dimensional dynamical systems with intermittency, such as turbulent fluid flows, the application of CANDyMan could not only aid in improved dimension reduction, but also produce more accurate forecasting than conventional single-chart techniques.
###### Acknowledgements.
This work was supported by ONR N00014-18-1-2865 (Vannevar Bush Faculty Fellowship). We gratefully acknowledge Daniel Floryan for helpful discussions.
|
2302.08819 | SPX, VIX and scale-invariant LSV\footnote{Local Stochastic Volatility} | Local Stochastic Volatility (LSV) models have been used for pricing and
hedging derivatives positions for over twenty years. An enormous body of
literature covers analytical and numerical techniques for calibrating the model
to market data. However, the literature misses a potent approach commonly used
in physics and works with absolute (dimensional) variables rather than with
relative (non-dimensional) ones. While model parameters defined in absolute
terms are counter-intuitive for trading desks and tend to be heavily
time-dependent, relative parameters are intuitive and stable, making it easy to
steer the model adequately and consistently with its Profit and Loss (PnL)
explanation power. We propose a specification that first explores historical
data and uses physically well-defined relative quantities to design the model.
We then develop an efficient hybrid method to price derivatives under this
specification. We also show how our method can be used for robust scenario
generation purposes - an important risk management task vital for buy-side
firms.\footnote{The authors would like to thank Prof. Marcos Lopez de Prado and
Dr. Vincent Davy Zoonekynd for valuable comments.} | Alexander Lipton, Adil Reghai | 2023-02-17T11:33:33Z | http://arxiv.org/abs/2302.08819v1 | # Spx, Vix and scale-invariant Lsv+
###### Abstract
Local Stochastic Volatility (LSV) models have been used for pricing and hedging derivatives positions for over twenty years. An enormous body of literature covers analytical and numerical techniques for calibrating the model to market data. However, the literature misses a potent approach commonly used in physics and works with absolute (dimensional) variables rather than with relative (non-dimensional) ones. While model parameters defined in absolute terms are counter-intuitive for trading desks and tend to be heavily time-dependent, relative parameters are intuitive and stable, making it easy to steer the model adequately and consistently with its Profit and Loss (PnL) explanation power. We propose a specification that first explores historical data and uses physically well-defined relative quantities to design the model. We then develop an efficient hybrid method to price derivatives under this specification. We also show how our method can be used for robust scenario generation purposes - an important risk management task vital for buy-side firms.1
Footnote 1: The authors would like to thank Prof. Marcos Lopez de Prado and Dr. Vincent Davy Zoonekynd for valuable comments.
## 1 Introduction
Modern financial engineering started with the Black & Scholes model; see Black and Scholes (1973). This model has created a consensus for pricing the simplest derivative options, vanilla ones. Black & Scholes implied volatility is a shared concept that any market practitioner can infer from market prices. It is the solid pillar behind the volatility smile description.
However, the Black & Scholes model has severe limitations when pricing advanced derivatives, especially those with changing second-order exposures; see, e.g., Reghai (2015) when it comes to the gamma exposures. Following the first principles of derivative valuation, the explanation of PnL or replication is the primary judge of the quality of a model; see. e.g., Lipton (2002). Thus, practitioners needed a model that could consistently price those derivatives with the volatility smile. Bick and Reisman (1993), Dupire (1994), and Derman _et
al._ (1996) developed such a model through the introduction of local volatility. This model solved the pricing issue in the presence of the smile, particularly the changing gamma exposure over time and spot value. However, despite its wide usage by the industry, the local volatility model only included the static cost of hedging the smile. In theory, the model accounts only for the introduction of the different vanillas at the inception of the trade. In practice, traders, risk managers, and quants all realized quickly that during the life of the trade, an additional systematic cost arises when the trader readjusts her position. Nevertheless, the readjustment costs are not adequately considered under the local volatility model. Therefore, the industry needed models consistent with the static and dynamic cost of hedging.
The solution for this problem was introduced long ago and labeled the Local Stochastic Volatility (LSV) model of Jex _et al._ (1999) and Lipton (2002). In theory, the model was perfectly adequate. However, in practice, both the local volatility and the LSV had specifications pitfalls that generated recurrent problems when applied on the industrial scale. In particular, they were based on absolute levels of the spot. Some practitioners partially tackled the problem by normalizing with the forward values to circumvent this issue. However, little information was documented, resulting in arbitrary choices and significant differences from one bank to the other. Accordingly, although prices were marginally impacted, the resulting hedges were not necessarily correct.
In Hobson and Rogers (1998), an exciting class of path-dependent local volatility models was proposed. A specific version was examined with the volatility depending on the difference between the current price and an exponential average of past prices. This property is appealing to traders as it reflects the perception that large movements of the asset price in the past tend to forecast higher future volatility. This model exhibits a wide variety of smiles and skews. Lipton emphasized that all meaningful models have to be scale-invariant (dimensionless); see Lipton (1999, 2001, 2017). Lipton recommended building physical models depending on functions of dimensionless arguments of the time and spot parameters \(t\) and \(S_{t}\) and introduced the general form of such a model based on a kernel. In Guyon (2014); Guyon and Lekeufack (2022), the same observation was made regarding local volatility setup, showing how to capture prominent historical volatility patterns using path-dependent local volatility.
Hagan _et al._ (2002) present an analytically tractable formula that is scale-invariant and has both local and stochastic volatility. However, the local volatility part, being parametric and rigid, does not permit fitting the whole volatility surface.
We need to keep several typical trading conundrums in mind to design a proper approach to the problem at hand.
* Traders know very well that path dependency strongly impacts the volatility dynamic. It is an essential factor in their PnL explanation for the cost of hedging. Consider the Spx index. The implied volatility surface when the spot level is at 4000 depends on its previous levels. On the one hand, if the spot went down before, then the implied volatility surface would be
higher, reflecting an elevated level of risk. On the other hand, if the spot comes from lower levels, then the implied volatility is certainly lower as the market is pricing a momentum movement.
* Traders think in relative terms, and to be consistent with the industry, the volatility modeling must rely on a dimensionless approach.
* Traders operate on multiple time scales. They infer the cost of future volatility hedging through the recent evaluation of the market movement. They know that different players (institutional investors, asset managers, day traders, and statistical arbitrageurs) all impact market volatility.
* This set of models is vital for correctly determining the volatility dynamic, which is essential for hedging vanilla options and pricing path dependant options such as barrier or autocall options.
Our paper aims to provide an in-depth analysis of the most traded assets, SPX and VIX. Section 2 presents the SDE governing the LSV model. Section 3 explores data and presents the critical link between SPX and VIX. It shows that the scale-invariant specification gives excellent results both in-sample and out-of-sample. Section 4 presents different techniques for pricing derivatives, some of which are highly efficient and useful for production purposes. In particular, the impact on pricing classical path-dependent options is reasonably high. Section 5 briefly deals with scenario generation and involves creating hypothetical scenarios to simulate different realistic market conditions. Finally, Section 6 concludes.
The paper provides a comprehensive understanding of the financial market and is valuable for financial analysts, traders, and risk managers.
## 2 The process
We are interested by the process of the form:
\[\frac{dS_{t}}{S_{t}} = (r-q)dt+\sigma_{loc}e^{Y_{t}}(\rho dW_{t}+\sqrt{1-\rho^{2}}dB_{t}), \tag{1}\] \[dY_{t} = -\kappa Y_{t}dt+\nu dW_{t}, \tag{2}\]
where \(r,q\) are the risk neutral rate and dividend yield,
\[\sigma_{loc}=f\left(\frac{t}{\bar{t}},\frac{S_{t}}{\int_{-\infty}^{t}\phi(t, u)S_{u}du}\right), \tag{3}\]
where \(\bar{t}\) is a representative timescale, and \(\phi\) is a suitable averaging kernel, for example
\[\phi(t,u)=\kappa e^{-\kappa(t-u)}. \tag{4}\]
The scale invariant index \(I_{t}=\frac{S_{t}}{\int_{-\infty}^{t}\phi(t,u)S_{u}du}\) satisfies the following stochastic differential equation (SDE):
\[\frac{dI_{t}}{I_{t}} = ((r-q)-\phi(t,t)I_{t})dt+\sigma_{loc}e^{Y_{t}}(\rho dW_{t}+\sqrt{1- \rho^{2}}dB_{t}). \tag{5}\]
The natural scale for this index is unity, and a deviation from it can be seen as a shock or a surprise and therefore generates a different value for the risk. Risk, in our case, is a forward-looking measure that the VIX represents.
## 3 Data exploration
In this section, we will explore historical data. We shall observe the historical paths for the SPX and VIX and qualitatively determine the type of links between these two quantities. In particular, we shall see that describing the VIX as a function of the absolute level of the SPX is not straightforward. Also, using a moving average kernel, we observe that this creates more data structure. Finally, we shall seek an optimal kernel designed to maximize the fit of the VIX as a function of the dimensionless quantity of the path.
### VIX as a function of the spot
We consider a long period of time starting from the beginning of the 1990 till the end of 2022.
In Figures 1, 2, we can observe that the VIX as a function of the absolute spot level shows little structure as expected because traders recognize many skewed regimes associated with different periods. Each period corresponds to some characteristic level of spot. In Figure 3, we observe different local behaviour of the volatility relatively to the absolute level of the SPX. In particular, one can
Figure 1: VIX and SPX as functions of time. Own graphics. Source : Factset.
Figure 3: Local Fit per period of 5 years. Own graphics.
Figure 2: Global Fit. Own graphics.
see that for a level of around 3500 there is hysteresis, i.e. the volatility is either increasing or decreasing depending on the period under scrutiny. The levels of R2 scores are shown in Table 1.
In Figure 4, we try different classical moving averages that are classical in the trading industry. Each moving average is a particular, in equation (6), a kernel with equal weights for each spot within the averaging window is shown. More precisely, for a lag of \(n\) days, the kernel is given by:
\[\phi(t,u)=\frac{1}{n}1_{t-u\leq n}. \tag{6}\]
We examine different lags based on trading rationals that are exposed below.
* 50 days: The 50-day moving average is a reliable technical indicator used by several investors to analyze price trends. It is a security's average closing price over the previous 50 days. The 50-day moving average is popular because it is a realistic and effective trend indicator in the stock market.
* 100 days: A moving average of 100 days helps investors see how the stock has performed over 20 weeks and find the price trend if it is upward or downward, which gives them a sense of the market sentiment as well.
* 200 days: The 200-day moving average is perceived as the dividing line between a technically healthy stock and one that is not. Furthermore, the percentage of stocks above their 200-day moving average helps determine the market's overall health.
* 250 days: The 250 period moving average is popular on the daily chart since it describes one year of the price action (one year has roughly 250 trading days).
Note that in Table 2 that the constant coefficient is not loaded, which is natural since we are regressing two non-dimensional quantities. Also, note that the ratio between the slope coefficient is negative for all time scales, as expected. Finally, the ratio of the slope coefficient and the curvature one are all of the same levels, i.e., the level of the average VIX value. For the last experiment,
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Period & Intercept & Slope & R2 Score \\ \hline
[1990-1995] & 30.4 & -0.0352 & 42.78 \\ \hline
[1995-2000] & 9.4 & 0.0115 & 38.34 \\ \hline
[2000-2005] & 36.06 & -0.013 & 10.34 \\ \hline
[2005-2010] & 71.14 & -0.0407 & 44.02 \\ \hline
[2010-2015] & 33.85 & -0.0099 & 32.67 \\ \hline
[2015-2020] & 8.44 & 0.0035 & 3.73 \\ \hline
[2020- ] & 62.06 & -0.0097 & 37.27 \\ \hline \end{tabular}
\end{table}
Table 1: Score per period of 5 years
For the last experiment, see equations (7), (8), and (10); we perform an optimization described as follows. We choose a length \(n\) for the kernel. We assume that it is a stationary kernel with positive weights.
\[\min_{\phi_{1},...,\phi_{n},a,b,c}\frac{1}{T-n+1}\sum_{t=n}^{T}(VIX_{t}-(a+bI+cI^ {2}))^{2} \tag{7}\]
where
\[I=\frac{S_{t}}{\phi_{1}S_{t-n}+...+\phi_{n}S_{t-1}}, \tag{8}\]
subject to:
\[\phi_{1}+...+\phi_{n} = 1 \tag{9}\] \[\phi_{i} \geq 0\quad\forall i \tag{10}\]
Figure 4: Fit using natural trading scales. Own graphics.
We use a classic reparametrization of the weights, as in equation (11), to perform a non-constrained optimization. More precisely, we search for \(\psi_{i}\) such that:
\[\phi_{i}=\frac{e^{\psi_{i}}}{\sum_{j}e^{\psi_{j}}}, \tag{11}\]
Now the \(\psi_{i}\) are free from constraints.
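One way to carry out the optimization of equations (7)-(11) is sketched below. For each candidate weight vector, the quadratic coefficients \((a,b,c)\) are profiled out by ordinary least squares, and the softmax-parametrised weights are optimised with L-BFGS; the choice of optimiser and the profiling step are implementation assumptions, since the text does not specify them.

```python
import numpy as np
from scipy.optimize import minimize

def fit_kernel(spx, vix, n):
    """Optimise softmax-parametrised kernel weights (eq. 11); for each candidate weight
    vector the quadratic coefficients (a, b, c) are obtained by ordinary least squares."""
    spx, vix = np.asarray(spx, float), np.asarray(vix, float)
    T = len(spx)
    windows = np.stack([spx[t - n:t] for t in range(n, T)])   # rows: (S_{t-n}, ..., S_{t-1})
    target = vix[n:]

    def loss(psi):
        phi = np.exp(psi - psi.max())
        phi /= phi.sum()                                      # positive weights summing to one
        I = spx[n:] / (windows @ phi)
        X = np.column_stack([np.ones_like(I), I, I ** 2])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.mean((target - X @ coef) ** 2)

    res = minimize(loss, np.zeros(n), method="L-BFGS-B")      # psi = 0 starts from equal weights
    phi = np.exp(res.x - res.x.max())
    return phi / phi.sum(), res.fun
```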
### In-sample results
Below we perform an in-sample study where we identify the functional form for the period [1990-2010].
Table 3 demonstrates that the \(R^{2}\) score has improved significantly, reaching up to 88%. We shall measure the fit quality for the out-of-sample period in the next section.
### Out-of-sample results
This section measures the score of the previously calibrated models for the out-of-sample period [2010-2022]. The corresponding results are shown in Table 4.
### Optimised weights
We optimize weights for four representative periods - 50, 100, 200, and 250 days. The corresponding results are presented in Figure 5. Optimal weights do not have a clear structure for shorter periods, while for longer periods, they are well-formed.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Method & a & b & c & score \\ \hline
50 days & 0 & -1966.13 & 947.69 & 44.1 \\ \hline
100 days & 0 & -1302. & 618. & 55.29 \\ \hline
200 days & 0 & -676 & 311. & 56.24 \\ \hline
250 days & 0 & -548. & 250. & 54.83 \\ \hline \end{tabular}
\end{table}
Table 2: \(Vix(t)=a+bx+cx^{2}\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Method & a & b & c & R2 score \\ \hline Optimised 50 days & 72.12 & 3.96 & -56.32 & 82.77 \\ \hline Optimised 100 days & 92.25 & 2.49 & -74.2 & 86.73 \\ \hline Optimised 200 days & 66.18 & 4.25 & -49.17 & 88.07 \\ \hline Optimised 250 days & 75.02 & 3.47 & -57.13 & 88.55 \\ \hline \end{tabular}
\end{table}
Table 3: \(Vix(t)=a+bx+cx^{2}\)
Figure 6 shows that the optimally identified kernel reduces the cloud width compared to the moving average previously calculated. Remarkably, the obtained kernel resembles a power law in the form of a fast-decreasing shape as described by Bergomi (2016); see Figure 7.
The corresponding indices \(I_{t}^{200}\) and \(I_{t}^{250}\) are shown in Figure 8.
### Innovations
The relationship between VIX and SPX is complex and dynamic, and no one-size-fits-all approach works in all market conditions. However, the non-dimensional approach provides a good proxy for such a relationship. Besides, it gives a causal relationship between levels of VIX and surprises or shocks, expressed as the distance between the current SPX spot and its weighted average using the previously identified kernel. In Figure 9, we plot the following ratio:
Figure 6: Optimised fit. Own graphics.
Figure 7: Functional form of the weights. 200 days average: score-weight 0.9948, parameters: [0 -0.38 -0.08]; 250 days average: score-weight 0.9839, parameters: [0 0.82 -0.23]. Own graphics.
\[y_{t}=\ln(\frac{VIX_{t}}{f(\frac{S_{t}}{\phi_{1}S_{t-n}+\ldots+\phi_{n}S_{t-1}})}). \tag{12}\]
which represents stochastic innovations unexplained by the local volatility contributions.
We estimate the volatility process by fitting an autoregressive AR(1) model to the corresponding \(y_{t}\). The fit on \(y_{t}\) is excellent; see Figure 10. It has an \(R^{2}\) score of \(96.75\%\).
Thus, the discrete dynamics of the innovations can be written as follows:
\[y_{t+1}=0.9822y_{t}+0.0026\epsilon_{t+1}, \tag{13}\]
where \(\epsilon_{t}\sim\mathcal{N}(0,1)\).
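The AR(1) coefficients can be estimated, for example, by least squares on the lagged series; the regression through the origin below is an assumption consistent with the reported form of equation (13), not necessarily the authors' estimation procedure.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares fit of y_{t+1} ~ rho * y_t + sigma * eps_{t+1} (no intercept)."""
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    rho = np.dot(x, z) / np.dot(x, x)     # OLS slope through the origin
    resid = z - rho * x
    return rho, resid.std(ddof=1)         # (rho, sigma)
```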
Figure 8: Out Sample : Indices \(I_{t}^{200}\) and \(I_{t}^{250}\). Own graphics.
Figure 9: Innovations. Own graphics.
## 4 Derivatives pricing
### Background
We combine one-dimensional Monte Carlo simulation and the quantization technique to design an efficient method for pricing derivative options on assets in the LSV model. Our approach is similar to the approach proposed in Lipton and Sepp (2022) with a few modifications. First, we condition the dynamics on the realization of the \(Y\) process. The idea is the same; however, we rely on the functional quantization technique because the conditional price of a derivative is smooth enough with respect to the volatility path that it is not necessary to use Monte Carlo on \(Y\). In practice, only three quantizers are enough to capture the conditional convexity of the products with respect to this variable. Second, as described in Lipton and Sepp (2022), we price derivatives conditionally on the \(Y\) process. In practice, this can be done in several ways, depending on the problem's dimensionality. For mono-asset barrier options, one can use PDEs. For multi-dimensional problems, one can use Monte Carlo. It is crucial to note that this pricing methodology is extremely fast and smooth for both mono- and multi-asset pricing. Therefore, it is a good candidate for the industrialization of pricing and risk management.
We start by illustrating the method for vanilla options and consider a call option with maturity \(T\) and strike \(K\).
We first introduce a functional quantization partition \(C_{i}(y)\) with \(Q\) quantizers, as described in Pages and Printems (2005). One can note that the Ornstein-Uhlenbeck process is treated analytically as an example in Pages and Printems (2005).
We define a quantization \(\hat{Y}^{y}\) of \(Y\) that can be used in the following quadrature formula. For a given functional \(F:L_{2}[0,T]\mapsto\mathbb{R}\), and for every \(y=(y_{1},...,y_{Q})\) in \(L_{2}[0,T]^{Q}\) we get:
Figure 10: The autoregressive fit of the innovations. Own graphics.
\[\mathbb{E}(F(\hat{Y}^{y}))=\sum_{i=1}^{Q}\mathbb{P}_{Y}(C_{i}(y))F(y_{i}). \tag{14}\]
So if one has numerical access to both the Q-quantizer \(y\) and its "companion" distribution \(\mathbb{P}_{Y}(C_{i}(y))\), the computation is straightforward.
\[\mathbb{E}(S_{T}-K)^{+} = \mathbb{E}(\mathbb{E}((S_{T}-K)^{+}|\{Y_{t},t\in[0,T]\}) \tag{15}\] \[= \sum_{i=1}^{Q}\mathbb{P}_{Y}(C_{i}(y))\mathbb{E}((S_{T}-K)^{+}|y), \tag{16}\]
The process \(S\) conditioned on the quantization \(y_{t}\), which we denote \(S^{y}\), is given by:
\[\frac{dS^{y}_{t}}{S^{y}_{t}} = \mu^{y}_{t}\sigma_{loc}dt+\nu^{y}_{t}\sigma_{loc}dB_{t}, \tag{17}\] \[\mu^{y}_{t} = \frac{\rho}{\nu}(y^{{}^{\prime}}_{t}+\kappa y_{t})e^{y_{t}},\] (18) \[\nu^{y}_{t} = \sqrt{1-\rho^{2}}e^{y_{t}}. \tag{19}\]
The Monte Carlo method can be used to simulate this model. However, the process \(S^{y}\) is no longer a martingale. Instead, it exhibits a term-structure drift and path-dependent volatility.
### Vanilla option pricing
In this section, we work on the conditional process (17). We can make the pricing using three different approaches:
* Monte Carlo,
* Partial differential equation,
* and Most likely path approach.
In the Monte Carlo approach, we discretize time as \(t_{0}=0<t_{1}<...<t_{n}=T\) and generate i.i.d. (independent and identically distributed) random numbers \(\epsilon_{i}\sim\mathcal{N}(0,1)\):
\[S^{y}_{0} = S_{0}, \tag{20}\] \[S^{y}_{i+1} = (1+(r-q)(t_{i+1}-t_{i}))S^{y}_{i}\] (21) \[+ S^{y}_{i}(\mu^{y}_{i}\sigma_{loc}(i)(t_{i+1}-t_{i})+\nu^{y}_{i} \sigma_{loc}(i)\epsilon_{i}\sqrt{t_{i+1}-t_{i}}),\] (22) \[\sigma_{loc}(i) = f(\frac{S^{y}_{i}}{e^{-\kappa t_{i}}\sum_{j=0}^{i}e^{\kappa t_{ j}}S^{y}_{j}}). \tag{23}\]
To avoid cumbersome computations, we introduce an additional variable \(m\) representing the running mean:
\[m_{i} = \sum_{j=0}^{i}e^{\kappa t_{j}}S^{y_{j}}, \tag{24}\] \[m_{0} = e^{\kappa t_{0}}S_{0},\] (25) \[m_{i+1} = m_{i}+e^{\kappa t_{i+1}}S_{i+1}. \tag{26}\]
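Putting equations (20)-(26) together, a minimal Monte Carlo sketch for the process conditioned on one quantizer path is given below; the local-volatility function `f_loc`, the time grid, and the example parameter values are placeholders, and the derivative of the quantizer path is approximated by finite differences.

```python
import numpy as np

def simulate_conditional_paths(s0, y, t, f_loc, r=0.0, q=0.0, rho=0.0, nu=1.0,
                               kappa=1.0, n_paths=10_000, seed=0):
    """Euler scheme (20)-(26) for S conditioned on one quantized volatility path y(t).
    f_loc maps the dimensionless ratio in eq. (23) to the local volatility."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    dy = np.gradient(y, t)                                    # finite-difference y'(t)
    S = np.full(n_paths, float(s0))
    m = np.exp(kappa * t[0]) * S                              # running weighted sum, eq. (25)
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        mu = (rho / nu) * (dy[i] + kappa * y[i]) * np.exp(y[i])   # eq. (18)
        vol = np.sqrt(1.0 - rho ** 2) * np.exp(y[i])              # eq. (19)
        sig = f_loc(S / (np.exp(-kappa * t[i]) * m))              # eq. (23)
        eps = rng.standard_normal(n_paths)
        S = S * (1.0 + (r - q) * dt + mu * sig * dt + vol * sig * np.sqrt(dt) * eps)
        m = m + np.exp(kappa * t[i + 1]) * S                      # eq. (26)
    return S

# Illustrative use (all inputs are placeholders): price one conditional call as
# np.maximum(simulate_conditional_paths(100.0, y_path, t_grid,
#            lambda x: 0.2 * np.ones_like(x)) - 100.0, 0.0).mean()
```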
Regarding the partial differential equation (PDE), we introduce a new variable for the weighted average \(m_{t}\):
\[dm_{t}=e^{\kappa t}S_{t}dt, \tag{27}\]
The value function \(u(t,s,m)\) for the vanilla option satisfies the following PDE:
\[\partial_{t}u+\mu_{t}^{y}s\partial_{s}u+e^{\kappa t}s\partial_{m}u+\frac{1}{2}(\frac{s}{m})^{2}\partial_{ss}u=0, \tag{28}\]
\[u(T,s,m)=(s-K)^{+}. \tag{29}\]
The most likely path, cf. Reghai (2012), is also an efficient technique to compute the prices and is well adapted to quantizing the volatility. Figure 11 demonstrates the corresponding results.
Figure 11: Implied volatilities calculated by several complementary methods. Own graphics.
### Path-dependent option pricing
We now have several models calibrated on the same smile, and we shall price different derivative products that depend on the dynamic of the smile; see Monciaud and Reghai (2021) for further details. Indeed, an emblematic product widely used by investors is the up-and-out call. The fact that its gamma exposure changes sign requires the smile to be considered in pricing and hedging Reghai (2015). In addition, the fact that the product can terminate earlier than its maturity if we touch the deactivating barrier means we must unwind our vanilla hedging. The cost of such an operation is also dependent on the smile in the future. Thus the hedging of this product requires taking into account the dynamics of the smile. We compare the different hedging costs using the approach described in Monciaud and Reghai (2021) and the newly calibrated model; see Table 5.
Table 6 shows hedging costs for variance and volatility swaps.
### Final set of equations
Using the parametrization in equation (13) above, as well as a zero correlation between the spot and the volatility, we get the following set of equations:
\[\frac{dS_{t}}{S_{t}} = (r-q)dt+\sigma_{loc}e^{Y_{t}}dB_{t}, \tag{30}\] \[dY_{t} = -\kappa Y_{t}dt+\nu dW_{t},\] (31) \[\frac{dI_{t}}{I_{t}} = ((r-q)-\phi(t,t)I_{t})dt+\sigma_{loc}e^{Y_{t}}dB_{t}. \tag{32}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline Model & Variance Swap & Volatility Swap \\ \hline Black Scholes & \(17.49\pm 2.33\times 0.02\) & \(17.47\pm 2.33\times 0.01\) \\ \hline Local Volatility & \(18.97\pm 2.33\times 0.04\) & \(18.47\pm 2.33\times 0.04\) \\ \hline LSV as in Monciaud and Reghai (2021) & \(18.97\pm 2.33\times 0.02\) & \(18.17\pm 2.33\times 0.005\) \\ \hline LSV PD & \(18.97\pm 2.33\times 0.03\) & \(18.39\pm 2.33\times 0.05\) \\ \hline \end{tabular}
\end{table}
Table 6: Volatility derivatives prices.
\begin{table}
\begin{tabular}{|c|c|} \hline Model & Price \\ \hline Black Scholes & \(1.93\pm 2.33\times 0.04\) \\ \hline Local Volatility & \(2.91\pm 2.33\times 0.05\) \\ \hline LSV as in Monciaud and Reghai (2021) & \(2.77\pm 2.33\times 0.05\) \\ \hline LSV PD & \(3.12\pm 2.33\times 0.05\) \\ \hline \end{tabular}
\end{table}
Table 5: Price Call Up & Out K=100,B=120.
\[\sigma_{loc}=f\left(\frac{S_{t}}{\int_{-\infty}^{t}\phi(t,u)S_{u}du}\right). \tag{33}\]
where \(r,q\) are the risk neutral rate and dividend yield and \(\phi(t,u)\) is the power law weight function given in Figure 7.
## 5 Scenario generation
Generating consistent trajectories for VIX and SPX is essential for both the buy and sell sides. For the buy side, it is crucial to generate realistic paths to test strategies. On the sell side, it is a question of consistency and of having a model matching both the SPX and VIX smiles.
## 6 Conclusion
In conclusion, dimensionless data modeling has proven valuable in understanding the complex relationship between VIX and SPX. By removing the units from the SPX and incorporating some knowledge about how markets function, we could identify the proper kernel for modeling the link between SPX and VIX.
After studying the residuals, we found that the log ratio is well represented by a fast mean-reverting process.
Combining these two techniques, we end up with a parsimonious generating model that captures with fidelity some of the main features in financial markets.
Overall, the use of dimensionless data modeling in the analysis of VIX and SPX highlights the importance of data normalization and the benefits of using this technique to uncover patterns and relationships in complex data sets.
|
2302.01183 | Grain-size effects during semi-brittle flow of calcite rocks | We study the role of grain size in the rheological behaviour of calcite
aggregates in the semi-brittle regime. We conduct triaxial deformation tests on
three rocks, Solnhofen limestone, Carrara marble and Wombeyan marble, with
average grain sizes of 5-10 $\mu$m, 200 $\mu$m and 2 mm, respectively, at
pressures in the range 200-800 MPa and temperatures in the range 20-400
$^\circ$C. At all conditions, both strength and hardening rate increase with
decreasing grain size. Flow stress scales with the inverse of grain size to a
power between 1/3 and 2/3. Hardening rate decreases linearly with the logarithm
of grain size. In-situ ultrasonic monitoring reveals that P-wave speed tends to
decrease with increasing strain, and that this decrease is more marked at room
temperature than at 200 and 400 $^\circ$C. The decrease in wave speed is
consistent with microcracking, which is more prevalent at low temperature and
low pressure. Microstructural observations reveal high twin densities in all
deformed samples. Twin density increases with stress, consistent with previous
datasets. Spatial distributions of intragranular misorientation indicate that
twins are sometimes obstacles to dislocation motion, but this effect is not
ubiquitous. Computed slip-transfer statistics indicate that that twins are
typically weaker barriers to dislocation glide than grain boundaries, so that
their effect on dislocation accumulation and hardening rates is likely smaller
than the effect of grain size. Indeed, our data reveal that grain size exerts a
first-order control on flow stress and hardening in calcite, whereas twinning
may only have a secondary impact on these behaviours. | Christopher Harbord, Nicolas Brantut, David Wallis | 2023-02-02T16:10:25Z | http://arxiv.org/abs/2302.01183v1 | # Grain-size effects during semi-brittle flow of calcite rocks
###### Abstract
We study the role of grain size in the rheological behaviour of calcite aggregates in the semi-brittle regime. We conduct triaxial deformation tests on three rocks, Solnhofen limestone, Carrara marble and Wombeyan marble, with average grain sizes of 5-10 \(\mu\)m, 200 \(\mu\)m and 2 mm, respectively, at pressures in the range 200-800 MPa and temperatures in the range 20-400\({}^{\circ}\)C. At all conditions, both strength and hardening rate increase with decreasing grain size. Flow stress scales with the inverse of grain size to a power between 1/3 and 2/3. Hardening rate decreases linearly with the logarithm of grain size. In-situ ultrasonic monitoring reveals that P-wave speed tends to decrease with increasing strain, and that this decrease is more marked at room temperature than at 200 and 400\({}^{\circ}\)C. The decrease in wave speed is consistent with microcracking, which is more prevalent at low temperature and low pressure. Microstructural observations reveal high twin densities in all deformed samples. Twin density increases with stress, consistent with previous datasets. Spatial distributions of intragranular misorientation indicate that twins are sometimes obstacles to dislocation motion, but this effect is not ubiquitous. Computed slip-transfer statistics indicate that twins are typically weaker barriers to dislocation glide than grain boundaries, so that their effect on dislocation accumulation and hardening rates is likely smaller than the effect of grain size. Indeed, our data reveal that grain size exerts a first-order control on flow stress and hardening in calcite, whereas twinning may only have a secondary impact on these behaviours.
## Plain Language Summary
Rocks are strongest when their failure mode is mixed between brittle (fracture processes) and crystal plastic (individual grains deform by movement of crystal defects). Unfortunately, we have a limited understanding of how these two mechanisms accommodate deformation together. Yet, both failure modes are sensitive to grain size. We selected three calcite rocks of varying grain size and deformed them at elevated pressure and temperature to simulate coupled brittle and crystal-plastic deformation. Calcite also forms twins (small planar structures in the crystal) when deformed that may act to strengthen the rock during deformation. By monitoring the speed at which sound waves pass through the rock during experiments, we were able to track brittle behaviour. We find that smaller grain size results in greater strength. We observe twins in all deformed samples, but they do not seem to have a big effect on the resulting strength. Microscopic observations of
samples and wave-speed measurements show that brittle behaviour is suppressed by increases in pressure and temperature. We suggest a new approach to model these processes that could be used to model coupled brittle and crystal-plastic behaviour on a large scale.
## 1 Introduction
In the shallow lithosphere, rocks deform by localised brittle failure and strength is described by a friction law (Scholz, 2002; Townend & Zoback, 2000). In the lower lithosphere, high temperatures and pressures promote the onset of crystal-plastic deformation mechanisms. Here, strength is described by flow laws sensitive to temperature (\(T\)) and strain rate (\(\dot{\varepsilon}\)) (Goetze & Brace, 1972; Evans & Kohlstedt, 1995). At intermediate conditions, deformation is ductile, i.e., remains macroscopically distributed (following the terminology of Rutter (1986)), and is often termed "semi-brittle", which is characterised by coupled cracking and crystal plasticity. Semi-brittle deformation is likely to support the highest stresses in the lithosphere (Goetze & Evans, 1979; Brace & Kohlstedt, 1980), impacting geodynamic processes (Burov, 2011) and the maximum depth of earthquake nucleation (Scholz, 1998). Despite the significance of semi-brittle flow, there is a paucity of simple models to describe the rheological behaviour of rock in this regime, and the relative importance of key processes, such as friction, tensile cracking, dislocation motion, twinning, or grain-boundary sliding is not well constrained. Thus, to advance our understanding of semi-brittle flow, we must first improve our understanding of interactions among microscale deformation processes.
Calcite rock is an important constituent of tectonic terrains, and undergoes a transition to semi-brittle flow at modest pressure and low temperature (e.g., Heard, 1960; Fredrich et al., 1989). This combination of characteristics has led to numerous laboratory investigations into the rheological behaviour of calcite rock across the brittle-ductile transition (see Rybacki et al. (2021) and references therein). The main deformation behaviours of calcite rock can be illustrated using the case of Carrara marble, a relatively isotropic, pure calcite aggregate with little initial crack porosity and equant grain shapes with sizes in the range 60-200 \(\mu\)m. At room temperature, Carrara marble is dilatant and brittle at low pressure (\(P<30\) MPa), and its strength is controlled by friction. At intermediate pressures (\(30<P<300\) MPa), calcite rocks deform in a semi-brittle manner, and strength becomes decreasingly pressure sensitive with increasing pressure (Fredrich
et al., 1989). When pressure is high enough (\(P\,>\,300\) MPa), strength becomes independent of confining pressure and deformation is nondilatant (Edmond and Paterson, 1972). Furthermore, increases in temperature reduce the pressure required to promote semi-brittle flow (Rybacki et al., 2021). Edmond and Paterson (1972) and Fischer and Paterson (1989) report _in-situ_ changes of volumetric strain during deformation of calcite rocks, finding that dilatancy is reduced by increasing pressure. Furthermore, sample volume remains constant when strength is independent of pressure (Edmond and Paterson, 1972). More recently, Schubnel et al. (2005) and Rybacki et al. (2021) measured _in-situ_ P-wave velocity (\(V_{\text{P}}\)) and demonstrated that wave-speed decreases during deformation are suppressed by pressure increases. _In-situ_ measurements therefore support microstructural observations in implying that increasing pressure suppresses cracking and frictional sliding whilst promoting crystal-plastic processes.
Microstructural observations from calcite rocks deformed in the semi-brittle regime reveal that distributed cracking, twinning, and dislocation motion act together to accommodate strain (Fredrich et al., 1989). _Post-mortem_ crack density decreases with increasing confining pressure (Fredrich et al., 1989), approaching zero when strength is independent of confining pressure. Additionally, deformation twins are present in calcite rocks deformed at conditions below \(800^{\circ}\)C (Rybacki et al., 2021), and twin spacing depends on stress (Rowe and Rutter, 1990). Detailed strain measurements at the grain scale reveal that twinning occurs readily in well oriented grains (those with high Schmid factor for twinning), and that it is likely associated with a local backstress that causes hardening of twinned grains (Spiers, 1979). In addition, microscale strain mapping also indicates the existence of shear localised along grain boundaries at temperatures \(<\,800^{\circ}\)C, potentially identifying grain-boundary sliding as a possible deformation mechanism (Quintanilla-Terminel et al., 2017).
Another common rheological behaviour of semi-brittle flow in calcite rocks is strain hardening, which can persist to high temperatures (\(<800^{\circ}\)C, see Rybacki et al., 2021). At low temperature, strain hardening can arise from microscopic frictional slip across distributed defects, such as grain boundaries (e.g., David et al., 2020). In dislocation-mediated deformation regimes, strain hardening occurs due to an increase in dislocation density caused by inefficient recovery mechanisms (_e.g._, lack of dislocation climb) (Mecking and Kocks, 1981) and is sensitive to microstructure. In particular, grain boundaries can act as barriers to dislocation motion, resulting in increases in strength and sometimes
increases in hardening rates with decreasing grain size for most metals (grain-size strengthening, see Cordero et al., 2016), and some geological materials (_e.g._, olivine, Hansen et al., 2019). Alternatively, strain hardening in calcite rocks has been proposed to be controlled by twinning (Rybacki et al., 2021). Twins may not contribute significantly to the total strain, but they could indirectly control strength and hardening by providing additional barriers to dislocations. Indeed, TEM observations show that dislocation densities are elevated adjacent to twin boundaries (Barber & Wenk, 1979; Fredrich et al., 1989; Rybacki et al., 2021). In the metallurgy literature (De Cooman et al., 2018), the additional hardening provided by twins is commonly known as twinning-induced plasticity, or TWIP. TWIP originates from observations of high-manganese steels, where high hardening rates and ductility occur as a result of the formation of deformation twins (De Cooman et al., 2018). The hardening rates in these metals are attributed to a "dynamic Hall-Petch effect" in which twinning refines the intracrystalline microstructure and thereby reduces the dislocation mean free path. In this case, twin spacing largely controls strength and hardening and thereby limits sensitivity to grain size.
The semi-brittle regime in calcite rock is thus characterised by many interacting deformation mechanisms. One way to quantify the relative contribution of each mechanism to the overall rheological behaviour is to test the impact of independent variables beyond the usual pressure and temperature conditions and imposed strain rate. One such variable is the grain size. At room temperature, yield stress (\(\sigma_{\text{y}}\)) and the transition pressure to semi-brittle deformation increase with decreasing grain size (Olsson, 1974; Fredrich et al., 1990). Grain size also impacts the strength of marble tested at elevated temperature in the dislocation-creep regime, which could be consistent with a Hall-Petch effect (Renner et al., 2002). In the high-pressure, moderate-temperature range (\(P>200\) MPa, \(T<400^{\circ}\)C) where twinning is ubiquitous, models of twinning-induced plasticity suggest that twin density rather than grain size is the main control on strength (Rybacki et al., 2021). To test this hypothesis, here we explore the role of grain size in semi-brittle rheological behaviour and the brittle-plastic transition in calcite rocks for comparison to the role of twin density.
We performed a series of experiments at a range of pressures (\(P\leq 800\) MPa) and temperatures (\(\leq 400^{\circ}\)C) using calcite rocks spanning two orders of magnitude in grain size: Solnhofen limestone (grain size less than \(10\) um), Carrara marble (grain size on the order of \(100\) um), and Wombeyan marble (grain size on the order of \(1\) mm). Experiments
are supplemented by _in-situ_ measurements of axis-parallel P-wave speed and _post-mortem_ microstructural investigations to infer deformation mechanisms. We find that grain size has a first-order impact on both strength and hardening rate, and that the influence of twins is only indirect and likely smaller than anticipated from TWIP models.
## 2 Methods
### Sample materials
Three calcite rocks were selected for experiments, with grain size, \(d\), spanning more than two orders of magnitude. Solnhofen limestone is a lithographic limestone with a grain size of 5-10 \(\mu\)m (French et al., 2022), composed of \(>99.9\%\) calcite, and with an initial porosity (measured with helium pycnometry) of 4% (Baud et al., 2000). Carrara marble is a medium-grained marble with equant grains of size 60-220 \(\mu\)m, composed of \(>99.9\%\) calcite with a porosity of \(<\) 0.1%. In the starting material, most grains exhibit at least one twin set. Wombeyan marble is a coarse-grained marble with a grain size of 1-2 mm, composed of 96% calcite. Grains are equant and typically twinned in at least one plane, with an initial twin density of 60 mm\({}^{-1}\). This material was obtained thanks to Ian Jackson at the Australian National University, and is the same rock that was used by Paterson (1958). See Table 1 for further petrographic details.
### Mechanical testing
A total of 42 experiments were conducted in the "Murrell" gas-medium triaxial apparatus hosted at University College London (Harbord et al., 2022). Rectified cores, 10
\begin{table}
\begin{tabular}{c c c c} \hline & Solnhofen limestone & Carrara marble & Wombeyan marble \\ \hline Abbreviation & SL & CM & WM \\ Composition & \(>\)99\% CaCO\({}_{3}\) & \(>\)99\% CaCO\({}_{3}\) & 96\% CaCO\({}_{3}\) \\ & & & 2.5\% MgCO\({}_{3}\) \\ Porosity (\%) & 4\% & \(<\)0.5\% & \(<\)0.5\% \\ \(d\) (mm) & 0.005–0.01 & 0.06–0.22 & 1–2 \\ Initial \(V_{\text{P}}\) (m s\({}^{-1}\)) & 5600 & 5900 & - \\ \hline \end{tabular}
\end{table}
Table 1: Composition and physical properties of starting materials
Figure 1: (a) Detail of sample geometry in the pressure vessel. Insets (b)–(d): Polarised incident-light images of initial sample microstructure of (b) Solnhofen limestone, (c) Carrara marble and (d) Wombeyan marble.
mm in diameter and 22 mm in length, were dried in an oven at 70\({}^{\circ}\)C for at least 24 hours before testing. Samples were inserted into annealed copper jackets of 0.1 mm wall thickness and were swaged onto the deformation rams using a slip ring (Figure 1a). The jacketed samples were inserted into the pressure vessel and pressurised to the target confining pressure using argon gas. Here, the confining pressure provides the intermediate and minimum principal stresses. Throughout the tests, the measured pressure remained within 5% of the target pressure. An internal furnace was used to heat samples to 200\({}^{\circ}\)C and 400\({}^{\circ}\)C. Axial load was applied vertically, generating the differential stress, \(\sigma\). Axial stress was measured by an external load cell, and axial shortening was measured by a pair of linear variable-displacement transducers. Axial deformation was then applied by a piston that moved at a constant imposed shortening rate of 0.22 \(\mu\)m s\({}^{-1}\), equivalent to a strain rate of 10\({}^{-5}\) s\({}^{-1}\). All samples were deformed to a total strain of 7.5%. A detailed list of test conditions is given in Table 3.
### Data processing
Several corrections were made to the mechanical data. Seal friction and jacket strength were subtracted from load measurements (as reported in Harbord et al. (2022)) and displacement due to machine compliance was subtracted from shortening measurements. The differential stress supported by the sample was computed by dividing the corrected force by the cross-sectional area of the sample, which was assumed to linearly increase with deformation consistent with a constant sample volume. Mechanical data were then further processed to obtain estimates of the tangent modulus (\(h\) = \(\delta\sigma/\delta\varepsilon\)) by numerical differentiation of the stress data over a moving window of 1% strain. Uncertainties in seal friction during deformation result in error on the hardening modulus that is \(<\) 20 % of reported values. The yield stress (\(\sigma_{\rm y}\)) is defined as the stress at which \(h\) falls to 90% of the sample-specific Young's modulus determined from elastic loading.
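For illustration, the data reduction described above can be sketched as follows. This is a schematic reconstruction rather than the processing code used in this study; the function name, the elastic-fit strain range, and the array inputs are our own assumptions, and the seal-friction and jacket corrections are omitted.

```python
# Minimal sketch of the data reduction: constant-volume area correction, tangent
# modulus h over a moving 1% strain window, and the yield criterion h = 0.9 E.
import numpy as np

def process_test(force_N, shortening_m, d0=10e-3, l0=22e-3, window=0.01):
    """Return strain, differential stress (MPa), tangent modulus h (GPa), yield stress (MPa)."""
    strain = shortening_m / l0
    area = (np.pi * (d0 / 2) ** 2) / (1.0 - strain)   # constant-volume assumption
    stress = force_N / area / 1e6                     # MPa

    h = np.full_like(stress, np.nan)                  # tangent modulus, GPa
    for i in range(strain.size):
        mask = np.abs(strain - strain[i]) <= window / 2
        if mask.sum() > 2:
            h[i] = np.polyfit(strain[mask], stress[mask], 1)[0] / 1e3

    elastic = strain < 0.003                          # illustrative elastic-loading window
    E = np.polyfit(strain[elastic], stress[elastic], 1)[0] / 1e3
    yielded = np.where(h < 0.9 * E)[0]
    sigma_y = stress[yielded[0]] if yielded.size else np.nan
    return strain, stress, h, sigma_y
```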
### In-situ P-wave speed measurements
_In-situ_ P-wave speed was measured during tests conducted on Solnhofen limestone and Carrara marble. Measurements were made parallel to the sample axis during deformation using the pulse-transmission method (Birch, 1961). Every 10 s during the experiments, a 200 V pulse of 0.4 \(\mu\)s duration and 2.5 MHz frequency was sent to a lead zirconate titanate ceramic disk mounted centrally in the bottom ram, which was received by a
transducer mounted centrally at the top of the sample assembly and recorded digitally at 100 MHz (see Harbord et al., 2022, for further details). The signal-to-noise ratio was improved by stacking 256 raw traces at each time interval. Changes in axial P-wave speed were computed relative to a reference waveform using cross correlation, and corrected for interfacial delays following the methods outlined in Harbord et al. (2022). Measurements of P-wave speed are reported as the change of wave speed (\(\Delta V\)) and are normalised relative to the wave speed measured at the start of loading (\(V_{0}\)). Selected deformation tests were subsequently repeated, and wave-speed changes were found to be reproducible (see Supplementary Material Figure S1).
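As a minimal sketch of this processing (assumed, not the code used in this study), the relative wave-speed change can be obtained from the cross-correlation lag between a stacked trace and the reference waveform; the function name and the inputs `t0` (reference travel time through the sample after interfacial-delay corrections) and `path_length` are illustrative.

```python
# Minimal sketch: stack repeated traces, estimate the arrival delay relative to a
# reference waveform by cross-correlation, and convert the delay to dV/V0.
import numpy as np

def relative_velocity_change(traces, reference, dt, t0, path_length):
    """traces: (n_repeats, n_samples) array of raw waveforms at one survey time."""
    stacked = traces.mean(axis=0)                         # improve signal-to-noise
    xc = np.correlate(stacked, reference, mode="full")
    lag = (np.argmax(xc) - (reference.size - 1)) * dt     # delay in seconds
    v0 = path_length / t0
    v_now = path_length / (t0 + lag)
    return (v_now - v0) / v0                              # = dV / V0
```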
To gain insight into the microstructural state of samples after deformation during the return to ambient pressure, we continued to perform wave-speed surveys after removal of differential stress and throughout the staged decompression. The change in wave speed during decompression is quantified as the relative change in wave speed (\(\Delta V\)) normalised by the wave speed measured at the end of deformation (\(V_{\mathrm{final}}\)), and when at temperature, after cooling of the sample. These measurements were also complemented by _post-mortem_ measurements of wave speed at room pressure (0.1 MPa).
### Microstructural analysis
#### Optical microscopy
To investigate deformation microstructures, selected samples were mounted in epoxy, sectioned parallel to the deformation axis, and polished. A set of thin sections was also made for visible-light microscope observations. Imaging using transmitted and reflected light was performed using a Leica DM750P microscope furnished with an ICC50 camera. To quantify the prevalence of intragranular cracks in each sample, we followed the method outlined by Fredrich et al. (1989). Samples were imaged using reflected light at \(\times 4\) magnification, and we counted intersections between cracks (excluding grain boundaries) and a square grid of 2 mm by 2 mm with a spacing of 0.2 mm. These counts were used to determine the resulting crack surface area per volume, \(S_{\mathrm{v}}\) (mm\({}^{2}\)/mm\({}^{3}\)).
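As a compact illustration of this stereological estimate, and assuming the standard relation \(S_{\mathrm{v}}=2P_{L}\) between surface area per unit volume and the number of intersections per unit length of test line, \(P_{L}\) (Underwood, 1970), the calculation can be sketched as below; the assumption that both grid orientations serve as test lines, and the function name, are ours.

```python
# Minimal sketch of the crack-density estimate from intersection counts on a
# 2 mm x 2 mm grid with 0.2 mm spacing (11 lines in each orientation, assumed).
def crack_surface_density(n_intersections, n_lines=11, line_length_mm=2.0):
    total_line_length = 2 * n_lines * line_length_mm   # mm, both grid orientations
    P_L = n_intersections / total_line_length           # intersections per mm
    return 2.0 * P_L                                     # S_v in mm^2/mm^3
```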
#### Electron microscopy
Scanning electron microscopy was performed using a Jeol JSM-6480LV scanning electron microscope (SEM) hosted at University College London and a Zeiss Gemini 300
field-emission gun SEM at the University of Cambridge. Electron backscatter diffraction (EBSD) patterns and forescattered electron images were collected using an Oxford Instruments Symmetry detector and AZtec 4.0 acquisition software. Table 2 lists the acquisition parameters for each EBSD dataset. All diffraction patterns were acquired with low detector gain. Datasets collected for conventional EBSD were acquired with a reduced number of pixels in the diffraction patterns to increase the mapping speed. Datasets collected for high-angular resolution electron backscatter diffraction (HR-EBSD) were collected with the maximum number of pixels permitted by the detector.
HR-EBSD is a postprocessing technique that analyses distortion of the diffraction patterns to measure lattice rotations and intragranular heterogeneity in elastic strain (Wilkinson et al., 2006; Britton & Wilkinson, 2011, 2012). Full details of the technique are given by Wallis et al. (2019) and here we provide a summary of the key points. One diffraction pattern from the host grain within each mapped area was manually selected to be a reference pattern based on the quality of the diffraction pattern and its position within the map. 100 regions of interest, 256\(\times\)256 pixels in size, were extracted from all diffraction patterns within the host grain. Each region of interest from each diffraction pattern was cross-correlated with the corresponding region of interest from the reference pattern to determine shifts in their positions. Shifts in the diffraction pattern due to beam scanning were corrected using a calibration determined on an undeformed Si single crystal following Wilkinson et al. (2006) and the position of the pattern centre was calibrated using diffraction patterns collected over a range of detector insertion distances (Maurice et al., 2011). A deformation-gradient tensor was fit to the field of shifts in each diffraction pattern. The deformation-gradient tensor was decomposed into its symmetric and antisymmetric parts, which respectively give the elastic-strain and rotation tensors (Wilkinson et al., 2006). We used the pattern remapping approach of Britton and Wilkinson (2012), in which a first pass of cross-correlation measures lattice rotations that are used to rotate each pattern back into the orientation of the reference pattern before a second pass of cross-correlation measures the elastic strain and a small correction to the rotations. The elastic strains were converted to stresses using Hooke's law. The measured stresses are relative to the unknown stress state at the reference point, giving maps of intragranular stress heterogeneity, rather than absolute values. We subtracted the arithmetic mean value of each component of the stress tensor within the map area from each measured value so that the final maps provide stress heterogeneity relative to the unknown mean
stress state within each grain (Mikami et al., 2015). Alongside, spatial gradients in the lattice rotations were used to estimate densities of geometrically necessary dislocations (GNDs). Densities of each dislocation type on the slip systems summarised by J. H. De Bresser and Spiers (1997) were fit to the measurable components of the lattice curvature following the approach applied to quartz by Wallis et al. (2019). We emphasise that the stress heterogeneity and GND densities are determined independently, being respectively derived from the distinct symmetric and antisymmetric parts of the deformation-gradient tensor. Data points were filtered out if they had a mean normalised peak height in the cross-correlation function of \(<\)0.3 or a mean angular error in the fitted deformation gradient tensor of \(>\)0.004 radians (Britton and Wilkinson, 2011).
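To make the central decomposition explicit, a schematic sketch of this HR-EBSD step is given below. It is not the implementation used here, it ignores the components that are not constrained by the surface measurement, and it assumes `C` is the 3\(\times\)3\(\times\)3\(\times\)3 elastic stiffness tensor of calcite in the appropriate reference frame; the GND-density fitting step is omitted.

```python
# Schematic HR-EBSD post-processing step: split the fitted deformation-gradient
# tensor into elastic strain (symmetric part) and lattice rotation (antisymmetric
# part), then convert strain to stress heterogeneity with Hooke's law.
import numpy as np

def strain_rotation_stress(F, C):
    A = F - np.eye(3)                              # displacement-gradient tensor
    strain = 0.5 * (A + A.T)                       # elastic strain
    rotation = 0.5 * (A - A.T)                     # lattice rotation
    stress = np.einsum("ijkl,kl->ij", C, strain)   # Hooke's law
    return strain, rotation, stress
```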
We analysed the probability distributions of the stress heterogeneity to assess whether the stresses are imparted by dislocations. The probability distribution of the stress field of a population of dislocations has a characteristic form with tails that depart from a normal distribution towards higher stresses (Jiang et al., 2013; Wilkinson et al., 2014). We assess the presence of these tails using a normal-probability plot, in which the cumulative-probability axis is scaled such that a normal distribution falls on a straight line. Importantly, the tails of the probability distribution, \(P(\sigma)\), have a specific form if the stresses, \(\sigma\), are imparted by dislocations, whereby \(P(\sigma)\to C\rho|\sigma|^{-3}\), where \(C\) is a constant that depends on the material, type(s) of dislocation, and considered stress component, and \(\rho\) is the total dislocation density (Groma and Bako, 1998; Wilkinson et al., 2014). To test whether the measured stress fields exhibit this form, we compute the restricted second moment, \(\nu_{2}\), which is a metric that characterises the shape of a probability distribution based on the integral over restricted ranges in stress, calculated as \(\nu_{2}(\sigma)=\int_{-\sigma}^{+\sigma}\sigma^{\prime 2}P(\sigma^{\prime})\,d\sigma^{\prime}\) (Wilkinson et al., 2014; Kalacska et al., 2017). A plot of \(\nu_{2}\) versus \(\ln(\sigma)\) becomes a straight line at high stresses if the tails of the probability distribution of the stresses exhibit the form \(P(\sigma)\propto|\sigma|^{-3}\) expected of a population of dislocations (Wilkinson et al., 2014; Kalacska et al., 2017). We apply this analysis to the \(\sigma_{12}\) component of the stress tensor as this component is the least modified by sectioning the sample and is a shear stress capable of exerting glide forces on dislocations (Wallis et al., 2019). This approach has recently been applied to olivine by Wallis et al. (2021, 2022). We include in these plots data from an undeformed Si wafer measured by Wallis et al. (2022) to provide an indication of the noise level of the stress measurements.
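The restricted second moment has a direct empirical estimator; a minimal sketch (ours, not the published implementation) is:

```python
# Empirical restricted second moment: nu_2(s) is estimated as (1/N) * sum of
# sigma12^2 over all measurements with |sigma12| <= s. Plotted against ln(s), it
# becomes linear at high stress when the tails follow P(sigma) ~ |sigma|^-3.
import numpy as np

def restricted_second_moment(sigma12, thresholds):
    sigma12 = np.asarray(sigma12, dtype=float)
    n = sigma12.size
    return np.array([np.sum(sigma12[np.abs(sigma12) <= s] ** 2) / n for s in thresholds])
```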
#### 2.5.3 Twin density measurements
Twin density was measured using a combination of forescattered electron images and EBSD maps. For each sample for which twin density was reported, we chose between 20 and 60 grains in a representative forescattered electron image, and measured twin spacing and twin width perpendicular to the selected twin set. Using an EBSD map of the same area, we determined the orientation of each grain and active twin set, and used the angle between the normal to the twin plane and the normal to the section to correct the measured twin width and spacing (the same procedure as used by Rutter et al., 2022).
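A minimal sketch of this correction is given below, assuming the standard projection in which the spacing measured on the section equals the true spacing divided by the sine of the angle \(\alpha\) between the twin-plane normal and the section normal; the function and variable names are illustrative.

```python
# Schematic twin-density estimate: correct apparent spacings for section obliquity,
# then take the inverse of the mean true spacing (twins per mm).
import numpy as np

def twin_density(apparent_spacing_um, alpha_deg):
    apparent = np.asarray(apparent_spacing_um, dtype=float)
    true_spacing_um = apparent * np.sin(np.radians(alpha_deg))
    return 1e3 / true_spacing_um.mean()
```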
## 3 Results
### Mechanical data and _in-situ_ P-wave speed
#### 3.1.1 General characteristics
The stress-strain behaviour and evolution of _in-situ_ P-wave speed are qualitatively similar across the range of conditions and tested materials (Figure 2), and are typical of ductile behaviour. Taking the example of Carrara marble at \(P=600\) MPa and \(T=20^{\circ}\)C (Figure 2b, solid dark blue curve), the mechanical data are characterised by a rapid linear increase in stress at low strain (\(\varepsilon\,<\,0.5\%\)), representing elastic loading, during which P-wave speed remains relatively constant (Figure 2b, dashed dark blue curve). Above
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Figure & Expt. & Lithology & Step (\(\mu\)m) & Map (X\(\times\)Y) & EBSP pixels (X\(\times\)Y) \\ \hline Fig. 11a & Run0119 & WM & 15 & 480\(\times\)110 & 622\(\times\)512 \\ Fig. 11b & Run0093 & CM & 2 & 479\(\times\)359 & 622\(\times\)512 \\ Fig. 11c & Run0167 & SL & 0.15 & 250\(\times\)191 & 622\(\times\)512 \\ Fig. 12a & Run0093 & CM & 2 & 479\(\times\)359 & 622\(\times\)512 \\ Fig. 12b & Run0093 & CM & 0.5 & 450\(\times\)470 & 622\(\times\)512 \\ Fig. 12c & Run0093 & CM & 0.5 & 340\(\times\)285 & 622\(\times\)512 \\ Fig. 12d & Run0093 & CM & 0.5 & 560\(\times\)560 & 622\(\times\)512 \\ Fig. 13a & Run0093 & CM & 0.2 & 260\(\times\)175 & 1244\(\times\)1024 \\ Fig. 13b & Run0093 & CM & 0.15 & 300\(\times\)130 & 1244\(\times\)1024 \\ \hline \hline \end{tabular}
\end{table}
Table 2: EBSD acquisition parameters for each map
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Experiment & Lithology & \(P\) (MPa) & \(T\) (\({}^{\circ}\)C) & \(\sigma_{5}\) (MPa) & \(h_{5}\) (GPa) \\ \hline Run0147* & Solnhofen limestone & 207 & 20.1 & 466 & 0.34 \\ Run0149* & Solnhofen limestone & 416 & 20.4 & 535 & 1.57 \\ Run0150* & Solnhofen limestone & 628 & 19.8 & 517 & 1.85 \\ Run0152* & Solnhofen limestone & 225 & 195 & 437 & 1.25 \\ Run0153* & Solnhofen limestone & 440 & 180 & 474 & 2.22 \\ Run0154* & Solnhofen limestone & 217 & 393 & 436 & 1.36 \\ Run0162* & Solnhofen limestone & 435 & 394 & 456 & 1.56 \\ Run0167* & Solnhofen limestone & 610 & 186 & 474 & 2.54 \\ Run0168* & Solnhofen limestone & 617 & 398 & 416 & 1.84 \\ Run0075 & Carrara marble & 200 & 19.8 & 344 & 1.22 \\ Run0078 & Carrara marble & 405 & 19.8 & - & - \\ Run0084 & Carrara marble & 615 & 19.8 & 386 & 1.83 \\ Run0086 & Carrara marble & 236 & 197 & 187 & 0.73 \\ Run0089 & Carrara marble & 366 & 204 & 272 & 1.91 \\ Run0090 & Carrara marble & 213 & 390 & 203 & 0.82 \\ Run0091 & Carrara marble & 378 & 405 & 186 & 0.72 \\ Run0093 & Carrara marble & 599 & 192 & 295 & 1.37 \\ Run0094 & Carrara marble & 594 & 401 & 198 & 0.65 \\ Run0095 & Carrara marble & 602 & 206 & - & - \\ Run0097 & Carrara marble & 597 & 204 & - & - \\ Run0098 & Carrara marble & 769 & 20.1 & 377 & 1.67 \\ Run0129* & Carrara marble & 403 & 17.6 & 385 & 1.67 \\ Run0131* & Carrara marble & 202 & 18.1 & 335 & 1.12 \\ Run0132* & Carrara marble & 407 & 18.2 & 354 & 1.31 \\ Run0137* & Carrara marble & 556 & 19.1 & 378 & 1.73 \\ Run0138* & Carrara marble & 220 & 172 & 267 & 1.41 \\ Run0141* & Carrara marble & 241 & 387 & 221 & 1.22 \\ Run0143* & Carrara marble & 579 & 20.2 & 376 & 1.63 \\ Run0145* & Carrara marble & 417 & 370 & 222 & 0.93 \\ Run0163* & Carrara marble & 616 & 171 & 247 & 1.58 \\ Run0164* & Carrara marble & 622 & 368 & 255 & 1.28 \\ Run0165* & Carrara marble & -1991 & 195 & 321 & 2.17 \\ Run0166* & Carrara marble & 188 & 183 & 306 & 1.71 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Table of experiments conducted at a range of conditions. Tests denoted with * are accompanied by measurements of P-wave speed parallel to the specimen axis.
a stress of around 250 MPa, the rate of increase in stress with strain begins to decrease and the P-wave speed begins to decrease concomitantly. Beyond approximately 1.5% strain, stress continues to increase approximately linearly with strain, which is accompanied by a steady decrease in wave speed. Similar qualitative behaviour occurs in all samples at all conditions tested, with quantitative variations in the yield stress and degree of hardening.
Strain hardening is observed at nearly all test conditions (Figure 2), with the exception of Wombeyan marble deformed at \(T\,=\,400^{\circ}\)C (Figure 2i, solid curves). Between \(20^{\circ}\)C and \(200^{\circ}\)C, hardening rates are unaffected by temperature increases; however, the hardening modulus is lower for all lithologies at \(T=400^{\circ}\)C.
Figure 2: Differential stress (\(\sigma\), solid lines) and normalised P-wave speed (\(\Delta V/V_{0}\), dashed lines) from constant-strain rate experiments at \(P\) = 200, 400, 600, and 800 MPa, indicated by increasing intensity of blue, at 20, 200 and \(400^{\circ}\)C increasing downward in the plot. Plots (a), (d) and (g) correspond to experiments performed on Solnhofen limestone at 20, 200 and \(400^{\circ}\)C respectively. Plots (b), (e) and (h) are tests performed on Carrara marble at 20, 200 and \(400^{\circ}\)C respectively. Plots (c), (f) and (i) are tests performed on Wombeyan marble at 20, 200 and \(400^{\circ}\)C respectively.
#### Effect of pressure and temperature
The absolute strength, _i.e._ the differential stress at a given strain, of all rocks tested decreases with increasing temperature. The temperature sensitivity is greater at elevated pressure (400 and 600 MPa) than at low pressure, and is greater in Wombeyan marble than in Carrara marble and Solnhofen limestone (Figure 3).
Both Carrara marble and Wombeyan marble have strengths and yield stresses that tend to increase with increasing pressure (Figure 3). However, strength seems to become pressure-independent beyond a pressure threshold. In Carrara marble, there are no quantitative differences between the tests conducted at 600 MPa and at 800 MPa, and only small differences can be detected between 400 MPa and 600 MPa. In Wombeyan marble, strength does not change appreciably with pressure above 400 MPa. The pressure sensitivity of strength in Carrara marble and Wombeyan marble is reduced by increasing temperature, and strength becomes pressure independent at a lower pressure as temperature increases. Hardening rates in Carrara marble and Wombeyan marble also increase with increasing pressure, except at a temperature of 400\({}^{\circ}\)C, at which hardening rates are independent of pressure.
Solnhofen limestone exhibits some differences compared to the other lithologies in terms of the pressure dependence of strength. At all conditions, the yield stress of Solnhofen limestone decreases with increasing pressure, in contrast with the other lithologies.
Figure 3: Differential stress at 5% strain (\(\sigma_{5}\)) as a function of confining pressure in Solnhofen limestone (a), Carrara marble (b) and Wombeyan marble (c), at temperatures of 20, 200 and 400\({}^{\circ}\)C. All tests were conducted at a strain rate of \(10^{-5}\) s\({}^{-1}\).
Figure 4: Evolution of yield stress (\(\sigma_{\rm y}\)), differential stress at 2.5% strain (\(\sigma_{2.5}\)) and 5% strain (\(\sigma_{\rm 5}\)) as functions of grain size. Circles correspond to data obtained in this work, and other symbols are taken from the literature (see Table 4 for list of labels).
In contrast, at 5% strain, the strength of Solnhofen limestone increases slightly with pressure from 200 to 400 MPa, but decreases at 600 MPa (Figure 3a).
Wave-speed changes also depend on pressure and temperature. Decreases in axial P-wave speed with strain are smaller at higher temperature in both Solnhofen limestone and Carrara marble. For example, in Carrara marble at a pressure of 200 MPa and strain of 7.5%, the relative wave-speed decrease is 8% at a temperature of 20\({}^{\circ}\)C (Figure 2b ) and is 2.5% at 400\({}^{\circ}\)C (Figure 2h). In Carrara marble, at all temperatures, increasing pressure results in smaller wave-speed decreases during deformation. In contrast, in Solnhofen limestone at temperatures of 20\({}^{\circ}\)C and 200\({}^{\circ}\)C the final wave-speed drop increases with increasing pressure (Figure 2a,d).
#### Effect of grain size

At all tested conditions, both yield stress and flow stress at a given strain increase with decreasing grain size (Figure 4). This observation is consistent with measurements of the hardening modulus, which increases with decreasing grain size at all tested temperatures (Figure 5).
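One simple way to quantify this grain-size dependence is to fit a power law \(\sigma=K\,d^{-m}\) by linear regression in log-log space; the exponent \(m\) corresponds to the 1/3 to 2/3 range quoted in the abstract. The sketch below is illustrative only; the function name and inputs are ours and no specific fit to the present dataset is implied.

```python
# Minimal sketch: fit sigma = K * d**(-m) to flow stress versus grain size by
# ordinary least squares on log-transformed data.
import numpy as np

def grain_size_exponent(grain_size_mm, flow_stress_MPa):
    slope, intercept = np.polyfit(np.log(grain_size_mm), np.log(flow_stress_MPa), 1)
    return -slope, np.exp(intercept)    # exponent m and prefactor K
```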
Wave-speed evolution is also sensitive to grain size, as highlighted by comparing the behaviour of Solnhofen limestone and Carrara marble. The overall decrease in wave speed is nearly always greater for Solnhofen limestone at a given set of conditions than it is for Carrara marble. For example, at \(T=20^{\circ}\)C, \(P=600\) MPa and \(\varepsilon=7.5\%\), the wave-speed change is \(-5\%\) in Carrara marble (Figure 2b) but is \(-11\%\) in Solnhofen limestone (Figure 2a). The only exception to this grain-size dependence is at \(T\,=\,400^{\circ}\)C and \(P\,=\,600\) MPa. At these conditions, the final relative wave-speed change at \(\varepsilon\,=\,7.5\%\) is \(+2\%\) in Solnhofen limestone (Figure 2g) and \(-2\%\) in Carrara marble (Figure 2h, dashed dark blue curve).
### Wave-speed changes during decompression
Wave speed always decreases significantly during decompression (Figure 6). In Solnhofen limestone, the P-wave speed remains approximately constant during initial decompression down to around 400 MPa. Further decompression leads to a decrease in wave speed,
\begin{table}
\begin{tabular}{l c c c} \hline \hline Reference & Lithology & Grain size & Legend \\ \hline Paterson (1958) & Wombeyan marble & 1–2 mm & \(\Delta\) \\ Heard (1960) & Solnhofen limestone & 6–10 μm & \\ Edmond and Paterson (1972) & Solnhofen limestone & 6–10 μm & \\ Edmond and Paterson (1972) & Carrara marble & 60–120 μm & \\ Fredrich et al. (1989) & Carrara marble & 60–120 μm & \\ Rybacki et al. (2021) & Carrara marble & 60–120 μm & \\ Fredrich et al. (1990) & Wombeyan marble & 1–2 mm & \\ Donath and Fruth (1971) & Beldens marble & 0.5 mm & \\ Mogi (1964) & Yamaguchi ’Fine marble’ & 0.4 mm & \\ Mogi (1964) & Mito marble & 1 mm & \\ Mogi (1964) & Yamaguchi ’Coarse marble’ & 3.5 mm & \\ Rutter (1974) & Solnhofen limestone & 6–10 μm & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of references for literature data presented in Figure 4 and 5.
Figure 6: Relative change in P-wave speed referenced to the final wave speed (\(V_{\rm final}\)) measured during the unloading phase of each experiment and after sample cooling. Stars denote the wave speed measured at atmospheric pressure after sample recovery. The grey curve represents measurements of wave speed in a fused silica blank over the same pressure range.
which drops substantially at pressures below 200 MPa. In samples deformed at room temperature, the wave speed of the recovered material is as low as 50% of the wave speed measured after deformation but prior to decompression. This drop becomes less marked in samples deformed at high temperature, with changes on the order of 25-35% at \(T=200^{\circ}\)C, and 15-30% at \(T=400^{\circ}\)C. The behaviour is generally similar in Carrara marble, with some notable quantitative differences. The wave speed starts decreasing immediately as pressure is decreased, even in tests conducted at \(P\,=\,600\) MPa (Figure 6b,d). The characteristic pressure below which P-wave speed drops most markedly is on the order of 100 MPa. The effect of deformation temperature is similar to that observed in Solnhofen limestone, with elevated temperature during deformation promoting a more limited reduction in wave speed during decompression.
### Microstructures
#### 3.3.1 Brittle features
Systematic observations of cracks within deformed samples reveal three distinct deformation regimes according to deformation conditions (Figure 7). The low-pressure, low-temperature regime and the high-pressure, low-temperature regime are characterised by cracks, whereas in the high-temperature regime cracks are absent.
In the low-temperature regimes where cracks are observed, cracks interact with twins. Dense arrays of cracks confined to individual twin lamellae are commonly observed (Figure 8c).
Figure 7: Span of microstructural regimes identified in deformed samples.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline Sample & Lithology & Pressure (MPa) & Temperature (\({}^{\circ}\)C) & \(S_{v}\) (mm\({}^{2}\)/mm\({}^{3}\)) \\ \hline Run0075 & Carrara & 200 & 19.8 & 18.73 \\ Run0078 & Carrara & 405 & 19.8 & 7.31 \\ Run0084 & Carrara & 615 & 19.8 & 2.28 \\ Run0086 & Carrara & 236 & 197 & 7.60 \\ Run0090 & Carrara & 213 & 390 & 2.05 \\ Run0093 & Carrara & 599 & 192 & 7.82 \\ Run0094 & Carrara & 594 & 401 & 1.30 \\ Run0098 & Carrara & 769 & 20.1 & 4.60 \\ Run0104 & Wombeyan & 206 & 19.3 & 4.02 \\ Run0106 & Wombeyan & 595 & 20.2 & 0.33 \\ Run0110 & Wombeyan & 231 & 207 & 1.51 \\ Run0119 & Wombeyan & 609 & 187 & 0.49 \\ Run0120 & Wombeyan & 408 & 19.2 & 1.60 \\ Run0121 & Wombeyan & 428 & 190 & 0.35 \\ Run0124 & Wombeyan & 233 & 408 & 0.47 \\ Run0126 & Wombeyan & 574 & 416 & 0.36 \\ \hline \end{tabular}
\end{table}
Table 5: Crack densities, \(S_{v}\), in selected samples, expressed as crack surface area per unit volume, determined from stereological measurements following Underwood (1970)
Figure 8: Brittle microstructures observed in samples after testing. a) Crush zones, b) geometrically controlled cracks, c) inter-twin cracks and stair-step cracks, d) grain-boundary cracks and low aspect-ratio cracks, e) stair-step cracks with orientations controlled by twin boundaries and f) cracks nucleated from twins.
In addition, some intragranular cracks contain steps and change orientation across individual twin lamellae ('stair-step', Figure 8e). In other instances, microcracks are observed to nucleate from the tips of twins, and do not reach grain boundaries (Figure 8f).
**Low-pressure, low-temperature regime**
The low-pressure, low-temperature regime is limited to Carrara marble experiments conducted at 200 MPa and 20 and 200\({}^{\circ}\)C only. In this regime mode-I intragranular microcracks are a common feature, and result in the highest crack densities. These microcracks are typically confined to single grains and are of low aspect ratio (_e.g._, Figure 8b,d). In places, the microcracks are concentrated in discrete regions as crush zones (Figure 8a). In these regions, cracks often span a few grains and result in a locally elevated crack density.
Cracks are also observed to relate to the geometry of grains and grain boundaries. Some cracks nucleate at geometric irregularities along grain boundaries. For example, steps in grain boundaries are often associated with short tensile cracks that propagate a short distance into the grain interior (Figure 8d). Smaller grains can also act as indenter grains, and cracks are nucleated to accommodate the indenter grain shape (Figure 8b). Often, cracks relating to geometric incompatibilities can be wholly contained within a single grain (Figure 8b and d).
**High-pressure, low-temperature regime**
The high-pressure, low-temperature regime spans samples deformed above \(P\ =\ 200\) MPa and at \(T=20^{\circ}\) and 200 \({}^{\circ}\)C for Carrara marble, and all samples deformed at 20\({}^{\circ}\)C and 200\({}^{\circ}\)C for Wombeyan marble (Figure 7). This regime is characterised by a reduced crack density (\(S_v<\)8 mm\({}^{2}\)/mm\({}^{3}\)) with respect to the low-pressure, low-temperature regime. Intragranular cracks are not completely suppressed (Figure 8d, e and f). However, thorough observations revealed no intergranular cracking or crush zones. Another important change compared to the low-pressure, low-temperature regime is the abundant observation of open grain boundaries (Figure 8c and e).
Within the high-pressure, low-temperature regime, the crack density decreases with increasing pressure for Carrara marble and Wombeyan marble. For example, in Carrara marble at 20\({}^{\circ}\)C, \(S_{v}=7.31\) mm\({}^{2}\)/mm\({}^{3}\) at \(P=400\) MPa reduces to \(S_{v}=2.28\) mm\({}^{2}\)/mm\({}^{3}\) at \(P=600\) MPa (Table 5, Run0078 and Run0084). Furthermore, for a given set of
conditions in the high-pressure, low-temperature regime, crack density is lower in the coarser-grained Wombeyan marble.
#### Crystal-plastic microstructures
Crystal-plastic microstructural features are dominated by deformation twins that are present in samples deformed at all conditions (Figure 9). Nearly all grains are twinned on at least one plane, grains that contain two twin sets are also common, and occasionally grains contain three twin sets. In particular, the density of twins, _i.e._, the number of lamellae per unit length, is high in samples deformed at low temperature (Figure 9a-d) and decreases with increasing temperature (Figure 9g,h). Twins are often curved, especially in the vicinity of grain boundaries (Figure 9f) or around geometric irregularities (Figure 9b). Twins also often appear to have nucleated from twins within neighbouring grains (Figure 9b,d) or from geometric irregularities at grain boundaries (Figure 9f).
On a larger scale than twin lamellae, undulose extinction is widespread (Figure 9). In Wombeyan marble, the intensity of the undulose extinction increases when approaching grain boundaries and grain-boundary irregularities (Figure 9c), and in some instances the undulose extinction suggests corrugation of the crystal lattice (Figure 9e). In Carrara marble, grains also display undulose extinction, although it is not as common as in Wombeyan marble. Again, in Carrara marble undulose extinction is associated with geometric irregularities (Figure 9h).
#### SEM observations
Forescattered electron images were used to image twins at high spatial resolution. These images reveal very thin deformation twins, especially in Solnhofen limestone, which in some places are on the order of 100 nm in thickness (Figure 10a,b). Grains of calcite in Solnhofen limestone also exhibit lower twin incidence than the coarser-grained samples of Wombeyan marble and Carrara marble (Figure 10c,d). Twins in Solnhofen limestone also propagate across grain boundaries, and are not bent on the grain scale, although they do sometimes taper in proximity to grain boundaries. High-resolution electron images of Wombeyan marble reveal multiple scales of twins, with micrometre-scale twins
Figure 9: a) High twin density generated at high-stress, low-temperature conditions. b) High-density twins bending around an apparent deformation band. Twins are also nucleated by twins in neighbouring grains. c) Long-range undulose extinction traversing twin sets. The undulose extinction appears to be generated by the interaction of neighbouring grains. d) Multiply-twinned grains and high twin density. e) Corrugation of crystal lattice identified by wavy undulation. f) Multiple curved twin sets and twinned twin sets. g) Multiply-twinned grain in Wombeyan marble. Twins are thicker than at lower-temperature conditions and also appear more patchy. h) Twins formed at high temperature in Carrara marble.
contained within thicker twins on the order of tens of micrometres in thickness (Figure 10c).
Overview EBSD maps demonstrate that most grains have significant internal misorientation. Lattice curvature is particularly obvious in Wombeyan marble (Figure 11a), in Carrara marble (Figure 11b), and in some grains of Solnhofen limestone (Figure 11c). Regions of lattice curvature are often related to the geometry of grain boundaries, and in places curvature increases in the vicinity of grain boundaries. Other grains exhibit lattice rotation in the vicinity of twin planes, indicated by stripes in the inverse pole figure map (_e.g._, Figure 11b).
Further evidence of internal lattice distortion is revealed by maps of the grain reference orientation deviation (GROD), which is the misorientation of each point with respect to the mean orientation of the grain (Figure 12). An overview map of the GROD in Carrara marble Run0093, deformed at a pressure of 600 MPa and temperature of 200\({}^{\circ}\)C demonstrates that individual grains have variable internal structure (Figure 12a). Some grains exhibit striped patterns of variable GROD that follow the local twin orientation
Figure 10: Forescattered-electron orientation-contrast images (Prior et al., 1996) of deformation twins in (a and b) Solnhofen limestone, (c) Wombeyan marble and (d) Carrara marble.
(Figure 12c). In other grains, the GROD pattern is largely unaffected by twins and instead exhibits lattice distortion over larger length scales approaching the grain size (Figure 12d). In some cases, there is a mixed interaction, where the GROD weakly follows the twin orientation but is also affected by the grain geometry (Figure 12b).
HR-EBSD maps (Figure 13) reveal the distributions of GNDs and intragranular stress heterogeneity in the endmember grains in Figure 12. The grain in Figure 12b, which exhibited the least impact of the twins on the distribution of GROD, also exhibits negligible impact of the twins on GND density (Figure 13a). Within the grain interior, GND density is generally at or below the background noise level of approximately \(10^{13.5}\) m\({}^{-2}\) arising from noise in the rotation measurements. Apparent GND densities above this noise level are highly localised in the vicinity of twin boundaries and potentially result from reduced pattern quality indicated by the band-contrast map. However, GND densities are significantly elevated to the order of \(10^{14}\) m\({}^{-2}\) adjacent to a grain boundary along the right edge of the map. Comparable distributions are evident in the maps of stress heterogeneity, with stresses in the grain interior being relatively homogeneous and stresses near the grain boundary being elevated to several hundred megapascals.
Figure 11: EBSD maps of samples deformed at a pressure of 600 MPa and temperature of 200\({}^{\circ}\)C. (a) Wombeyan marble, (b) Carrara marble, and (c) Solnhofen limestone. Grains are coloured according to the inverse pole figure. Twin boundaries are not shown for clarity. White areas were not indexed and are mostly due to poor quality diffraction patterns along twin boundaries.
Figure 12: Map of grain reference orientation deviation (GROD) obtained from EBSD maps of Run0093 (\(P=600\) MPa, \(T=200^{\circ}\)C). Insets (b), (c) and (d) are small-area EBSD maps showing the local GROD within grains with varying interaction between lattice distortion and twins. Strong interactions between twins and lattice curvature are evident in (c), intermediate interaction is present in (b), and weak interaction is apparent in (d). Insets b) and d) also exhibit significant lattice curvature in the vicinity of grain boundaries.
Figure 13: HR-EBSD maps of (a) the grain in Figure 12b and (b) the grain in Figure 12c. Each of these subfigures presents maps of the band contrast in the diffraction patterns, which reveals the locations of twins, GND density, and heterogeneity in stress (\(\sigma_{ij}\)). (c) Normal probability plot of \(\sigma_{12}\) in each grain. Straight lines indicate a normal distribution. (d) Restricted second moment (\(\nu_{2}\)) versus ln\(\sigma_{12}\). Straight lines indicate that the probability distribution of the stress exhibits the form \(P(\sigma)\propto|\sigma|^{-3}\) expected of a population of dislocations.
Different distributions are apparent in HR-EBSD maps (Figure 13b) from within the grain in Figure 12c, which exhibited the greatest impact of the twins on the distribution of GROD. In this grain, zones of elevated GND density and stress extend a few micrometres from the twin boundaries and beyond the zones of reduced pattern quality represented in the band-contrast map. Within these zones, GND densities approach \(10^{14}\) m\({}^{-2}\) and shear stresses on the order of several hundred megapascals are common.
The probability distributions of the stress heterogeneity provide further information on the cause of the stresses. The normal probability plot (Figure 13c) exhibits similar probability distributions for both grains, with stresses that are significantly greater than those measured on the undeformed Si standard. Below stress magnitudes of approximately 300-500 MPa, the distributions fall on a straight line indicating these stresses are normally distributed. However, at greater stress magnitudes, the distributions depart from straight lines. These high-stress tails are typical of materials, including Cu and olivine, deformed by dislocation-mediated mechanisms (Jiang et al., 2013; Wallis et al., 2021, 2022). The plot of the restricted second moment \(\nu_{2}\) versus \(\ln(\sigma_{12})\) provides a further test of whether these high-magnitude stresses are the stress fields of dislocations (Figure 13d). On this plot, the distributions from the maps from each grain both fall on straight lines at high stresses, indicating that the distributions have the \(P(\sigma)\propto|\sigma|^{-3}\) form that is characteristic of the stress field of a population of dislocations (Groma and Bako, 1998; Wilkinson et al., 2014; Kalacska et al., 2017; Wallis et al., 2021). These characteristics indicate that the stresses are, at least in part, the stress fields of dislocations.
In addition to the EBSD mapping, we used the forescattered electron images to measure twin spacing, corrected for section obliquity in the same manner as Rutter et al. (2022) (Figure 14). We report values of twin density, \(\rho_{\rm t}\) (mm\({}^{-1}\)), which is computed as the inverse of the measured average twin spacing (Figure 14b). Our data agree with previously reported measurements of stress versus twin density. Twin density is lowest in Wombeyan marble deformed at a pressure of 600 MPa and temperature of 200\({}^{\circ}\)C with a value of 130 mm\({}^{-1}\), equivalent to a mean spacing of 7.5 \(\mu\)m. The highest density is obtained for Solnhofen limestone deformed at 600 MPa and 200\({}^{\circ}\)C, with a value of about 1500 mm\({}^{-1}\), equivalent to a mean twin spacing of 0.7 \(\mu\)m.
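For reference, the twin density used here is simply the inverse of the mean corrected twin spacing \(\bar{\lambda}\), and the reference line in Figure 14 corresponds to a square-root dependence of stress on twin density (the proportionality constant is left unspecified here):

\[\rho_{\mathrm{t}}=\frac{1}{\bar{\lambda}},\qquad\sigma\propto\rho_{\mathrm{t}}^{1/2}.\]

For example, \(\bar{\lambda}=7.5\) \(\mu\)m gives \(\rho_{\mathrm{t}}\approx 130\) mm\({}^{-1}\).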
## 4 Discussion
The mechanical data reveal that deformation of calcite rocks across the range of conditions that we tested is ductile. Almost all experiments demonstrate strain-hardening behaviour but the precise characteristics of the hardening vary with pressure and temperature. The deformation of calcite rocks in this study is accommodated by cracking, twinning and dislocation motion, depending on experimental conditions. The main question we seek to answer is what controls the rheological behaviour of calcite rocks in the semi-brittle regime? Determination of the rheological behaviour of calcite rocks requires knowledge of the approximate partitioning of strain between each active deformation mechanism. This can be answered by considering the microstructural, wave-speed and mechanical data.
### Brittle deformation
The evolution of wave speed combined with post-mortem measurements of crack density characterise the degree of microcracking occurring during deformation. At all temperatures at which strength is pressure sensitive, increasing pressure results in
Figure 14: The final differential stress at the end of deformation as a function of twin density for this study is shown in black symbols. Additional data from previous studies are shown in grey. CM RR 1990, TM RR 1990 and SL RR 1990 are data from Carrara marble, Taiwan marble, and Solnhofen limestone by Rowe and Rutter (1990), CM RWK 2022 is data from Carrara marble by Rutter et al. (2022), and CM Ry 2013 is data from Carrara marble by Rybacki et al. (2013). The reference line indicates the gradient of a relationship in which stress is dependent on the square root of twin density.
increasing hardening rates, smaller wave-speed decreases and a reduction in post-mortem crack density (Figure 15 and Table 5). Similar observations of decreasing _post-mortem_ crack density with increasing pressure were made by Fredrich et al. (1989) in Carrara marble at room temperature. Measurements of volumetric strain during the deformation of Carrara marble and Solnhofen limestone at room temperature made by Edmond and Paterson (1972) demonstrate that dilatancy is suppressed at high pressure. Both of these studies also report increased hardening rates with increasing pressure. Decreasing strain accommodation by cracking is therefore systematically associated with increasing hardening rates (Figure 15).
With increasing temperature, pressure-independent strength is reached at a lower pressure. In conjunction with this, deformation at elevated pressure is systematically related to smaller drops in wave speed and increased hardening rates. Crack density is also lower for a given pressure at higher temperature. Measurements of reduced pore volume during deformation of Carrara marble at high temperatures by Fischer and Paterson (1989) support this observation. Taken together, these observations indicate that cracking is suppressed by increasing temperature.
Another interesting observation is the large wave-speed decrease during decompression. This feature suggests that significant sample damage is accrued during decompression. Samples with the largest wave-speed decrease are those deformed at high-pressure, low-temperature conditions, which correspond to a low intragranular fracture density (Table 5).
Figure 15: Hardening modulus computed at 5% strain plotted against post-mortem crack densities. a) Carrara marble data from room-temperature experiments (circles this study, stars Fredrich et al. (1989)) and at 200\({}^{\circ}\)C (triangles). b) Wombeyan marble from experiments at room temperature (circles) and 200\({}^{\circ}\)C (triangles).
These samples are characterised by open grain boundaries (Figure 8b, c, e and g), suggesting that grain-boundary opening is the cause of the observed wave-speed decrease. Similar observations were made by Edmond and Paterson (1972), who reported increases in volumetric strain during decompression in deformed Solnhofen limestone and Carrara marble, with the magnitude of volumetric strain change increasing with the final stress level. This phenomenon is likely to result from deformation caused by strain incompatibility between grains (Ashby, 1970). During deformation at high pressure, individual grains deform both plastically and elastically to accommodate the imposed strain in the sample, which results in heterogeneous internal stresses. Upon removal of the confining pressure, these internal stresses are no longer in equilibrium with the applied external forces and are likely to generate interface cracks between grains to accommodate the strain incompatibilities accrued during deformation.
### Grain-size sensitive behaviour
Our results demonstrate that yield stress and the hardening modulus are functions of grain size (Figure 4 and Table 4). This result is consistent with the works of Olsson (1974) and Fredrich et al. (1990) and other published results at both low temperatures (Paterson, 1958; Heard, 1960; Mogi, 1964; Donath & Fruth, 1971; Edmond & Paterson, 1972; Rutter, 1974; Fredrich et al., 1989; Rybacki et al., 2021) and high temperatures (Renner et al., 2002).
Based on experiments at room temperature and moderate pressure, Fredrich et al. (1990) discussed grain-size strengthening in the model framework of Horii and Nemat-Nasser (1986). Horii and Nemat-Nasser (1986) solved a wing-crack model coupled to a 'plastic zone', and their results predicted that coarser-grained materials should be more 'brittle', _i.e._, the ratio of differential stress to confining pressure at the brittle-ductile transition should decrease with increasing grain size. The results of Fredrich et al. (1990) showed that this ratio was independent of grain size, which they explained by considering that the plastic yield stress scaled as \(D^{-1/2}\). This hypothesis is supported by our observations. More generally, yield stress is observed to scale inversely with grain size in metals (Cordero et al., 2016) and also in olivine (Hansen et al., 2019).
Grain-size strengthening is a common phenomenon in metals and has received significant attention in the metallurgy literature (see Cordero et al. (2016) and Y. Li et al.
(2016) for reviews). Strengthening with decreasing grain size is usually termed the Hall-Petch effect (Hall, 1951; Petch, 1953), and is typically of the form
\[\sigma=\sigma_{0}+KD^{-m}, \tag{1}\]
where \(\sigma_{0}\) is the intrinsic resistance of a lattice to dislocation motion, \(K\) is a material-dependent Hall-Petch coefficient and \(m\) is a dimensionless exponent. In the original observations and theoretical arguments of Hall (1951) and Petch (1953), the exponent \(m=0.5\). Values of \(m=0.2\)-1 have been reported in metals (Dunstan & Bushby, 2014; Cordero et al., 2016; Y. Li et al., 2016).
We fitted Equation 1 to our data using a least-squares regression and setting \(m=0.5\) (Table 6), similar to Renner et al. (2002). Although our results suggest \(m\) in the range 0.3-0.6 (Figure 4), fitting the exponent significantly modifies \(\sigma_{0}\) and \(K\) and makes comparison among our datasets challenging. Values of the apparent lattice resistance, \(\sigma_{0}\), and the Hall-Petch coefficient, \(K\), increase with strain at all temperatures. Values of the lattice resistance are largest at \(T=20^{\circ}\)C, with \(\sigma_{0}=155\)-274 MPa, and are smaller
\begin{table}
\begin{tabular}{l c c} \hline \hline Variable & \(\sigma_{0}\) (MPa) & \(K\) (MPa m\({}^{0.5}\)) \\ \hline \(T=20\)\({}^{\circ}\)C & & \\ \(\sigma_{y}\) & 155 & 0.40 \\ \(\sigma_{2.5}\) & 244 & 0.73 \\ \(\sigma_{5}\) & 274 & 0.87 \\ \hline \(T=200\)\({}^{\circ}\)C & & \\ \(\sigma_{y}\) & 83 & 0.51 \\ \(\sigma_{2.5}\) & 149 & 0.72 \\ \(\sigma_{5}\) & 185 & 0.79 \\ \hline \(T=400\)\({}^{\circ}\)C & & \\ \(\sigma_{y}\) & 68 & 0.52 \\ \(\sigma_{2.5}\) & 91 & 0.86 \\ \(\sigma_{5}\) & 108 & 0.96 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of fitting collated mechanical data with the Hall-Petch relation, with \(m=0.5\) (Equation 1).
at \(T=200\)\({}^{\circ}\)C, with \(\sigma_{0}=83\)-185 MPa, and reduce further at 400 \({}^{\circ}\)C to \(\sigma_{0}=68\)-108 MPa. The values for the lattice resistance are consistent with measurements from single-crystal experiments by J. H. De Bresser (1991) and Turner et al. (1954) at the same temperatures. Values of the Hall-Petch coefficient are largely unaffected by temperature changes. For the reported values of \(\sigma_{y}\), \(K=0.4\)-0.52 MPa m\({}^{0.5}\); at larger strain, for values of \(\sigma_{2.5}\), the Hall-Petch coefficient increases to \(K=0.72\)-0.86 MPa m\({}^{0.5}\), with further increases at large strain to \(K=0.79\)-0.96 MPa m\({}^{0.5}\) for the \(\sigma_{5}\) values. When \(K\) is normalised by the product of the shear modulus \(G\) and the square root of the Burgers vector \(b\), with \(G\) = 35 GPa and \(b\) = 0.74 nm, it falls into the range 0.42-1, which is consistent with values typically obtained for BCC and HCP metals (Cordero et al., 2016). In summary, our data combined with literature sources reveal several key features of the Hall-Petch effect in calcite rocks: (1) the Hall-Petch effect is amplified by strain, being weakest at yield and strong after plastic strain; (2) the apparent lattice resistance increases with plastic strain and decreases with temperature; and (3) the Hall-Petch coefficient is independent of temperature.
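For readers wishing to reproduce this type of fit, the sketch below shows how Equation 1 with \(m=0.5\) reduces to a linear regression of strength against \(D^{-1/2}\). The grain sizes and strengths used here are assumed, order-of-magnitude values chosen only to illustrate the procedure; they are not the collated dataset, and the final lines evaluate the dimensionless form \(K/(G\sqrt{b})\) commonly used to normalise Hall-Petch coefficients.

```python
import numpy as np

# Assumed, illustrative grain sizes (m) and yield stresses (MPa); these loosely
# represent a fine, an intermediate and a coarse calcite rock, not the real data.
D = np.array([5e-6, 100e-6, 1e-3])
sigma_y = np.array([335.0, 195.0, 170.0])

# With m = 0.5 fixed, Equation 1 is linear in x = D**-0.5:
#   sigma = sigma_0 + K * x, so ordinary least squares gives sigma_0 and K directly.
x = D ** -0.5
K, sigma_0 = np.polyfit(x, sigma_y, 1)   # slope K in MPa m^0.5, intercept sigma_0 in MPa
print(f"sigma_0 ~ {sigma_0:.0f} MPa, K ~ {K:.2f} MPa m^0.5")

# Dimensionless Hall-Petch coefficient, K / (G * sqrt(b))
G = 35e3        # shear modulus, MPa
b = 0.74e-9     # Burgers vector, m
print(f"K / (G sqrt(b)) ~ {K / (G * np.sqrt(b)):.2f}")
```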
Extensive reviews by Cordero et al. (2016) and Y. Li et al. (2016) summarise proposed physical models of the Hall-Petch effect. Y. Li et al. (2016) identify four categories of models: (1) the dislocation pile-up model, in which grain boundaries act as obstacles that cause pile-ups until the stress in front of the pile-up is sufficient to induce yielding of neighbouring grains (Hall, 1951; Petch, 1953); (2) the grain-boundary ledge model, in which grain- and subgrain-boundary irregularities emit forest dislocations that act as obstacles (see J. C. Li, 1963); (3) the plastic-strain model, in which the rate of increase in dislocation density with plastic strain is inversely proportional to grain size (Conrad et al., 1967); (4) the elastic-anisotropy model, in which interactions among elastically anisotropic grains require the introduction of GNDs, and smaller grains have relatively larger strain gradients and greater GND densities (Meyers & Ashworth, 1982). All of these models, except that of Meyers and Ashworth (1982), arrive at an expression similar to Equation 1. The coefficient \(K\) always includes the shear modulus, \(G\), and the Burgers vector, \(b\), as well as other geometric constants, and \(m=0.5\). However, fitting exercises have shown that for a wide range of metals \(m\neq 0.5\), and that the data may be better fit by \(m=1\) or by the relationship \(\ln d/d\) (Dunstan & Bushby, 2014; Y. Li et al., 2016).
A grain-size dependence of the yield stress suggests that the plastic-strain model 3 is not appropriate for calcite as a finite amount of macroscopic plastic strain is required to generate the Hall-Petch effect (Conrad et al., 1967; Ashby, 1970). As pointed out by
Hansen et al. (2019), models 1, 2 and 4 also require finite plastic strain, although this may be sufficiently localised for the macroscopic behaviour to still appear elastic prior to macroscopic yielding (Maass & Derlet, 2018). The temperature dependence of the yield stress suggests that the models must include short-range dislocation interactions, as long-range interactions are elastic and therefore largely temperature insensitive (Hansen et al., 2019).
In model 1, the Hall-Petch effect is attributed to pile-ups of dislocations at grain boundaries, and their role in promoting yield of neighbouring grains (Hall, 1951; Petch, 1953). In this case, larger grains can support longer pile-ups, which generate more intense stress concentrations that promote yield of neighbouring grains. If this model is relevant, we might expect to see pile-ups of dislocations at grain boundaries and localised strain transfer across grain boundaries. Quintanilla-Terminel and Evans (2016) use a micro-grid to estimate microscale strains in deformed Carrara marble and document the occurrence of strain transfer across grain boundaries, although this appears to be in broad zones. Further observations using transmission electron microscopy and HR-EBSD are needed to characterise dislocation structures in the vicinity of grain boundaries in calcite to test the relevance of this model.
Model 2 considers that dislocations are emitted from grain-boundary ledges (see J. C. Li, 1963). As materials with finer grain sizes have greater grain-boundary areas, there are more potential sites for the generation of dislocations, resulting in a higher dislocation density and therefore a higher stress required for macroscopic plastic deformation. The grain-boundary ledge model is difficult to assess in our samples. Some evidence is that undulose extinction is often controlled by the geometry of grain boundaries (Figure 9), indicating that dislocation activity is influenced by grain boundaries, although this does not identify whether grain boundaries act as dislocation sources.
The elastic-anisotropy model of Meyers and Ashworth (1982) relies on incompatibilities in elastic strain between neighbouring grains. When stress is applied, GNDs form to accommodate the elastic mismatch between grains. Our microstructural observations and wave-speed data may be compatible with this model. Open grain boundaries (Figure 8 c, e and g) and large wave-speed decreases during decompression (Figure 6) suggest significant relaxation of internal stress during decompression; in turn, this internal stress may result from elastic mismatch among grains and associated GND formation.
EBSD observations also reveal significant lattice curvature (Figure 12) and increases in GND density in the vicinity of grain boundaries (Figure 13). Furthermore, microscale observations of strain in deformed calcite rocks by Spiers (1979) and Quintanilla-Terminel and Evans (2016) reveal local heterogeneity in finite strain near to grain boundaries. In particular, Quintanilla-Terminel and Evans (2016) identify strain heterogeneity on a scale similar to the grain size. Wallis et al. (2018) observed an increase in misorientation in the vicinity of grain boundaries in naturally deformed calcite rocks, consistent with the presence of plastic strain gradients imparted in response to strain incompatibility among neighbouring grains (Meyers & Ashworth, 1982).
In summary, our observations suggest that model 1, 2 or 4 may be applicable to calcite. More systematic microstructural observations are required to discriminate among these models. In general, all of these models predict that the strength is sensitive to the mean free path of dislocations, which is controlled by grain size. Therefore, Hall-Petch models are closely related to Kocks-Mecking-Estrin (KME) single state-variable models of strength, which incorporate the role of the dislocation mean free path (Kocks, 1966, 1976; Mecking & Kocks, 1981). To explore the KME model, we must consider the role of twins, which is discussed in the next section.
### Is TWIP compatible with the deformation of calcite rocks?
The activity of mechanical twinning in calcite is closely related to the magnitude of differential stress (Jamison & Spang, 1976; Spiers, 1979; Rowe & Rutter, 1990). In experiments, Rowe and Rutter (1990) demonstrated that the spacing of twins decreases with increasing stress, independent of temperature and grain size. Twins are observed in all our deformed samples, and twin density increases with differential stress consistently with the observations of Rowe and Rutter (1990); Rybacki et al. (2013); Rutter et al. (2022). Rybacki et al. (2013) also demonstrated that stress was proportional to the square root of twin density at low stresses (\(<250\) MPa). However, at high stress, the value of stress saturates with respect to twin density and the square root dependence breaks down (Rowe & Rutter, 1990, and Figure 14). Transmission Electron Microscope observations of calcite deformation twins indicate that twins often interact with dislocations (Barber & Wenk, 1979; Fredrich et al., 1989; Rybacki et al., 2013). Given these observations, Rybacki et al. (2021) argued that twin spacing linearly decreases as dislocation density increases, as flow stress is proportional to the square root of dislocation
density (Taylor, 1934). These observations suggest that twin spacing either is a consequence of, or controls, the strength of calcite rocks.
Rybacki et al. (2021) argued that twin spacing directly controls the strain hardening rate, and by extension the strength, by drawing analogy to twinning induced plasticity steels (TWIP, De Cooman et al., 2018). Models of TWIP originate from high manganese steels, which exhibit high hardening rates (approx. 3% of the shear modulus) with respect to other steels (approx. 0.05 % of the shear modulus), as a consequence of mechanical twinning. The mechanism of TWIP originates from the abrupt changes in crystallographic orientation at twin boundaries, which can act as barriers to dislocation motion. Progressive twinning leads to dynamic refinement of the microstructure (De Cooman et al., 2018), in which finer twin spacing reduces the mean free path of dislocations (\(\lambda\)) and causes a dynamic Hall-Petch effect.
In phenomenological models of TWIP in the metallurgical literature, strain hardening is attributed to a dynamic Hall-Petch effect resulting from progressive twinning (Bouaziz et al., 2008). The model of Bouaziz et al. (2008) is formulated by first considering the Taylor equation that relates shear flow stress (\(\tau\)) to the total dislocation density (\(\rho\)),
\[\tau=\tau_{0}+\tau_{b}+\alpha\mu b\sqrt{\rho}, \tag{2}\]
where \(\tau_{0}\) is the initial strength of a polycrystal, \(\tau_{b}\) is the back stress, which may arise from long-range dislocation interactions and stress fields around twins, \(\alpha\) is a constant close to unity, \(b\) is the Burgers vector and \(\mu\) is the shear modulus. In their formulation, the final term represents isotropic hardening due to short-range dislocation interactions. The Taylor equation has been demonstrated to apply to calcite rocks deformed at temperatures of 550-700\({}^{\circ}\)C (J. H. P. De Bresser, 1996), although the relative contributions of kinematic hardening, due to long-range dislocation interactions that generate back stress, and isotropic hardening, due to short-range dislocation interactions, have not been separated.
To obtain the evolution of stress with strain, the Taylor relation is combined with a modified Kocks-Mecking-Estrin equation (Kocks, 1966, 1976; Mecking & Kocks, 1981) to describe the change of \(\rho\) with strain (Bouaziz et al., 2008):
\[\frac{d\rho}{d\varepsilon}=\frac{1}{b\lambda}-f\rho=\frac{1}{b}\left(\frac{1} {D}+\frac{1}{D_{\rm t}}+k\sqrt{\rho}\right)-f\rho, \tag{3}\]
where \(f\) is a rate- and temperature-dependent dynamic-recovery coefficient, \(k\) is a constant that characterises dislocation storage due to dislocation interactions, and \(D_{t}\) is the
twin spacing. In this model, changes in the dislocation mean free path are sensitive to the total dislocation density, grain size and twin spacing. An additional term, given by the product of the recovery factor and the dislocation density, \(f\rho\), is subtracted to account for dynamic recovery processes. Rybacki et al. (2021) argued that this model could potentially capture the rheological behaviour of calcite polycrystals.
The dynamic Hall-Petch model of Bouaziz et al. (2008) provides a physical basis for the dependence of stress on the square root of twin density at low stress. Rybacki et al. (2021) also argued that the high hardening rates (3-5% of the shear modulus, Figure 5) observed in calcite rocks are consistent with TWIP. However, it should be noted that high hardening rates (compared to typical expectations in metals of 0.5-1% of the shear modulus) are not unique to calcite rocks, as olivine, which does not twin, exhibits hardening rates up to 5-10% of the shear modulus when deformed by low-temperature plasticity (Hansen et al., 2019; Druiventak et al., 2011).
The TWIP model also suggests that the hardening rate should be dominated by twin spacing, as \(D_{\text{t}}\) is always at least 1-2 orders of magnitude smaller than the grain size \(D\) (Figure 14). Taking the case of Carrara marble at 600 MPa and 200 \({}^{\circ}\)C, with \(D=100\) \(\mu\)m and \(D_{\text{t}}=5\) \(\mu\)m, we might expect strain hardening of \(450\alpha\mu\) (neglecting recovery and forest hardening). At the same conditions for Wombeyan marble, \(D=1\) mm and \(D_{\text{t}}=7.5\) \(\mu\)m, so that strain hardening should be \(360\alpha\mu\). The ratio of hardening rates would therefore be 1.25. The actual hardening rates observed in our experiments are in a ratio of 2 (considering \(H_{5}=1.6\) and 0.8 GPa for Carrara marble and Wombeyan marble, respectively), which suggests that twins may have a smaller impact on hardening than anticipated from Equation 3.
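This ratio follows directly from Equation 3 when recovery and forest hardening are neglected, since the hardening contribution then scales with \(\sqrt{1/D+1/D_{\text{t}}}\); the prefactors of roughly 450 and 360 quoted above appear consistent with this quantity expressed in m\({}^{-0.5}\). A quick arithmetic check:

```python
import math

def hardening_factor(D, D_t):
    """sqrt(1/D + 1/D_t) in m^-0.5: the grain-size and twin-spacing contribution
    to hardening when recovery and forest storage are neglected in Equation 3."""
    return math.sqrt(1.0 / D + 1.0 / D_t)

carrara = hardening_factor(100e-6, 5e-6)    # D = 100 um, D_t = 5 um
wombeyan = hardening_factor(1e-3, 7.5e-6)   # D = 1 mm,  D_t = 7.5 um

print(f"Carrara factor  ~ {carrara:.0f} m^-0.5")                 # ~460
print(f"Wombeyan factor ~ {wombeyan:.0f} m^-0.5")                # ~370
print(f"predicted hardening ratio ~ {carrara / wombeyan:.2f}")   # ~1.25, versus ~2 observed
```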
#### 4.3.1 Twins as potential barriers to dislocations
Our main observation is that of grain size strengthening (Figure 4), and it is difficult to assess the exact role of twins in the hardening behaviour from our microscopic data alone. The TWIP model is founded on the notion that twins produce further hardening by either retarding or stopping dislocation motion. In order to test the potential validity of the TWIP model in calcite, in this Section we assess the respective efficiency of grain boundaries and twins at impeding dislocations.
Microstructural observations suggest that the interaction between dislocations and twin boundaries varies between grains. Some grains contain lattice curvature that is clearly affected by twin boundaries (_e.g._, Figure 12c), whereas in other grains the lattice curvature appears to be largely unaffected by twin boundaries (_e.g._, Figure 9c and Figure 12d). The trapping of dislocations therefore appears to depend on grain orientation and the related twin orientation.
We can assess the effectiveness of twin boundaries and grain boundaries as barriers to dislocations by considering slip-transmission coefficients. The simplest form of this analysis is purely geometric and considers only the slip-plane orientation and direction of the Burgers vector of the incoming and outgoing slip systems (Luster & Morris, 1995),
\[m^{\prime}=(n_{\rm A}\cdot n_{\rm B})(b_{\rm A}\cdot b_{\rm B})=\cos(\phi)\cos(\kappa) \tag{4}\]
where \(n\) denotes the unit normal vector of the slip plane and \(b\) the unit Burgers vector, and the subscripts A and B denote the incoming and outgoing slip systems. A value of zero for \(m^{\prime}\) indicates an impenetrable barrier to dislocations and a value of one indicates a transparent boundary.
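As a minimal illustration of Equation 4, the function below evaluates \(m^{\prime}\) for a pair of slip systems supplied as Cartesian unit vectors; the conversion of calcite's four-index plane and direction notation to Cartesian vectors (handled by MTEX in the calculations described below) is omitted, and the example vectors are arbitrary placeholders rather than real calcite slip systems.

```python
import numpy as np

def m_prime(n_A, b_A, n_B, b_B):
    """Geometric slip-transmission coefficient of Luster and Morris (1995):
    m' = (n_A . n_B)(b_A . b_B) = cos(phi) cos(kappa),
    with plane normals n and Burgers vector directions b as Cartesian vectors."""
    unit = lambda v: np.asarray(v, dtype=float) / np.linalg.norm(v)
    n_A, b_A, n_B, b_B = (unit(v) for v in (n_A, b_A, n_B, b_B))
    return float(np.dot(n_A, n_B) * np.dot(b_A, b_B))

# Placeholder systems: a nearly parallel incoming and outgoing system gives m'
# close to 1, i.e. a nearly transparent boundary.
print(m_prime([0, 0, 1], [1, 0, 0], [0.1, 0, 1], [1, 0.1, 0]))  # ~0.99
```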
Extensive work by J. De Bresser and Spiers (1990); J. H. P. De Bresser and Spiers (1993); J. H. De Bresser and Spiers (1997) demonstrated that \(r\{1\,0\,\overline{1}\,4\}\langle\overline{2}\,0\,2\,1\rangle^{\pm}\) and \(f\{\overline{1}\,0\,1\,2\}\langle 2\,\overline{2}\,0\,1\rangle^{\pm}\) are the dominant slip systems in calcite at low temperatures. The critical resolved shear stresses of these slip systems are about an order of magnitude greater than that of \(e\) twinning at room temperature. Despite their strength, we consider them to be active in our samples since our stress level is typically significantly above this level and we observe
Figure 16: Computed values of \(M^{\prime}\) for a) grain boundaries and b) twin boundaries.
significant intragranular misorientation. We computed slip transmission across twin and grain boundaries using a random fabric in MTEX. Maximum values of \(m^{\prime}\) across twin boundaries (Table 7) demonstrate that \(m^{\prime}\) is always greater than 0.4, and in some cases twins are transparent with a value of 1 (_e.g._, \(r_{2}^{+}\) to \(r_{3}^{+}\) across an \(e_{1}\) twin). Given also the large number of available slip systems, this analysis suggests that twin boundaries may impede dislocation motion less than randomly oriented grain boundaries.
To further compare the effect of twin and grain boundaries on dislocation motion, we can also consider the effects of twin-boundary orientation. This analysis can be performed
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(S_{\rm A}\) & maximum \(m^{\prime}_{e1}\) & \(S_{\rm B}\) & maximum \(m^{\prime}_{e2}\) & \(S_{\rm B}\) & maximum \(m^{\prime}_{e3}\) & \(S_{\rm B}\) \\ \hline \(r_{1}^{-}\) & 0.617 & \(r_{1}^{-}\) & 0.414 & \(f_{1}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.414 & \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) \\ \(r_{2}^{-}\) & 0.414 & \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.617 & \(r_{2}^{-}\) & 0.414 & \(f_{2}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(r_{3}^{-}\) & 0.414 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.414 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.617 & \(r_{3}^{-}\) \\ \(r_{1}^{+}\) & 0.630 & \(r_{1}^{+}\) & 1.000 & \(r_{3}^{+}\) & 1.000 & \(r_{2}^{+}\) \\ \(r_{2}^{+}\) & 1.000 & \(r_{3}^{+}\) & 0.630 & \(r_{2}^{+}\) & 1.000 & \(r_{1}^{+}\) \\ \(r_{3}^{+}\) & 1.000 & \(r_{2}^{+}\) & 1.000 & \(r_{1}^{+}\) & 0.630 & \(r_{3}^{+}\) \\ \(f_{1}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{1}^{-}\) & 0.718 & \(f_{1}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{1}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{1}\langle 2\,\overline{2}\,0\,1\rangle^{+}\) & 0.541 & \(r_{1}^{-}\) & 0.718 & \(f_{1}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{1}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{2}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{2}^{-}\) & 0.718 & \(f_{2}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.718 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{3}^{-}\) \\ \(f_{3}\langle\overline{2}\,0\,2\,1\rangle^{+}\) & 0.718 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{3}^{-}\) \\ \(f_{1}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{1}^{-}\) & 0.596 & \(f_{1}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.717 & \(f_{1}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.596 & \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{2}^{-}\) & 0.596 & \(f_{2}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{2}\langle\overline{2}\,0\,2\,1\rangle^{+}\) & 0.596 & \(f_{2}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{2}^{-}\) & 0.596 & \(f_{2}\langle 0\,\overline{2}\,2\,\overline{1}\rangle^{-}\) \\ \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.718 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.718 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{3}^{-}\) \\ \(f_{3}\langle\overline{2}\,2\,0\,1\rangle^{+}\) & 0.718 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.718 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{3}^{-}\) \\ \(f_{3}\langle\overline{2}\,\overline{2}\,0\,1\rangle^{+}\) & 0.718 & \(f_{3}\langle 2\,0\,\overline{2}\,\overline{1}\rangle^{-}\) & 0.718 & \(f_{3}\langle\overline{2}\,2\,0\,\overline{1}\rangle^{-}\) & 0.541 & \(r_{3}^{-}\) \\ \hline \end{tabular}
\end{table}
Table 7: Slip-transmission analysis of twin boundaries. Tabulated results for the maximum value of \(m^{\prime}\) (Equation 4) between slip system \(S_{\rm A}\) and \(S_{\rm B}\). The \(m^{\prime}\) subscript denotes the twinning system considered: _e.g._, \(m^{\prime}_{e1}\) represents slip transfer across an \(e1\) twin.
with a geometric criterion that quantifies the degree of misalignment between the line intersections of incoming and outgoing slip planes with the twin plane, \(\mathbf{l}_{\mathrm{A}}\) and \(\mathbf{l}_{\mathrm{B}}\), respectively, and between their slip directions, \(\mathbf{d}_{\mathrm{A}}\) and \(\mathbf{d}_{\mathrm{B}}\), respectively. The following scalar quantity is maximal when the two slip systems on either side of the boundary are aligned and slip can be transmitted easily across the boundary (Shen et al., 1986; Bayerschen et al., 2016):
\[\hat{M}=(\mathbf{l}_{\mathrm{A}}\cdot\mathbf{l}_{\mathrm{B}})(\mathbf{d}_{ \mathrm{A}}\cdot\mathbf{d}_{\mathrm{B}}). \tag{5}\]
The line intersections \(\mathbf{l}\) can be obtained from \(\mathbf{l}=(\mathbf{n}\times\mathbf{n}_{\Gamma})/|\mathbf{n}\times\mathbf{n}_{\Gamma}|\), where \(\mathbf{n}_{\Gamma}\) is the twin-boundary plane normal.
We used the \(\hat{M}\) criterion to compare the efficiency of slip transmission across twin boundaries to slip transmission across grain boundaries. A random fabric was generated using MTEX, and we computed \(\hat{M}\) between random grain pairs by assuming that the activated slip system (either \(r^{\pm}\) or \(f^{\pm}\) slip) was that with the highest Schmid factor in each grain. For each grain, we also computed \(\hat{M}\) across a twin boundary hosted in the initial grain; the activated twin system was assumed to be that with the highest Schmid factor in each grain. These calculations suggest an average value for slip transmission of \(\hat{M}=0.27\) across grain boundaries, which is considerably smaller than the average value of \(\hat{M}=0.54\) for slip transfer across twin boundaries.
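A corresponding sketch of the \(\hat{M}\) criterion in Equation 5 is given below, again with Cartesian unit vectors; the boundary normal and slip systems are placeholders rather than the Schmid-factor-selected systems used in the MTEX computation.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def m_hat(n_A, d_A, n_B, d_B, n_boundary):
    """M_hat = (l_A . l_B)(d_A . d_B), with l = (n x n_boundary)/|n x n_boundary|
    (Shen et al., 1986; Bayerschen et al., 2016)."""
    n_A, d_A, n_B, d_B, n_G = (unit(v) for v in (n_A, d_A, n_B, d_B, n_boundary))
    l_A = unit(np.cross(n_A, n_G))
    l_B = unit(np.cross(n_B, n_G))
    return float(np.dot(l_A, l_B) * np.dot(d_A, d_B))

# Placeholder check: identical slip systems on either side of the boundary
# transmit perfectly, giving M_hat = 1.
print(m_hat([0, 0, 1], [1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]))  # 1.0
```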
To further test this result, we computed the expected \(m^{\prime}\) values for our EBSD data across the activated twin systems within each grain (Figure 17). The results demonstrate that grains with low values of \(m^{\prime}\) (Figure 17, grain c) exhibit dislocation substructures that are affected by twinning, that is, segmentation of lattice distortion between twins (Figure 12c). In contrast, grains with large values of \(m^{\prime}\) (Figure 17, grain d) exhibit gradients in lattice distortion over length scales approaching the grain size, but which are not strongly affected at smaller scales by twin boundaries. HR-EBSD maps taken from the interiors of these grains support this conclusion: large residual stresses and GND densities are observed in the vicinity of twin boundaries in grains for which \(m^{\prime}\) is small (Figure 13b and c), whereas lower residual stresses and GND densities are present in the vicinity of twin boundaries in grains for which \(m^{\prime}\) is large (Figure 13a and Figure 17b). The efficacy of twin boundaries as barriers to dislocation motion therefore depends on the local orientations of individual grains, but twin boundaries are, on average, weaker barriers than grain boundaries taken at random.
Figure 17: Map of \(\hat{M}\) values for slip transmission across grain boundaries (coloured according to value) and twin boundaries (grains coloured according to value) computed from EBSD data obtained for Run0093.
The relative ease with which dislocations can transmit across twin boundaries suggests that Equation 3 requires refinement. We suggest that weights should be applied to the contributions of twin boundaries and grain boundaries, such that Equation 3 becomes
\[\frac{d\rho}{d\varepsilon}=\frac{1}{b\lambda}-f\rho=\frac{1}{b}\left(\frac{k_{D} }{D}+\frac{k_{\mathrm{t}}}{D_{\mathrm{t}}}+k\sqrt{\rho}\right)-f\rho, \tag{6}\]
in which \(k_{\mathrm{D}}\) and \(k_{\mathrm{t}}\) are weights to account for the relative efficacy of grain boundaries and twin boundaries. We expect that \(k_{\mathrm{t}}\ll k_{\mathrm{D}}\). Further microstructural measurements, such as the evolution of twin density with strain, are required to determine the value of these weighting factors.
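To illustrate how Equations 2 and 6 combine into a stress-strain curve, the sketch below integrates the modified dislocation-density evolution with a simple forward-Euler scheme. All parameter values (the weights \(k_{\mathrm{D}}\) and \(k_{\mathrm{t}}\), the storage and recovery constants, the initial dislocation density, and the neglected back stress) are assumed for illustration and are not calibrated to the experiments.

```python
import numpy as np

# Assumed, illustrative parameters (not calibrated to the experimental data)
alpha, mu, b = 1.0, 35e3, 0.74e-9   # Taylor constant, shear modulus (MPa), Burgers vector (m)
tau_0, tau_b = 200.0, 0.0           # initial strength (MPa); back stress neglected here
D, D_t = 100e-6, 5e-6               # grain size and twin spacing (m)
k_D, k_t = 1.0, 0.1                 # boundary-efficiency weights, expecting k_t << k_D
k_forest, f = 0.02, 2.0             # forest-storage constant and dynamic-recovery coefficient

rho = 1e12                          # initial dislocation density (m^-2)
d_eps = 1e-4
strain = np.arange(0.0, 0.05 + d_eps, d_eps)
stress = np.empty_like(strain)

for i in range(strain.size):
    stress[i] = tau_0 + tau_b + alpha * mu * b * np.sqrt(rho)     # Equation 2
    # Equation 6: storage from grain boundaries, twin boundaries and forest
    # dislocations, minus dynamic recovery
    d_rho = (1.0 / b) * (k_D / D + k_t / D_t + k_forest * np.sqrt(rho)) - f * rho
    rho += d_rho * d_eps

print(f"flow stress at 5% strain ~ {stress[-1]:.0f} MPa")
```

Setting \(k_{\mathrm{D}}=k_{\mathrm{t}}=1\) recovers Equation 3, so the same loop can be used to compare the weighted and unweighted cases once the weighting factors are constrained.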
### Towards a model of semi-brittle flow in calcite rocks
Our observations combined with previous results suggest several key characteristics that should be captured by a model of semi-brittle flow in calcite-rich rocks at \(T<400^{\circ}\)C: (1) non-linearly increasing strength and hardening with increasing pressure, (2) decreasing strength with increasing temperature, (3) increasing strength with decreasing grain size, (4) strength that is nearly insensitive to strain rate. Rybacki et al. (2021) reviewed proposed models of semi-brittle flow, identifying difficulties in combining brittle and plastic flow into a simple unified model.
Through the semi-brittle flow regime, the strain contribution of brittle and crystal-plastic processes changes with depth. At low pressures and temperatures, at which brittle processes dominate, the macroscopic behaviour of rocks is described by frictional failure (Byerlee & Brace, 1966; Brace & Kohlstedt, 1980). Microphysical models of brittle deformation are typically based on the wing-crack model (e.g., Nemat-Nasser & Horii, 1982; Ashby & Sammis, 1990), in which brittle damage is accounted for by the propagation of Mode I wing cracks. In this regime, strength is pressure sensitive, dependent on grain size and, to first order, insensitive to strain rate. There is no strong temperature dependence, and plasticity is not considered.
There are only a small number of microphysical models in existence accounting for coupled brittle-plastic deformation. Horii and Nemat-Nasser (1986) modified and solved the problem of a wing crack coupled to a 'plastic zone' by considering a dislocation pile-up ahead of the shear crack. Their model is sensitive to pressure, grain size and also temperature, as the plastic yield strength in the plastic zone can vary with temperature.
However, their model predicts that materials with coarser grain size are more brittle, contradicting observations from experimental data (Fredrich et al., 1990).
More recently, Nicolas et al. (2017) derived a model incorporating the propagation of wing cracks, plastic pore collapse and nucleation of new cracks due to dislocation pile-ups. The model reproduces several important characteristics of the deformation of porous limestones. As pointed out by Rybacki et al. (2021), this model does not consider dynamic recovery or twinning, which are important deformation processes in calcite-rich rocks. The large number of free parameters makes this model challenging to test and may limit its general application.
An alternative approach to introducing feedbacks between cracking and plastic flow may be the use of a modified Kocks-Mecking equation. The anticorrelation between strain hardening and crack density indicated by our results (Figure 15) suggests that cracking acts to reduce stress and hardening. Cracks could play several roles. One possibility is that strength remains dictated by dislocation density, and correctly predicted by the Taylor equation (Equation 2), in which case a decreasing flow stress would imply a reduction of dislocation density and thus that cracks could act as dislocation sinks. This possibility is compatible with the idea that tensile cracks correspond to free surfaces within the material, and dislocations intersecting those free surfaces would form steps and disappear from the crystals. Another possibility is that cracks relax internal stress and strain incompatibilities between grains, i.e., act as "geometrically necessary" structures. A third option is that deformation at low pressure, at which cracks are pervasive, is not controlled by dislocation motion but is dominated by elastically accommodated intergranular slip, and tensile cracks relax the associated internal stresses.
The origin of microcracks during semi-brittle deformation of calcite is potentially coupled to intracrystalline plasticity. As discussed extensively by Nicolas et al. (2017), cracks can be nucleated due to stress concentrations at the head of dislocation pile-ups (e.g., Stroh, 1954; Olsson & Peng, 1976; Wong, 1990) and may therefore be dislocation sinks, i.e., microcracks could contribute to dislocation escape from the deformed crystals. Where slip transfer is inefficient (Olsson & Peng, 1976) and also where geometry results in a high density of GNDs (e.g., Figure 8d) stresses are high, which can lead to the nucleation of cracks.
An approach based on the Kocks-Mecking model has the advantage that twinning and grain size effects can be incorporated, whilst introducing a confining-pressure dependence due to the propagation of Mode I cracks. At this stage however, there are insufficient data on dislocation-density evolution during semi-brittle deformation to make sensible progress beyond the qualitative statements listed above. Thus, detailed modelling attempts are beyond the scope of the present work.
## 5 Conclusions
We performed a series of triaxial deformation experiments using three calcite rocks of variable grain size: Solnhofen limestone, Carrara marble and Wombeyan marble. Our experimental results demonstrate that the strength and hardening rates of calcite rocks deformed in the semi-brittle regime are inversely dependent on grain size.
Microstructural observations using visible-light and electron microscopy demonstrate that strain is accommodated by cracking, twinning and dislocation glide. Deformation tests were accompanied by _in-situ_ measurements of P-wave speed, which generally decreases with strain. Wave speed decreases more at room temperature than at 200 \({}^{\circ}\)C and 400 \({}^{\circ}\)C, which is consistent with microcracking being more prevalent at low temperature.
Quantitative microstructural observations reveal that microcrack density is inversely proportional to hardening rate. While the exact role of cracks in the overall stress-strain behaviour remains unclear at this stage, we propose the hypothesis that tensile microcracks cause weakening by either relaxing internal stresses (accommodating strain incompatibilities and reducing the need for geometrically necessary dislocations), or offering free surfaces where dislocations can escape individual grains, or a combination thereof.
Furthermore, significant decreases in wave speed are observed upon removal of confining pressure, indicating the accumulation of sample damage. Decompression-induced wave-speed decreases are greatest in experiments performed at high pressure (600 MPa) and low temperature (room temperature), in conjunction with the highest hardening rates. Microstructural observations from samples deformed at these conditions are characterised by open grain boundaries, suggesting that wave-speed decreases during decompression originate from the release of stored elastic strain.
Electron microscopy shows that twin density is high, consistent with previous studies at similar conditions. Twin spacing is always at least one order of magnitude smaller than grain size. The spatial distributions of intragranular misorientation suggest that twin boundaries do not always act as significant barriers to dislocation motion and slip-transfer computations indicate that twins are statistically weaker barriers than grain boundaries. Therefore, grain size exerts a first-order control on strength and strain hardening, whereas the spacing of twin boundaries may exert a second-order control on these properties.
Taken together, our results show that semi-brittle flow in calcite is controlled by grain-size dependent processes that lead to significant hardening. This behaviour is qualitatively consistent with rheological models that include dislocation density and twin spacing as key state variables. The role of cracking in the decrease of strain hardening at low pressure and temperature requires the addition of a quantity describing crack density (that should include information on crack spacing, length, and orientation distribution) as a new state variable, to address fully the stress-strain behaviour of rocks in the semi-brittle regime.
## Open Research Section
Processed experimental data (stress, strain and velocity change) is available from Zenodo ([https://doi.org/10.5281/zenodo.7347236](https://doi.org/10.5281/zenodo.7347236)).
## Acknowledgments
Extensive discussions with Erik Rybacki and Brian Evans, who shared some of their (then) unpublished data, helped to shape this work. Emmanuel David contributed to early technical developments on the Murrell apparatus. Technical support from John Bowles and Neil Hughes is greatly appreciated. Ian Jackson kindly provided the Wombeyan marble. Sarah Incel facilitated thin section preparation. Sheng Fan helped running SEM sessions. Discussions with Thomas Breithaupt, Jorg Renner and Chris Spiers contributed to our understanding of plastic deformation in calcite. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no 804685/"RockDEaF" to N.B.) and from the UK Natural Environment Research Council (Grant Agreement No.
NE/M016471/1 to N.B.). DW acknowledges support from a UK Research and Innovation Future Leaders Fellowship [grant number MR/V021788/1].
|
2306.13070 | Armed Conflict and Early Human Capital Accumulation: Evidence from
Cameroon's Anglophone Conflict | This paper examines the impact of the Anglophone Conflict in Cameroon on
human capital accumulation. Using high-quality individual-level data on test
scores and information on conflict-related violent events, a
difference-in-differences design is employed to estimate the conflict's causal
effects. The results show that an increase in violent events and
conflict-related deaths causes a significant decline in test scores in reading
and mathematics. The conflict also leads to higher rates of teacher absenteeism
and reduced access to electricity in schools. These findings highlight the
adverse consequences of conflict-related violence on human capital
accumulation, particularly within the Anglophone subsystem. The study
emphasizes the disproportionate burden faced by Anglophone pupils due to
language-rooted tensions and segregated educational systems. | Hector Galindo-Silva, Guy Tchuente | 2023-06-22T17:45:31Z | http://arxiv.org/abs/2306.13070v1 | # Armed Conflict and Early Human Capital Accumulation: Evidence from Cameroon's Anglophone Conflict
###### Abstract
This paper examines the impact of the Anglophone Conflict in Cameroon on human capital accumulation. Using high-quality individual-level data on test scores and information on conflict-related violent events, a difference-in-differences design is employed to estimate the conflict's causal effects. The results show that an increase in violent events and conflict-related deaths causes a significant decline in test scores in reading and mathematics. The conflict also leads to higher rates of teacher absenteeism and reduced access to electricity in schools. These findings highlight the adverse consequences of conflict-related violence on human capital accumulation, particularly within the Anglophone subsystem. The study emphasizes the disproportionate burden faced by Anglophone pupils due to language-rooted tensions and segregated educational systems.
**Keywords:** Anglophone Conflict, Cameroon, human capital accumulation, educational outcomes, language-based conflicts.
**JEL codes:** I25, O15, D74, J24.
## 1 Introduction
Since 2016, Cameroon, a central African country, has been immersed in a violent civil conflict between armed separatist groups and the government. Referred to as the Anglophone Conflict or the Anglophone Crisis, this conflict has been primarily concentrated in two regions where English is the predominant language, encompassing approximately 20% of the country's landmass. Its consequences have been devastating, resulting in a significant loss of life, with reported casualties exceeding 4,000, and causing the displacement of more than 788,000 individuals, leading to widespread population movements (Human Rights Watch, 2022).
The Anglophone Conflict in Cameroon is primarily driven by a linguistic division rooted in the country's colonial history. With English and French as official languages,1 the Francophone population holds power in the government and elite circles, while some Anglophones have long faced marginalization. The conflict, which began in 2016 as a peaceful protest by English-speaking lawyers and teachers, has since escalated into a violent clash between armed separatists and the Cameroonian government.2
Footnote 1: It is worth noting that although Cameroon boasts over 200 local languages, French and English serve as the official languages, aiming to unify its ethnically diverse population.
Footnote 2: Numerous reports indicate that English-speaking separatists, advocating for the establishment of an independent English-speaking state called Ambazonia, have terrorized civilians and engaged in attacks against government forces (Human Rights Watch, 2021). Additionally, there are reports of Cameroonian troops firing upon unarmed civilians and destroying their homes (Human Rights Watch, 2018; Amnesty International, 2018).
A significant aspect of the violence associated with this conflict emerged in 2017 when Anglophone activists initiated 'Operation Ghost Town.' This protest called for the closure of Anglophone schools in the north-west and south-west regions to oppose the perceived assimilation of the Anglophone education system into the French-speaking one.3 The boycott has been marked by violence, including student abductions and targeted killings of school staff (Human Rights Watch, 2018, 2021; Amnesty International, 2018; OCHA, 2021). These
incidents have become critical flashpoints in the ongoing conflict (The Guardian, 2018; The Washington Post, 2019).
This paper investigates the impact of violence associated with Cameroon's Anglophone Conflict on the acquisition of human capital by pupils. The unique characteristics of this conflict, such as its localized geography and linguistic grievances as the primary driver, present an exceptional opportunity to analyze the consequences of armed conflicts associated with language-related matters. Moreover, the coexistence of both Anglophone and Francophone education systems in Cameroon, with overlapping geographic areas, provides a natural laboratory to explore the implications for human capital resulting from conflicts of this nature.
Our analysis utilizes two main data sources: pupil test scores in reading and mathematics for Grades 2 and 6, obtained from the Programme d'Analyse des Systemes Educatifs de la CONFEMEN (PASEC) for the years 2014 and 2019. This data includes a representative sample of pupils from both the Francophone and Anglophone subsystems, allowing for a comparison before and after the onset of the Anglophone Conflict. Additionally, we rely on the Armed Conflict Location & Event Data Project (ACLED) to gather information on violent events and fatalities associated with the conflict. Spanning from 2000 to 2022, this dataset provides comprehensive information including the date, location, and actors involved in each event (such as Ambazonian rebels or the Cameroonian army).
Our study focuses on pupils in the conflict-affected North-West and South-West regions who are enrolled in the Anglophone subsystem. We compare this group, referred to as the treatment group, with Anglophone subsystem pupils in unaffected regions as the control group. Using a difference-in-differences methodology, we analyze the causal impact of the Anglophone Conflict on human capital accumulation. Our primary identification assumption is that, in the absence of the conflict and given certain observable variables (included in our analysis), the change in test scores between 2014 and 2019 would have been the same for all pupils. We provide evidence consistent with this assumption using data from pupils from the francophone subsystem during the same time period.
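As a schematic illustration of this design (not the exact specification estimated in the paper), the difference-in-differences effect of conflict exposure on test scores can be obtained by regressing scores on the interaction between local conflict intensity and a post-2016 indicator, with region and survey-wave fixed effects and standard errors clustered at the school level. All column names below (score, events, post, region, school_id, and the controls) are hypothetical placeholders for the merged PASEC-ACLED data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged pupil-level file: one row per pupil, with the 2014 and
# 2019 PASEC waves stacked and ACLED conflict counts attached by region.
df = pd.read_csv("pasec_acled_merged.csv")  # placeholder file name

model = smf.ols(
    "score ~ I(events * post) + C(region) + C(post) + age + sex + urban",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# The coefficient on I(events * post) is the difference-in-differences estimate:
# the change in scores associated with conflict exposure after 2016, relative
# to pupils in regions unaffected by the conflict.
print(model.params["I(events * post)"])
```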
Our main finding reveals a significant and negative causal relationship between violence
stemming from the conflict and the accumulation of human capital among pupils in the North-West and South-West regions of the Anglophone educational subsystem. Specifically, our estimates indicate that an additional ten violent events involving Ambazonian rebels correspond to a 2.6% decrease in reading test scores and a 2.1% decline in mathematics test scores for students enrolled in the Anglophone educational subsystem. Furthermore, for every ten deaths, there is a 1.9% decrease in language test scores and a 1.9% reduction in mathematics test scores for these pupils. These results indicate that events resulting in loss of life have a detrimental impact on the accumulation of human capital. Given the crucial role of early childhood years in human capital development, these declines may have significant long-term consequences (which we cannot directly study due to the temporal limitation of our data).
Moreover, our analysis indicates that a rise in fatal events involving Ambazonian rebels is linked to an increased rate of teacher absenteeism in the Anglophone education subsystem. This finding implies that the presence of a high-risk environment detrimentally impacts both the quality and quantity of pupils' learning experiences, ultimately leading to a diminished acquisition of skills. Additionally, our research demonstrates a negative association between conflict-related fatalities and schools' access to electricity. This observation implies that the presence of conflict exacerbates economic hardships, potentially leading pupils to shift their focus from studying to engaging in labor within either the legal or illegal economy. As a consequence, this further widens the disparities in learning outcomes among Anglophone pupils.
We also investigate the effect of the Anglophone conflict on educational outcomes of pupils enrolled in the Francophone education subsystem (in the Grand-Ouest and Grand-Nord regions). Due to the characteristics of the Anglophone conflict, we anticipate that these students have been less directly exposed to armed violence or have experienced it indirectly. Our analysis reveals that fatal events involving Ambazonian rebels have considerably smaller effects on their human capital accumulation compared to their Anglophone counterparts. In fact, the observed effects are approximately 10 times smaller for these pupils.
Our research not only highlights the detrimental impact of violence related to the Anglophone conflict on the accumulation of human capital but also sheds light on a mechanism by which ethnic or linguistic disparities are exacerbated within the context of armed conflict. Remarkably, this conflict either originates from or is driven by the intent to reduce these differences. Our study demonstrates that such exacerbation occurs not only in the short term, where there is direct destruction of lives and infrastructure, particularly in regions with a predominantly English-speaking population, but also in the long term, as early-life experiences significantly shape individuals' future educational and economic trajectories (Heckman, Pinto, and Savelyev, 2013). Specifically, we identify a discernible effect of the Anglophone conflict on students within the Anglophone and Francophone educational subsystems.
The existing body of literature on the impact of armed conflict on human capital is extensive. Previous studies, such as Bertoni, Di Maio, Molini, and Nistico (2019), have demonstrated the significant consequences of armed conflicts on the accumulation of human capital. Furthermore, exposure to violence during childhood has been linked to negative effects on various aspects of individuals' lives, including psychological well-being, health, and education, as indicated by Jurges, Stella, Hallaq, and Schwarz (2022).
Numerous studies have provided evidence of the detrimental effects of conflicts on school attainment, academic achievement, and educational performance. These studies, including those by Akbulut-Yuksel (2014); Bertoni, Di Maio, Molini, and Nistico (2019); Chamarbagwala and Moran (2011); Dabalen and Paul (2014); Di Maio and Nandi (2013); Leon (2012); Pivovarova and Swee (2015); Singh and Shemyakina (2016); Swee (2015); Verwimp and Van Bavel (2014); Valente (2014); Bruck, Di Maio, and Miaari (2019), have significantly contributed to our understanding of the relationship between conflict and human capital accumulation--the acquisition of skills, knowledge, and experience through formal education that are valuable in the production process.
However, the specific impacts of language-based violence remain poorly understood. Existing research has only tangentially explored how ethnic disparities relate to these effects, with limited attention paid to this aspect in studies by Chamarbagwala and Moran (2011);
Dabalen and Paul (2014); Bertoni, Di Maio, Molini, and Nistico (2019). Importantly, none of these studies have specifically focused on conflicts rooted in language identity, which constitutes the primary focus of this paper.
Moreover, our study makes a significant contribution to this literature by providing causal estimates of the effects of language-rooted conflicts. We demonstrate that when educational systems are segregated, the minority group disproportionately bears the burden of the conflict. To the best of our knowledge, this paper is the first to propose a quasi-experimental design using a difference-in-differences approach, complemented by high-quality measures of individual skill levels, to examine the effects of the Anglophone conflict in Cameroon. Consequently, our research contributes to the ongoing interdisciplinary efforts aimed at assessing the consequences of the Anglophone crisis and exploring potential peaceful resolutions (Willis, Angove, and Mbinkar, 2023; Willis, McAulay, Ndeunyema, and Angove, 2019; Nwati, 2020; Crawford, Kewir, Annan, and Beseng, 2021; Pelican, 2022; Ousmanou, 2022; Pelican, Schumann, Plucken, and Drew, 2022).
The remainder of the paper proceeds as follows. Section 2 presents details regarding the data sources and institutional background pertinent to early human capital and the Anglophone Conflict. Section 3 describes our research design. Section 4 examines the impact of armed conflict on early human capital accumulation. Section 5 focuses on estimating the effects of the conflict on the Francophone subsystem. Finally, Section 6 concludes the paper.
## 2 Institutional Background and Data Sources
### Cameroon's Primary Schools Education System
Cameroon's primary education sector is served by three main providers: the government, which is responsible for public schools; private institutions; and private confessional schools, such as Catholic, Islamic, and Protestant establishments. These providers deliver education in the country's two official languages, French and English, each with its own separate primary
education subsystem.
The primary education system in Cameroon is characterized by the presence of both French and English subsystems in all ten regions of the country. However, the majority of regions (eight out of ten) are predominantly French-speaking, resulting in a higher concentration of French schools. On the other hand, English-speaking regions like the North-West and South-West have a greater proportion of educational institutions that offer instruction in the English language.
Data collected during the 2014-2015 academic year shows that 71.6% of pupils were enrolled in the Francophone education system, while 28.4% were enrolled in the Anglophone education system (Alemnge, 2019). It is worth noting that 75% of pupils attend public schools, and there is a higher proportion of pupils in the Francophone system compared to the Anglophone system (Alemnge, 2019).
Regarding the evolution of the educational system, Figure 1 illustrates a consistent upward trend in primary school enrollment rates for both boys and girls from 1996 to 2016. Notably, there were periods of accelerated growth in enrollment rates in 2000 and 2010. However, in 2016, there was a decline in enrollment rates for both genders. This decline may be attributed to the ongoing conflict in the Anglophone regions since 2016. While this graph indicates a reduction in the percentage of primary school enrollment around
Figure 1: Enrolment Rate 1994 to 2019 from the World Bank data.
the onset of the Anglophone crisis, the precise causal impact of the conflict on human capital accumulation remains to be accurately determined. Moreover, further analysis is necessary to gain a comprehensive understanding of the effects of the conflict on primary school enrollment rates. This paper will focus on its impact on the accumulation of human capital.
### Cameroon's Anglophone Conflict
Cameroon gained independence from French and English colonial rule in 1960 and 1961, respectively, for the Francophone and Anglophone regions, and has generally enjoyed a peaceful environment without military coups. However, recent years have seen the emergence of conflicts in Cameroon that pose threats to its stability. To illustrate this, Figure IIa provides a visual representation of conflict-related events and fatalities since 2001.
Among the conflicts experienced by Cameroon in recent years, the Anglophone Conflict has emerged as one of the most intense.4 It primarily stems from a linguistic division, with its epicenter located in the North West and South West regions. The main actors involved are the Cameroonian army and the Ambazonian separatists. The origins of the Anglophone Conflict can be traced back to the historical legacy of Cameroon's two official languages, English and French, inherited from the colonial era. The institutional framework established by former colonial powers resulted in the dominance of French speakers in Cameroon's government and elite circles, and contributed to the marginalization of the Anglophone population. This has given rise to various social issues commonly referred to as the 'Anglophone problem' (Konings and Nyammjoh, 1997).
Footnote 4: In addition to the Anglophone conflict, Cameroon has faced the insurgency of Boko Haram in the Far North, as well as complex challenges related to refugees and transborder issues in the East region. (see Pelican, 2022, for a detailed background and literature review on the Anglophone conflict).
In 2016, the 'Anglophone problem' transformed into a full-fledged violent conflict. This escalation originated from peaceful protests organized by English-speaking lawyers and teachers. They were driven by their frustrations with the government's practice of assigning French-speaking judges and teachers to English-speaking courts and schools. The English-speaking community argued that this forced assimilation into Francophone legal
and educational systems. While the government initially acknowledged the need for some reforms, it simultaneously repressed activists by imprisoning moderate leaders and employing violence against protesters. As moderate voices were silenced, more extremist factions emerged, advocating for complete separation from Cameroon and demanding independence (The Washington Post, 2019; Pelican, 2022). Subsequently, the conflict intensified, with separatist groups increasing their attacks on the military, prompting retaliatory actions by Cameroon troops. This retaliation has included firing upon unarmed civilians and demolishing their homes.
One notable series of events in this conflict is known as the 'Operation Ghost Town,' which was initiated by Anglophone activists in 2017. This 'operation' aimed to achieve the closure of schools in the north-west and south-west regions as a protest against the perceived assimilation of the Anglophone education system into the French-speaking one. Notably, this 'operation' specifically targeted Anglophone schools in these regions, while excluding Francophone schools operating in the same areas. Acts of violence associated with this 'operation' have included forced closures of schools, the abduction of students, and targeted killings of principals and staff members within these Anglophone schools.
In addition to targeted violence against Anglophone schools, the Anglophone conflict is characterized by two central features. Firstly, it unfolds within a relatively concise timeframe, with violence stemming from this conflict first appearing in 2017 and rapidly escalating in 2018. This pattern is evident in Figure IVb, which illustrates the distribution of conflict-related events and fatalities specifically attributed to Ambazonian separatists from 2011 to 2022.5
Footnote 5: Figure IVb also highlights the escalation of the conflict in 2020 and 2021, leading to a substantial rise in both fatalities and events. Despite the Major National Dialogue held in October 2019, which engaged multiple stakeholders, the desired political transformation and decrease in violence within the Anglophone regions were not realized, as depicted in Figure II.
The second notable characteristic of the Anglophone conflict is its geographical localization, confined exclusively to the North West and South West regions of Cameroon. This remarkable geographic limitation is visually depicted in Map 1 of the Appendix. Given the significance of this aspect to our empirical analysis, we will provide a more comprehensive
exploration in Section 2.4.
### Data sources
The objective of this study is to examine the effect of violence associated with the Anglophone conflict on the academic achievements of pupils in different regions of Cameroon. In order to evaluate this impact, we utilize reading and mathematics test scores from the Programme d'Analyse des Systemes Educatifs de la CONFEMEN (PASEC) dataset. This dataset, collected in 2014 and 2019, comprises a representative sample of pupils enrolled in schools across French-speaking African nations and can be accessed publicly through registration on the official PASEC website.6
Footnote 6: The PASEC dataset can be accessed at [https://pasec.confemen.org/en/](https://pasec.confemen.org/en/)
The PASEC data on Cameroon consists of information from approximately 180 schools and around 4,887 pupils. This dataset encompasses both the Francophone and Anglophone education subsystems and includes data from all ten regions of the country. The PASEC data is obtained from a sample of individuals organized into 'strates', which represent the regions and educational subsystems. In 2014, there were six strates, but in 2019, this number increased to twelve.7 To ensure comparability between the two years, as explained in detail in Appendix A.2.1, we focus on the 2014 strates. Table I presents these strates along with their corresponding regions and educational subsystems.
Figure II: Conflict-Related Events in Cameroon
As shown in Table I, strates 1 and 2 (referred to as Zone Anglophone) encompass pupils from both public and private schools within the Anglophone subsystem residing in the North-West and South-West regions of Cameroon (the English-speaking regions). As previously mentioned, these regions are known to be the theatre of the Anglophone conflict. Strate 3 (or Zone Francophone) comprises pupils from the Anglophone subsystem attending public schools in all regions except the North-West and South-West (i.e., the French-speaking regions of Cameroon). Strate 4 (or Grand-Ouest) includes pupils from the Francophone subsystem from the West, North-West, South-West, and Littoral regions. It is worth noting that Strate 4 overlaps geographically with Strate 1. Additionally, Strate 5 (or Grand-Centre) encompasses pupils selected from the Centre, Sud, and Est regions. Lastly, Strate 6 (or Grand-Nord) comprises pupils from the Adamaoua, North, and Extreme-North regions.8
Footnote 8: It should be noted that strates 1 and 4 overlap in terms of the geographical regions they cover.
Our study focuses on the Cameroonian regions that have been directly impacted by violence associated with the Anglophone conflict. Specifically, we aim to analyze the effects of this violence on Anglophone public schools. Given this objective, the information pertaining to strates 1 and 3 holds particular relevance for our research. In the subsequent discussion, we will elaborate on the definition of Strate 1 as our treatment group, while Strate 3 will serve as our control group. Descriptive statistics for our primary academic outcome variables are presented separately for these two strates, as well as by year, in Panels A and
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Strate** & **Region** & **Educational system** \\ \hline
1 (Zone Anglophone) & North-West, South-West & Anglophone (public) \\
2 (Zone Anglophone) & North-West, South-West & Anglophone (private) \\
3 (Zone Francophone) & Littoral, West, Centre, Sud, Est, & Anglophone \\ & Adamaoua, North, Extreme-North & \\
4 (Grand-Ouest) & West, North-West, South-West, Littoral & Francophone \\
5 (Grand-Centre) & Centre, Sud, Est & Francophone \\
6 (Grand-Nord) & Adamaoua, North, Extreme-North & Francophone \\ \hline \hline \end{tabular}
\end{table}
Table I: Strates along with their corresponding regions and educational subsystems
C of Table 2.
To estimate the impact on pupil learning of Cameroon's Anglophone conflict, we integrate the PASEC data with data on conflict from the Armed Conflict Location & Event Data Project (ACLED). The ACLED is a publicly available database that offers extensive information on armed conflicts, political violence, and protest events worldwide. It is a collaborative project between researchers, analysts, and organizations working in the fields
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Strates 1 and 3} & \multicolumn{3}{c}{Strate 1} & \multicolumn{3}{c}{Strate 3} \\ \cline{2-10} & obs. & mean & st.dev. & obs. & mean & st.dev. & obs. & mean & st.dev. \\ \cline{2-10} & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline \multicolumn{10}{c}{2014} \\ \hline \multicolumn{10}{c}{_Panel A: Academic Outcomes (PASEC)_} \\ \hline Reading score & 2088 & 53.1 & 9.2 & 1578 & 51.3 & 8.8 & 510 & 58.6 & 8.4 \\ Reading score (females) & 1058 & 53.6 & 9.2 & 802 & 51.8 & 8.5 & 256 & 59.4 & 8.9 \\ Reading score (males) & 1030 & 52.6 & 9.2 & 776 & 50.9 & 9.0 & 254 & 57.7 & 7.7 \\ Math score & 2088 & 50.4 & 8.7 & 1578 & 48.5 & 8.0 & 510 & 56.2 & 8.0 \\ Math score (females) & 1058 & 50.6 & 8.7 & 802 & 48.6 & 7.8 & 256 & 56.9 & 8.5 \\ \multicolumn{10}{c}{_Panel B: Armed Conflict (ACLED)_} \\ \hline Events involving any group & 2088 & 34.4 & 44.7 & 1578 & 9.0 & 0.0 & 510 & 113.0 & 0.0 \\ Events involving Ambazonian rebels & 2088 & 0.0 & 0.0 & 1578 & 0.0 & 0.0 & 510 & 0.0 & 0.0 \\ Events involving Cameroonian army & 2088 & 4.6 & 8.2 & 1578 & 0.0 & 0.0 & 510 & 19.0 & 0.0 \\ Fatalities by any group & 2088 & 344.4 & 569.0 & 1578 & 21.0 & 0.0 & 510 & 1345.0 & 0.0 \\ Fatalities by Ambazonian rebels & 2088 & 0.0 & 0.0 & 1578 & 0.0 & 0.0 & 510 & 0.0 & 0.0 \\ Fatalities by Cameroonian army & 2088 & 43.2 & 76.1 & 1578 & 0.0 & 0.0 & 510 & 177.0 & 0.0 \\ \hline \multicolumn{10}{c}{2019} \\ \hline \multicolumn{10}{c}{_Panel C: Academic Outcomes (PASEC)_} \\ \hline Reading score & 2497 & 55.9 & 9.5 & 727 & 51.9 & 9.4 & 1770 & 57.6 & 9.1 \\ Reading score (females) & 1279 & 56.4 & 9.6 & 364 & 52.3 & 9.3 & 915 & 58.0 & 9.2 \\ Reading score (males) & 1218 & 55.5 & 9.5 & 363 & 51.5 & 9.5 & 855 & 57.2 & 9.0 \\ Math score & 2497 & 52.4 & 8.3 & 727 & 48.9 & 7.5 & 1770 & 53.8 & 8.1 \\ Math score (females) & 1279 & 52.2 & 8.3 & 364 & 48.4 & 7.2 & 915 & 53.8 & 8.2 \\ Math score (males) & 1218 & 52.5 & 8.2 & 363 & 49.5 & 7.8 & 855 & 53.8 & 8.1 \\ \multicolumn{10}{c}{_Panel D: Armed Conflict (ACLED)_} \\ \hline Events involving any group & 2497 & 435.9 & 12.3 & 727 & 455.0 & 0.0 & 1770 & 428.0 & 0.0 \\ Events involving Ambazonian rebels & 2497 & 47.8 & 66.8 & 727 & 152.0 & 0.0 & 1770 & 5.0 & 0.0 \\ Events involving Cameroonian army & 2497 & 104.4 & 89.5 & 727 & 244.0 & 0.0 & 1770 & 47.0 & 0.0 \\ Fatalities by any group & 2497 & 591.1 & 131.3 & 727 & 796.0 & 0.0 & 1770 & 507.0 & 0.0 \\ Fatalities by Ambazonian rebels & 2497 & 45.1 & 70.4 & 727 & 155.0 & 0.0 & 1770 & 0.0 & 0.0 \\ Fatalities by Cameroonian army & 2497 & 226.0 & 248.1 & 727 & 613.0 & 0.0 & 1770 & 67.0 & 0.0 \\ \hline \hline \end{tabular}
* **Notes:** The sample in all columns is restricted to the period 2014 to 2019 and to pupils in the Anglophone subsystem, from public schools. The sample in columns (1)-(3) includes strates 1 and 3 (see Table I for the definition of these strates). The sample in columns (4)-(6) is limited to strate 1 (Zone Anglophone, public schools), and the sample in columns (7)-(9) is limited to strate 3 (Zone Francophone). The data on education outcomes comes from PASEC. The data on conflict comes from the ACLED.
\end{table}
Table 2: Descriptive Statistics
of political violence, conflict resolution, and human rights.9
Footnote 9: The ACLED data can be accessed publicly at [https://acleddata.com](https://acleddata.com)
ACLED gathers and examines real-time information regarding the location, participants, casualties, and other pertinent details of conflict incidents. Their research encompasses various types of political violence, including state-based conflict, non-state conflict, and one-sided violence. The data undergoes regular updates, enabling the analysis of conflict trends and patterns over time.
In our study, we utilize ACLED's data specifically pertaining to Cameroon, with a particular emphasis on the Anglophone conflict. Consequently, our scope is limited to violence associated with the actors involved in this conflict during the period from 2014 to 2019. The actors we identify as being associated with this conflict include the Ambazonian Separatists, the Ambazonian Defense Forces (ADF), and the armed and police forces of Cameroon.
As we describe in detail in Appendix A.2.2, in order to combine the ACLED data with the PASEC data, we construct indicators of conflict-related violence at the regional (or strate) level, while distinguishing between the armed groups involved. Descriptive statistics for our primary conflict violence variables are presented in Panels B and D of Table II, organized by year.
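To make this construction concrete, the sketch below (not taken from the paper's replication code) aggregates ACLED events and fatalities by strate, survey wave, and armed group, derives rates per 100,000 inhabitants, and attaches the resulting indicators to the PASEC pupil records. All file names and column names (actor, event_id, fatalities, strate, year, population) are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names; the actual construction follows Appendix A.2.2.
acled = pd.read_csv("acled_cameroon_2014_2019.csv")   # one row per conflict event
pasec = pd.read_csv("pasec_cameroon_pupils.csv")      # one row per pupil (2014 and 2019 waves)

# Tag the two actor groups relevant to the Anglophone conflict.
is_amb = acled["actor"].str.contains("Ambazonian|ADF", case=False, na=False)
is_mil = acled["actor"].str.contains("Military Forces of Cameroon|Police", case=False, na=False)
acled["group"] = np.where(is_amb, "AMB", np.where(is_mil, "MIL", "OTHER"))
acled = acled[acled["group"] != "OTHER"]

# Aggregate events and fatalities by strate, survey wave, and armed group.
agg = (acled.groupby(["strate", "survey_year", "group"])
            .agg(events=("event_id", "count"), fatalities=("fatalities", "sum"))
            .unstack("group", fill_value=0))
agg.columns = [f"{stat}_{grp}" for stat, grp in agg.columns]   # events_AMB, fatalities_MIL, ...
agg = agg.reset_index()

# Rates per 100,000 inhabitants, using hypothetical strate-level population counts.
pop = pd.read_csv("strate_population.csv")             # columns: strate, population
agg = agg.merge(pop, on="strate")
for col in ["events_AMB", "fatalities_AMB", "events_MIL", "fatalities_MIL"]:
    agg[col + "_rate"] = 1e5 * agg[col] / agg["population"]

# Attach the conflict indicators to each pupil via strate of residence and survey wave.
panel = pasec.merge(agg, left_on=["strate", "year"],
                    right_on=["strate", "survey_year"], how="left")
conflict_cols = [c for c in agg.columns if c.startswith(("events_", "fatalities_"))]
panel[conflict_cols] = panel[conflict_cols].fillna(0.0)   # strates/waves with no recorded events
```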
### Distribution of Conflict Related Events
Utilizing data extracted from the ACLED, Figure IIIa provides an overview of the temporal distribution of all conflict-related incidents across the entirety of Cameroon's territory over the past two decades. As mentioned earlier, the majority of these incidents occurred after 2014. Complementing this, Figure IIIb specifically focuses on the number of fatalities resulting from these events, revealing significant regional variation between 2000 and 2022. Notably, Strates 1 and 4, identified as the epicenter of the Anglophone conflict, experienced the highest number of fatalities between 2014 and 2022.
Shifting our attention to events involving the Ambazonian Separatists, Figure IV emphasizes the occurrence and intensity of such events exclusively within Strates 1 and 4
(North-West and South-West regions). Importantly, these events began escalating from 2016 onwards, and resulted in a significant number of fatalities between 2014 and 2022.
Figure IV, when compared with Figure IIIa, highlights a crucial fact: while other conflict-related events occurred before 2016, they took place outside the strata of interest and involved armed actors unrelated to the Anglophone conflict (e.g., Boko Haram in the Far North region). This important finding, i.e. the localized nature of the Ambazonian separatists' actions and the timing of their occurrence, will play a pivotal role in our identification strategy.
## 3 Identification and Estimation Strategy
To assess the impact of violent conflict events during the Anglophone crisis on pupil learning in Cameroon, we propose a quasi-experimental approach that takes into account the level of exposure to conflict-related violence. Our analysis will specifically examine fatalities in the North-West and South-West regions, primarily involving actors such as the Ambazonian Separatists. Initially, our main focus will be on students enrolled in the Anglophone education subsystem. The presence of pupils in the anglophone subsystem both in the rest of the country and in the North-West and South-West regions allows for the design of a quasi-experiment. Subsequently, we will expand our analysis to include the Francophone education subsystem, which will provide supplementary results and will play a crucial role in evaluating our main identification assumption.
Specifically, for our main model, we define the following two groups:
1. Pupils who resided in regions where there was violence related to the Anglophone conflict and who were enrolled in the Anglophone education system. These pupils belong to strate 1.
2. Pupils who resided in regions unaffected by violence related to the Anglophone conflict and who also were enrolled in the Anglophone education system. These pupils belong to strate 3.
By comparing outcomes between two distinct groups - one exposed to conflict-related violence and the other unexposed - we can estimate the effect of violence on pupil learning using a quasi-experimental difference-in-differences approach.
More formally, let \(Y_{irt}\) denote an outcome of interest, representing math or reading test scores for individual \(i\) residing in region/stratum \(r\) at time \(t\). Here, \(t\) can take values of 2014 or 2019, while \(r\) can be either 1 or 3, corresponding to the treatment and control groups defined above. The index \(i\) ranges from 1 to \(N\). To estimate the impact of the Anglophone conflict, we utilize
the following model:
\[Y_{irt}=a_{0}+\alpha AMB\_CE_{rt}+cMIL\_CE_{rt}+bX_{irt}+a_{1}dT+u_{irt} \tag{1}\]
where \(AMB\_CE_{rt}\) measures the intensity of conflict-related violent events involving Ambazonian separatists, such as the number of fatalities, in the region where individual \(i\) resided at time \(t\). The dummy variable \(dT\) indicates whether the data was collected in 2014 or 2019. \(MIL\_CE_{rt}\) captures the intensity of conflict-related violent events involving the Cameroon army.10 The variable \(X_{irt}\) represents a set of exogenous characteristics, including factors like sex, age, and grade, and \(u_{irt}\) is the error term.
Footnote 10: Fatalities involving the military are recorded in all regions and time periods.
To clearly establish the main identification assumptions of the model in equation (1), it is important to note that \(AMB\_CE_{rt}\) is only observed in the North-West and South-West regions and only in 2019. Hence, we can rewrite equation (1) as follows:
\[Y_{irt}=a_{0}+a_{1}dT+\alpha\,dT\times AMB\_CE_{r}+cMIL\_CE_{rt}+bX_{irt}+d\,AMB\_CE_{r}+u_{irt} \tag{2}\]
In equation (2), the variable \(AMB\_CE_{r}\) represents the intensity of conflict-related violent events in region \(r\). It is important to note that for each individual \(i\), region \(r\), and time \(t\), the expression \(dT\times AMB\_CE_{r}\) in (2) is equivalent to \(AMB\_CE_{rt}\) in (1). This equivalence implies that the model in (1) can be read as a difference-in-differences specification, essentially equivalent to the model in (2).
Assuming that military-induced violence, including violent events and fatalities, has a consistent impact on the outcome variable \(Y_{irt}\) across all regions, the parameter \(\alpha\) captures the effect of the Anglophone conflict on \(Y_{irt}\). Specifically, it measures the average effect of conflict violence, whether in terms of events or fatalities, on pupils enrolled in the Anglophone subsystem and residing in the North-West and South-West regions. This parameter represents the average treatment on the treated (ATT), providing an understanding of the average differential impact experienced by these pupils.
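A minimal sketch of how the specification in equation (1) could be estimated on such a pupil-level data frame, with standard errors clustered by school as in the reported tables. Variable names (reading_score, events_AMB, events_MIL, female, age, grade, school_id) are hypothetical and reuse the data frame sketched above; this illustrates the estimator, not the authors' actual code.

```python
import statsmodels.formula.api as smf

# Restrict to the Anglophone subsystem in strates 1 (treated) and 3 (control),
# using the pupil-level data frame sketched above (hypothetical variable names).
panel_s13 = panel[panel["strate"].isin([1, 3])].copy()
panel_s13["dT"] = (panel_s13["year"] == 2019).astype(int)

# Equation (1): Y_irt = a0 + alpha*AMB_CE_rt + c*MIL_CE_rt + b*X_irt + a1*dT + u_irt.
# events_AMB is zero everywhere in 2014 and outside strate 1, so it coincides with
# the interaction dT x AMB_CE_r of equation (2).
formula = "reading_score ~ events_AMB + events_MIL + dT + female + age + grade"
result = smf.ols(formula, data=panel_s13).fit(
    cov_type="cluster", cov_kwds={"groups": panel_s13["school_id"]}
)

# alpha: the average treatment effect on the treated (ATT) of separatist violence.
alpha = result.params["events_AMB"]
print(result.summary())
```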
To evaluate this impact, we rely on the parallel trend assumption, which posits that
in the absence of conflict-related events involving Ambazonian separatists, the trajectory of human capital accumulation would have been the same for pupils in the Anglophone system, regardless of their region of residence, between 2014 and 2019. This assumption is based on the lack of regional reforms in the educational system in Cameroon during that time period.
Moreover, we conduct estimations to demonstrate that the violence associated with the conflict has negligible or no effect on pupils in the Francophone subsystem. These estimations serve as additional evidence, highlighting the specific influence of the Anglophone conflict on pupil outcomes and illustrating the divergent experiences between the Anglophone and Francophone subsystems.
In this study, we adopt a similar model to examine the impact of the conflict on human capital accumulation, explore potential transmission mechanisms, and conduct identifying assumption tests. Depending on the research question at hand, such as investigating transmission mechanisms, analyzing the effect of the conflict on human capital accumulation, or conducting identifying assumption tests, we adapt the outcome variable and regions while maintaining the core explanatory variables.
## 4 Estimation of the Armed-Conflict Effects
### Effects on Human Capital Accumulation
In this section, we present the key findings regarding the impact of the Anglophone Armed conflict on early human capital accumulation. To capture the varying intensity of violence, we examine the estimated effects using two variables: the level of violence and the rate per 100,000 inhabitants.
The results presented in Table III specifically focus on students enrolled in the Anglophone education subsystem. These findings highlight that conflict-related violence in the Anglophone region significantly impedes the accumulation of human capital among students. Our analysis reveals that an additional ten violent events involving the Ambazonian separatists resulted in a 2.5% decrease in language test scores and a 2.1% decrease in mathematics test scores. Similarly, a ten-unit increase in conflict-related deaths led to a 1.9% decline in language test scores and a 1.9% reduction in mathematics test scores. Notably, these negative effects persist even when accounting for the region's population by measuring event occurrence and fatalities per 100,000 inhabitants.
Table A1 in the Appendix examines the heterogeneous effects based on the grade and gender of the students. We expect a potentially stronger effect on grade six pupils due to the conflict's duration covering a significant portion of their primary school education and the challenges they faced in a war-affected learning environment. In contrast, second-grade pupils experienced conflict throughout their entire primary schooling. However, the estimated effects on grade six pupils show a slightly more negative impact, although the difference is not statistically significant. Additionally, female pupils also exhibit a slightly more negative effect of violence on their human capital, although this difference is not statistically significant, at a 5% level, either.
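Such heterogeneity can be examined within the same framework by interacting the conflict-intensity variable with gender and grade indicators, as in the sketch below; the variable names are again hypothetical and reuse the data frame assumed in the previous sketches.

```python
import statsmodels.formula.api as smf

# Interaction specification (hypothetical names; panel_s13 as in the sketch above).
het_formula = ("reading_score ~ events_AMB*female + events_AMB*grade6"
               " + events_MIL + dT + age")
het_result = smf.ols(het_formula, data=panel_s13).fit(
    cov_type="cluster", cov_kwds={"groups": panel_s13["school_id"]}
)

# The coefficients on events_AMB:female and events_AMB:grade6 measure the
# differential effect of separatist violence for girls and for grade-six pupils.
print(het_result.params[["events_AMB:female", "events_AMB:grade6"]])
```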
### Discussion of Transmission Mechanisms
The findings presented in the previous section reveal a detrimental impact of violent events stemming from the Anglophone conflict on the early accumulation of human capital. In this section, we examine potential mechanisms through which the acquisition of language and mathematics skills by pupils is hindered, shedding light on their diminished learning outcomes. We explore and discuss two distinct yet interconnected transmission channels.
Firstly, we analyze the effect of conflict-related fatalities on teachers' absenteeism, as illustrated in Table IV (columns 1 and 2). These estimates indicate that an increase in fatalities resulting from events involving Ambazonian separatists leads to a higher probability of teacher absenteeism. This decrease in teachers' presence is observed in both grades 2 and 6. We contend that, from the perspective of educators and adults, the heightened
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & (1) & (2) & (3) & (4) \\ \hline _Panel A:_ & \multicolumn{4}{c}{Dependent variable: reading score} \\ \cline{2-5} & Number & Rate & Number & Rate \\ \cline{2-5} Events involving Ambazonian rebels & -0.259*** & -48.018*** & & \\ & (0.042) & (8.124) & & \\ Fatalities by Ambazonian rebels & & & -0.193*** & -35.025*** \\ & & & (0.031) & (6.015) \\ R-sq & 0.111 & 0.111 & 0.100 & 0.100 \\ Observations & 4585 & 4585 & 4585 & 4585 \\ \hline _Panel B:_ & \multicolumn{4}{c}{Dependent variable: math score} \\ \cline{2-5} & Number & Rate & Number & Rate \\ \cline{2-5} Events involving Ambazonian rebels & -0.211*** & -38.420*** & & \\ & (0.038) & (7.284) & & \\ Fatalities by Ambazonian rebels & & & -0.191*** & -34.753*** \\ & & & (0.029) & (5.643) \\ R-sq & 0.099 & 0.099 & 0.121 & 0.121 \\ Observations & 4585 & 4585 & 4585 & 4585 \\ \hline \hline \end{tabular}
* **Notes:** All columns report the estimates from Eq. (1). Rate is per 100,000 inhabitants. Robust standard errors (in parentheses) are clustered by school. Models do not include school fixed effects. * denotes statistically significant estimates at 10%, ** denotes significant at 5%, and *** denotes significant at 1%.
\end{table}
Table III: Effect of Conflict-related violence by Ambazonian Separatists on Academic Performance (period 2014-2019)
risk associated with the conflict detrimentally affects the quality and quantity of pupils' learning experiences, thereby impeding their skill acquisition. For instance, the presence of conflict may trigger temporary migration to safer areas among both pupils and teachers, resulting in learning gaps.
Secondly, we posit that the presence of conflict contributes to economic hardship, potentially diverting pupils from their studies to engage in work within the legal or illegal economy, further exacerbating learning gaps. Table IV (column 3) provides estimates that shed light on this relationship. They reveal that an increase in conflict-related fatalities in events involving Ambazonian separatists decreases the likelihood of a school having access to electricity. This suggests that the violence associated with the conflict may have led to the destruction of existing infrastructure or created economic challenges that prioritize other urgent needs over educational resources.
## 5 Identifying Assumptions Checks: Effects on the Francophone Subsystem
In this section, we analyze the impact of the Anglophone conflict on Francophone pupils, focusing on two key objectives. Firstly, we assess the implications arising from the existence of two separate educational systems in the context of this conflict. By closely examining the
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Dependent variables:} \\ & Teacher & Teacher & Electricity \\ & absent in Grade 2 & absent in Grade 6 & in school \\ \cline{2-4} \multirow{2}{*}{Fatalities by Ambazonian rebels (rate)} & 2.037\({}^{***}\) & 1.288\({}^{***}\) & -1.222\({}^{***}\) \\ & (0.513) & (0.451) & (0.213) \\ R-sq & 0.118 & 0.042 & 0.051 \\ Observations & 1802 & 3436 & 3150 \\ \hline \hline \end{tabular}
* **Notes:** All columns report the estimates from Eq. (1). Rate is per 10,000 inhabitants. Robust standard errors (in parentheses) are clustered by school. * denotes statistically significant estimates at 10%, ** denotes significant at 5%, and *** denotes significant at 1%.
\end{table}
Table IV: Effect of Conflict-related violence by Ambazonian Separatists on teacher absenteeism and access to electricity
experiences of Francophone pupils, we aim to shed light on any disparities or divergences that may emerge as a result of this unique socio-political situation. Secondly, we use this analysis to validate the plausibility of our identifying assumptions.
In our research design, outlined in Section 3, we primarily studied a sample of pupils within the Anglophone subsystem. Now, our attention turns to evaluating the influence of the Anglophone conflict on pupils belonging to the Francophone subsystem. To achieve this, we designate the Grand-Ouest (4) and Grand-Nord (6) strata as the treatment and control groups, respectively.
The estimation results, presented in Table 5, yield significant insights into the effect of conflict-related violence on human capital accumulation within the Francophone subsystem. In particular, we find that an increase in violent events or deaths does not have a significant impact on the level of human capital accumulated in both language and mathematics. Specifically, we observe a negligible decrease in mathematics skill acquisition and a slight increase in language skill acquisition. However, these effects are minimal compared to those observed in the sample of Anglophone pupils residing in conflict-affected regions (North-West and South-West). Thus, pupils in the Francophone educational subsystem were largely unaffected. Indeed, the effects of conflicts are approximately 10 times smaller than the effects observed in the Anglophone educational subsystem sample.
As mentioned earlier, the results in Table 5 not only provide new and relevant evidence on the connection between the Anglophone conflict and the educational outcomes of students in the Francophone education subsystem, but also support the causal interpretation of our estimated effects by aligning with our primary identification assumption: the parallel trend assumption.
Specifically, Table 5 demonstrates that, when comparing educational outcomes between 2014 and 2019 within two groups that closely resemble those examined in our baseline specification but were exposed to the Anglophone conflict to a far lesser degree (as anticipated for students residing in the same regions as those included in the baseline specification, yet enrolled in the Francophone educational subsystem, which was not directly affected given the conflict's concentrated impact on Anglophone schools),11 there is no
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & (1) & (2) & (3) & (4) \\ \hline _Panel A:_ & \multicolumn{4}{c}{Dependent variable: reading score} \\ \cline{2-5} & Number & Rate & Number & Rate \\ \cline{2-5} Events involving Ambazonian rebels & 0.014\({}^{*}\) & 1.219\({}^{*}\) & & \\ & (0.008) & (0.715) & & \\ Fatalities by Ambazonian rebels & & & & \\ R-sq & 0.015 & 0.015 & 0.015 & 0.015 \\ Observations & 4437 & 4437 & 4437 & 4437 \\ \hline _Panel B:_ & \multicolumn{4}{c}{Dependent variable: math score} \\ \cline{2-5} & Number & Rate & Number & Rate \\ \cline{2-5} Events involving Ambazonian rebels & -0.019\({}^{**}\) & -0.703\({}^{**}\) & & \\ & (0.008) & (0.270) & & \\ Fatalities by Ambazonian rebels & & & & \\ R-sq & 0.033 & 0.035 & 0.035 & 0.035 \\ Observations & 4585 & 4585 & 4585 & 4585 \\ \hline \hline \end{tabular}
* **Notes:** All columns report the estimates from Eq. (1). Rate is per 100,000 inhabitants. Robust standard errors (in parentheses) are clustered by school. No model includes fixed effects by school. * denotes statistically significant estimates at 10%, ** denotes significant at 5%, and *** denotes significant at 1%.
\end{table}
Table 5: Effect of Conflict-related violence by Ambazonian Separatists on Academic Performance: Placebo tests (Francophone system in regions Grand Ouest vs Grand Nord)
substantial effect of the Anglophone conflict on the accumulation of human capital.
that conflict violence increases absenteeism among both teachers and pupils, thereby diminishing the quality and quantity of learning interactions and exacerbating the learning gap. Additionally, we provide evidence of heightened economic hardship stemming from conflict-related violence.
The short-term impact of the Anglophone conflict on human capital accumulation is substantial, and the consequences could extend beyond the immediate learning gap, given the critical role of early human capital in adults' health, economic, and social well-being. The evidence presented in this paper offers valuable insights for post-conflict reconstruction efforts in the realm of human capital capacity building in the Anglophone conflict in Cameroon. |
2309.02920 | Determining the Baryon Impact on the Matter Power Spectrum with Galaxy
Clusters | The redistribution of baryonic matter in massive halos through processes like
active galactic nuclei feedback and star formation leads to a suppression of
the matter power spectrum on small scales. This redistribution can be measured
empirically via the gas and stellar mass fractions in galaxy clusters, and
leaves imprints on their electron density profiles. We constrain two
semi-analytical baryon correction models with a compilation of recent Bayesian
population studies of galaxy groups and clusters sampling a mass range above
$\sim 3 \times 10^{13}$ $M_\odot$, and with cluster gas density profiles
derived from deep, high-resolution X-ray observations. We are able to fit all
the considered observational data, but highlight some anomalies in the
observations. The constraints allow us to place precise, physically informed
priors on the matter power spectrum suppression. At a scale of $k=1 h$
Mpc$^{-1}$ we find a suppression of $0.042^{+0.012}_{-0.014}$
($0.049^{+0.016}_{-0.012}$), while at $k=3h$ Mpc$^{-1}$ we find
$0.184^{+0.026}_{-0.031}$ ($0.179^{+0.018}_{-0.020}$), depending on the model
used. In our fiducial setting, we also predict at 97.5 percent credibility,
that at scales $k<0.37h$ Mpc$^{-1}$ baryon feedback impacts the matter power
less than $1\%$. This puts into question if baryon feedback is the driving
factor for the discrepancy between cosmic shear and primary CMB results. We
independently confirm results on this suppression from small-scale cosmic shear
studies, while we exclude some hydro-dynamical simulations with too strong and
too weak baryonic feedback. Our empirical prediction of the power spectrum
suppression shows that studies of galaxy groups and clusters will be
instrumental in unlocking the cosmological constraining power of future cosmic
shear experiments like \textit{Euclid} and Rubin-LSST, and invites further
investigation of the baryon correction models. | Sebastian Grandis, Giovanni Arico', Aurel Schneider, Laila Linke | 2023-09-06T11:32:06Z | http://arxiv.org/abs/2309.02920v2 | # Determining the Baryon Impact on the Matter Power Spectrum with Galaxy Clusters
###### Abstract
The redistribution of baryonic matter in massive halos through processes like active galactic nuclei feedback and star formation leads to a suppression of the matter power spectrum on small scales. This redistribution can be measured empirically via the gas and stellar mass fractions in galaxy clusters, and leaves imprints on their electron density profiles. We constrain two semi-analytical baryon correction models with a compilation of recent Bayesian population studies of galaxy groups and clusters sampling a mass range above \(\sim 3\times 10^{13}\)\(M_{\odot}\), and with cluster gas density profiles derived from deep, high-resolution X-ray observations. We are able to fit all the considered observational data, but highlight some anomalies in the observations. The constraints allow us to place precise, physically informed priors on the matter power spectrum. At a scale of \(k=1h\) Mpc\({}^{-1}\) we find a suppression of \(0.042^{+0.012}_{-0.014}\) (\(0.049^{+0.016}_{-0.012}\)), while at \(k=3h\) Mpc\({}^{-1}\) we find \(0.184^{+0.026}_{-0.031}\) (\(0.179^{+0.018}_{-0.020}\)), depending on the model used. We also predict at 97.5 percent credibility, that at scales \(k<0.37h\) Mpc\({}^{-1}\) baryon feedback impacts the matter power less than 1%. This puts into question if baryon feedback is the driving factor for the discrepancy between cosmic shear and primary CMB results. We independently confirm results on this suppression from small-scale cosmic shear studies, while we exclude some hydro-dynamical simulations with too strong and too weak baryonic feedback. Our empirical prediction of the power spectrum suppression shows that studies of galaxy groups and clusters will be instrumental in unlocking the cosmological constraining power of future cosmic shear experiments like _Euclid_ and Rubin-LSST.
keywords: large-scale structure of Universe - galaxies: clusters: general - methods: data analysis
## 1 Introduction
Cosmological inference on the matter distribution of the Universe from future surveys like _Euclid1_, LSST2, or Roman3 will be limited by our knowledge of non-gravitational effects on the matter power spectrum (for reviews, see Chisari et al., 2019; Eckert et al., 2021, and reference therein). Specifically, on physical scales below \(\sim 10\) Mpc, active galactic nuclei are known to lead to a significant redistribution of a fraction of the baryonic matter, while on even smaller scales star formation allows another fraction of baryons to condensate into galaxies inhabiting the halo centers. Both these redistribution mechanisms lead to a gravitational back reaction on the collisionless dark matter, further altering the matter distribution. These effects are collectively referred to as _baryon feedback_, and are degenerate with interesting cosmological signals like neutrino masses, dark energy equation of state modifications, modified gravity signatures or non cold dark matter components (e.g. Harnois-Deraps et al., 2015; Schneider et al., 2020).
Footnote 1: [http://www.euclid-ec.org/](http://www.euclid-ec.org/)
Footnote 2: [https://www.lsst.org/](https://www.lsst.org/)
Footnote 3: [https://roman.gsfc.nasa.gov/](https://roman.gsfc.nasa.gov/)
While hydro-dynamical simulations are in principle able to accurately predict baryon feedback effects on cosmology, practical concerns like run-time and resolution limits require the tuning of several recipes that attempt the modelling of baryon feedback by summarizing sub resolution processes (Schaye et al., 2015; McCarthy et al., 2017; Pillepich et al., 2018). While providing physically self-consistent solutions, hydro-dynamical simulations are thus only able to present point estimates for individual, best guess feedback models. They lack the predictive power and flexibility to be employed in cosmological inference tasks, as they probe only dozens of feedback models (Mead et al., 2015; Chisari et al., 2018; van Daalen et al., 2020), though noticeable progress has been made by the recent CAMELS (Villaescusa-Navarro et al., 2021), MillenniumTNG (Pakmor et al., 2022)4, ANTILLES (Salcido et al., 2023), and FLAMING05(Schaye et al., 2023; Kugel et al., 2023) projects.
Footnote 4: [https://www.mtng-project.org/](https://www.mtng-project.org/)
Footnote 5: [https://flamingo.strw.leidenuniv.nl/](https://flamingo.strw.leidenuniv.nl/)
Filling this gap are so called Baryon Correction Models (hereafter BCM, originally proposed by Schneider & Teyssier, 2015). Starting from assumptions on the fraction and distribution of baryonic matter in and around halos, these models alter gravity-only simulations in a semi-analytical way, to mimic the impact of baryon feedback. Recently, Arico et al. (2020, 2021) and Giri & Schneider (2021) have shown that their respective BCMs are able simultaneously to
fit the stellar and gas components of halos over several orders of magnitude in mass, as well as the power and bispectrum in different hydro-dynamical simulations with a handful of physically motivated parameters. The resulting gas and stellar fractions as well as suppression of the matter power and bispectrum are easy to compute. This provides a significant benefit compared to feedback prescriptions that only parameterize the matter power spectrum suppression, as for instance presented by Mead et al. (2021), and offer no predictive power on other quantities.
First attempts to constrain these BCMs have been undertaken with a variety of data sets. Schneider et al. (2022) used Kilo Degree Survey cosmic shear, Atacama Cosmology Telescope kinematic Sunyaev-Zeldovich profiles and a compilation of galaxy cluster and group measurements to simultaneously fit for the cosmological parameters and the BCM. On the cluster and group side that work was limited by the use of individual hydrostatic mass estimators from compilations of X-ray observed objects. Such compilations suffer from a lack of selection effects and mass calibration modelling, as they assume that the compiled list of clusters and groups is a fair sample of the underlying halo population, and that their halo masses can be calibrated by specifying on single parameter, the hydrostatic mass bias. Chen et al. (2022) and Arico et al. (2023) have derived first constraints on BCMs using Dark Energy Survey year 3 cosmic shear on small scale, and on all scales, respectively. Both works find weak constraints on one of the seven parameters of the BCM.
Parallel to these studies, major progress has been achieved in the observational study of galaxy clusters, which inhabit massive halos. Bayesian Population models simultaneously describe cluster selection and mass calibration, preferentially via the weak gravitational lensing (WL) signal these massive objects impress on background galaxies (Bocquet et al., 2015; Mantz et al., 2015; Sereno and Ettori, 2017; Dietrich et al., 2019; Chiu et al., 2022). The robustness of these modelling techniques is demonstrated by the fact that the number counts of clusters are an independent, competitive cosmological probe (Mantz et al., 2015; Bocquet et al., 2019; Chiu et al., 2023). This indicates that the mass distribution of these samples can be reliably reconstructed in the aforementioned Bayesian frameworks. Additionally, multi-wavelength studies of samples of galaxy clusters measure the gas and stellar mass in massive halos, and its mass trend (see for instance Mantz et al., 2016; Chiu et al., 2018; Akino et al., 2022; Chiu et al., 2022). Crucially, these studies forego the use of hydrostatic masses in favor of the more accurate WL signal. This lifts the uncertainties on sample selection and mass calibration that plague compilation of X-ray observed clusters, like the one recently presented by Kugel et al. (2023).
In this work, we combine recent weak lensing informed gas and stellar mass fraction measurements of massive halos (Mantz et al., 2016; Chiu et al., 2018; Akino et al., 2022; Chiu et al., 2022) to inform the BCM and predict the matter power spectrum suppression due to baryon feedback. The introduction of weak lensing mass calibration allows us to avoid using hydrostatic masses, significantly improving our mass accuracy. We also include measurements of the gas profile from deep, high-resolution X-ray observations (Ghirardini et al., 2019) to narrow down the shape of the gas density profiles of massive halos.
Halo masses in this work are reported as spherical overdensity masses \(M_{\Delta c}\), with \(\Delta=500,200\). This means that they are defined via the radius \(R_{\Delta c}\) enclosing a sphere of average density \(\Delta\) times the critical density of the Universe at that redshift, i.e. \(M_{\Delta c}=\frac{4\pi}{3}\Delta\rho_{\rm crit}(z)R_{\Delta c}^{3}\). Throughout this work, we use a flat \(\Lambda\)CDM cosmology with parameters \(\Omega_{\rm M}=0.315\), \(\sigma_{8}=0.83\), \(n_{\rm S}=0.96\) and \(h=0.67\) as reference cosmology.
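As a concrete illustration of this definition, the minimal sketch below inverts it to obtain \(R_{\Delta c}\) from \(M_{\Delta c}\) at the reference cosmology; it relies on astropy only for the critical density and is an illustration, not part of the analysis pipeline described later.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Reference cosmology used in this work (h = 0.67, Omega_M = 0.315).
cosmo = FlatLambdaCDM(H0=67.0, Om0=0.315)

def r_delta_c(m_delta, z, delta=500):
    """Radius [Mpc] enclosing an average density of delta times rho_crit(z),
    for a spherical-overdensity mass m_delta [Msun]."""
    rho_crit = cosmo.critical_density(z).to(u.Msun / u.Mpc**3).value
    return (3.0 * m_delta / (4.0 * np.pi * delta * rho_crit)) ** (1.0 / 3.0)

# Example: R_500c of a 3e14 Msun halo at z = 0.064 is roughly 1 Mpc.
print(r_delta_c(3e14, 0.064, delta=500))
```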
## 2 Data
### Weak lensing calibrated mass fractions
In this work we use measurements of the relation between the gas / stellar mass and the halo mass in galaxy clusters to constrain Baryon Correction Models. We use the following compilation of stellar and gas mass fractions extracted from Bayesian Population models, summarized also in Table 1:
* The analysis of 139 ROSAT selected clusters, followed up with Chandra imaging analysed by Mantz et al. (2016), 26 of which have dedicated weak lensing measurements. This analysis, called Weighing the Giants (WtG), reports a hot gas fraction \(f_{\rm{ICM}}=0.125\pm 0.005\) at pivot mass \(M_{\rm piv}=10^{15}\)\(M_{\odot}\)6. The mass information comes from a combination of cluster abundance measurements and weak lensing. We discard the slope measurement of this work, as it has implausibly tight error bars that would otherwise dominate our fitting.
Footnote 6: Note that all these observational results measure baryonic fractions within \(r_{500c}\), and w.r.t. halo masses \(M_{500c}\).
* The study of 91 South Pole Telescope (SPT) selected clusters, whose Chandra, DES, WISE and Spitzer follow-up has been studied by Chiu et al. (2018), reporting a hot gas fraction \(f_{\rm{ICM}}=0.119\pm 0.013\) and a mass trend of the gas mass \(B_{\rm{ICM}}=1.33\pm 0.09\) at \(M_{\rm piv}=4.8\times 10^{14}\)\(M_{\odot}\), as well as stellar mass fraction \(10^{3}f_{\star}=8.3\pm 0.6\)
\begin{table}
\begin{tabular}{c c c c c} & comp. & \(M_{\rm piv}\) [\(M_{\odot}\)] & fraction & slope \\ \hline SPT & ICM & 4.80e+14 & \(0.12\pm 0.01\) & \(0.33\pm 0.09\) \\ SPT & stars & 4.80e+14 & \(0.008\pm 0.001\) & \(-0.20\pm 0.12\) \\ eFEDS & ICM & 2.00e+14 & \(0.05\pm 0.01\) & \(0.19\pm 0.11\) \\ HSC-XXL & ICM & 1.00e+14 & \(0.08\pm 0.01\) & \(0.23\pm 0.12\) \\ HSC-XXL & stars & 1.00e+14 & \(0.021\pm 0.003\) & \(-0.20\pm 0.11\) \\ WtG & ICM & 1.00e+15 & \(0.12\pm 0.00\) & \\ \end{tabular}
\end{table}
Table 1: Baryonic fractions and mass trends of weak lensing calibrated cluster and group population studies compiled by this work.
Figure 1: Approximate mass distributions of the galaxy cluster and group samples used. These distributions are not used in our inference and are presented for visualisation purposes only. Our compilation of studies samples the mass range \(M_{500c}\geq 3\times 10^{13}\)\(M_{\odot}\). Sources: eFEDS-HSC Chiu et al. (2022, Tab. 1); SPT Bocquet et al. (2019), with \(\xi>6.8\); HSC-XXL redshifts from Umetsu et al. (2020, Tab. 2) together with the mass distribution parameters from Akino et al. (2022); WtG Mantz et al. (2016, Tab. 2 and Eq. 1).
and mass trend of the stellar mass \(B_{\star}=0.80\pm 0.13\). The mass information in this analysis comes from fitting the cluster abundance measurements (de Haan et al., 2016). The resulting mass calibration has been confirmed via weak lensing (Dietrich et al., 2019; Stern et al., 2019; Bocquet et al., 2019; Schrabback et al., 2021) and dynamical analysis (Capasso et al., 2019).
* The analysis of 136 galaxy clusters and groups by Akino et al. (2022), which we call HSC-XXL, reporting \(f_{\rm ICM}=0.075\pm 0.008\) and \(B_{\rm ICM}=1.23\pm 0.12\) at \(M_{\rm piv}=10^{14}\)\(M_{\odot}\), as well as \(10^{3}f_{\star}=21.2\pm 2.8\) and \(B_{\star}=0.80\pm 0.11\). These objects were selected in the XMM-Newton XXL survey, and have HSC-SSP weak lensing measurements, XMM-Newton gas mass measurements and stellar mass measurements for HSC and SDSS photometry.
* The study by Chiu et al. (2022) of 434 clusters and groups selected in the eROSITA Final Equatorial Depth Survey (eFEDS), 313 of which have HSC-SSP weak lensing measurements, reporting \(f_{\rm ICM}=0.0540\pm 0.0065\) and \(B_{\rm ICM}=1.19\pm 0.11\) at \(M_{\rm piv}=2\times 10^{14}\)\(M_{\odot}\).
The selection of these samples is performed via their extended Bremsstrahlung emission in the X-rays (HSC-XXL, eFEDS, WtG), or via the Sunyaev-Zel'dovich effect (SZe, see Carlstrom et al. (2002) for a review) in (sub-)millimeter observations (SPT), i.e. via inverse Compton scattering signatures in cosmic microwave background observations, and confirmed at optical wavelengths. These methods provide high purity samples of massive halos, albeit with different redshift trends in the limiting mass. The information on the mass scale of these samples comes from weak lensing measurements in dedicated deep photometric follow-up (WtG) or deep photometric survey data (HSC-XXL, eFEDS), and in some cases from abundance matching to the theoretical halo mass function (SPT, WtG). Their gas mass is measured in high angular resolution X-ray follow-ups (SPT, WtG) or X-ray survey data (HSC-XXL, eFEDS). The stellar masses are determined with optical and near-infrared observations of the cluster and group member galaxies. We refer the interested reader to the respective papers for more information.
Of interest to this work is that the different samples have minimal to no overlap in actual objects. We can thus treat the different measurements as mutually independent. Also, their mass distributions probe significantly different mass ranges, as seen in Fig. 1. These mass distributions are derived outcomes of the Bayesian Population analyses described below (cf. section 3.1), and are not directly used in this analysis. We instead employ only the reconstructed observable halo mass relations.
### Electron density profiles
We complement the measurements of the relation between gas / stellar mass and halo mass for large populations of clusters with information on the gas density profile of 13 high mass, low redshift clusters observed with deep and high resolution dedicated X-ray imaging, carried out by the XMM-Newton Cluster Outskirts Project (X-COP, Eckert et al., 2017). Analysis of their X-ray surface brightness and spectral information by Ghirardini et al. (2019) resulted in a measurement of the hydrostatic mass of these objects, with median hydrostatic mass \(M_{\rm med,500c}^{\rm hydro}=3.79\times 10^{14}\)\(h^{-1}M_{\odot}\), as well as their electron density profile \(n_{e}\left(r/r_{\rm 500c}^{\rm hydro}\right)\). We utilize the results from the piecewise linear interpolation of the electron density profiles given in Ghirardini et al. (2019, Tab. 2) and the median redshift of the X-COP sample, \(z_{\rm med}=0.064\), to generate a measurement of the electron density profile \(n_{e}\) in units of cm\({}^{-3}\) at 8 radii \(r\) in units of Mpc\(h^{-1}\). The resulting data vector is reported in Tab. 2. The resulting electron density profile is shown in Fig. 3 as black points. Note that we scaled the electron density profile by the radius \(r\) to reduce the dynamical range in the plot. Many more high resolution X-ray measurements of the electron density of clusters would, in principle, be available (e.g. Croston et al., 2008; McDonald et al., 2013; Sanders et al., 2018; Bulbul et al., 2019). Adapting their results to our fitting procedure would, however, far exceed the scope of this work.
## 3 Method
We shall describe in the following how the multi-wavelength observations of galaxy groups and clusters are analysed to extract the gas and stellar mass fractions as a function of halo mass. We then describe how we use the BCM to fit these fractions, as well as the electron density profile of halos. Finally, we outline our fitting procedure and the adopted priors.
### Bayesian Population Models
The galaxy cluster and group studies used in this work, and listed in Section 2, infer the relation between the gas / stellar mass \(M_{\rm ICM,\star}\) enclosed in \(R_{\rm 500c}\) and the halo mass \(M_{\rm 500c}\) via Bayesian Population models. These models are underpinned by a statistical prescription that describes the generation of the cluster catalog data starting from the parent distribution in mass and redshift space, \(P(M,z)\) (often the halo mass function times the cosmological volume). A stochastic mapping from halo mass \(M\) and redshift \(z\) to intrinsic (noise free) observables \(\mathcal{O}\) is then applied, \(P(\mathcal{O}|M,z)\). Motivated by empirical evidence and simulation results (Angulo et al., 2012), this mapping is modelled as a multivariate log-normal distribution in the observables. The means are power laws in mass and redshift, called _mass observable relations_. The simple power law relation is justified by the fact that clusters deviate only to second order from self similar behaviour characteristic of gravity and adiabatic thermodynamics7. The multivariate scatter around these mean relations is an expression of the heterogeneity of the cluster population at a given mass and redshift (see, for instance, Farahi et al., 2019). Measurement noise in the observables is modelled via a stochastic mapping between intrinsic (noise-free) observables and measured observables, \(P(\hat{\mathcal{O}}|\mathcal{O},z)\). This
\begin{table}
\begin{tabular}{c c c} \(r\) [Mpc\(h^{-1}\)] & \(n_{e}\) [\(10^{-3}\) cm\({}^{-3}\)] & \(\delta n_{e}\) [\(10^{-3}\) cm\({}^{-3}\)] \\ \hline
0.034 & 6.159 & 0.605 \\
0.085 & 3.371 & 0.380 \\
0.145 & 2.240 & 0.196 \\
0.221 & 1.395 & 0.085 \\
0.327 & 0.886 & 0.031 \\
0.502 & 0.445 & 0.011 \\
0.791 & 0.179 & 0.006 \\
1.339 & 0.053 & 0.004 \\ \end{tabular}
\end{table}
Table 2: Mean electron density profile \(n_{e}\), and its error \(\delta n_{e}\), as a function of radius, from the X-COP sample at median redshift \(z_{\rm med}=0.064\) and median hydrostatic mass \(M_{\rm med,500c}^{\rm hydro}=3.79\times 10^{14}\)\(h^{-1}M_{\odot}\), adapted from Ghirardini et al. (2019, Tab. 2).
last mapping is determined directly from the noise properties of the observations, or via image simulations.
The distribution of observed cluster properties \(P(\hat{\mathcal{O}},z)\) is then obtained by marginalising over mass and intrinsic properties
\[P(\hat{\mathcal{O}},z)=\int\,\mathrm{d}M\,P(M,z)\int\,\mathrm{d}\mathcal{O}\,P(\mathcal{O}|M,z)\,P(\hat{\mathcal{O}}|\mathcal{O},z). \tag{1}\]
This distribution is normalized to account for the selection criteria imposed in the sample's selection. The likelihood of a sample of clusters \(\{\hat{\mathcal{O}}_{i},z_{i}\}\) results from evaluating the probability of this sample given the constructed distribution
\[\ln\mathcal{L}=\sum_{i}\ln P(\hat{\mathcal{O}}_{i},z_{i}), \tag{2}\]
as in Bayesian inference, the likelihood is simply the probability of the data (here the catalog) given the model (the distribution of observed cluster properties). This analysis approach ensures that selection biases and mass calibration uncertainties are correctly accounted for.
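To make the structure of such a population likelihood concrete, the following minimal numerical sketch evaluates Eqs. (1) and (2) for a single observable on simple grids. All functional forms, parameter values and grid choices here are illustrative assumptions; the selection-induced normalisation and the redshift dependence, which the real analyses include, are omitted.

```python
import numpy as np
from scipy.stats import norm

def parent_distribution(lnM):
    """Toy stand-in for P(M, z): a steeply falling parent distribution in mass."""
    return np.exp(-2.5 * (lnM - np.log(1e14)))

def mean_lnO(lnM, A, B, lnM_piv=np.log(3e14)):
    """Power-law mass-observable relation: <ln O> = A + B (ln M - ln M_piv)."""
    return A + B * (lnM - lnM_piv)

def population_density(lnO_hat, A, B, sig_intr=0.2, sig_meas=0.3):
    """Eq. (1), marginalising over mass and intrinsic observable on grids
    (selection normalisation and redshift dependence omitted)."""
    lnM = np.linspace(np.log(1e13), np.log(3e15), 300)
    lnO = np.linspace(-4.0, 4.0, 300)
    pM = parent_distribution(lnM)                                    # P(M)
    pO_given_M = norm.pdf(lnO[None, :], mean_lnO(lnM[:, None], A, B), sig_intr)
    pOhat_given_O = norm.pdf(lnO_hat, lnO, sig_meas)                 # measurement noise
    inner = np.trapz(pO_given_M * pOhat_given_O[None, :], lnO, axis=1)
    return np.trapz(pM * inner, lnM)

def ln_likelihood(lnO_hat_sample, A, B):
    """Eq. (2): sum of the log-probabilities of the catalogued objects."""
    return sum(np.log(population_density(o, A, B)) for o in lnO_hat_sample)

print(ln_likelihood([0.1, -0.3, 0.6], A=0.0, B=1.2))
```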
In the analyses we use for this work, the measured observables include the gas mass \(\hat{M}_{\mathrm{ICM}}\), the weak lensing signal in the form of a shear profile, or a best fit mass, and, where applicable, the stellar mass \(\hat{M}_{\star}\), among other observables. For further details on each cluster study we refer the reader to the respective papers and references therein, noting that many details of the implementation, as well as the individual notations used to describe the Bayesian population model, might differ significantly between the different works. Our notation follows most closely that of Chiu et al. (2022, sec. 5).
Among the model parameters of the population likelihood, there are the amplitude of the gas / stellar mass - halo mass relation \(A_{\mathrm{ICM},\star}\), and the mass slope \(B_{\mathrm{ICM},\star}\) of that relation. They define scaling relations that follow power laws in mass around a fixed pivot mass \(M_{\mathrm{Piv}}\), chosen close to the median mass of the sample, reading
\[\ln\left(\frac{M_{\mathrm{ICM},\star}}{M_{\mathrm{ICM},\star}^{\mathrm{Piv}}} \right)=A_{\mathrm{ICM},\star}+B_{\mathrm{ICM},\star}\ln\left(\frac{M_{500c}}{ M^{\mathrm{Piv}}}\right). \tag{3}\]
Here, we marginalise over any redshift dependencies, which are usually also studied, as they have all empirically been shown to be consistent with zero. Crucially, the statistical and systematic uncertainties incurred in the gas / stellar content and total mass measurement process, as well as in the selection of the cluster sample, are accounted for and distilled into posteriors on the scaling relation parameters (\(A_{\mathrm{ICM},\star},B_{\mathrm{ICM},\star}\)). We therefore base our analysis on these summary statistics, together with their reported uncertainties.
### Baryon Correction Model
The Baryon Correction Model proposed by Schneider & Teyssier (2015) provides a semi-analytical framework to modify N-body, gravity-only simulations accounting for the effects of gas, stars and feedback. The framework is based on small, radial shifts of simulation particles around halo centres. In particular, the gravity-only profile of a given halo is mapped to the sum of a dark matter, a gas and a stellar matter profile,
\[\rho(r)\mapsto\rho_{\mathrm{DM}}(r)+\rho_{\mathrm{ICM}}(r)+\rho_{\star}(r). \tag{4}\]
The model parameters determine the amplitude and shape of these profiles for each input halo mass. For each set of model parameters, the particles in the gravity-only simulation are displaced in order to obtain the new, _baryonified_ halo profiles. Quasi-adiabatic relaxation of the corrected profile is used to account for the gravitational backreaction of the displaced baryons. In this work, we use two BCM models: the "S19 model" proposed by Schneider & Teyssier (2015); Schneider et al. (2019); Giri & Schneider (2021), and the bacco-model proposed by Arico et al. (2020, 2021). In spirit, the two models are similar, and we shall only quickly outline their differences in the following.
#### 3.2.1 S19 model
The BCM from Schneider et al. (2019, hereafter S19) and Giri & Schneider (2021) provides a direct parametrisation of the density profiles for the gas and the stellar components. For the gas component, a power-law profile with an additional core in the centre and steep truncation in the outskirts is assumed. The functional form is given by
\[\rho_{\mathrm{ICM}}(r)\propto\frac{\Omega_{\mathrm{b}}/\Omega_{\mathrm{M}}-f_{\mathrm{star}}(M_{\mathrm{vir}})}{\left[1+10\left(\frac{r}{r_{\mathrm{vir}}}\right)\right]^{\beta\left(M_{\mathrm{vir}}\right)}\left[1+\left(\frac{r}{\theta_{\mathrm{ej}}r_{\mathrm{vir}}}\right)^{\gamma}\right]^{\left[\delta-\beta\left(M_{\mathrm{vir}}\right)\right]/\gamma}}, \tag{5}\]
with a mass dependent inner slope
\[\beta(M_{\mathrm{vir}})=\frac{3\left(M_{\mathrm{vir}}/M_{\mathrm{c}}\right)^{ \mu}}{1+\left(M_{\mathrm{vir}}/M_{\mathrm{c}}\right)^{\mu}}, \tag{6}\]
where \(M_{\mathrm{vir}}\) and \(r_{\mathrm{vir}}\) are the virial mass and the virial radius, and \(\Omega_{\mathrm{b,M}}\) are the cosmic baryon/matter abundances.
The stellar profile is parametrised as a central stellar component ("cga") and a satellite component ("sat"), the latter following the dark matter profile due to its collisionless nature. The fractions of the two components follow a power law in mass
\[f_{\mathrm{cga,sat}}=0.055\left(\frac{2\times 10^{11}M_{\odot}/h}{M_{\mathrm{vir}}}\right)^{\eta_{\mathrm{cga,sat}}}, \tag{7}\]
with slopes \(\eta_{\mathrm{sat}}=\eta\) and \(\eta_{\mathrm{cga}}=\eta+\delta\eta\). Upon specifying the baryonic components, the collisionless profiles of the dark matter and satellite galaxies are subjected to adiabatic relaxation. This leaves us with 7 free parameters for the S19-model: \((\log_{10}M_{\mathrm{c}}h/M_{\odot},\,\mu,\theta_{\mathrm{ej}},\gamma,\delta)\) for the gas profile and \((\eta,\delta\eta)\) for the stellar component.
Integration of the respective profiles to \(r_{500c}\) allows us to compute the total mass at that over-density, \(M_{500c}\), as well as the enclosed gas and stellar masses \(M_{\mathrm{ICM},\star}\), from which we can readily derive the stellar and gas fractions.
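A schematic implementation of this integration step is sketched below. The parameter values, the halo (\(M_{\rm vir}\), \(r_{\rm vir}\)) and the dropped overall normalisation of the gas profile are assumptions for illustration only; the actual model normalises the profiles as described above.

```python
import numpy as np
from scipy.integrate import quad

OMEGA_B, OMEGA_M = 0.049, 0.315          # illustrative cosmic abundances

def beta(M_vir, M_c=3e14, mu=0.5):
    """Eq. (6): mass-dependent inner slope of the gas profile."""
    x = (M_vir / M_c) ** mu
    return 3.0 * x / (1.0 + x)

def f_star(M_vir, eta=0.2, d_eta=0.2):
    """Eq. (7): total stellar fraction, central ('cga') plus satellite ('sat')."""
    return 0.055 * ((2e11 / M_vir) ** eta + (2e11 / M_vir) ** (eta + d_eta))

def rho_gas_shape(r, M_vir, r_vir, theta_ej=4.0, gamma=2.0, delta=7.0):
    """Eq. (5), up to its overall normalisation: cored power law, steeply
    truncated beyond ~theta_ej * r_vir."""
    b = beta(M_vir)
    return (OMEGA_B / OMEGA_M - f_star(M_vir)) / (
        (1.0 + 10.0 * r / r_vir) ** b
        * (1.0 + (r / (theta_ej * r_vir)) ** gamma) ** ((delta - b) / gamma))

def enclosed(R, M_vir, r_vir):
    """Un-normalised enclosed gas mass, int_0^R 4 pi r^2 rho_gas(r) dr."""
    return quad(lambda r: 4.0 * np.pi * r**2 * rho_gas_shape(r, M_vir, r_vir), 0.0, R)[0]

# fraction of the (un-normalised) gas mass inside r ~ 1.3 Mpc/h for an
# illustrative M_vir ~ 5e14 Msun/h halo with r_vir ~ 2 Mpc/h
M_vir, r_vir = 5e14, 2.0
print(enclosed(1.3, M_vir, r_vir) / enclosed(10.0 * r_vir, M_vir, r_vir))
```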
#### 3.2.2 bacco model
The bacco simulation project (Contreras et al., 2020; Angulo et al., 2021; Zennaro et al., 2023) proposed a BCM that follows a slightly different approach to define baryonic profiles. For each gravity-only halo of mass \(M_{200c}\), a fraction of its mass \(f_{\mathrm{ICM}}(M_{200c})\) is redistributed into an empirically motivated gas distribution
\[\rho_{\mathrm{ICM}}(r)\propto\begin{cases}\left(1+\frac{r}{r_{\mathrm{inn}}}\right)^{-\beta_{\mathrm{inn}}}\left(1+\left(\frac{r}{r_{\mathrm{out}}}\right)^{2}\right)^{-2},&\text{if }r<r_{\mathrm{out}}\\ \rho_{\mathrm{NFW}}(r),&\text{otherwise},\end{cases} \tag{8}\]
with an inner scale \(r_{\mathrm{inn}}=\theta_{\mathrm{inn}}r_{200c}\), the inner slope of the gas profile \(\beta_{\mathrm{inn}}=3-\left(\frac{M_{\mathrm{inn}}}{M_{200c}}\right)^{\mu_{\mathrm{inn}}}\), with \(\mu_{\mathrm{inn}}=0.31\), and the outer scale
\(r_{\rm out}=\theta_{\rm out}^{-1}r_{\rm 200c}\)8. \(\rho_{\rm NFW}(r)\) is the dark matter density profile proposed by Navarro et al. (1996, hereafter NFW) with the concentration mass relation fitted on the respective simulation. The normalisations of the two components are adjusted such that the transition is continuous and the integrated profiles attain the gas fraction \(f_{\rm ICM}(M_{\rm 200c})\) at the corresponding radius.
Footnote 8: This definition deviates from the notation in previous works (Arico et al., 2020, 2021, 2022). It reflects the actual implementation in the code at the time of writing. Changing \(\theta_{\rm out}^{-1}\mapsto\theta_{\rm out}\) is planned for future releases.
Another fraction \(f_{\star}(M_{\rm 200c})\) is assumed to be stellar matter and distributed between a component mimicking the central galaxy, and another representing the satellite galaxies. This provides the prescription for the stellar matter density profile \(\rho_{\star}(r)\). Thus, this model defines the fractions at \(r_{\rm 200c}\) first, then defines the shapes of baryonic profiles.
Arico et al. (2020, 2021) refined this model by allowing for a late time re-accreted gas component, and an ejected gas component. Specifically, they model the halo gas fraction within \(r_{\rm 200c}\) as
\[f_{\rm ICM,\ 200c}=\left(\frac{\Omega_{\rm b}}{\Omega_{\rm M}}-f_{\star}(M_{\rm 200c})\right)\left(1+\left(\frac{M_{\rm c}}{M_{\rm 200c}}\right)^{\beta}\right)^{-1}, \tag{9}\]
where \(M_{\rm c}\) and \(\beta\) are free parameters of the bacco-model. Note that both models define a parameter \(M_{\rm c}\), which however is not the same between the models. The stellar fraction is modulated by the parameter \(M_{1}\), which is the characteristic halo mass with a stellar mass fraction of 0.023. Fitting for \(M_{1}\) is thus equivalent to finding the halo mass whose stellar mass fraction is 0.023. The stellar-halo mass relation is modelled with the sub-halo abundance matching proposed by Behroozi et al. (2013), and best-fitting parameters found by Kravtsov et al. (2018). The free parameters of the gas profile are the characteristic radii \(\theta_{\rm inn}\) and \(\theta_{\rm out}\), as well as \(M_{\rm inn}\), which models the inner slope of the gas profile. Finally, the parameter \(\eta\) describes the position of the ejected gas cut-off. This gives a total of seven free parameters for the bacco model.
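The following short sketch evaluates the gas fraction of Eq. (9); the stellar fraction is replaced by a simple power-law stand-in (the real model uses the Behroozi et al. 2013 form), and all parameter values are merely illustrative, not our fitted results.

```python
import numpy as np

OMEGA_B, OMEGA_M = 0.049, 0.315

def f_star_200c(M_200c, M1=10**12.0):
    """Toy stand-in for the abundance-matching stellar fraction: equals 0.023 at
    M_200c = M1 and declines as a power law towards higher masses (assumption)."""
    return 0.023 * (M_200c / M1) ** (-0.35)

def f_icm_200c(M_200c, M_c=10**13.8, beta=10**(-0.3)):
    """Eq. (9): hot gas fraction enclosed in r_200c in the bacco parametrisation."""
    return (OMEGA_B / OMEGA_M - f_star_200c(M_200c)) / (1.0 + (M_c / M_200c) ** beta)

for M in (1e13, 1e14, 1e15):              # masses in Msun/h
    print(f"{M:.0e}  f_ICM = {f_icm_200c(M):.3f}")
```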
Contrary to the S19-model, the bacco model only displaces particles within a given radius \(r_{\rm p}\) of halos for computational ease. Deviating from previous works, which fixed \(r_{\rm p}=r_{\rm 200c}\), we set \(r_{\rm p}=r_{\rm out}\), and discuss the resulting differences in Appendix B. We justify this choice with the argument that beyond \(r_{\rm out}\) the hot gas is assumed to perfectly trace the dark matter. The only component which deviates from the NFW profile beyond \(r_{\rm out}\) is the ejected gas, which is going to be traced with particles displaced from inside \(r_{\rm out}\). We note that previous works always assumed \(r_{\rm out}\leq r_{\rm 200c}\), thus this condition was always satisfied in previous work.
#### 3.2.3 Link to scaling relations
The BCM is directly linked to the parameters of the scaling relation via a Taylor expansion of the gas / stellar mass fraction enclosed in \(R_{\rm 500c}\) around the pivot mass, yielding
\[f_{\rm ICM,\star}^{\rm 500c}\big{|}_{M_{\rm 500c}=M^{\rm Piv}}=e^{A_{\rm ICM,\star}}\frac{M_{\rm ICM,\star}^{\rm Piv}}{M^{\rm Piv}},\ {\rm and} \tag{10}\]
\[\frac{{\rm d}\ln f_{\rm ICM,\star}^{\rm 500c}}{{\rm d}\ln M_{\rm 500c}}\Big{|}_{M_{\rm 500c}=M^{\rm Piv}}=B_{\rm ICM,\star}-1. \tag{11}\]
This expression provides the link between the baryonification models and the scaling relation studies performed with Bayesian Population models. This constitutes a crucial methodological advancement with respect to the use of compilations of individual clusters that do not account for mass accuracy and selection modelling. WL calibrated Bayesian population models of cluster samples are the tool of choice to account for selection effects and mass accuracy. Compared to un-binned compilations of heterogeneously selected clusters, like the one used by Kugel et al. (2023), they are able to accurately reconstruct the underlying true halo mass distribution of the cluster samples, as well as its relation to observable quantities like the gas mass. This methodological improvement is corroborated by the fact that WL calibrated cluster population models are able to derive competitive cosmological constraints from the abundance of clusters (Mantz et al., 2015; Bocquet et al., 2019; Chiu et al., 2023).
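In practice, this link can be evaluated numerically: given any BCM prediction for the fraction as a function of \(M_{500c}\), the quantities compared to the measured \((f,\,B-1)\) pairs are the fraction at the pivot mass and its logarithmic derivative. The sketch below uses an illustrative toy fraction curve, not a fitted model.

```python
import numpy as np

def scaling_relation_summary(f_of_M, M_piv, dlnM=0.01):
    """Eqs. (10)-(11): fraction at the pivot mass and its logarithmic mass slope
    d ln f / d ln M_500c = B - 1, via central finite differences."""
    f_piv = f_of_M(M_piv)
    lo, hi = M_piv * np.exp(-dlnM), M_piv * np.exp(dlnM)
    slope = (np.log(f_of_M(hi)) - np.log(f_of_M(lo))) / (2.0 * dlnM)
    return f_piv, slope

# illustrative BCM-like gas fraction curve (not a fitted model)
toy_f_icm = lambda M: 0.12 * (M / 4.8e14) ** 0.3
print(scaling_relation_summary(toy_f_icm, M_piv=4.8e14))   # -> (0.12, ~0.30)
```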
Practically speaking, we also need to convert between the different mass definitions used in X-ray studies of clusters (\(M_{\rm 500c}\)) and the BCM models (\(M_{\rm 200c}\)). In the S19-model the baryon fractions at \(r_{\rm 500c}\) are directly predicted by integration of the respective profiles. For the bacco-model the conversion from \(M_{\rm 500c}\) to \(M_{\rm 200c}\) is performed using the concentration mass relation by Ragagnin et al. (2021) at its pivot cosmology, while the enclosed ICM mass is corrected using the gas distribution of the BCM, as further described in Appendix A.
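For an NFW halo, this mass-definition conversion amounts to solving for the radius at which the mean enclosed density equals \(500\rho_{\rm crit}\). The sketch below uses a single illustrative concentration instead of the Ragagnin et al. (2021) concentration-mass relation employed in the actual analysis.

```python
import numpy as np
from scipy.optimize import brentq

def mu(x):
    """NFW enclosed-mass shape, mu(x) = ln(1+x) - x/(1+x)."""
    return np.log(1.0 + x) - x / (1.0 + x)

def m500c_from_m200c(M_200c, c_200c):
    """Convert M_200c to M_500c for an NFW halo of concentration c_200c;
    only the overdensity ratio 500/200 enters, so any mass unit works."""
    # find x = r_500c / r_200c from  mu(c x) / mu(c) = (500/200) x^3
    g = lambda x: mu(c_200c * x) / mu(c_200c) - 2.5 * x**3
    x500 = brentq(g, 0.05, 1.0)
    return M_200c * mu(c_200c * x500) / mu(c_200c)

print(m500c_from_m200c(1e15, c_200c=5.0) / 1e15)   # typically ~0.7 of M_200c
```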
#### 3.2.4 Predicting electron density profiles
The ICM is, to good approximation, a fully ionized gas, such that the electron density profiles \(n_{\rm e}(r)\) can readily be estimated from the gas density as
\[n_{\rm e}(r)=\frac{\rho_{\rm ICM}(r)}{\mu_{\rm e}m_{\rm p}}, \tag{12}\]
with the mean molecular weight of electrons \(\mu_{\rm e}=1.17\) (following Bulbul et al., 2019), and the proton mass \(m_{\rm p}\). We can thus readily transform the ICM profiles assumed by the BCM into predictions for the electron density profiles.
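In code, Eq. (12) is a one-line conversion; the example density below is an arbitrary illustrative value.

```python
M_P = 1.67262192e-24     # proton mass [g]
MU_E = 1.17              # mean molecular weight per electron

def electron_density(rho_icm):
    """Eq. (12): n_e [cm^-3] from the gas mass density rho_ICM [g cm^-3]."""
    return rho_icm / (MU_E * M_P)

# an illustrative outskirts-like gas density of 1e-27 g cm^-3 gives n_e ~ 5e-4 cm^-3
print(electron_density(1e-27))
```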
One operational note is that studies of X-ray electron density profiles of massive clusters have to date not been carried out together with accurate WL mass calibration, as described for the gas and stellar fractions. We are therefore forced to rely on the hydrostatic mass reported by Ghirardini et al. (2019), and transform this to a halo mass using the hydrostatic mass bias \(b_{\rm HS}\), with prior \(p(b_{\rm HS})=\mathcal{N}(b_{\rm HS}|0.26,0.07^{2})\) (Hurier and Angulo, 2018).
### Fitting Procedure
We sample a Gaussian likelihood for the measurements reported in Table 1 and Table 2. The mean values of the individual data points are given by the model predictions of the BCMs, and the respective variances are given by the square of the measurement uncertainties reported above (cf. section 2, Table 1, and Table 2). The measurements are treated as independent, as justified in section 2. The model prediction for the gas and stellar fraction from the S19-model is computed by integrating the respective profiles to \(R_{\rm 500c}\), while the prediction of the bacco model relies on a combination of functions provided by the package bacco9 (Arico et al., 2021) and the necessary conversions described in section 3.2. Predictions for the electron density profiles directly result from appropriate re-scaling of the gas density profile (cf. Section 3.2.4).
Footnote 9: [https://bacco.dipole.org/](https://bacco.dipole.org/)
We employ the Markov Chain Monte Carlo sampler emcee10 (Goodman & Weare, 2010) to sample the posteriors. We sample all 7 parameters of the BCMs within flat priors (for the exact ranges, see Table 3). The cosmic baryon density \(\Omega_{\rm b}\) is sampled in all cases within the prior range \((0.0473,0.0535)\) to emulate a residual uncertainty in the cosmic baryon fraction, which impacts the gas mass fraction predictions and the amplitude of the predicted electron density profiles.
Footnote 10: [https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/)
The cosmological parameters are kept fixed at \(\Omega_{\rm M}=0.315\), and \(h=0.67\). The choice of cosmic baryon density \(\Omega_{\rm b}\) reflects the range of cosmic baryon fractions sampled by Schneider et al. (2022), \(0.15<f_{\rm b}<0.17\). The Hubble constant is kept fixed as our analysis is, to first order, independent of the Hubble parameter. The BCMs require input masses in units of \(M_{\odot}/h\) by design. Observations only have access to the angular scale of clusters or groups. As such, even if not explicitly reported, observational masses are effectively in units of \(M_{\odot}/h\). Similarly, the electron density profile is effectively measured in \(h\)-scaled units, matching the density profile predictions. To first order, this analysis thus depends on the main cosmological parameters only via the cosmic baryon fraction. In future work, especially when used as priors for cosmological analyses, we plan to take into account also the residual cosmological dependence.
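A compressed sketch of this fitting setup is given below. The data vector is restricted to the ICM fractions and slopes of Table 1, and the BCM prediction is replaced by a simple power-law stand-in, so only the structure (independent Gaussian likelihood, flat priors, emcee ensemble sampling) is illustrated; the prior bounds and the toy model are assumptions.

```python
import numpy as np
import emcee

# ICM fractions and their mass slopes from Table 1 (SPT, HSC-XXL, eFEDS);
# the full analysis additionally includes the stellar data and the n_e profile.
data  = np.array([0.119, 0.075, 0.054, 0.33, 0.23, 0.19])
sigma = np.array([0.013, 0.008, 0.0065, 0.09, 0.12, 0.11])
M_piv = np.array([4.8e14, 1.0e14, 2.0e14])

LOW, HIGH = np.array([0.0, -1.0]), np.array([0.3, 1.0])   # assumed flat prior bounds

def model(theta):
    """Stand-in for the BCM prediction: a power-law gas fraction f = f0 (M/3e14)^s,
    which predicts both the fractions and their logarithmic mass slopes."""
    f0, s = theta
    return np.concatenate([f0 * (M_piv / 3e14) ** s, np.full(3, s)])

def log_prob(theta):
    if np.any(theta < LOW) or np.any(theta > HIGH):
        return -np.inf                                     # flat priors
    return -0.5 * np.sum(((data - model(theta)) / sigma) ** 2)

ndim, nwalkers = 2, 16
start = np.random.uniform(LOW, HIGH, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)          # discard the burn-in
print(chain.mean(axis=0), chain.std(axis=0))
```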
## 4 Results
In this section, we first discuss the posteriors of our analysis, both with regard to the model parameters themselves, as well as the posterior predictive distributions of the data. We then derive the posterior predictive distributions for the matter power spectrum suppression due to baryonic effects.
### Parameter Posteriors and Goodness of Fit
When constructing a posterior sample, we discard the burn-in phase of our sampling and visually inspect the trace plots to ensure convergence, especially the trace plot of the total log-likelihood. For the sake of brevity, the 1- and 2-d marginal posteriors are reported in the Appendix, Fig. 11 and Fig. 12.
For the bacco model, we find \(\log_{10}M_{\rm c}h/M_{\odot}=13.82\pm 0.36\)
\begin{table}
\begin{tabular}{l c c c} \multicolumn{2}{c}{S19-model} & \multicolumn{2}{c}{bacco} \\ \hline \(\log_{10}M_{c}h/M_{\odot}\) & \((11,\ 15)\) & \(\log_{10}M_{c}h/M_{\odot}\) & \((12,\ 15)\) \\ \(\mu\) & \((0,\ 2)\) & \(\log_{10}\beta\) & \((-1,\ 0.7)\) \\ \(\theta_{\rm ej}\) & \((2,\ 8)\) & \(\log_{10}M_{1}h/M_{\odot}\) & \((11.5,\ 15)\) \\ \(\gamma\) & \((1,\ 4)\) & \(\log_{10}\eta\) & \((-0.7,\ 0.7)\) \\ \(\delta\) & \((3,\ 11)\) & \(\log_{10}\theta_{\rm out}\) & \((-1,\ 0.5)\) \\ \(\eta\) & \((0.05,\ 0.4)\) & \(\log_{10}M_{\rm inn}h/M_{\odot}\) & \((5,\ 15)\) \\ \(\delta\eta\) & \((0.05,\ 0.4)\) & \(\log_{10}\theta_{\rm inn}\) & \((-2,\ 0)\) \\ \hline \multicolumn{4}{c}{cosmological parameters} \\ \hline \(\Omega_{\rm M}=0.315\) & \(h=0.67\) & \(\Omega_{\rm b}\,\in\,(0.0473,0.0535)\) \\ \hline \multicolumn{4}{c}{hydrostatic mass bias} \\ \multicolumn{4}{c}{\(b_{\rm HS}\sim\mathcal{N}(0.26,0.07^{2})\)} \\ \end{tabular}
\end{table}
Table 3: Prior choices of this analysis. All priors are chosen uniform within the bounds that we report, except for the hydrostatic mass bias, where we use a Gaussian prior and indicate the mean and variance of the distribution. Note that both BCMs use the parameter \(\log_{10}M_{c}h/M_{\odot}\), but it has different meanings in the two models, as described in section 3.2.
Figure 2: _Top:_ Prediction for the gas (magenta) and stellar (orange) mass fraction together with the fitted data points from the cluster analyses considered. We show the 1 sigma region as filled and represent also the 2 sigma region with faded lines. _Bottom:_ Prediction for the mass slope of the gas (magenta) and stellar (orange) mass fraction together with the fitted data points from the cluster analyses considered. The data points for the slope are measured by cluster population studies, and are statistically independent of the baryon fraction measurements. The model for the slope is the log-derivative w.r.t. log-mass of the model predictions for the fractions. In grey we show the slope measurement we discard due to its implausibly tight error bar. The left panel shows the bacco fit, while the right shows the S19-model fit. Due to anomalies in the data, the BCMs are only able to capture the qualitative trends in the data.
\(\log_{10}\beta=-0.29\pm 0.11\), and \(\log_{10}M_{1}h/M_{\odot}=12.00\pm 0.06\). For the parameters defining the scales of the gas profile, we only find upper limits: \(\log_{10}\theta_{\rm inn}<-0.65\) and \(\log_{10}\theta_{\rm out}<-0.11\) (at the 97.5th percentile). The position of the expelled gas \(\eta\), and the characteristic mass of the inner gas profile slope \(M_{\rm inn}\) remain unconstrained. As seen in the 1- and 2-d marginal contours shown in Fig. 11, we find a major degeneracy between the characteristic mass of the inner gas profile slope \(M_{\rm inn}\) and the corresponding scale \(\log_{10}\theta_{\rm inn}\).
For the S19 model, we find \(\log_{10}M_{\rm c}h/M_{\odot}=14.53\pm 0.20\), and \(\mu=0.54\pm 0.10\). We get upper limits on \(\theta_{\rm ej}<6.72\), and \(\gamma<1.42\) (at the 97.5th percentile); and a lower limit on \(\delta>5.37\) (at the 2.5th percentile). The mass slopes of the satellite stellar mass \(\eta\), and the difference between the satellite and central stellar mass slopes \(\delta\eta\) are strongly degenerate with each other (see Fig. 12). The best constrained parameter combination is \(\eta\,(\delta\eta/0.24)^{0.19}=0.239\pm 0.009\).
In Fig. 2 we present the model prediction for the ICM (magenta) and stellar (orange) mass fraction (upper panels) as a function of mass, overplotting the data points used for the fitting. Similarly, in the lower panels of Fig. 2 we show the model prediction for the logarithmic mass slope of the ICM and stellar mass fraction (same color scheme as upper panel), together with the slope measurements. The left images show the best fit bacco model for the baryon fractions and their mass slopes. The right plot shows the baryon fraction and mass slope predictions for the S19-model. Both models that we investigate are able to capture the main trends, while the overall quality of the fit is hampered by anomalies in the data.
A closer inspection reveals a few anomalies. In grey we plot the discarded slope measurement by Mantz et al. (2016), which has implausibly small error bars, as can be seen in relation to the other analyses. Furthermore, the eFEDS-HSC ICM fraction measurement seems to be several \(\sigma\) low with respect to the predicted range of gas fractions at that mass, independently of the BCM. We discuss this below (cf. Section 5). The model predicts very accurately the logarithmic slope of the ICM mass fraction. The logarithmic mass slope of the stellar mass fraction from the bacco model is almost 2 sigma lower than our measurements. Conversely, the S19 model seems to better predict the stellar mass fraction slopes, but at the cost of under predicting the stellar fraction of the HSC-XXL analysis. We discuss these minor anomalies below (cf. Section 5).
The fit of both models fares better on the electron density data, shown in Fig. 3. Here the median posterior predictive derived from the bacco model attains a chi-squared of \(\chi^{2}=3.8\) on the 8 data points of the electron density profile, while effectively constraining 3 model parameters and having a tight degeneracy on another pair of parameters. For the S19 model, the median posterior predictive has a chi-squared of \(\chi^{2}=4.8\) on the 8 data points of the electron density profile, while effectively constraining 2 model parameters, placing two limits on other parameters, and having a tight degeneracy on another pair of parameters. The chi-squared values of the median model predictions are in line with the expectation from the effective number of degrees of freedom. This means that the BCMs are able to fit the electron density data well.
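The goodness-of-fit statement above corresponds to the following simple computation, where the model column is a placeholder standing in for the median posterior prediction of either BCM.

```python
import numpy as np

def chi_squared(n_e_data, n_e_err, n_e_model):
    """Chi-squared of a model prediction against the 8 X-COP electron density points."""
    return np.sum(((n_e_data - n_e_model) / n_e_err) ** 2)

# Table 2 values (in 10^-3 cm^-3); 'model' is an illustrative stand-in prediction
n_e   = np.array([6.159, 3.371, 2.240, 1.395, 0.886, 0.445, 0.179, 0.053])
d_n_e = np.array([0.605, 0.380, 0.196, 0.085, 0.031, 0.011, 0.006, 0.004])
model = n_e * 1.05
print(chi_squared(n_e, d_n_e, model))
```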
### Impact on the matter power spectrum
Given our posterior of the BCM model parameters, we can predict the suppression of the matter power spectrum induced by the evacuation of baryons from massive halos. Note that our upper limit on the outer characteristic scale of the bacco model (\(\log_{10}\theta_{\rm out}<-0.11\)) falls outside of the range of the bacco emulator (\(0<\log_{10}\theta_{\rm out}<0.5\), Arico et al. 2021b). We therefore recompute the matter power spectrum suppression for this model for \(\mathcal{O}(1000)\) points drawn from our posterior. For the S19-model, we use BCemu (Giri & Schneider, 2021) to compute the baryon suppression of the matter power spectrum. The resulting posterior predictive distributions are shown in Fig. 4, with the full lines representing the 2.5th, 50th, and 97.5th percentile at each wave-number \(k\), while the filled regions encompass the range between the 16th and 84th percentile. In light green we show the suppression for the bacco-model, while the orange shows the suppression for the S19-model. Via the BCM models, our data is able to predict matter power spectrum suppression on all scales of interest with a noteworthy precision.
We find excellent agreement between the matter power suppression predictions from the two models at wavenumbers \(k<6h\) Mpc\({}^{-1}\). Both models quickly converge towards no suppression at wavenumbers \(k<0.5h\) Mpc\({}^{-1}\). Specifically, for \(k<0.37h\) Mpc\({}^{-1}\) (\(k<0.29h\) Mpc\({}^{-1}\)) 97.5 percent of our predictive posterior shows suppression weaker than 1% for the bacco (S19) model, indicating that baryon feedback processes do not impact the matter power spectrum at larger scales.
The two BCMs also show the same strong suppression trend at intermediate scales, \(0.5h\) Mpc\({}^{-1}<k<6h\) Mpc\({}^{-1}\). The predicted suppression is still rather weak at \(k=1h\) Mpc\({}^{-1}\): \(0.042^{+0.012}_{-0.014}\) for bacco, and \(0.049^{+0.016}_{-0.012}\) for the S19-model. The suppression grows sizeably at \(k=3h\) Mpc\({}^{-1}\), where we predict \(0.184^{+0.026}_{-0.031}\) for bacco, and \(0.179^{+0.018}_{-0.020}\) for the S19-model. At smaller scales, the bacco prediction has the characteristic up-turn at larger scales than the S19 prediction. The predictions of the two models thus diverge. At \(k=10h\) Mpc\({}^{-1}\) we find \(0.185^{+0.023}_{-0.017}\) for bacco, and \(0.271^{+0.023}_{-0.033}\) for the S19-model. This discrepancy at small scales is to be expected given our lack of data for \(M_{500c}<10^{14}M_{\odot}\). Arico et al. (2021a) highlight that halos in the mass range \(10^{14}h^{-1}M_{\odot}<M_{200c}<10^{15}h^{-1}M_{\odot}\) contribute most to the matter power spectrum suppression at scales \(k\lesssim 3h\) Mpc\({}^{-1}\), while halos in the mass range of \(10^{13}h^{-1}M_{\odot}<M_{200c}<10^{14}h^{-1}M_{\odot}\) contribute the most at scales \(k\gtrsim 3h\) Mpc\({}^{-1}\). We highlight this by shading the area in Fig. 4 that we do not directly constrain with our data. Discrepancies in the stellar mass fraction results might also play a role in the divergence of the two model predictions, as discussed below.
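The bands in Fig. 4 are posterior predictive percentiles computed per wave-number; a schematic of this procedure is given below, with a toy suppression curve standing in for the actual emulator calls (bacco or BCemu).

```python
import numpy as np

def suppression_bands(posterior_samples, suppression_fn, k):
    """Posterior predictive percentiles of P_with_baryons / P_gravity_only:
    evaluate the suppression for each posterior draw, then take percentiles per k."""
    curves = np.array([suppression_fn(theta, k) for theta in posterior_samples])
    return np.percentile(curves, [2.5, 16, 50, 84, 97.5], axis=0)

def toy_suppression(theta, k):
    """Illustrative suppression shape, not a real emulator."""
    depth, k_half = theta
    return 1.0 - depth * k**2 / (k**2 + k_half**2)

k = np.logspace(-1, 1, 50)                              # 0.1 to 10 h/Mpc
draws = np.column_stack([np.random.normal(0.2, 0.03, 1000),
                         np.random.normal(3.0, 0.5, 1000)])
bands = suppression_bands(draws, toy_suppression, k)
print(bands.shape)                                      # (5, 50)
```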
Figure 3: Fit to the electron density profile (black points), re-scaled by the radius to reduce the dynamical range of the plot. The different BCM fits are colour coded. Computing the chi-squared between the median posterior predictions and the data, we find that both models provide a good fit to the data.
## 5 Discussion
In the following, we will discuss several aspects of this work.
### Weak lensing masses
This work provides a clear improvement compared to prior attempts to use the gas mass - halo mass relation of galaxy clusters and groups to constrain the baryonic suppression of the matter power spectrum (Schneider et al., 2019; Debackere et al., 2020; Giri & Schneider, 2021), which use hydrostatic mass estimates to interpret baryon fractions. Those works showed that the limiting factor in using hydrostatic masses was the weakly constrained hydrostatic mass bias. Also recent empirical calibrations of hydrodynamical simulations are limited by the use of hydrostatic masses (Kugel et al., 2023). This uncertainty is reduced by using weak lensing mass calibration for the baryon fractions. Furthermore, the cluster and group measurements in those works did not account for selection effects via Bayesian population models, while the analyses used in this work do. In future work, we plan to use weak lensing mass calibration and to model selection effects also for the exploitation of electron density profiles.
### Comparison to previous results
Previous work by Giri & Schneider (2021); Schneider et al. (2022) fitted the S19 model with various data sets. While neither of the two works resulted in a constraint on any of the BCM model parameters, their posteriors suggest that \(\log_{10}M_{\rm c}<14\), which is in slight tension with our measurement. Both works also indicate that \(\mu\sim 0.4-0.5\), though at low significance. Our measurement \(\mu=0.54\pm 0.10\) is in agreement with that. Also our constraint that \(\eta=0.239\pm 0.009\) at \(\delta\eta=0.24\) coincides with the posterior distributions they plot. Arico et al. (2023) recently analysed the Dark Energy Survey year 3 cosmic shear auto-correlation with the bacco model. They find a value of \(\log_{10}M_{\rm c}h/M_{\odot}=14.38^{+0.60}_{-0.56}\), in agreement with our measurement. We therefore confirm the best fit small scale baryonic effects found by Arico et al. (2023) when analyzing cluster gas density profiles, and gas and stellar fractions. Our results are also in excellent agreement with the analysis by Chen et al. (2022) on cosmic shear data only, though in that work, 6 of the
Figure 4: Posterior predictive distribution of the suppression in the matter power spectrum, that is the ratio of the matter power spectrum with baryonic effects and the gravity-only matter power spectrum, \(\mathbf{P}_{\rm with\,baryons}/\mathbf{P}_{\rm gravity\,only}\), obtained with the S19 and bacco models (orange and green shaded area, respectively) informed with the galaxy cluster data collected in this work. The full lines represent the 2.5th, 50th, and 97.5th percentile at each wave-number \(k\), while the filled regions encompass the range between the 16th and 84th percentile. Greyed out are the scales impacted by extrapolating our data to lower halo masses. _Upper left:_ Comparison with the S19-model prediction from KiDS 1000 cosmic shear, kSZ measurements and individual clusters with hydrostatic masses (Schneider et al., 2022, hatched blue), which find a two sigma stronger suppression. _Upper right:_ Comparison with the bacco prediction from DES Y3 cosmic shear (Arico et al., 2023, hatched red), which is in good statistical agreement. _Lower left:_ in cyan the suppression needed to reconcile DES Y3 cosmic shear and Planck primary CMB constraints (a.k.a. the S8 tension, hatched in cyan, Preston et al., 2023). Predictions based on galaxy clusters can not explain the large scale suppression needed to solve the S8 tension. _Lower right:_ the prediction of several state-of-the-art hydrodynamical simulations, according to the legend. Our data suggest an intermediate to strong feedback scenario.
7 parameters were fixed to the best fit values from the BAHAMAS simulation. Compared to those works, which only constrain one BCM parameter, we are able to measure 3 out of 7 parameters tightly, while placing upper limits on the other 2. In Fig. 4, upper panels, we directly compare our matter power spectrum suppression predictions with the ones from the bacco fit to DES Y3 cosmic shear by Arico et al. (2023) (bacco, DES Y3 WL in hatched red), and from the fit of the S19-model to KiDS cosmic shear, kSZ measurements and a compilation of gas and stellar fractions (Schneider et al., 2022). While the former is in excellent agreement with our results, the prediction by Schneider et al. (2022) is 2-sigma lower on all scales. Whether this is due to our inclusion of the X-ray profiles, or their use of kSZ data, is left to future investigation.
### S\({}_{8}\) tension
Current cosmological constraints from wide photometric surveys suggest that the amplitude of fluctuations in the low redshift Universe is lower than predicted by extrapolating cosmic microwave background experiments (see Huterer, 2022, for a review). The recent CMB lensing analysis of ACT (Qu et al., 2023), which is in agreement with primary CMB predictions, indicates that this tension might not be present at large scales and redshifts larger than \(\sim 1\). As pointed out by Amon & Efstathiou (2022); Preston et al. (2023), the solution would likely have to be non-linear, and at low redshift. To elaborate on this, we compare the matter power spectrum suppression derived by redistributing baryons in and around halos, with the suppression needed to reconcile DES Y3 cosmic shear and Planck primary CMB constraints, derived by Preston et al. (2023), whose 1-sigma region is shown in cyan in Fig. 4, lower left panel. That work, as well as a comparable analysis with KiDS cosmic shear (Amon & Efstathiou, 2022), find that a solution to the "S8 tension" would require strong suppression at large scales, \(0.2h\ {\rm Mpc}^{-1}<k<1h\ {\rm Mpc}^{-1}\). Given our predictions, it seems unlikely that baryon feedback can provide such a suppression by redistributing baryons in and around halos. Using \(k\approx\pi/R\), solving the S8 tension would require a suppression at scales from \(3h^{-1}\ {\rm Mpc}\) to \(15h^{-1}\ {\rm Mpc}\), far outside even the outermost regions of halos. Following the interpretation by Amon & Efstathiou (2022), this would hint at yet unknown physical effects impacting the dark matter at (mildly) non linear scales. This also confirms that in the context of cosmic shear, appropriate scale cuts can make these experiments impervious to baryonic effects by only considering large scales (Krause et al., 2021; Arico et al., 2023; Dark Energy Survey & Kilo-Degree Survey Collaboration et al., 2023). Previous constraints on BCM models confirm that baryon feedback only plays a sub-ordinate role in solving the S8 tension. Indeed, a re-analysis of the Kilo Degree Survey 1000 cosmic shear data by Schneider et al. (2022) using the S19 model shows that baryon feedback cannot resolve, and only marginally alleviates, the discrepancy between cosmic shear and CMB.
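The quoted range of scales follows directly from the rough conversion \(R\approx\pi/k\), e.g.:

```python
import numpy as np

# wavenumber range singled out by Preston et al. (2023), in h Mpc^-1
k = np.array([0.2, 1.0])
print(np.pi / k)      # -> about 15.7 and 3.1 Mpc/h
```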
### Hydro-dynamical simulations
We compare in Fig. 4, lower right panel, our predicted matter power suppression to several point estimates from state-of-the-art hydro-dynamical simulations: Magneticum (Martinet et al., 2021, and references therein), TNG300 (Pillepich et al., 2018; Springel et al., 2018), BAHAMAS (McCarthy et al., 2017, 2018), EAGLE (Schaye et al., 2015; Crain et al., 2015; McAlpine et al., 2016), Illustris (Vogelsberger et al., 2014), and OWLS (Schaye et al., 2010). Both very strong feedback models (like the Illustris simulation), and very weak feedback models (like TNG300 and EAGLE) are effectively outside of the combined range of our constraints, even when considering the variation introduced by the difference between the baryonification models. Our posteriors qualitatively follow most closely the trend of the OWLS simulations. Our work further supports the strong feedback scenario previously empirically suggested by Eckert et al. (2016); Giri & Schneider (2021).
### Selection effects
The poor fit the BCM provides to the gas fraction of eFEDS-HSC (see Fig. 2, upper panel) could be indicative of some of the challenges faced by Bayesian Population studies of galaxy clusters and groups. Objects with a higher gas mass fraction would plausibly also be more X-ray luminous than objects of the same mass that have retained fewer baryons (for simulational evidence, see Ragagnin et al., 2022). Popesso et al. (2023) have shown that X-ray selected groups are more concentrated, both in X-rays as well as in total mass. They also preferentially live in nodes of the cosmic web and have redder central galaxies than groups detected only via friends-of-friends algorithms in spectroscopic galaxy data. In a Bayesian Population model, this can be accounted for by a correlation between the intrinsic scatters of different observables (Angulo et al., 2012; Mantz et al., 2015; Bocquet et al., 2015; Farahi et al., 2019; Grandis et al., 2021). While Akino et al. (2022) did not explicitly fit for a possible correlation between selection observable scatter and gas mass scatter, Chiu et al. (2022) find that such a correlation is consistent with zero, albeit with large error bars. Both these analyses will soon be superseded by the publication of the first eROSITA all sky survey (Predehl et al., 2021). In that context, multi-wavelength cross checks to validate the selection modelling, such as proposed by Grandis et al. (2020, 2021), should be performed. We plan to revisit the predictions of the matter power spectrum suppression once low mass halo selection is better understood.
### Internal consistency
Further anomalies can be found in our compilation of data sets. For instance, if one was to take the two measurements of the stellar mass fraction as nodes for a linear interpolation, one would find, after accounting for errors, a stellar fraction slope of \(B_{\star}-1=-0.58\pm 0.10\), as illustrated by the short calculation below. This is more than \(2\sigma\) (and less than \(3\sigma\)) steeper than the measured slopes \(B_{\star}-1=-0.20\pm 0.11\), and \(B_{\star}-1=-0.20\pm 0.12\), from HSC-XXL and SPT respectively. This explains why neither of our models is able to simultaneously fit the stellar mass fractions and their mass slopes. Such inconsistencies could be avoided by fitting larger future cluster and group samples directly with the prescriptions from baryon correction models. The relative stiffness of the BCM models also results in rather low stellar fractions, especially in the S19 model. This is likely the reason for the different behaviours of the BCM predictions at small scales. Less stellar material would imply more baryons as hot gas. In light of the measured gas fraction, it would seem that we do not find this excess hot gas in the halos. To reconcile this, we would need strong feedback. In summary, in our inference set-up, under-predicted stellar fractions lead to more feedback.
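The interpolation slope quoted above can be reproduced from the stellar fraction measurements and pivot masses of Section 2; small differences with respect to the quoted value arise from the exact inputs and error treatment assumed here.

```python
import numpy as np

# stellar mass fractions and pivot masses of the two samples (Section 2 / Table 1)
f1, df1, M1 = 0.0083, 0.0006, 4.8e14     # SPT
f2, df2, M2 = 0.0212, 0.0028, 1.0e14     # HSC-XXL

# logarithmic slope of f_star(M) implied by linear interpolation between the nodes
slope = np.log(f1 / f2) / np.log(M1 / M2)
dslope = np.hypot(df1 / f1, df2 / f2) / np.log(M1 / M2)
print(f"B_star - 1 = {slope:.2f} +/- {dslope:.2f}")   # ~ -0.6 +/- 0.1
```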
### Mass measurement accuracy
Another limiting factor for the accuracy of gas and stellar mass fraction measurements for galaxy clusters and groups is the accuracy of the weak lensing mass estimation. Many of the observational systematics in shape and photometric redshift measurement, as well as background selection will be alleviated by the increased data quality of future data provided by surveys like _Euclid_ and LSST. Grandis et al. (2021) however showed that also the theoretical knowledge of the total matter distribution of halos can lead to systematic uncertainties no smaller than 2% (see also Debackere et al., 2021). This systematics floor results from the comparison between Magneticum and TNG300. Given that the weak feedback scenario in the latter is excluded by this work, the actual uncertainty on baryon feedback might well be smaller than the reported 2%. In this context data constrained BCM models, like the
one we present here, can be used to marginalise over the residual theoretical uncertainty in baryon feedback effects on the matter profiles of galaxy clusters.
### Modelling improvements
At a qualitative level, the performance of the BCMs in fitting our data is satisfactory, so we see no immediate source of concern. It is also noticeable that the high precision of the used data can be effectively propagated by the BCMs to high precision matter power spectrum suppression predictions. Concerning the predictive accuracy of the BCMs, Schneider et al. (2019); Giri and Schneider (2021); Schneider et al. (2022) explicitly show that when fitting the baryon fraction measured in different hydrodynamical simulations, the S19-model is able to correctly predict the matter power spectrum suppression, and vice versa. While the S19 model is also able to fit gas density profiles derived from X-ray observations, it has not been shown explicitly that fits to the electron density profiles and baryonic fractions correctly predict the matter power spectrum. For the bacco-model, Arico et al. (2021) have shown that it is able to jointly fit the matter power spectrum and bi-spectrum in hydrodynamical simulations, and thereby reproduce baryon fractions. Separate fits are able to describe the gas and stellar density profiles. The predictive accuracy on the matter power spectrum of fits to baryon fractions and electron density profiles remains to be tested. Similarly, future work should consider the redshift evolution of baryon feedback, both in the collection of data, as well as in the modelling.
### Other data sets
While cluster baryon fractions are clearly a potent way to empirically constrain baryon feedback, they only put constraints on a rather narrow range of scales, given the limited mass range they probe. For scales \(k\geq 1h\) Mpc\({}^{-1}\), the X-ray surface brightness profiles of clusters provide further empirical constraints, as shown in this work. We only used a small fraction of the recorded data. It will surely be fruitful to ingest more X-ray data, especially at lower halo masses, into the BCM fitting procedure, an effort we hope this work provides the motivation for. At even smaller scales, the relation between central galaxy stellar mass and host halo mass for low mass groups will be crucial. Also here, ample observational information exists, though, in the authors' opinion, rarely in an easy to access format. For both of these observational channels, the challenge is to interface existing and ongoing observations with the BCM. It remains to be tested if data covering a wider mass range would be well fitted by the BCM, or if the current extrapolation to lower masses leads to artificially precise, but inaccurate predictions. On larger scales (\(k<2h\) Mpc\({}^{-1}\)), the hot gas can be traced with the Sunyaev-Zeldovich effect (SZe), as recently shown by Pandey et al. (2019); Chen et al. (2022); Gatti et al. (2022); Troster et al. (2022) for the thermal SZe. Schneider et al. (2022); Chaves-Montero et al. (2021) showed constraints from the kinematic SZe, which requires fewer modelling assumptions, but has lower signal to noise than the thermal SZe. The dispersion of localized fast radio bursts also probes the cosmic baryon fraction (Macquart et al., 2020), and future larger samples of localised fast radio bursts will be able to probe the baryon distribution (Nicola et al., 2022). Another approach is to directly exploit the high signal to noise of small scale galaxy and cosmic shear lensing to constrain the matter power spectrum suppression (Yoon and Jee, 2021; Huang et al., 2021; Amon and Efstathiou, 2022; Chen et al., 2022; Preston et al., 2023; Arico et al., 2023), with the disadvantage that it would not be clear if such a suppression signal is physically due to baryon feedback, or some other physically interesting effect on non-linear scales. The tightest and most robust constraints will naturally result from the combination of all these observations.
## 6 Conclusions
In this work, we use gas and stellar mass fraction measurements derived by several weak lensing informed multi wavelength galaxy cluster studies (Mantz et al., 2016; Chiu et al., 2018, 2022; Akino et al., 2022), as well as the electron density profile derived by deep, high resolution X-ray observations (Ghirardini et al., 2019), to empirically inform the Baryon Correction Model proposed by Arico et al. (2020, 2021) and by Schneider and Teyssier (2015); Schneider et al. (2019). Despite some anomalies in the observed data, we find that the data points used are qualitatively described by the proposed models, once the free parameters of these models are constrained by Bayesian inference. Using our constraints on the BCM parameters, we predict the suppression of the non linear matter power spectrum with sub-percent precision. We find that at wavenumbers \(k<0.37h\) Mpc\({}^{-1}\) the baryon suppression is \(<1\%\) at \(97.5\) percent credibility. At \(k=1h\) Mpc\({}^{-1}\) we find a suppression of \(0.042^{+0.012}_{-0.014}\) for bacco, and \(0.049^{+0.016}_{-0.012}\) for the S19-model, while at \(k=3h\) Mpc\({}^{-1}\), we find \(0.184^{+0.026}_{-0.031}\) for bacco, and \(0.179^{+0.018}_{-0.020}\) for the S19-model. The predictions diverge at small scales for \(k>5h\) Mpc\({}^{-1}\), most likely due to data anomalies driving different model fits, and our limited range in halo mass leading to extrapolation inaccuracies.
Given our predictions, we can exclude a series of simulations that feature either too strong or too weak feedback effects. Our predicted matter power suppression seems to be reproduced most closely by the hydro-dynamical simulation results from OWLS (Schaye et al., 2010), while other simulations deviate from our posterior predictive on small or large scales. Constraints on the baryonic effects derived from small scale cosmic shear by Chen et al. (2022); Arico et al. (2023) are empirically confirmed by our fit to stellar and gas mass fractions, as well as electron density profiles. Comparing our results with the matter power spectrum suppression needed to solve the S8 tension (Amon and Efstathiou, 2022; Preston et al., 2023), we find that baryon feedback can likely not provide the necessary large scale suppression, and therefore is unlikely to solve the S8 tension.
In summary, we demonstrate that current measurements of galaxy cluster gas and stellar fractions, and cluster gas profiles, can strongly constrain the baryon correction to the matter power spectrum. The cluster data can thus act as ancillary data products, calibrating astrophysical uncertainties in cosmic shear experiments. This opens up the possibility of using small scale cosmic shear measurements to constrain deviations from the standard cosmological model, while controlling the baryonic effects via external data, if other systematics are similarly well constrained. The improved understanding of baryon feedback will also reduce theoretical uncertainties in weak lensing calibrated cluster number counts.
## Acknowledgements
The authors thank Sebastian Bocquet, Inon Chiu, Tim Schrabback, Daisuke Nagai, Vittorio Ghirardini, Dominique Eckert, Alexandra Amon and Raul Angulo, as well as the eROSITA Cluster Working Group for the useful comments provided at different stages of this work. Marginal contour plots of high-dimensional samples are visualized with pyGTC (Bocquet and Carter, 2016).
## Data Availability
All data will be made available upon reasonable request to the authors. |
2307.14436 | Phenotype-preserving metric design for high-content image reconstruction
by generative inpainting | In the past decades, automated high-content microscopy demonstrated its
ability to deliver large quantities of image-based data powering the
versatility of phenotypic drug screening and systems biology applications.
However, as the sizes of image-based datasets grew, it became infeasible for
humans to control, avoid and overcome the presence of imaging and sample
preparation artefacts in the images. While novel techniques like machine
learning and deep learning may address these shortcomings through generative
image inpainting, when applied to sensitive research data this may come at the
cost of undesired image manipulation. Undesired manipulation may be caused by
phenomena such as neural hallucinations, to which some artificial neural
networks are prone. To address this, here we evaluate the state-of-the-art
inpainting methods for image restoration in a high-content fluorescence
microscopy dataset of cultured cells with labelled nuclei. We show that
architectures like DeepFill V2 and Edge Connect can faithfully restore
microscopy images upon fine-tuning with relatively little data. Our results
demonstrate that the area of the region to be restored is of higher importance
than shape. Furthermore, to control for the quality of restoration, we propose
a novel phenotype-preserving metric design strategy. In this strategy, the size
and count of the restored biological phenotypes like cell nuclei are quantified
to penalise undesirable manipulation. We argue that the design principles of
our approach may also generalise to other applications. | Vaibhav Sharma, Artur Yakimovich | 2023-07-26T18:13:16Z | http://arxiv.org/abs/2307.14436v3 | # Phenotype-preserving metric design for high-content image reconstruction by generative inpainting
###### Abstract
In the past decades, automated high-content microscopy demonstrated its ability to deliver large quantities of image-based data powering the versatility of phenotypic drug screening and systems biology applications. However, as the sizes of image-based datasets grew, it became infeasible for humans to control, avoid and overcome the presence of imaging and sample preparation artefacts in the images. While novel techniques like machine learning and deep learning may address these shortcomings through generative image inpainting, when applied to sensitive research data this may come at the cost of undesired image manipulation. Undesired manipulation may be caused by phenomena such as neural hallucinations, to which some artificial neural networks are prone. To address this, here we evaluate the state-of-the-art inpainting methods for image restoration in a high-content fluorescence microscopy dataset of cultured cells with labelled nuclei. We show that architectures like DeepFill V2 and Edge Connect can faithfully restore microscopy images upon fine-tuning with relatively little data. Our results demonstrate that the area of the region to be restored is of higher importance than shape. Furthermore, to control for the quality of restoration, we propose a novel phenotype-preserving metric design strategy. In this strategy, the size and count of the restored biological phenotypes like cell nuclei are quantified to penalise undesirable manipulation. We argue that the design principles of our approach may also generalise to other applications.
High-content fluorescence microscopy, Deep Learning, Inpainting, Sample preparation, Artefact, Metric. Correspondence: [email protected]
## 1 Introduction
In the past several decades microscopy took a key role in biomedical discovery and diagnostics [1, 2]. Yet, obtaining images of microscopic entities requires sample preparation and staining procedures which are often complex and involve multiple steps [3]. This complexity may lead to the occurrence of sample preparation artefacts (SPA) at various stages of the process [1, 2]. Some common causes of SPA include incomplete fixation of tissue samples, improper embedding of tissue samples, inefficient removal of dehydration artefacts, inefficient removal of staining artefacts, inclusion of dust particles and excessive tissue damage during sample preparation. The presence of SPAs in micrographs may significantly influence downstream data analysis and interpretation, leading to misinterpretation of the biological phenotype. While in some cases these issues can be avoided by improved sample handling, or by selectively imaging artefact-free samples, in high-content screening (HCS) microscopy SPAs are often inevitable due to the use of automated liquid handling and other high-content sample preparation procedures [4, 5].
With the advent of digital image processing, deep learning (DL) and generative models, the removal of SPAs from acquired HCS images may be attained by image inpainting [6]. In this task, missing or corrupted regions are filled in by a trained generative neural network using context data from the source image itself. However, owing to effects like neural hallucinations [7] such image reconstruction tasks may introduce undesired alterations and manipulations of their own. While any generative model manipulates the image to a certain extent, not all
alterations are critical. It is therefore crucial to measure how these alterations would affect the perception of the biomedical observation (phenotype) by the image analysis system.
In this work, we employ an open HCI dataset[8] containing SPA and assess the ability of the state-of-the-art generative DL algorithms to reconstruct the images through generative inpainting. To ensure the scientific or diagnostic accuracy of the results, we propose a phenotype-preserving metric allowing us to assess the quality of inpainting from the phenotypic point of view. Due to the nature of the dataset, the phenotype-preserving metric we explore in this work focuses on the detection and measurement of cell nuclei. Yet, we argue that similar design principles may be applied to other phenotypes.
## 2 Related Work
Biomedical image reconstruction has been an active area of research for several decades. Introduced in the 1960s, one of the earliest approaches for biomedical image reconstruction was the filtered back projection method[9]. This method is widely used in computed tomography and has been improved over the years with variations such as iterative reconstruction and adaptive statistical iterative reconstruction[10]. Another popular approach is based on the use of compressed sensing techniques. These techniques allow the reconstruction of images from a small number of measurements, which can reduce the amount of data acquisition time required for imaging. Compressed sensing techniques have been applied to various types of biomedical images, including magnetic resonance imaging[11], computed tomography[12, 13], and positron emission tomography[14], and microscopy[15]. Notably in microscopy, denoising and reconstruction is often associated with enhancing the resolution[16, 17].
In addition to these general techniques, there are also specific methods developed for certain types of biomedical images. For example, in ultrasound imaging, time-reversal methods have been used for image reconstruction[18]. In optical coherence tomography, methods based on Fourier domain signal processing have been used[19]. However, these techniques are often based on assumptions, which may not hold. Additionally, compressed sensing reconstruction can be computationally expensive, particularly for large volumes of data. To counter the above limitations, DL approaches have also been used for biomedical image reconstruction. Specifically, convolutional neural networks (CNNs) have been used for image superresolution, denoising, and artefact reduction[20, 21, 22].
However, the assessment of the quality of biomedical image reconstruction is currently limited to well-established computer vision metrics, which may not capture the intricacy required for scientific and diagnostic accuracy. Metrics such as peak signal-to-noise ratio (PSNR)[23] and structural similarity index measure (SSIM)[24] are often used to assess quality, yet are known to lack complexity[25]. One approach for circumventing this is to evaluate the metric on multiple scales[26]. Other approaches aim to introduce subjective quality assessment involving human judgement[27]. In this work, we propose a novel metric that incorporates both subjective and objective quality assessment of HCI microscopy image reconstruction. The main criterion of the metric we propose is to preserve the biological phenotype from the perspective of image quantification. We argue that a conceptually similar metric design can be proposed for other biomedical image modalities.
## 3 Methods
### Code Availability
The Python source code developed in this work is available under GPLv3 open-source license at
github.com/casus/PhIRM.
### Computational Setup
All the inpainting models used in this work have been trained using high-performance computing. The Hemera HPC system has been used for this purpose. All the experiments have been carried out using a single 32 GB NVIDIA V100 GPU with 8 (eight) CPU cores. The maximum memory per CPU was set to 1443 MB.
### Sample preparation Artefacts Masks Generation
The multispectral properties of the dataset presented [8] in this work have been used to generate ground truth images containing exclusively nuclei or SPA. The current methodology involves the generation of artefact masks from 2160x2160 pixel TIFF images obtained from the CFP channel. Otsu thresholding [28] has been implemented to obtain a threshold value, which is subsequently multiplied by 0.7 to enhance the quality of the masks. We observed that Otsu's thresholding often eliminates small artefacts or boundary pixels of large artefacts, and hence the multiplication factor was introduced. The resultant image is binarised and subjected to morphological operations in the form of opening and closing.
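A minimal Python sketch of this mask-generation step is shown below, using scikit-image; the structuring-element radius for the morphological opening and closing is an assumption, as the text does not specify it.

```python
import numpy as np
from skimage import io, filters, morphology

def artefact_mask_from_cfp(tiff_path, factor=0.7, selem_radius=2):
    """Generate a binary SPA mask from a 2160x2160 CFP-channel TIFF image.

    The Otsu threshold is multiplied by `factor` (0.7 above) so that small
    artefacts and the boundary pixels of large artefacts are not lost.
    """
    image = io.imread(tiff_path)
    threshold = filters.threshold_otsu(image) * factor
    mask = image > threshold                           # binarise
    footprint = morphology.disk(selem_radius)          # assumed kernel size
    mask = morphology.binary_opening(mask, footprint)  # remove speckle
    mask = morphology.binary_closing(mask, footprint)  # close small gaps
    return mask.astype(np.uint8) * 255
```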
### Artificial Masks Generation
To understand the influence of mask shape on inpainting performance, we created two kinds of artificial masks: rectangular masks and irregular masks. For the first kind, we created rectangular mask datasets which contain images with varying mask area: 10-20%, 20-30%, 30-40% and 40-50%. For example, a 10-20% rectangular mask dataset contains images having 10-20% of the total image area covered by a rectangular mask. This is done by placing white rectangles of a particular area range (e.g., 10-20% of the entire image area) at random positions inside a black image of size 256x256.
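A minimal sketch of such a rectangular-mask generator is given below; how the rectangle's aspect ratio is drawn is an assumption, since only the covered area range is fixed by the text.

```python
import numpy as np

def random_rectangular_mask(size=256, area_range=(0.10, 0.20), rng=None):
    """Return a black `size`x`size` image with one white rectangle whose area
    is a random fraction of the image area within `area_range`."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((size, size), dtype=np.uint8)
    target_area = rng.uniform(*area_range) * size * size
    aspect = rng.uniform(0.5, 2.0)                    # assumed aspect-ratio range
    h = int(np.clip(np.sqrt(target_area / aspect), 1, size))
    w = int(np.clip(target_area / h, 1, size))
    top = rng.integers(0, size - h + 1)
    left = rng.integers(0, size - w + 1)
    mask[top:top + h, left:left + w] = 255
    return mask
```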
To train models with irregularly shaped masks of random sizes, we used an approach that draws masks from randomly positioned lines with a predefined minimum and maximum number of vertices. The approach rotates the line angles and produces thick strokes by joining these vertices, then places circles at the line intersections to guarantee smooth joints. This algorithm has been adapted from the DeepFillV2 architecture [29].
### Data Augmentation
To increase the size of our dataset we have developed an image patch (zoomed-image) generator that extracts smaller zoomed and cropped images from a larger image of size 2160x2160 pixels. The generator generates patches of size 256x256, starting from the left-hand side of the image and moving towards the right until the entire image is covered. After normalization using the min-max method, the resulting intensity range is 0 to 255. All 256x256-sized nuclei and mask images used in this work have been created using this zoomed image generator.
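A simplified sketch of such a patch generator is shown below; it covers only the cropping and min-max normalisation described above, while the zoom step and any handling of incomplete border patches are omitted.

```python
import numpy as np

def generate_patches(image, patch=256):
    """Yield non-overlapping patches, scanning left to right and top to bottom,
    after min-max normalising the full frame to the 0-255 range."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    image = (image - lo) / max(hi - lo, 1e-8) * 255.0
    h, w = image.shape[:2]
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            yield image[top:top + patch, left:left + patch].astype(np.uint8)
```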
### Model Fine-tuning
To ensure good performance of the Context Encoder, DeepFill V2 and Edge Connect pre-trained networks, these models were fine-tuned using microscopy data. For this, we generated an augmented dataset containing 36112 data points with masks constructed as described in the respective sections above. Context Encoder was fine-tuned for 200 epochs with a learning rate of 0.0002. DeepFill V2 was fine-tuned for 20 epochs with a learning rate of 0.0001. Edge Connect was fine-tuned for 68 epochs with a learning rate of 0.0001. All networks were trained until convergence.
### Phenotype-preserving Image Reconstruction Metric Design
In the case of nuclear images, PhIRM can be designed to contain the following components: nuclear count difference defined in Equation (1), nuclear area difference defined in Equation (2) and artefact area difference defined in Equation (3). These values can be computed as follows:
\[NCD=\begin{cases}0.0&\text{if }\alpha=0\\ 1.1^{\alpha}&\text{if }\alpha>0\\ 2^{|\alpha|}&\text{if }\alpha<0,\end{cases} \tag{1}\]
where \(\alpha\) is the difference in the number of nuclei between the original and reconstructed image, and 1.1 is the default base of the exponential penalty.
\[NAD=\omega_{NAD}\cdot(A_{nuc\ out}-A_{nuc\ in}), \tag{2}\]
where \(A_{nuc\ out}\) and \(A_{nuc\ in}\) are the areas of nuclei in the original and reconstructed image, and \(\omega_{NAD}\) (default value: 0.0002) is the respective weight.
\[AAD=\omega_{AAD}\cdot(A_{art~{}out}-A_{art~{}in}), \tag{3}\]
where \(A_{art\ out}\) and \(A_{art\ in}\) are the areas of artefacts in the original and reconstructed image, and \(\omega_{AAD}\) (default value: 0.001) is the respective weight. These components can be computed with the following algorithm.
**Algorithm 1. PhIRM Factors Calculation**
1. Apply Otsu thresholding[28] to the input image to produce a binary mask image.
2. Perform connected components analysis on the binary image.
3. Discard components with an area of less than 50 px.
4. Compute the mean and maximum pixel values for each component.
5. Identify a component as an artefact if its maximum value computed in step 4 equals 255 and its mean value is greater than or equal to 210.
Otherwise, identify it as nuclei.
6. Identify a component as a single nucleus if its maximum value computed in step 4 is less than 255 and its area is less than 2200.
Otherwise, identify it as a patch containing two overlapping nuclei.
7. Store the total number of nuclei, total nuclei area, and total artefact area.
After these steps are applied to both the source and inpainted image, the difference in the number of nuclei, total nuclei area, and total artefact area between the two images can be computed. These attributes are then used to calculate the final score defined in the following Equation (4).
\[PhIRM=\frac{10-(NCD+NAD+AAD)}{10}. \tag{4}\]
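The following Python sketch implements Algorithm 1 and Equations (1)-(4) with scikit-image. The signs of the area differences follow Equations (2)-(3) literally, while the direction of the nuclei-count difference \(\alpha\) (original minus reconstructed) and the exponential base of 1.1 in Equation (1) are assumptions based on the defaults stated above.

```python
from skimage import filters, measure

def phirm_factors(image, min_area=50, artefact_mean=210, nucleus_max_area=2200):
    """Algorithm 1: nuclei count, total nuclei area and total artefact area."""
    binary = image > filters.threshold_otsu(image)            # step 1
    labels = measure.label(binary)                             # step 2
    n_nuclei, nuclei_area, artefact_area = 0, 0.0, 0.0
    for region in measure.regionprops(labels, intensity_image=image):
        if region.area < min_area:                             # step 3
            continue
        if region.intensity_max == 255 and region.intensity_mean >= artefact_mean:
            artefact_area += region.area                       # step 5: artefact
        else:                                                  # nuclei (steps 5-6)
            nuclei_area += region.area
            n_nuclei += 1 if region.area < nucleus_max_area else 2
    return n_nuclei, nuclei_area, artefact_area                # step 7

def phirm(original, reconstructed, w_nad=0.0002, w_aad=0.001, ncd_base=1.1):
    """Equations (1)-(4) for two uint8 images normalised to 0-255."""
    n_in, a_nuc_in, a_art_in = phirm_factors(original)
    n_out, a_nuc_out, a_art_out = phirm_factors(reconstructed)
    alpha = n_in - n_out                                       # assumed direction
    if alpha == 0:
        ncd = 0.0
    elif alpha > 0:
        ncd = ncd_base ** alpha
    else:
        ncd = 2.0 ** abs(alpha)
    nad = w_nad * (a_nuc_out - a_nuc_in)                       # Eq. (2)
    aad = w_aad * (a_art_out - a_art_in)                       # Eq. (3)
    return (10.0 - (ncd + nad + aad)) / 10.0                   # Eq. (4)
```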
## 4 Results
Figure 1: **Metric validation set for comparison to expert opinion.** (a) examples of images in the validation set with respective alterations. (b) Comparison of the expert opinion to the PhIRM metric measured on this test image set. (c) Comparison of peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) on the same set.
### Metric design and validation
HCI microscopy images containing SPA may be corrected computationally using generative inpainting. However, the assessment of the scientific accuracy of such corrected images is problematic. To address this, we proposed the Phenotype-preserving Image Reconstruction Metric (PhIRM). At its core, PhIRM assesses whether the phenotype-relevant information is still preserved after image inpainting. In the case of high-content fluorescence microscopy of cell nuclei imaged in the presence or absence of SPA,[8] the preservation of the phenotype comprises factors including the area of the nuclei, their count, as well as the area of the remaining artefact. These factors were taken into account and heuristically weighted for this specific phenotype and task (see Methods).
Next, to assess how PhIRM compares to expert assessment, we constructed a test set containing the original image together with images manipulated in a manner that corrupts the phenotype. Specifically, the manipulations created images with missing nuclear area, missing nuclei or introduced artefacts (Fig. 1a). We then asked four image analysis experts (including one senior author of this work) to assess the difference between the original and manipulated images from the perspective of phenotype consistency. Remarkably, despite the judgement discrepancies in the extreme cases, overall expert opinion showed good agreement with PhIRM (Fig. 1b).
To understand how PhIRM compares to existing image reconstruction metrics like SSIM and PSNR, we measured these on the same set of images (Fig. 1c). Our comparison suggested that while PSNR somewhat correlated with the expert opinion, SSIM completely failed to capture the differences. Notably, in extreme cases of very low or very high consistency, PhIRM seems to capture the differences better than PSNR. Interestingly, these were also the cases on which experts had a rather low consensus. We attribute these observations to the likelihood that both experts and PSNR penalise differences that may not be crucial for phenotypic measurements, whereas PhIRM focuses exclusively on aspects decisive for phenotypic accuracy.
### High-content image reconstruction by generative inpainting
To measure the ability of generative inpainting to restore high-content images, we selected three state-of-the-art pre-trained models with varying expressive capacity of the deep artificial neural networks (DNN) they are based on. Namely, we selected Context Encoder,[30] DeepFill V2[29] and Edge Connect.[31] Next, we fine-tuned these models on our dataset and evaluated their performance using PhIRM. It is worth noting, however, that Context Encoder was not designed to perform inpainting with irregularly shaped masks. Therefore, to ensure a fair comparison, we first created a synthetic set of images with rectangular masks (Fig. 2a). To understand how the area of the mask impacts performance, we used four distinct ranges: 10-20%, 20-30%, 30-40% and 40-50% (Fig. 2b). Our comparison showed that DeepFill V2 and Edge Connect performed significantly better than the Context Encoder, especially with the largest masks. For mask areas above 30%, Edge Connect outperformed DeepFill V2.
Finally, we compared the performance of the Edge Connect architecture on rectangular masks to its performance on irregularly shaped masks (Fig. 3a) and on masks derived from high-content image SPAs (Fig. 3b). Remarkably, despite quite obvious differences in shapes, with PhIRM between 0.86 and 0.99 the performance remained high for both types of masks. Similarly to the case of rectangular masks, performance seemed to be more strongly connected to the obstructed area than to the shape of the mask. Based on this we concluded that the state-of-the-art Edge Connect architecture may be employed for reconstructive inpainting of high-content fluorescent microscopy images obstructed by SPAs without significant alteration of the nuclear phenotype.
## 5 Conclusions
This research highlights the potential of using generative inpainting for microscopy image reconstruction. While generative inpainting holds great promise for microscopy reconstruction, it inevitably creates artefacts of its own. To ensure the scientific accuracy of such reconstructed images, we propose a novel metric - the phenotype-preserving image reconstruction metric (PhIRM). The metric score ranges from 0.0 to 1.0, with higher scores indicating better inpainting quality. Compared to traditional metrics like PSNR and SSIM, the proposed metric demonstrates higher relevance and sensitivity. PhIRM aims to take into account only the changes crucial for the preservation
of the biological phenotype. In the specific case of quantification of fluorescent cell nuclei in the presence of sample preparation artefacts in high-content imaging, these changes are cell count, cell area and artefact area. We demonstrate that the metric we proposed is in good agreement with expert opinion.
Notably, in order to make PhIRM useful for other applications, a different set of phenotype factors may need to be constructed. While this may seem to limit the application of PhIRM in an off-the-shelf manner, we argue that the construction of PhIRM for each individual application should be rather straightforward for a subject matter expert. For example, should cell cycle detection be important for the phenotype, one could include the difference in the total fluorescence intensity of the marker.
Furthermore, we employed the PhIRM metric to evaluate state-of-the-art inpainting architectures including Context Encoder,[30] DeepFill V2[29] and Edge Connect.[31] We demonstrate that, both visually and according to the PhIRM values, DeepFill V2 and Edge Connect outperformed the older Context Encoder architecture. Additionally, we showed that for our particular task Edge Connect performed consistently better. We also showed that, in the case of Edge Connect inpainting, the size of the damaged area significantly affects the inpainting quality, whereas the shape of the damaged regions had only a minor impact. In summary, in this work we argue that, if accompanied by an adequate metric, generative inpainting may be useful for image reconstruction in HCI microscopy.
###### Acknowledgements.
We thank Dr. Vardan Andriasyan and Anthony Petkidis for their expert opinion on microscopy image evaluation. This work was partially funded by the Center for Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture, and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament.
Figure 2: **State-of-the-art image inpainting architectures performance on high-content microscopy** (a) example of images with square masks covering 30-40% of the area with respective inpainting results using pre-trained Context Encoder, DeepFill V2 and Edge Connect networks. (b) PhIRM metric output for Context Encoder, DeepFill V2 and Edge Connect networks depending on the area covered by the mask. |
2306.05612 | Spatial Re-parameterization for N:M Sparsity | This paper presents a Spatial Re-parameterization (SpRe) method for the N:M
sparsity in CNNs. SpRe is stemmed from an observation regarding the restricted
variety in spatial sparsity present in N:M sparsity compared with unstructured
sparsity. Particularly, N:M sparsity exhibits a fixed sparsity rate within the
spatial domains due to its distinctive pattern that mandates N non-zero
components among M successive weights in the input channel dimension of
convolution filters. On the contrary, we observe that unstructured sparsity
displays a substantial divergence in sparsity across the spatial domains, which
we experimentally verified to be very crucial for its robust performance
retention compared with N:M sparsity. Therefore, SpRe employs the
spatial-sparsity distribution of unstructured sparsity to assign an extra
branch in conjunction with the original N:M branch at training time, which
allows the N:M sparse network to sustain a similar distribution of spatial
sparsity with unstructured sparsity. During inference, the extra branch can be
further re-parameterized into the main N:M branch, without exerting any
distortion on the sparse pattern or additional computation costs. SpRe has
achieved a commendable feat by matching the performance of N:M sparsity methods
with state-of-the-art unstructured sparsity methods across various benchmarks.
Code and models are anonymously available at
\url{https://github.com/zyxxmu/SpRe}. | Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Mengzhao Chen, Fei Chao, Rongrong Ji | 2023-06-09T01:11:50Z | http://arxiv.org/abs/2306.05612v1 | # Spatial Re-parameterization for N:M Sparsity
###### Abstract
This paper presents a Spatial Re-parameterization (SpRe) method for the N:M sparsity in CNNs. SpRe is stemmed from an observation regarding the restricted variety in spatial sparsity present in N:M sparsity compared with unstructured sparsity. Particularly, N:M sparsity exhibits a fixed sparsity rate within the spatial domains due to its distinctive pattern that mandates N non-zero components among M successive weights in the input channel dimension of convolution filters. On the contrary, we observe that unstructured sparsity displays a substantial divergence in sparsity across the spatial domains, which we experimentally verified to be very crucial for its robust performance retention compared with N:M sparsity. Therefore, SpRe employs the spatial-sparsity distribution of unstructured sparsity to assign an extra branch in conjunction with the original N:M branch at training time, which allows the N:M sparse network to sustain a similar distribution of spatial sparsity with unstructured sparsity. During inference, the extra branch can be further re-parameterized into the main N:M branch, without exerting any distortion on the sparse pattern or additional computation costs. SpRe has achieved a commendable feat by matching the performance of N:M sparsity methods with state-of-the-art unstructured sparsity methods across various benchmarks. Code and models are anonymously available at [https://github.com/zyxxmu/SpRe](https://github.com/zyxxmu/SpRe).
## 1 Introduction
Network sparsity has proven highly successful in reducing the complexity of Convolutional Neural Networks (CNNs) [11; 19; 25]. Concretely speaking, a sparse network can be obtained by pruning weights at different levels of granularity, from fine to coarse. Fine-grained sparsity (unstructured sparsity) [19; 7] prunes at the level of each individual weight, enabling a negligible performance drop even at high sparsity rates. Unfortunately, the deployment of fine-grained sparse networks on off-the-shelf hardware is cumbersome due to the irregularity of sparse weight matrices. On the other hand, coarse-grained sparsity (structured sparsity) [14; 21] achieves significant acceleration by eliminating entire convolution filters [24; 21] or weight blocks [16; 26], yet suffers severe performance degradation under high sparsity rates.
N:M sparsity has lately surfaced as a promising direction of augmenting the trade-off between acceleration effects and performance retention [36; 30]. By stipulating N non-zero components within M consecutive weights across the input channel dimension, it considerably improves the performance of structured sparsity while simultaneously ensuring expeditious inference aided by the N:M sparse tensor core [28]. In recent years, various methods have been proposed to train N:M
sparse networks from pre-trained weights [28; 30] or randomly-initialized weights [36; 35]. In spite of the continuous progress, the efficacy of N:M sparsity concerning performance retention still lags behind unstructured sparsity, particularly at high sparsity rates such as 95% [23; 36]. We ask: _what causes the performance gap between N:M sparsity and unstructured sparsity?_
In this paper, we address this inquiry through empirical observation of network sparsity over the spatial domain, _i.e._, spatial sparsity. Particularly, N:M sparsity displays a consistent sparsity rate of \(1-\frac{\text{N}}{\text{M}}\) at every spatial location of convolution filters owing to the distinctive sparse pattern depicted in the input channel dimension (Fig. 1a). Conversely, unstructured sparsity can exhibit a notable variation in spatial sparsity (Fig. 1b), which we confirm to be ubiquitous in existing unstructured sparsity methods and crucial for their robust performance retention compared with N:M sparsity methods (Sec. 3.2). To explain this, a heterogeneous distribution of weights across spatial positions helps give precedence to the most informative visual elements in the spatial domain, consequently resulting in enhanced performance for the sparse networks. Self-evidently, N:M sparsity fails to assign adequate weights to the informative visual elements, especially when the sparsity rate is high, therefore leading to more performance degradation.
Driven by this analysis, we present Spatial Re-parameterization (SpRe) as a way of matching the performance between N:M sparsity and unstructured sparsity. SpRe utilizes the spatial sparsity distribution of unstructured sparsity to allocate an extra weight branch in conjunction with the original N:M sparse weights. This enables the N:M sparse network to maintain a sparsity distribution comparable to that of unstructured sparsity. Moreover, we constrain the newly introduced parameters to adhere to the N:M sparse distribution of the main branch in the input channel dimension. This results in an advantage for a re-parameterization after training, where the newly-added branch can be merged into the main block without impacting the output at inference stage. Thus, SpRe introduces no additional inference burden for the original N:M sparse networks. The advantages of our proposed SpRe include:
* Traceable. SpRe is traceable in principle, due to our innovative observation of the discrepancy in spatial sparsity between N:M sparsity and unstructured sparsity.
* Scalable. SpRe is easy to use and orthogonal to other methods of N:M sparsity, whether applied from randomly-initialized or pre-trained weights.
* High-performance. SpRe is validated to be highly successful in boosting the performance of N:M sparsity methods across various benchmarks. Specifically, SpRe enhances the top-1 accuracy of SR-STE [36], a leading N:M method, by 1.2% when training a 1:16 sparse ResNet-50 [13] on ImageNet [1]. Moreover, the boosted performance even surpasses state-of-the-art unstructured sparsity method GraNet [23] by 0.4% at a similar sparsity rate.
Figure 1: A toy example of the discrepancy in spatial sparsity between N:M sparsity at 1:4 pattern and unstructured sparsity at 75% sparsity rate. (a) N:M sparsity requires N non-zero components among M consecutive weights in the input channel dimension, resulting in equal spatial sparsity. (b) Unstructured sparsity removes weights at arbitrary locations, resulting in uneven spatial sparsity.
## 2 Related Work
**Unstructured Sparsity**. Unstructured sparsity removes individual weights at arbitrary positions of the network. Gradient [20], momentum [7] and magnitude [11] are often used to identify and remove insignificant weights. Recent advancements learn to train an unstructured sparse network. RigL [8] alternatively removes and revives weights based on magnitudes and gradients. Sparse Momentum [2] considers the mean momentum magnitude in each layer to redistribute weights. Besides, gradual sparsity is widely adopted to boost performance [37; 23]. Unstructured sparsity is demonstrated to retain performance well, even at very high sparsity over 95% [23]. However, its drawback is the resulting irregular sparse tensors, which gain little speedup on general hardware [33].
**N:M Sparsity**. N:M sparsity preserves N out of M consecutive weights along the input channel dimension of CNNs, and achieves practical speedups thanks to the hardware innovation of N:M sparse tensor core [28; 9]. The pioneering ASP [28] goes through model pre-training, high-magnitude weight removal [11], and model fine-tuning. Pool _et al._[30] proposed channel permutation to increase performance of ASP. Sun _et al._[31] proposed a layerwise fine-grained N:M sparsity to replace the common uniform version. To avoid heavy burden on model pre-training, Zhou _et al._[36] proposed a sparse-refined straight-through estimator (SR-STE) to learn from scratch. LBC [35] further forms the N:M sparsity as a combinatorial problem and learns the best combination for the sparse weights.
**Structural Re-parameterization**. Structural re-parameterization mutually converts different architectures through an equivalent transformation of parameters. The representative RepVGG [6] merges kernels of smaller sizes to these of larger ones in inference. For example, 1\(\times\)1 kernels can be added onto the central points of the 3\(\times\)3 kernels. Along this line, researchers have devised various blocks to boost the performance of a regular CNN without extra inference costs, _e.g._, Asymmetric Convolution Block (ACB) [3], RepLKNet [5]. Besides, structural re-parameterization is also leveraged to guide channel pruning [4], where the original CNN is re-parameterized into two parts to respectively maintain the performance and prune convolutional filters.
We focus on developing an N:M sparsity method that is orthogonal to the aforementioned methods for performance improvement, simply due to two advantages of our method: First, we utilize the distribution of unstructured sparsity across the spatial domain. Second, we re-parameterize a newly-added branch into the main branch of N:M block in the inference.
## 3 Methodology
### Background
For convolutional weights \(\mathbf{W}\in\mathbb{R}^{C_{o}\times C_{i}\times K\times K}\) (\(C_{o}\): output channel, \(C_{i}\): input channel, \(K\): kernel size), the convolution operation with input features \(\mathbf{X}\in\mathbb{R}^{H\times W\times C_{i}}\) (\(H\): height, \(W\): width) is generally formulated as:
\[\mathbf{Y}=BN(\mathbf{W}\circledast\mathbf{X}), \tag{1}\]
where \(\circledast\) represents the convolution operation and \(BN(\cdot)\) stands for the follow-up batch normalization. Network sparsity can be realized with a 0-1 mask \(\mathbf{B}\) of the same shape as \(\mathbf{W}\):
\[\mathbf{Y}=BN((\mathbf{B}\odot\mathbf{W})\circledast\mathbf{X}), \tag{2}\]
where \(\odot\) denotes the element-wise multiplication. Therefore, \(\mathbf{B}_{p,q,u,v}=0\) removes \(\mathbf{W}_{p,q,u,v}\) and \(\mathbf{B}_{p,q,u,v}=1\) preserves \(\mathbf{W}_{p,q,u,v}\).
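As a minimal PyTorch illustration of Eq. (2), the 0-1 mask is applied element-wise to the weights before the convolution and batch normalization; the tensor sizes below are arbitrary.

```python
import torch
import torch.nn.functional as F

def sparse_conv_forward(x, weight, mask, bn, stride=1, padding=1):
    """Eq. (2): Y = BN((B ⊙ W) ⊛ X), with mask B of the same shape as W."""
    return bn(F.conv2d(x, weight * mask, stride=stride, padding=padding))

weight = torch.randn(64, 32, 3, 3)                 # C_o x C_i x K x K
mask = (torch.rand_like(weight) > 0.5).float()     # arbitrary 0-1 mask
bn = torch.nn.BatchNorm2d(64)
x = torch.randn(8, 32, 56, 56)
y = sparse_conv_forward(x, weight, mask, bn)       # 8 x 64 x 56 x 56
```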
For unstructured sparsity, the zero entries are irregular in the positions of \(\mathbf{B}\). Instead, N:M sparsity stipulates at N non-zero entries for every M consecutive weights along the input channel dimension. Therefore, \(\mathbf{B}\) is restricted to satisfy:
\[\left\|\mathbf{B}_{p,\,\lfloor q/\text{M}\rfloor\cdot\text{M}:\lfloor q/\text{M}\rfloor\cdot\text{M}+\text{M},\,u,v}\right\|_{0}=\text{N}, \tag{3}\]
where \(p,q,u,v\) enumerates \(C_{o},C_{i},K,K\), respectively.
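A PyTorch sketch of building a mask that satisfies Eq. (3) is shown below; it keeps the N largest-magnitude weights within each group of M consecutive input-channel positions, the magnitude criterion being an assumed (and common) choice.

```python
import torch

def nm_mask(weight, n=2, m=4):
    """0-1 mask with N non-zero entries per M consecutive input-channel weights."""
    c_out, c_in, k1, k2 = weight.shape
    assert c_in % m == 0, "input channels must be divisible by M"
    groups = weight.abs().permute(0, 2, 3, 1).reshape(-1, m)   # groups along C_i
    idx = groups.topk(n, dim=1).indices                        # N largest per group
    mask = torch.zeros_like(groups).scatter_(1, idx, 1.0)
    return mask.reshape(c_out, k1, k2, c_in).permute(0, 3, 1, 2).contiguous()

mask = nm_mask(torch.randn(64, 32, 3, 3), n=2, m=4)
print(mask.sum().item() / mask.numel())   # 0.5, i.e. 2:4 density
```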
Besides, for ease of the following representation, we define Spatial Sparsity (SS) to measure the weight sparsity in the spatial domain. The spatial sparsity at location (\(u\), \(v\)) is calculated as:
\[\text{SS}(\mathbf{B},u,v)=1-\frac{1}{C_{o}\cdot C_{i}}\sum_{p=1}^{C_{o}}\sum_{ q=1}^{C_{i}}\mathbf{B}_{p,q,u,v}. \tag{4}\]
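Eq. (4) reduces to a mean over the channel dimensions, as in the short sketch below; the random unstructured mask is only for illustrating the uneven per-location sparsity, whereas an N:M mask yields a constant \(1-\frac{\text{N}}{\text{M}}\) everywhere.

```python
import torch

def spatial_sparsity(mask):
    """Eq. (4): fraction of zeros at each (u, v) of a C_o x C_i x K x K mask."""
    return 1.0 - mask.float().mean(dim=(0, 1))

unstructured = (torch.rand(64, 32, 3, 3) > 0.75).float()   # ~75% sparse
print(spatial_sparsity(unstructured))                       # 3x3 map, uneven values
```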
### Discrepancy in Spatial Sparsity
Compared to unstructured sparsity, N:M sparsity achieves practical speedups with the support of the N:M sparse tensor core [28; 9], yet it suffers more performance degradation, particularly at high sparsity rates [36; 23]. In this section, we demonstrate that such a performance gap comes from a discrepancy in the spatial sparsity between N:M sparsity and unstructured sparsity. Simply due to the constraint of Eq. (3), N:M sparsity possesses consistent spatial sparsity across different positions and layers as
\[\text{SS}(\mathbf{B},u,v)=1-\frac{\text{N}}{\text{M}}. \tag{5}\]
In other words, N:M sparsity equally allocates weights across the spatial domain of convolution operation. On the other hand, such balanced spatial sparsity is not imperative for unstructured sparsity, given that no sparsity restriction is imposed across the channel dimension. Indeed, we observe that unstructured sparsity exhibits significant variability in terms of spatial sparsity, as illustrated by Fig. 2. This variability in spatial sparsity remains consistent across different layers and network types, irrespective of the specific unstructured sparsity technique employed [11; 8; 27].
Upon closer examination of Fig. 2, it can be inferred that the majority of unstructured sparse weights maintain a similar distribution of spatial sparsity, interestingly resembling a cross-like configuration in the shallow layers. This intriguing phenomenon warrants further exploration within the domain of unstructured sparsity. We do not delve deeply into this matter at present, but it is unequivocal that unstructured sparsity methods [11; 8; 27] simultaneously allocate more weights to certain fixed visual points.
CNNs analyze visual imagery by sliding filters along input features so that each weight interacts with different visual regions. We conjecture that unstructured sparsity methods implicitly acquire weight distributions in the spatial domain that better reflect the importance of input visual regions, such that the sparse network learns to focus more on crucial visual areas. To verify this hypothesis, we destroy the variable spatial sparsity of the state-of-the-art unstructured sparsity method GraNet [23] by constraining the sparsity at every location (\(u\), \(v\)) to be the same. Fig. 3 shows the performance comparison. We observe that performance degrades substantially, dropping close to the N:M method SR-STE [36], when the spatial sparsity variability is removed. Therefore, we can ascertain that variable spatial sparsity is the key to the impressive performance of unstructured sparsity methods. From
Figure 2: Spatial sparsity of common unstructured sparsity methods including Magnitude-based sparsity [11], RigL [8], GraNet [23]. We show spatial sparsity of 3\(\times\)3 kernels from different layers of ResNet-50 [13] with overall 95% sparsity, close to the sparsity level of 1:16 pattern. Experiments are performed on ImageNet-1K [1].
here we can also see that developing an enhancement policy to restore spatial sparsity variability is the key to improving the performance of N:M methods.
### Spatial Re-parameterization
Based on the aforementioned analysis, we propose Spatial Re-parameterization (SpRe) as a way of closing the performance gap between N:M sparsity and unstructured sparsity. To address the limited variation in spatial sparsity observed in N:M sparsity, SpRe introduces an extra branch \(\mathbf{B}^{S}\odot\mathbf{W}^{S}\) in conjunction with the original N:M sparse weights \(\mathbf{B}\odot\mathbf{W}\) (Fig. 4). In particular, we first obtain a mask \(\mathbf{B}^{U}\) of unstructured sparsity with a sparsity rate \(P=1-\frac{\mathrm{N}}{\mathrm{M}}\). Here we utilize the classic magnitude-based pruning [11], while other metrics for unstructured sparsity [27; 20; 32] can also be applied. As previously shown in Eq. (5), the N:M weights \(\mathbf{B}\odot\mathbf{W}\) maintain a constant spatial sparsity of rate \(1-\frac{\mathrm{N}}{\mathrm{M}}\). Therefore, in the extra branch, we allocate parameters at spatial locations where unstructured sparsity exhibits less spatial sparsity than N:M sparsity as
\[\mathbf{B}^{S}_{:,:,u,v}=\left\{\begin{array}{ll}\mathbf{B}_{:,:,u,v},&\text {if }\text{SS}(\mathbf{B}^{U},u,v)<1-\frac{\mathrm{N}}{\mathbf{M}},\\ 0,&\text{otherwise,}\end{array}\right. \tag{6}\]
in which \(u=1,...,K,v=1,...,K\). In terms of formula, the forward propagation of SpRe in the training stage for N:M sparsity can be expressed as
\[\mathbf{Y}=BN((\mathbf{B}\odot\mathbf{W})\circledast\mathbf{X})+BN((\mathbf{B}^{S}\odot\mathbf{W}^{S})\circledast\mathbf{X}). \tag{7}\]
In this manner, the spatial sparsity of N:M sparsity is reimbursed to a comparable level of unstructured sparsity during training. Moreover, the extra branch can be re-parameterized into the main branch without incurring additional inference burden. Particularly, the BN layer can be firstly merged into the convolution layer [6]. Then, the final N:M sparse weights are obtained by adding up the weights of the extra branch and the main branch in a point-wise manner. As \(\mathbf{B}^{S}\) is a subset of \(\mathbf{B}\) according to Eq. (6), neither interference on the N:M pattern nor extra inference burden is introduced by SpRe. Eventually, the forward propagation after sparse training returns to its original N:M form as:
\[\mathbf{Y}=(\mathbf{B}\odot\bar{\mathbf{W}})\circledast\mathbf{X}, \tag{8}\]
where \(\bar{\mathbf{W}}\) represents the merged weight.
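The sketch below illustrates both stages under stated assumptions (it is not the authors' implementation): \(\mathbf{B}^{S}\) is derived from a magnitude-pruned unstructured mask following Eq. (6), and the two trained branches of Eq. (7) are merged into the single N:M branch of Eq. (8) by folding each batch-normalization layer into its convolution and summing the results; the folded biases are carried alongside the merged weights.

```python
import torch

def spre_branch_mask(mask_nm, dense_weight, n, m):
    """Eq. (6): copy the N:M mask at spatial locations where an unstructured,
    magnitude-pruned mask of rate 1 - N/M keeps more weights (lower spatial
    sparsity) than the constant 1 - N/M of the N:M pattern."""
    sparsity = 1.0 - n / m
    k = max(1, int(dense_weight.numel() * sparsity))
    thresh = dense_weight.abs().flatten().kthvalue(k).values
    unstructured = (dense_weight.abs() > thresh).float()        # B^U
    ss_u = 1.0 - unstructured.mean(dim=(0, 1))                  # K x K spatial sparsity
    keep = (ss_u < sparsity).float()                            # locations to reimburse
    return mask_nm * keep.view(1, 1, *keep.shape)               # B^S is a subset of B

def fold_bn(weight, bn):
    """Fold a BatchNorm2d layer into the preceding convolution."""
    std = torch.sqrt(bn.running_var + bn.eps)
    w = weight * (bn.weight / std).view(-1, 1, 1, 1)
    b = bn.bias - bn.running_mean * bn.weight / std
    return w, b

def reparameterize(w_main, b_mask, bn_main, w_extra, bs_mask, bn_extra):
    """Merge the two branches of Eq. (7) into one N:M sparse convolution."""
    w1, bias1 = fold_bn(w_main * b_mask, bn_main)
    w2, bias2 = fold_bn(w_extra * bs_mask, bn_extra)
    return w1 + w2, bias1 + bias2   # merged weight stays N:M sparse (B^S ⊆ B)
```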
**Implementation of SpRe.** SpRe is orthogonal to existing N:M sparsity techniques and can be effortlessly employed to compensate for their lack of spatial sparsity variability. For techniques that attain N:M sparsity after pre-training the dense weights [11; 30], SpRe can be directly applied to build the re-parameterization branch on the basis of the pre-trained weights. After fine-tuning, the re-parameterization is employed to derive the N:M sparse weights for inference. For methods that learn N:M sparsity
Figure 3: Performance comparison of sparse ResNet-50 [13] on ImageNet-1K [1] and ResNet-32 [13] on CIFAR-10 [17]. The involved methods include the dense version of ResNet-50 (Baseline), SR-STE [36] (N:M sparsity), unstructured sparse GraNet [23] and GraNet with the same spatial sparsity along the spatial dimension (GraNet\({}^{*}\)). Performance drops when unstructured sparse method is confined to having the same spatial sparsity.
from scratch in a sparse training manner [36; 35], SpRe can also dynamically adapt the binary mask \(\mathbf{B}^{S}\) of the extra branch by looking at the updated weights during training. This ensures that the extra branch can simultaneously sustain proper spatial sparsity distribution and re-parameterization capability. As a result, SpRe can be readily integrated into existing N:M sparsity methods to gain a consistent enhancement in their performance, which is experimentally substantiated in the subsequent section.
## 4 Experiments
### Settings
Our experiments include image classification on the CIFAR-10 [17] and ImageNet-1K [1] datasets, and object detection and instance segmentation on the COCO benchmark [22]. The specific settings are expounded as follows. We validate the efficacy of SpRe in elevating the performance of prominent N:M sparsity methods, including ASP [28], SR-STE [36], and LBC [35]. We keep the same training configuration as their original implementations for a fair comparison. Our experiments cover a wide range of N:M patterns including 2:4, 1:4, and 1:16. For image classification, we sparsify ResNet-32 [13] on the CIFAR-10 dataset, and ResNet-18 [13], ResNet-50 [13], and MobileNet-V1 [15] on the ImageNet-1K dataset. Besides, we exploit SpRe to aid SR-STE [36] in training 2:4 and 2:8 sparse Faster RCNN [10] for object detection and Mask RCNN [12] for instance segmentation, utilizing ResNet-50 as the backbone. We apply SpRe to all N:M weights except for 1\(\times\)1 kernels, wherein the notion of spatial sparsity is deemed inapplicable. Our experiments are implemented with PyTorch [29] and run on NVIDIA Tesla A100 GPUs.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Model & Method & N:M & Top-1 Acc & N:M & Top-1 Acc & N:M & Top-1 Acc \\ \hline ResNet-32 & Baseline & - & 95.2 & - & 95.2 & - & 95.2 \\ \hline ResNet-32 & ASP [28] & 2:4 & 95.0 & 1:4 & 94.3 & 1:16 & 92.6 \\ ResNet-32 & +SpRe & 2:4 & **95.1(+0.1)** & 1:4 & **94.7(+0.4)** & 1:16 & **93.7(+1.1)** \\ \hline ResNet-32 & SR-STE [36] & 2:4 & 94.9 & 1:4 & 94.3 & 1:16 & 92.6 \\ ResNet-32 & +SpRe & 2:4 & **95.1(+0.2)** & 1:4 & **94.6(+0.3)** & 1:16 & **93.9(+1.3)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for sparsifying ResNet-32 on CIFAR-10.
Figure 4: Framework of the proposed SpRe. (a) An extra branch is built upon the spatial sparsity arising from unstructured sparse weights and the N:M sparse mask (Eq. (6)). (b) The extra branch is trained in conjunction with the main N:M branch to reimburse the spatial sparsity. After training, a re-parameterization is performed to merge these two branches, without altering the output.
### Image Classification
We first evaluate the efficacy of SpRe for sparsifying ResNet-32 on the CIFAR-10 dataset, which includes 50,000 training images and 10,000 validation images within 10 classes. Tab. 1 shows that SpRe can be effortlessly leveraged to enhance the performance of N:M methods. For instance, the top-1 classification accuracy of SR-STE is improved by 0.2%, 0.3%, and 0.9% for 2:4, 1:4, and 1:16 patterns, respectively. Given that no extra inference burden is introduced, the effectiveness of SpRe for N:M sparsity is obvious.
For the large-scale ImageNet-1K dataset, which contains over 1.2 million training images and 50,000 validation images in 1,000 categories, we first present the quantitative results for sparsifying ResNet [13] with depths of 18 and 50 in Tabs. 2 and 3. Encouragingly, SpRe substantially improves the performance of N:M methods over a wide range of sparse patterns. Without introducing extra inference burden, \(0.2\%\) and \(0.5\%\) top-1 accuracy improvements are gained by equipping ASP [28] with SpRe when sparsifying ResNet-18 at 2:4 and 1:4 patterns. The advantage of SpRe becomes more pronounced as the sparsity rate increases, where classic N:M sparsity methods fail to allocate enough weights for processing the important visual points due to their constant spatial sparsity. Upon training a 1:16 sparse ResNet-50, SpRe enhances the top-1 accuracy of SR-STE [36] and LBC [35] by 1.2% and 1.1%, respectively. This improvement can be intuitively attributed to compensation for the absence of weight preservation at crucial visual locations.
Furthermore, we examine the generalization capability of SpRe for sparsifying MobileNet-V1 [15], a lightweight network incorporating a depth-wise convolution design, which poses more challenges for compression. Tab. 4 demonstrates a distinct trend whereby the performance improvements achieved by SpRe remain consistent and increase as the sparsity level increases, irrespective of the specific N:M method employed. For instance, SpRe enhances the top-1 accuracy of ASP by \(0.3\%\), \(2.5\%\), and \(7.6\%\) at the 2:4, 1:4, and 1:16 sparse patterns, respectively. These results highlight the efficacy of SpRe in advancing existing N:M sparsity methods for sparsifying lightweight networks.
Additionally, we provide the performance comparison between unstructured sparsity and N:M sparsity methods in Tab. 5. The results reveal that N:M sparsity methods struggle to maintain accuracy on par with advanced unstructured sparsity techniques at comparable levels of sparsity. This disparity in performance can be attributed to the fact that unstructured sparsity, as we previously discussed, sustains adequate processing of crucial visual points at high levels of sparsity. In contrast, N:M sparsity fails to effectively handle this aspect due to its constant spatial sparsity and hence lags behind in achieving comparable outcomes. Fortunately, by allocating re-parameterizable weights that follow the spatial sparsity distribution of unstructured sparsity, SpRe effectively elevates the accuracy of N:M
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline Model & Method & N:M & Top-1 Acc & N:M & Top-1 Acc & N:M & Top-1 Acc \\ \hline ResNet-18 & Baseline & - & 70.9 & - & 70.9 & - & 70.9 \\ \hline ResNet-18 & ASP [28] & 2:4 & 70.6 & 1:4 & 69.1 & 1:16 & 65.0 \\ ResNet-18 & +SpRe & 2:4 & **70.8(+0.2)** & 1:4 & **69.6(+0.5)** & 1:16 & **65.5(+0.4)** \\ \hline ResNet-18 & SR-STE [36] & 2:4 & 71.2 & 1:4 & 69.2 & 1:16 & 64.9 \\ ResNet-18 & +SpRe & 2:4 & **71.3(+0.1)** & 1:4 & **69.9(+0.7)** & 1:16 & **65.7(+0.8)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for sparsifying ResNet-18 on ImageNet-1K.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Model & Method & N:M & Top-1 Acc & N:M & Top-1 Acc & N:M & Top-1 Acc \\ \hline ResNet-50 & Baseline & - & 77.2 & - & 77.2 & - & 77.2 \\ \hline ResNet-50 & ASP [28] & 2:4 & 77.4 & 1:4 & 76.5 & 1:16 & 71.5 \\ ResNet-50 & +SpRe & 2:4 & **77.7(+0.3)** & 1:4 & **76.8(+0.3)** & 1:16 & **72.3(+0.8)** \\ \hline ResNet-50 & SR-STE [36] & 2:4 & 77.0 & 1:4 & 75.3 & 1:16 & 71.5 \\ ResNet-50 & +SpRe & 2:4 & **77.2(+0.2)** & 1:4 & **76.1(+0.8)** & 1:16 & **72.7(+1.2)** \\ \hline ResNet-50 & LBC [35] & 2:4 & 77.2 & 1:4 & 75.9 & 1:16 & 71.8 \\ ResNet-50 & +SpRe & 2:4 & **77.3(+0.1)** & 1:4 & **76.4(+0.5)** & 1:16 & **72.9(+1.1)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for sparsifying ResNet-50 on ImageNet-1K.
sparsity methods to the level of state-of-the-art unstructured sparsity methods without incurring any extra inference overhead. For instance, SR-STE surpassed the recent unstructured sparsity method GraNet [23] by 0.4% top-1 accuracy with the aid of SpRe (72.7% for SR-STE boosted by SpRe and 72.3% for GraNet at similar sparsity rates). Given the distinct advantage of N:M sparsity for practical acceleration on the N:M sparse tensor core, the significance of SpRe in bridging the performance gap between N:M sparsity and unstructured sparsity is apparent.
### Object Detection and Instance Segmentation
Beyond fundamental image classification benchmarks, we exploit the generalization ability of SpRe on the object detection and instance segmentation tasks of COCO benchmark [22]. Tab. 6 compares our proposed SpRe to SR-STE for training N:M sparse Faster-RCNN [10]. Notably, SpRe yields robust performance improvement of \(0.2\) and \(0.3\) mAP at 2:4 and 2:8 sparse patterns, respectively. Similar trends can be observed from Table 7 when sparsifying Mask-RCNN [12] for the instance segmentation task. These results well substantiate the robustness and effectiveness of SpRe on downstream computer vision tasks.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Method & N:M & mAP & Model & N:M & Box mAP & Mask mAP \\ \hline F-RCNN & Baseline & - & 37.4 & M-RCNN & Baseline & - & 38.2 & 34.7 \\ \hline F-RCNN & SR-STE & 2:4 & 38.2 & M-RCNN & SR-STE & 2:4 & 39.0 & 35.3 \\ F-RCNN & **+SpRe** & 2:4 & **38.4(+0.2)** & M-RCNN & **+SpRe** & 2:4 & **39.2(+0.2)** & **35.7(+0.4)** \\ \hline F-RCNN & SR-STE & 2:8 & 37.2 & M-RCNN & SR-STE & 2:8 & 37.6 & 33.9 \\ F-RCNN & **+SpRe** & 2:8 & **37.5(+0.3)** & M-RCNN & **+SpRe** & 2:8 & **37.9(+0.3)** & **34.2(+0.3)** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results on object detection.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model & Method & N:M & Top-1 Acc & N:M & Top-1 Acc & N:M & Top-1 Acc \\ \hline MobileNet-V1 & Baseline & - & 71.9 & - & 71.9 & - & 71.9 \\ \hline MobileNet-V1 & ASP [28] & 2:4 & 70.2 & 1:4 & 63.9 & 1:16 & 50.4 \\ MobileNet-V1 & +SpRe & 2:4 & **70.5(+0.3)** & 1:4 & **66.6(+2.5)** & 1:16 & **58.0(+7.6)** \\ \hline MobileNet-V1 & SR-STE [36] & 2:4 & 70.4 & 1:4 & 63.2 & 1:16 & 25.4 \\ MobileNet-V1 & +SpRe & 2:4 & **70.8(+0.4)** & 1:4 & **64.9(+1.7)** & 1:16 & **34.2(+8.8)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for sparsifying MobileNet-V1 on ImageNet-1K.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Method & Sparsity & Top-1 Acc & Sparsity & Top-1 Acc & Structured \\ \hline ResNet-50 & DNW [34] & 90 & 74.0 & 95 & 68.3 & ✗ \\ ResNet-50 & RigL [8] & 90 & 73.0 & 95 & 70.0 & ✗ \\ ResNet-50 & GMP [37] & 90 & 73.9 & 95 & 70.6 & ✗ \\ ResNet-50 & STR [18] & 91 & 74.0 & 95 & 70.4 & ✗ \\ ResNet-50 & GraNet [23] & 90 & 74.2 & 95 & 72.3 & ✗ \\ \hline ResNet-50 & SR-STE [36] & 88(1:8) & 73.8 & 94(1:16) & 71.5 & ✓ \\ ResNet-50 & **+SpRe** & 88(1:8) & **74.7(+0.9)** & 94(1:16) & **72.7(+1.2)** & ✓ \\ \hline ResNet-50 & LBC [35] & 88(1:8) & 74.0 & 94(1:16) & 71.8 & ✓ \\ ResNet-50 & **+SpRe** & 88(1:8) & **74.8(+0.8)** & 94(1:16) & **72.9(+1.1)** & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of the N:M and unstructured sparsity methods for sparsifying ResNet-50.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Method & N:M & mAP & Model & N:M & Box mAP & Mask mAP \\ \hline F-RCNN & Baseline & - & 37.4 & M-RCNN & Baseline & - & 38.2 & 34.7 \\ \hline F-RCNN & SR-STE & 2:4 & 38.2 & M-RCNN & SR-STE & 2:4 & 39.0 & 35.3 \\ F-RCNN & **+SpRe** & 2:4 & **38.4(+0.2)** & M-RCNN & **+SpRe** & 2:4 & **39.2(+0.2)** & **35.7(+0.4)** \\ \hline F-RCNN & SR-STE & 2:8 & 37.2 & M-RCNN & SR-STE & 2:8 & 37.6 & 33.9 \\ F-RCNN & **+SpRe** & 2:8 & **37.5(+0.3)** & M-RCNN & **+SpRe** & 2:8 & **37.9(+0.3)** & **34.2(+0.3)** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results on instance segmentation.
### Performance Analysis
We present a performance analysis of SpRe by investigating two variants of the spatial sparsity distribution of the extra branch, _i.e._, Eq. (6). Experiments include 2:4, 1:4, and 1:16 patterns for sparsifying ResNet-50 on ImageNet-1K. The baseline method is SR-STE [36]. In detail, we first perform an ablation that uses the same mask as the main N:M sparse weights, \(\mathbf{B}^{S}=\mathbf{B}\). This variant preserves more weights in the extra branch, yet the spatial sparsity stays the same as in vanilla N:M sparsity. As seen from Tab. 8, more weights in the extra branch (denoted as Same) bring even less performance improvement. We attribute this phenomenon to the limited variability in spatial sparsity discussed in Sec. 3.2. Besides, we consider another variant that allocates parameters at spatial locations where unstructured sparsity exhibits higher spatial sparsity than N:M sparsity (denoted as Inverse). As can be seen, this inverse allocation of the weights in the extra branch even degrades performance, which further demonstrates our point that a correct spatial sparsity variation is key to the performance retention of sparse networks.
## 5 Limitation
We further discuss unexplored limitations, which will be our future focus. SpRe is presented specifically for N:M sparsity in convolutional neural networks, based on our observations of spatial sparsity variability. Although not the focus of the current work, it would be interesting for future work to examine similar phenomena to drive further enhancement of N:M sparsity in networks with different topologies, _e.g._, Vision Transformers (ViTs). Furthermore, apart from the notable performance improvement, SpRe also brings some extra training-time overhead for N:M sparsity. A promising future direction is to develop more efficient ways to mitigate the lack of spatial sparsity variability in N:M sparsity.
## 6 Conclusion
In this work, we present Spatial Re-parameterization, an effective and easy-to-use method for N:M sparsity. By introducing a re-parameterizable branch that follows the spatial sparsity distribution of unstructured sparsity, SpRe restores spatial sparsity variability during the training of N:M sparse networks, which we have shown to be the core of performance retention. Our proposed SpRe is evaluated on several computer vision benchmarks, consistently delivering enhanced performance for representative N:M methods without extra inference burden. Notably, SpRe brings the performance of N:M sparsity methods to a level comparable with unstructured sparsity methods for the first time. Hopefully, this work provides a convincing trail for diving into the intrinsic properties of N:M sparsity.
## Acknowledgement
This work was supported by National Key R&D Program of China (No.2022ZD0118202), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No.2021J01002, No.2022J06001).
|
2310.09916 | Socially reactive navigation models for mobile robots in dynamic
environments | The objective of this work is to expand upon previous works, considering
socially acceptable behaviours within robot navigation and interaction, and
allow a robot to closely approach static and dynamic individuals or groups. The
space models developed in this dissertation are adaptive, that is, capable of
changing over time to accommodate the changing circumstances often existent
within a social environment. The space model's parameters' adaptation occurs
with the end goal of enabling a close interaction between humans and robots and
is thus capable of taking into account not only the arrangement of the groups,
but also the basic characteristics of the robot itself. This work also further
develops a preexisting approach pose estimation algorithm in order to better
guarantee the safety and comfort of the humans involved in the interaction, by
taking into account basic human sensibilities. The algorithms are integrated
into ROS's navigation system through the use of the $costmap2d$ and the
$move\_base$ packages. The space model adaptation is tested via comparative
evaluation against previous algorithms through the use of datasets. The entire
navigation system is then evaluated through both simulations (static and
dynamic) and real life situations (static). These experiments demonstrate that
the developed space model and approach pose estimation algorithms are capable
of enabling a robot to closely approach individual humans and groups, while
maintaining considerations for their comfort and sensibilities. | Ricarte Ribeiro, Plinio Moreno | 2023-10-15T18:55:55Z | http://arxiv.org/abs/2310.09916v1 | # Socially reactive navigation models for mobile robots in dynamic environments
###### Abstract
The objective of this work is to expand upon previous works, considering socially acceptable behaviours within robot navigation and interaction, and allow a robot to closely approach static and dynamic individuals or groups. The space models developed in this dissertation are adaptive, that is, capable of changing over time to accommodate the changing circumstances often existent within a social environment. The space model's parameters' adaptation occurs with the end goal of enabling a close interaction between humans and robots and is thus capable of taking into account not only the arrangement of the groups, but also the basic characteristics of the robot itself. This work also further develops a preexisting approach pose estimation algorithm in order to better guarantee the safety and comfort of the humans involved in the interaction, by taking into account basic human sensibilities. The algorithms are integrated into ROS's navigation system through the use of the \(costmap2d\) and the \(move\_base\) packages. The space model adaptation is tested via comparative evaluation against previous algorithms through the use of datasets. The entire navigation system is then evaluated through both simulations (static and dynamic) and real life situations (static). These experiments demonstrate that the developed space model and approach pose estimation algorithms are capable of enabling a robot to closely approach individual humans and groups, while maintaining considerations for their comfort and sensibilities.
Human-robot interaction, social robot, proxemics, adaptive space, dynamic environment
## I Introduction
With the continuous advancements in robotics, robots have been steadily moving away from purely industrial and controlled environments to environments populated by people of all walks of life. Many such robots already exist within public spaces and even people's own homes [1]. Few of these robots, though, are capable of properly accommodating a human's sensibilities. It is thus necessary to study the nature of social interaction and how to best adapt it to robotics.
The greater motivation for this dissertation, within the aforementioned context, is the importance of a robot within a social environment being capable of proactively interacting with an individual or a group of individuals in a socially acceptable manner. This requires that the personal and group spaces are modelled in a way to facilitate the robot's navigation for interaction purposes and to properly calculate a position for the robot to execute an approach and initiate this interaction, both in static and dynamic environments.
Most state-of-the-art methods utilize fixed parameters within their social navigation systems, which leads to a degree of inflexibility. The methods that do possess adaptive qualities are often restricted to static environments or utilize adaptations oriented towards human avoidance. In order to enable more comfortable human-robot interactions, several factors must be taken into account. First, humans have flexible considerations of their own personal or group spaces that are influenced by the situations they are inserted into. It is necessary for space models to be adaptive in order to properly address this factor. Secondly, a human within a social space is often in motion. Preparing a robot to take into account the dynamic nature of people is essential to guaranteeing their safety and comfort. Lastly, robots are variable in nature, especially in a constantly changing field such as social robotics. It is thus necessary to be able to adapt existing considerations to the spatial characteristics of the robot itself. The work presented in this dissertation thus aims to be able to adapt the personal and group space models according to these aspects and enable proactive human-robot interaction. The main problem is the selection of a proper pose for interaction with individuals and groups, ensuring as much safety and comfort as possible.
Several modules are necessary for this work, such as the proper detection of humans and their pose (position and orientation) and of the groups they might form, in order to properly determine what parameters suit each unique situation.
This work has the following contributions: (i) Flexible personal and group space models for human-robot interaction, that take into account dynamic considerations and the robot's characteristics, (ii) An adaptation to an approach pose estimator to ensure the safety of humans involved in social situations and the dynamic nature thereof, (iii) Changes to previously developed ROS packages to integrate these changes. These are: a package that provides ROS messages that contain group information 1, a package responsible for handling the calculation of the personal and group space parameters and their integration within a costmap2, and a package that estimates an approach pose for a chosen individual or group 3.
Footnote 1: [https://github.com/Ricarte-Ribeiro/group_msgs](https://github.com/Ricarte-Ribeiro/group_msgs)
Footnote 2: [https://github.com/Ricarte-Ribeiro/adaptive_social_layers](https://github.com/Ricarte-Ribeiro/adaptive_social_layers)
Footnote 3: [https://github.com/Ricarte-Ribeiro/approach_group](https://github.com/Ricarte-Ribeiro/approach_group)
The remainder of this document is organized as follows: Section II presents the background and related works, Section III describes the approach taken to the subject matter of the dissertation and presents preliminary results, Section IV describes how the algorithms are integrated into the navigation system and presents experimental results, and Section V presents the conclusions of this work.
## II Related Works
### _Background_
#### II-A1 Mapping
The common representation of obstacles and free space is the occupancy grid method used to fill out the map [2, 3]. It divides the space into a grid of evenly spaced cells represented by binary random variables. The variables, depending on their values, represent whether a cell is occupied and thus non-traversable. At each cell, its cost is computed from the detected obstacles. To increase performance, Lu et al. [4] introduced the multi-layered costmap, where each layer includes different information that can be merged in a custom manner by the user. This also allows only areas of the costmap that suffered changes to be updated, removing the need to fully update the master costmap.
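A toy sketch of this idea is shown below; the cost values and the element-wise-maximum merge rule are illustrative assumptions rather than the actual costmap_2d defaults.

```python
import numpy as np

FREE, LETHAL = 0, 254   # assumed cost convention for free and occupied cells

def obstacle_layer(shape, obstacle_cells):
    """One costmap layer: mark the grid cells where obstacles were detected."""
    layer = np.full(shape, FREE, dtype=np.uint8)
    for row, col in obstacle_cells:
        layer[row, col] = LETHAL
    return layer

def merge_layers(layers):
    """Combine layers into the master costmap by taking the per-cell maximum."""
    return np.maximum.reduce(layers)

static_layer = obstacle_layer((100, 100), [(10, 10), (10, 11)])
social_layer = np.zeros((100, 100), dtype=np.uint8)   # e.g. personal-space costs
master = merge_layers([static_layer, social_layer])
```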
#### II-A2 Navigation
Navigation occurs through the use of planners to calculate a path and translate it into velocity commands. There are two specific types of planners relevant to this work: global planners and local planners. Global planners take into account the entire breadth of the mapping information available to the robot and calculate a route through which it can reach its goal, but require a map. These algorithms are slower than their local counterparts. Local planners, on the other hand, can address any immediate problems the robot might face during its navigation, detected by its sensors, due to their smaller runtime.
#### Ii-A3 Proxemics
Hall [5], who coined the term proxemics, defined it as "the interrelated observations and theories of humans' use of space as a specialized elaboration of culture", and studied how humans manage the space around them during interpersonal communication.
Personal Space: Hall defined personal space as concentric circles around a human and divided it into four parts: (i) Intimate Space (\(x<0.45m\)), (ii) Personal Space (\(0.45<x<1.2m\)), (iii) Social Space (\(1.2<x<3.6m\)) and (iv) Public Space (\(x>3.6m\)).
Several studies since have examined different ways of shaping the personal space. [6] determined that people give greater importance to their front, and thus personal space is larger in that direction, giving it an egg shape. [7] proposed that personal space is defined as elliptical equipotential lines centered on the person and oriented in the direction of motion. [8] theorized that personal space could be smaller on the person's dominant side. [9] determined that personal space is not a constant and can shift in response to differing circumstances. [10] introduced the concept of a spatio-temporal personal space that can be adapted taking into account the person's velocity. The approach taken in this work is grounded in these last two works.
Group Space: During social interaction, multiple individuals can form a group which follows its own specific rules in terms of spatial interaction. This group can be defined through the knowledge of the people's positions and orientations and multiple forms of defining it have been studied. The most well known is Adam Kendon's Facing Formation (F-Formation) [11]. Kendon organizes an F-Formation into three different spaces, defined through concentric circles: o-space, the innermost space of the group, delimited by every member of the group and which no stranger may breach; p-space, the area of space after the o-space and where all members of a group are located; and r-space, the area surrounding o-space and p-space. It is the last area separating a group from the rest of the world and it is also where people must move to in order to leave or enter the group.
### _Human perception of a robot's approach_
[12] determined that people prefer a robot that approaches a group at a distance of around 80 to 100 \(cm\) from the center. They also noted that people prefer that the robot does not breach the intimate space of any of the group members and that the comfort of the approach may depend on which individuals the robot inserts itself between when entering the group. [13, 14] studied the approach direction of a robot and concluded that frontal approaches are preferred while rear approaches cause discomfort. [15] studied the comfort of a robot's approach direction with regard to a group of people and concluded that the approach direction does not matter as long as the robot is within the field of view of a group member.
### _Socially-Aware Robot Navigation - Literature Review_
#### Ii-C1 Space Models and approach pose estimation
[16, 17, 18, 19] model personal space through the merger of two 2D Gaussian functions, to represent the difference between the rear and frontal directions of the personal space. [16]'s model requires a lookup table that has to be adjusted a priori, as well as the capability to differentiate the age and gender of humans, in order to function properly. [17]'s model depends on velocity to determine orientation and thus remains a circle when the person is static. [18, 19] utilize adaptive parameters in order to create a flexible space model. [18]'s model can adapt to spatial context and human intention. [19] expands upon [18], enabling the adaptation to work on group space too, and further adapts the personal space of people in a group to ensure that there is no overlap between each individual's personal space. This last work is the primary inspiration for this dissertation.
[20] utilizes a skew-normal probability density function to define a personal space that can adapt to the certainty the robot possesses about a human's characteristics. It utilizes the four spaces defined in Section II-A3.
[21] defines the concept of Dynamic Social Zone (DSZ), defined by two parts. The first of them, the Extended Personal Space, expands Hall's idea of personal space by taking into account several new elements related to human sensibilities. The other part, the Social Interaction Space, covers the o-space of a group or the possibility of an interaction between a human and an object. This work also goes on to describe a method to determine approach poses by ensuring that they must be located within the field of view of the individuals and outside the DSZ. Later in [22], the concept is expanded to take into account the status of the human (sitting, standing, etc).
[23] presents a data-driven model to estimate an appropriate approaching pose. The model was trained on a dataset consisting of real-life situations. While the algorithm estimates approach poses with a good success rate, it is dependent on the size of the training set, and there is also the possibility of failure for a situation sufficiently different from any within it.
[24] utilizes the Fast Marching Square method (\(FM^{2}\)) [25] of path planning with an adaptation for social environments. This method lacks experimental validation with humans. There is also the issue that the approach method chosen does not take into account the directionality of the approach or more than the o-space of a group.
[26] extends the preexisting Risk-RRT method [27] of path planning with social considerations. Only two experiments are presented, both with F-formations in a vis-a-vis formation.
#### Ii-C2 Dynamic environments
[28] creates new costmap layers, each representing a predicted future human position, in sequential timesteps. The paper focuses purely on obstacle avoidance, and is thus not as useful when attempting to achieve an interaction pose.
The Social Force Model (SFM) [7] represents a social space as a set of repulsive forces from humans and obstacles, alongside an attractive force towards a given goal. While the SFM by itself would allow a robot to navigate in a dynamic environment and avoid people, it would not be able to execute an approach without significantly tweaking the force values of any approach target, greatly risking their safety.
The Hybrid Reciprocal Velocity Obstacle (HRVO) [29] is the result of successive expansions of the Velocity Obstacle (VO), which by itself represents the set of velocities that, if taken by an agent A, would lead to a collision with an agent B. It was expanded into the Reciprocal Velocity Obstacle (RVO) [30], which takes into account the reaction of agent B. The HRVO, which combines aspects of the VO and the RVO, is purely oriented towards obstacle avoidance.
[31] proposed a proactive social motion model that merged the concepts of SFM and HRVO. The merger expands both the SFM and HRVO to take into account not only individuals or objects, but also groups of people, human-object interactions, and human characteristics. The SFM is expanded to generate a repulsive force from these new elements. The HRVO on the other hand generates HRVOs from these new elements, forcing the robot to choose velocities that will not result in disturbing any of them. The two algorithms are then merged by utilizing the velocity that is generated as output from the HRVO as the desired velocity to generate the SFM's attractive force. This algorithm is utilized purely for avoidance and not adapted towards human approach.
[21] also tackles dynamic environments with the DSZ. An individual's frontal personal space is expanded proportionally to their velocity. [22] extends this change to the group space function and alters the approach pose prediction to be capable of taking movement into account. The limitation of this work lies in its use of fixed parameters to model personal space.
## III Approach-oriented adaptive space
The approach taken in this work expands upon [19], adding further considerations for the safety of humans and adaptations geared towards close interaction.
### _Space modeling_
The base personal space model utilized in this work is a modification of the model utilized in [19]. While the base 2D Asymmetric Gaussian function proposed in [17] considers different standard deviations for the positive and negative x-axis (if centered at the origin), to represent the front and rear of a person, it is still limited to equal deviations on the y-axis. This work alters the function to allow for four different deviations, creating a more flexible model:
\((x_{0},y_{0})\) is the center of the function;
\(A\) is the amplitude of the function;
\(\theta_{0}\) is the orientation of the function;
\(\sigma_{f}\) is the frontal standard deviation (\(\theta_{0}\));
\(\sigma_{r}\) is the rear standard deviation (\(\theta_{0}\));
\(\sigma_{sl}\) is the left standard deviation (\(\theta_{0}+\frac{\pi}{2}\));
\(\sigma_{sr}\) is the right standard deviation (\(\theta_{0}-\frac{\pi}{2}\)).
Algorithm 1 computes the value of the Altered Asymmetric Gaussian function, given the parameters, for a cell (\(x,y\)) given a person or group centered at (\(x_{0},y_{0}\)).
```
1: function AlteredAsymmetricGaussian(\(x,y,x_{0},y_{0},\theta_{0},A,\sigma_{f},\sigma_{r},\sigma_{sl},\sigma_{sr}\)):
2:\(\theta\longleftarrow\)\(atan(y-y_{0},x-x_{0})\);
3:\(\alpha\longleftarrow\theta-\theta_{0}\);
4: if \(abs(\alpha)\leq\frac{\pi}{2}\) then
5: \(\sigma_{x}\longleftarrow\sigma_{f}\)
6: else
7: \(\sigma_{x}\longleftarrow\sigma_{r}\)
8: end if
9: if \(\alpha<0\) then
10: \(\sigma_{y}\longleftarrow\sigma_{sr}\)
11: else
12: \(\sigma_{y}\longleftarrow\sigma_{sl}\)
13: end if
14:\(d\longleftarrow\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}}\)
15:return\(A\exp\left(-\left(\left(\frac{d\cos(\theta-\theta_{0})}{2\sigma_{x}}\right)^{2}+ \left(\frac{d\sin(\theta-\theta_{0})}{2\sigma_{y}}\right)^{2}\right)\right)\)
```
**Algorithm 1** Algorithm to compute the altered Asymmetric Gaussian function at a given cell (x,y)
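For reference, a minimal Python sketch of Algorithm 1 is given below; the function name and the explicit wrapping of the angle difference to \((-\pi,\pi]\) are illustrative choices rather than part of the original listing.

```
import math

def altered_asymmetric_gaussian(x, y, x0, y0, theta0, A,
                                sigma_f, sigma_r, sigma_sl, sigma_sr):
    """Value of the altered Asymmetric Gaussian at cell (x, y)."""
    theta = math.atan2(y - y0, x - x0)
    # Wrap the angle difference to (-pi, pi] so the front/rear test below behaves
    # as intended (an assumption; the listing writes alpha = theta - theta0 directly).
    alpha = math.atan2(math.sin(theta - theta0), math.cos(theta - theta0))
    sigma_x = sigma_f if abs(alpha) <= math.pi / 2 else sigma_r   # front vs. rear
    sigma_y = sigma_sr if alpha < 0 else sigma_sl                 # right vs. left side
    d = math.hypot(x - x0, y - y0)
    return A * math.exp(-((d * math.cos(theta - theta0) / (2 * sigma_x)) ** 2
                          + (d * math.sin(theta - theta0) / (2 * sigma_y)) ** 2))
```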
Given a person \(p_{i}\) positioned at (\(x_{i}^{P},y_{i}^{P}\)) and oriented towards \(\theta_{i}^{P}\) it is possible to calculate the value of their personal space for a position (\(x,y\)) through the use of Algorithm 1, if supplied the parameter set [\(A^{P},\sigma_{i\_f}^{P},\sigma_{i\_r}^{P},\sigma_{i\_sl}^{P},\sigma_{i\_sr}^{P}\)], specific to the person in question. These parameters will affect the shape and size of the personal space.
With a set of N individuals, the function that represents the merged personal space of every individual can thus be represented by:
\[F^{P}(x,y)=max(f_{1}^{P}(x,y),...,f_{N}^{P}(x,y)), \tag{1}\]
with \((f_{1}^{P}(x,y),...,f_{N}^{P}(x,y))\) representing the personal space functions of each individual.
Groups of humans can be divided into the spaces described in Section II-A3. This work also considers another space, introduced in [19] and named the group space, limited by a radius defined as the average of the distances of every group member from the group center. Group space is modelled utilizing the Altered Asymmetric Gaussian. A group \(k\) itself can be defined through the parameters \(g_{k}=(x_{k}^{g},y_{k}^{g},r_{k}^{g},\theta_{k}^{g})\) where \((x_{k}^{g},y_{k}^{g})\) represents the center of the group, \(r_{k}^{g}\) is the group radius, and
\(\theta_{k}^{g}\) is the orientation of the group, calculated as the average of the orientations of the velocity vector of every group member. This is modeled through the mentioned Gaussian with its center at the center of the group and defined by the parameters \([A^{g},\sigma_{k\_f}^{g},\sigma_{k\_r}^{g},\sigma_{k\_s}^{g}]\). \(\sigma_{k\_s}^{g}\) represents both side deviations as they do not differ in this case.
With a set of N groups, the function that represents the merged group space of every group is computed through:
\[F^{g}(x,y)=max(f_{1}^{g}(x,y),...,f_{N}^{g}(x,y)), \tag{2}\]
with \((f_{1}^{g}(x,y),...,f_{N}^{g}(x,y))\) representing the group space of every group.
By joining the function that computes every personal space and this function, the model of every individual and group within the space is obtained:
\[F(x,y)=max(F^{P}(x,y),F^{g}(x,y)) \tag{3}\]
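A short sketch of how Eqs. (1)-(3) can be combined in practice is given below; it assumes the `altered_asymmetric_gaussian` function from the previous listing and an assumed tuple layout for the per-person and per-group parameters.

```
def merged_space_value(x, y, people, groups):
    """F(x, y) of Eq. (3): maximum over all personal and group space functions.

    `people` and `groups` are lists of parameter tuples
    (x0, y0, theta0, A, sigma_f, sigma_r, sigma_sl, sigma_sr); for a group the
    single side deviation is simply passed twice.
    """
    values = [altered_asymmetric_gaussian(x, y, *p) for p in people]
    values += [altered_asymmetric_gaussian(x, y, *g) for g in groups]
    return max(values) if values else 0.0
```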
### _Adaptive space_
The space model adaptations developed in this dissertation can be divided into three different parts: (i) individual adaptation, (ii) group adaptation, and (iii) velocity adaptation.
#### Iii-B1 Individual adaptation
The adaptation applied to an individual is based on the idea that a robot must approach a human being closer than the edge of its personal space in order to properly interact with them. As most approaches to lone humans should be done from within their field of view (and thus from their front) [13, 14] the only adaptation applied to their personal space model is to lower the value of the frontal standard deviation \(\sigma_{f}^{P}\) by a preset value \(\zeta\). The change is thus simply computed through this calculation, for a given approach target \(i\):
\[\sigma_{aux}=max(0.45,\sigma_{f}^{P}-\zeta), \tag{4}\]
\[\sigma_{i\_f}^{P}=min(\sigma_{f}^{P},\sigma_{aux}), \tag{5}\]
where the lowered value is compared to the threshold of the intimate space as defined by Hall [5] in order to guarantee the sanctity of the person's intimate space.
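The individual adaptation of Eqs. (4)-(5) amounts to the small helper sketched below; the default value of \(0.45\,m\) is Hall's intimate-space radius.

```
def adapt_frontal_deviation(sigma_f, zeta, intimate_radius=0.45):
    """Eqs. (4)-(5): shrink the frontal deviation of the approach target,
    but never below the intimate-space radius."""
    sigma_aux = max(intimate_radius, sigma_f - zeta)
    return min(sigma_f, sigma_aux)
```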
#### Iii-B2 Group adaptation
The adaptation to group space is done not to the parameters of the group model itself, but to the parameters of each group member individually. The adaptation starts from the parameters provided by the previous algorithm [19] and attempts to determine if it is possible to lower the values of the lateral standard deviations of each personal space model. Each personal space calculation requires, as input, the Gaussian parameters of an individual \(\sigma^{P}\) =(\(A^{P},\sigma_{f}^{P},\sigma_{r}^{P},\sigma_{sl}^{P},\sigma_{sr}^{P}\)), their pose (\(x^{P},y^{P},\theta^{P}\)) and only these last parameters from the individuals to their left (\(x_{l}^{P},y_{l}^{P},\theta_{l}^{P}\)) and to their right (\(x_{r}^{P},y_{r}^{P},\theta_{r}^{P}\)) in the group. In a group of two individuals only one of these will be considered.
In order to prevent collisions between the robot and any of the individuals, points are calculated to the side of each individual, in a line perpendicular to their orientation, at a distance (\(s_{h}\)) determined by user inputted values.
The auxiliary points are calculated as such:
\[A_{left}=\left(x^{P}+s_{h}\cdot\cos\left(\theta^{P}+\frac{\pi}{2}\right),y^{ P}+s_{h}\cdot\sin\left(\theta^{P}+\frac{\pi}{2}\right)\right)\!, \tag{6}\]
\[A_{right}=\left(x^{P}+s_{h}\cdot\cos\left(\theta^{P}-\frac{\pi}{2}\right),y^{ P}+s_{h}\cdot\sin\left(\theta^{P}-\frac{\pi}{2}\right)\right)\!, \tag{7}\]
\[A_{l\_adj}=\left(x_{l}^{P}+s_{h}\cdot\cos\left(\theta_{l}^{P}-\frac{\pi}{2} \right),y_{l}^{P}+s_{h}\cdot\sin\left(\theta_{l}^{P}-\frac{\pi}{2}\right)\right)\!, \tag{8}\]
\[A_{r\_adj}=\left(x_{r}^{P}+s_{h}\cdot\cos\left(\theta_{r}^{P}+\frac{\pi}{2} \right),y_{r}^{P}+s_{h}\cdot\sin\left(\theta_{r}^{P}+\frac{\pi}{2}\right)\right)\!. \tag{9}\]
The algorithm then calculates the distance between each pair of points and determines whether there is space for the robot to insert itself in between, by comparing the distances to a provided value for the lateral dimension of the robot (\(s_{r}\)). The distances calculated are Euclidean:
\[d_{left}=\sqrt{(A_{left}-A_{l\_adj})^{2}} \tag{10}\]
\[d_{right}=\sqrt{(A_{right}-A_{r\_adj})^{2}} \tag{11}\]
It also checks whether the difference in orientation between the two individuals lies between two user-defined values. Should the answer be positive, it will determine whether the current lateral standard deviations permit an approach and adapt them should they not. This is done by calculating a point (\(B_{aux}\)) at the minimum distance from the relevant auxiliary point (\(A_{left}\) or \(A_{right}\)) that complies with the space requirements, projecting it on a line perpendicular to the person's direction, and then measuring the distance between the projection and the person. Should this distance be less than the preexisting standard deviation, it is replaced. For the case between an individual and the one on their left, this can be described as:
\[\mathbf{v_{aux}}=\frac{(A_{left}-A_{l\_adj})}{d_{left}}, \tag{12}\]
\[B_{aux}=A_{left}+\frac{(d_{left}-s_{r})}{2}\cdot\mathbf{v_{aux}}, \tag{13}\]
\[d_{aux}=\sqrt{((x^{P},y^{P})-P_{proj})^{2}}, \tag{14}\]
\[\sigma_{sl}^{P}=min(\sigma_{sl}^{P},d_{aux}), \tag{15}\]
\[\sigma_{sl}^{P}=max(\sigma_{sl}^{P},0.225), \tag{16}\]
where \(P_{proj}\) is the projection of \(B_{aux}\) on the line perpendicular to the person's orientation. The same process is done for the right side to obtain \(\sigma_{sr}^{P}\). The value \(0.225\,m\) represents the average value of half the lateral size of a human.
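As an illustration, a simplified Python sketch of the group adaptation for one side of one person is given below. It follows Eqs. (6)-(16) but omits the orientation-difference check described above; the early return when the gap is too small for the robot, as well as the variable names, are assumptions of the sketch.

```
import math

def adapt_lateral_deviation(person, neighbor, sigma_side, s_h, s_r,
                            side=+1, min_sigma=0.225):
    """Adapt one lateral standard deviation of `person` given the `neighbor`
    on that side.  Poses are (x, y, theta); side=+1 means the neighbor is on
    the person's left, side=-1 on the right."""
    x, y, th = person
    xn, yn, thn = neighbor
    # Auxiliary points beside each person, perpendicular to their orientation
    # (Eqs. (6)-(9)).
    a_self = (x + s_h * math.cos(th + side * math.pi / 2),
              y + s_h * math.sin(th + side * math.pi / 2))
    a_neigh = (xn + s_h * math.cos(thn - side * math.pi / 2),
               yn + s_h * math.sin(thn - side * math.pi / 2))
    dist = math.hypot(a_self[0] - a_neigh[0], a_self[1] - a_neigh[1])
    if dist <= s_r:
        return sigma_side  # no room for the robot; keep the current deviation
    # B_aux shifted from a_self by half the free space (Eqs. (12)-(13)).
    v = ((a_self[0] - a_neigh[0]) / dist, (a_self[1] - a_neigh[1]) / dist)
    b_aux = (a_self[0] + (dist - s_r) / 2 * v[0],
             a_self[1] + (dist - s_r) / 2 * v[1])
    # Distance from the person to the projection of B_aux on the line
    # perpendicular to the person's orientation (Eq. (14)).
    n_hat = (-math.sin(th), math.cos(th))
    d_aux = abs((b_aux[0] - x) * n_hat[0] + (b_aux[1] - y) * n_hat[1])
    # Eqs. (15)-(16): lower the deviation if possible, but not below min_sigma.
    return max(min(sigma_side, d_aux), min_sigma)
```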
#### Iii-B3 Velocity adaptation
It is necessary to adapt the space models to take into account human velocity. The choice made in this work is to increase the frontal standard deviation. The adaptation has three parameters: the adaptation factor (\(a_{adapt}\)), the maximum adaptation limit (\(a_{limit}\)) and the distance limit (\(d_{limit}\)). The first parameter determines the proportion by which the velocity affects the standard deviation. The second one provides an upper limit for the adaptation. The distance limit lowers the adaptation as the robot approaches the individual or group. The same algorithm is applied to groups and individuals and is thus only described once, with the input parameters (\(x^{Pg},y^{Pg},\sigma_{f}^{Pg}\)) representing the position and the frontal standard deviation. A group's velocity is calculated by averaging the velocity of all its members.
\[d_{mod}=\begin{cases}min\left(1,\frac{2d}{d_{limit}}\right)&\text{if }d\leq d_{limit}\\ 1&\text{if }d>d_{limit}\end{cases} \tag{17}\]
\[\sigma_{f}^{Pg}=min\Big{(}\sigma_{f}^{Pg}+a_{limit},\sigma_{f}^{Pg}\cdot(1+d _{mod}\cdot a_{adapt}\cdot v_{mag})\Big{)} \tag{18}\]
where \(d\) is the distance between the robot and an individual or a group and \(v_{mag}\) is the magnitude of the human velocity.
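Eqs. (17)-(18) translate into the short helper below; this is a sketch, and the same function would be applied to an individual or to a group by passing the corresponding frontal deviation and averaged velocity.

```
def adapt_frontal_to_velocity(sigma_f, v_mag, d, a_adapt, a_limit, d_limit):
    """Eqs. (17)-(18): grow the frontal deviation with the human's speed,
    with the effect damped when the robot is closer than d_limit."""
    d_mod = min(1.0, 2.0 * d / d_limit) if d <= d_limit else 1.0
    return min(sigma_f + a_limit, sigma_f * (1.0 + d_mod * a_adapt * v_mag))
```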
### _Approach pose estimation_
Utilizing the space models calculated through the algorithms presented in the previous section, it is possible to estimate possible approach poses. The algorithm utilized is mostly based on the one defined in [19] with changes and additions that will be described in this section. These changes can be separated into two types: (i) safety and comfort oriented changes, and (ii) dynamic environment changes.
#### Iii-C1 Safety and comfort
The changes in this section intend to enable a safe and comfortable approach for the people being approached. The first change is to attempt an approach within the field of view of all involved people. This is done by filtering the approach circumference for areas within the common field of view of every individual. Should no valid zones be detected with this condition, the algorithm defaults to the original filtering behaviour, ignoring the field of view as a factor.
The preexisting algorithm does not check whether the robot fits in the approach zone. With a preset robot size as input, alongside the list of all approach zones found, the width of each zone is evaluated by calculating the distance between the two farthest points within the zone, and that distance is then compared to the preset size. Should the distance be greater, the zone is considered valid for approach. Among all valid poses, the approach pose chosen is the one closest to the robot.
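A minimal sketch of this width check is shown below; the representation of a zone as a list of \((x,y)\) cells is an assumption of the sketch.

```
import itertools
import math

def filter_zones_by_width(zones, robot_size):
    """Keep only approach zones wide enough for the robot; the width of a zone
    is taken as the distance between its two farthest points."""
    valid = []
    for zone in zones:                      # zone: list of (x, y) cells
        if len(zone) < 2:
            continue
        width = max(math.hypot(ax - bx, ay - by)
                    for (ax, ay), (bx, by) in itertools.combinations(zone, 2))
        if width > robot_size:
            valid.append(zone)
    return valid
```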
#### Iii-C2 Dynamic environment adaptation
The first change in this section mirrors Section III-B3. The maximum limit the approach circumference is allowed to expand to is raised proportionally to the magnitude of the individual's or group's velocity, limited by a preset value. This change is also more limited the closer the robot is to the approach target. All equations applied in Section III-B3 apply to this section with the standard deviation replaced by the maximum limit of the approach circumference radius. For the purposes of differentiation, the approach pose equivalents of the variables presented in III-B3 are named: \(d_{a\_limit}\), \(a_{a\_limit}\), and \(a_{a\_adapt}\).
A dynamic environment grants the robot less time for calculations, as the movement of a human quickly renders them obsolete. The robot needs to quickly evaluate possible approach positions. This is done by increasing the expansion step taken when no valid approach zone is found proportionally to the velocity. This can be defined as such:
\[step=max(step,step\cdot v_{mag}\cdot v_{mod}), \tag{19}\]
where \(v_{mod}\) is a user-defined value that determines how much the velocity affects the step.
The last change applies to the field of view defined in the previous subsection. The expanded space model leads to farther and thus wider approach zones. This could lead to lateral approach poses being chosen, which could be suboptimal. It is thus necessary to narrow the field of view the farther away the current approach circumference is from its initial radius. This can be calculated as such:
\[fov=\frac{f_{ifov}}{f_{mod}\cdot\frac{approach\_radius}{group\_radius}}, \tag{20}\]
where \(f_{ifov}\) is the initial angle of the field of view and \(f_{mod}\) is a user-defined value that determines how much the distance from the initial radius narrows the field of view.
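The two dynamic adjustments of Eqs. (19)-(20) reduce to the short helpers below (a sketch with illustrative names).

```
def adapt_search_step(step, v_mag, v_mod):
    """Eq. (19): enlarge the radius-expansion step with the target's speed."""
    return max(step, step * v_mag * v_mod)

def narrow_field_of_view(f_ifov, f_mod, approach_radius, group_radius):
    """Eq. (20): narrow the admissible field of view as the approach
    circumference grows beyond the group radius."""
    return f_ifov / (f_mod * approach_radius / group_radius)
```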
### _Results_
In this section, a preliminary study of the space model adaptation algorithm is presented, through a comparison with the results obtained when utilizing the algorithm presented in [19]. The objective is to evaluate, on an initial basis, how the algorithm performs on existing datasets and how it compares with the preexisting work.
#### Iii-D1 Visualization
A dataset comprising 17 situations was utilized for an initial evaluation. Due to the nature of the algorithm, not all configurations will result in an adaptation, with only 8 of the 17 being adapted. Successful situations tend to be those with enough members that a space model adaptation is justified, but also enough free space between each individual to guarantee a safe approach. Three of the cases present in the dataset are variations of the same situation: 4 people placed equidistantly from each other on a circle around the group center, with the radius of that circle varying between cases. These radii are \(0.5m\), \(0.75m\) and \(1m\). Only the second case was successfully adapted. The first case failed due to the individuals being too close together, creating a situation in which there were no safe approach poses. In the third case, the individuals were separated enough that safe approach poses existed by default and nothing was adapted. In the second case, depicted in Fig. 1, the individuals were far enough from each other that safe approach poses could be determined, but the original space model blocked these, leading to the adaptation by the new algorithm.
#### Iii-D2 Comparative Study
For a more formal comparative study of the adaptation algorithm, it is necessary to utilize larger datasets to obtain representative results. The two datasets utilized were the same as those used in the comparative work: **Synthetic Data**[32] and **IDIAP Poster Data**[33]. Table I describes the composition of the groups present within the datasets.
Both algorithms were fed the datasets as input and an analysis of the resultant approach perimeter was done. Several
iterations of the algorithm were executed in order to allow for a test of various possible scenarios and tolerances, resulting from the variability of the algorithm's input parameters. These iterations are defined by changes in the \(s_{r}\) and \(s_{h}\) parameters representing the size of the robot that would be expected to approach the groups and the desired minimum safety tolerance for the distance between the robot and the person, respectively. \(s_{r}\) has two values, \(0.45m\) and \(0.8m\). These values represent, respectively, a roughly human-sized robot, and a size approaching that of the Vizzy robot. \(s_{h}\) varies from \(0.225m\) to \(0.45m\), the minimum allowed value for the space model per the original algorithm and the value for intimate space radius per Hall, respectively.
Figs. 2 to 4 represent the results of the experiment for a human-sized robot with the **IDIAP Poster Data** dataset. As can be observed by analysing the plots, the algorithm developed in this work does increase the approach perimeter in comparison to the original algorithm. As would be expected, loosening the tolerance in the distance allowed between the robot and the human results in a larger perimeter. At its greatest increase with \(s_{h}=0.225m\), the algorithm has a total increase in perimeter of \(10.7\%\). As the situation with the starkest differences, it is this minimum tolerance experiment that will be analysed in depth, though all the rest follow its trends and thus similar results can be extrapolated.
By observing Fig. 4, it is possible to note that the increases were not uniform for every group type. Groups with three members were the ones most affected at \(15\%\), followed by groups of four, five, two, seven and lastly six members. The groups with six and seven members cannot be used to draw any viable conclusions, due to the low number of samples. It is apparent that groups of only two members had the smallest percentage increase of these four sizes. This is due to groups of two members being unlikely to be positioned in a manner in which adaptation is viable or necessary. Most of the common shapes for these groups (vis-a-vis or side-by-side and similar) would not result in any adaptation. The remaining common forms of group arrangement (L-shape, C-shape or V-shape) often will also not result in adaptation, as the available approach zone is already large enough.
Groups of three members can be concluded to be the most viable for adaptation. This can be attributed to a combination of factors: they have enough members to avoid the situation observed with groups of two, they tend more towards circular formations, and, having fewer members than groups of four or five people, they leave enough free space for the algorithm to find viable spaces to adapt.
The other three scenarios (**IDIAP Poster Data** dataset with a Vizzy-sized robot and **Synthetic Data** dataset with Human- and Vizzy-sized robots) present results with similar trends to the ones already presented, with justifiable variations. The **IDIAP Poster Data** dataset with a Vizzy-sized robot results in a smaller total increase in perimeter, at \(5.4\%\), due to the larger robot being able to safely approach a smaller number of groups. The cases utilizing the **Synthetic Data** dataset have even smaller increases, at \(5\%\) and \(3.6\%\) for the human- and Vizzy-sized robots respectively, due to the larger number of groups with two members and the smaller number of groups with three members, which, following the previous conclusions, is an expected result.
## IV Socially Reactive Navigation System
It is necessary to implement the space model and approach pose estimation and integrate them with the robot's navigation system. A socially aware navigation system runs parallel to the classic robot navigation framework. The usual robot navigation framework is composed of four blocks: perception, localization, motion planning and motion control. The socially aware navigation system adds the following four blocks: human detection and tracking, group detection, modeling of space and approach pose estimation. These last two blocks correspond to the main contribution of this work. The system was implemented utilizing the Robot Operating System (ROS).
Fig. 1: Visualization of the group space and personal space pertaining to Group 17 of the Groups Data dataset, alongside possible approach poses. This group's members are all at a distance of \(0.75m\) from the center of the group.
Fig. 3: Perimeter sum divided by number of group members for Human-sized robot - IDIAP dataset
Fig. 4: Percentage of increase in perimeter sum compared to old algorithm in Human-sized robot - IDIAP dataset
### _Detection of Social Scenarios_
#### Iv-A1 Human Detection
It is necessary for the robot to be able to detect humans in order to properly integrate them into the navigation system. This work makes use of OpenPose [34], a real-time multi-person 2D pose detection system, to identify humans. It detects humans by identifying body parts and associating them with individuals via a non-parametric representation named Part Affinity Fields. Given the 2D pose, a method developed in [35] is then capable of utilizing homography to determine the 3D pose.
#### Iv-A2 Group Detection
To properly address groups of people it is then necessary to be able to identify them, given the poses of people in the environment. [36] proposes a hierarchical clustering method to identify groups, taking as input the people's poses and outputting the identified groups.
### _Space Modeling_
This work makes use of the \(costmap\_2d\) ROS package to implement the space models described previously. It utilizes the layered costmap method and considers the following standard layers: (i) _Static Layer_, (ii) _Obstacles Layer_, and (iii) _Inflation Layer_. This work also makes use of two custom layers developed in [19], the _Clean People Layer_ and the _Adaptive Layer_. The first of these removes people from the costmap before inflation by the _Inflation Layer_, while the _Adaptive Layer_ implements the space models themselves. Contrary to the original implementation of this last layer, the human body isn't marked entirely as lethal, to guarantee the proper decay of the Gaussian function that represents the space model.
The layers are implemented in the same order as in [19]: (i) _Master_, (ii) _Adaptive Layer_, (iii) _Inflation Layer_, (iv) _Clean People Layer_, (v) _Obstacles Layer_, and (vi) _Static Layer_.
### _Approach Pose Estimation_
The approach pose algorithm is implemented as described in Section III-C. Approach poses are checked by verifying the cells along the group radius: a cell is marked as free to approach if its value is below a user-defined threshold, and is then added to the possible approaching area. For lack of software that can keep constant track of specific individuals or groups, the desired approach target is also tracked by finding the group or individual closest to the original target and checking that it lies within a threshold distance.
### _Evaluation_
In order to properly evaluate some of the presented scenarios it is necessary to utilize criteria geared towards analysing social scenarios. To this end, a set of Human Safety and Comfort Indexes (HSCIs) [22] was chosen. These indexes evaluate human comfort and approach direction and are: (i) Social Individual Index (SII), (ii) Social Group Index (SGI), and (iii) Social Direction Index (SDI).
The higher the values of SII and SGI, the closer the robot is to breaching the personal or group spaces, respectively. Nonetheless, somewhat higher values can be desirable, as they show that the robot is capable of approaching humans closely enough for interaction without causing discomfort. Greater values of SDI simply signify that the robot's and the human's orientations are closer to being aligned. [22] defined \(0.14\) as the upper threshold that the SII and SGI should not surpass in order to guarantee the comfort of a human.
### _Results_
This section evaluates the performance of the algorithms in several scenarios, both in simulation and on a real robotic platform, to demonstrate that the robot is capable of navigating a social environment and safely interacting with humans. While the static scenarios are evaluated with the real robot, due to hardware and software constraints the dynamic scenarios must be evaluated in simulation.
Three ROS packages, originally developed in [19], were altered to implement the algorithms as presented in this work.
#### Iv-E1 Simulation Experiments
The simulations were run in the Gazebo simulator, utilizing the robotic platform Vizzy [37]. The simulation environment is an empty replica of the 7th floor of the North Tower at ISR Lisbon. The parameters utilized to initialize the original adaptive space algorithm were equal to those utilized in Melo's [19] own simulation experiments, with the exception of the amplitude of the Gaussian function, which was set to 211. The following parameters were defined for the experiments: \(\zeta=0.3m\), \(s_{h}=0.375m\), \(s_{r}=0.8m\), \(d_{limit}=d_{a\_limit}=6m\), \(a_{adapt}=a_{a\_adapt}=1.5\), \(a_{limit}=1m\), \(a_{a\_limit}=1.2m\), \(v_{mod}=10\), \(f_{ifov}=90^{\circ}\), and \(f_{mod}=1.1\). The simulations were run on a computer with a 2.8 GHz Quad-Core Intel Core i7 processor and 16 GB of 2400 MHz DDR4 RAM, running Ubuntu 18.04 and ROS Melodic. The simulator was Gazebo 9.
**Static Environments**: The models representing people remain in place throughout the entire duration of these experiments, with the robot being the only mobile element as it executed an approach. The experiments were divided into an individual approach experiment and a group approach experiment, and each would be run twice, first without space model adaptation and the second time with.
The individual experiment consisted simply of a single human model placed in front and to the right of the robot, near one of the room's walls, which the robot was then commanded to approach.
The only significant difference in the results is the closer approach in the adapted experiment, enabling more direct and involved interaction. These results are corroborated by the HSCIs extracted during the experiments in Fig. 5, where it is possible to see that the SDIs are similar, while the final value of the SII in the iteration with space model adaptation is higher than in the one without, yet still below the comfort threshold.
In the group experiment, two groups of 4 people were created and the robot was commanded to approach one of
them. In the case without adaptation, no approach pose was identified as valid. Utilizing this work's adaptation, the space model was changed enough to allow for a safe approach. From an analysis of the HSCIs in Fig. 5 of the successful approach it is possible to see that the indexes were kept below the comfort threshold at all times apart from a moment in the middle of the approach. The SII went slightly over the comfort threshold when the robot moved behind one of the group members on the path to the approach pose. This breach is less worrying than a usual one, as humans are less likely to notice breaches of their personal space when these happen behind them.
**Dynamic Environments**: Simulations in a dynamic environment were divided into two different scenarios: approaching a lone person, or approaching a group of two people standing side-by-side. In both of these scenarios, the human models move forward at a constant velocity until they reach a specific distance threshold to the robot. All elements of these experiments are controlled via either an automated script or the ROS navigation. Two different starting configurations are considered for both the individual and group experiments: at the lowest velocity the robot begins its approach at an angle to the humans, while at the two higher velocities it begins directly in front of them.
In order to properly compare the performance of the velocity adaptation algorithms against the algorithm without these, each scenario was repeated four times with a different configuration of the algorithms used. The configurations are as follows: (i) space model and approach pose estimation without velocity adaptation, (ii) space model without adaptation and approach pose estimation with adaptation, (iii) space model with velocity adaptation and approach pose estimation without adaptation, and (iv) space model and approach pose estimation with velocity adaptation.
The results from the first two configurations tend to be rather similar, as they are effectively running the same algorithm. On the other hand, the space model adaptation without approach pose estimation adaptation is completely nonfunctional, as the estimation is incapable of consistently finding a proper approach pose without being adapted as well. The configuration with both adaptations has a more consistent performance than the other three. This is noticeable in both individual and group experiments. Videos of these experiments are available.45
Footnote 4: Individual Approach Experiment - [https://youtu.be/XZ7G7S1gtDk](https://youtu.be/XZ7G7S1gtDk)
Footnote 5: Group Approach Experiment - [https://youtu.be/gjiD-j1-r44](https://youtu.be/gjiD-j1-r44)
The HSCIs presented in Figs. 6 and 7 corroborate the conclusions from the previous paragraph, proving the greater reliability of the algorithm developed in this work.
#### Iv-E2 Real life experiments
Real life experiments are separated into two sections: experiments run with a single individual and experiments run with a group. In order to run the experiments, all the algorithms were implemented on the previously mentioned robotic platform, Vizzy [37].
Individual experiments were divided into two similar parts, though one evaluated preference while the other served to evaluate distance.
The preference part of the experiment made use of 19 volunteers (12 male, 6 female and 1 who preferred not to say). They were asked to stand in front of the robot and were given a map. They were instructed to wait for the robot to approach them and, once it stopped, to show it the map as though it were seeking directions. Upon finishing this task the participants were asked to fill in a Godspeed-style questionnaire [38], composed of 10 questions on an altered scale of 1 to 9, and then the task was repeated. They were then presented with the same questionnaire plus a final question asking in which of the two experiments it was easier to accomplish the given task, with the option to answer 'Experiment
Fig. 5: Human Safety and Comfort Indexes from the static simulation experiments
Fig. 6: Human Indexes for the dynamic individual approach experiment. Blue represents the lateral approach at \(0.5m/s\), while green and red are the frontal approaches, at \(1m/s\) and \(1.5m/s\) respectively. The colour progression, from lighter to darker is equal for all experiments.
Fig. 7: Human Indexes for the dynamic group approach experiment. Blue represents the lateral approach at \(0.5m/s\), while green and red are the frontal approaches, at \(1m/s\) and \(1.5m/s\) respectively. The colour progression, from lighter to darker is equal for all experiments.
1', 'Experiment 2' or 'Neither'. The first iteration utilized the non-adaptive space model and approach pose estimation, while the second used the algorithms developed in this work.
The results of the Godspeed questionnaire are displayed in Fig. 8. These results were then compiled and statistical tests were applied to decide whether any of them presented a statistically significant difference. The t-test and the Mann-Whitney U test were utilized. The results of these tests are displayed in Table II. The null hypothesis being tested is that the distributions of the samples belonging to the two experiments are the same, and thus possess no relevant statistical difference.
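For illustration, such tests can be run, for example, with SciPy; the score arrays below are hypothetical placeholders, not the collected data.

```
import numpy as np
from scipy import stats

# Hypothetical scores (1-9) for one questionnaire item under each condition.
scores_original = np.array([5, 6, 4, 7, 5, 6, 5, 4, 6, 5])
scores_adaptive = np.array([7, 6, 6, 8, 7, 6, 7, 5, 7, 6])

t_stat, t_p = stats.ttest_ind(scores_adaptive, scores_original)
u_stat, u_p = stats.mannwhitneyu(scores_adaptive, scores_original,
                                 alternative="two-sided")
print(f"t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
```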
While all the questions related to positive words resulted in a higher average for the algorithm developed in this work, and those related to negative words in a lower average, suggesting that the current algorithm may have been subconsciously preferred to the original one, only one of the questions had a p-value low enough to be considered statistically significant.
The final question provided a clearer result with \(52.6\%\) of the participants preferring the current algorithm, while only \(15.8\%\) preferred the original.
The distance part of the experiment used the same procedure as before, with the author of this work as the participant and without the questionnaire. It was run 25 times with each algorithm, with the purpose of obtaining a concrete measure of the distance between the robot and the person after the approach and of collecting the HSCIs.
The results of this part were evaluated by measuring the mean (\(\mu\)) and standard deviation (\(\sigma\)) of the final distance between the robot and the individual once the approach was concluded. The HSCIs were also measured to guarantee that the approach would not provoke discomfort. The non-adaptive algorithm (\(\mu=1.265\,m\) and \(\sigma=0.0470\,m\)) showed a clear difference in approach distance from the adaptive one (\(\mu=1.052\,m\) and \(\sigma=0.0295\,m\)). The Mann-Whitney U test reported a U statistic of 1 and a p-value of \(7.983\times 10^{-10}\), confirming the statistical significance of the difference between the two experiments.
The real-life group experiments were conducted by asking volunteer participants to stand in specific positions and wait for the robot to approach them. The intent of this experiment was solely to test the changes made to the approach pose estimation algorithm. Experiments were run with groups of two and three people, and the final approach pose relative to the group's center was noted down. The robot started each experiment behind the group so as to increase the difficulty of a proper approach. In both experiments, the original algorithm was incapable of choosing a proper approach pose, disturbing the humans. The developed algorithm was capable of identifying the approach pose chosen by the other algorithm as unsafe and of approaching the group through a proper approach pose. A video of some experiments with the current algorithm is available. 6
Footnote 6: Real Life Group Approach Experiments - [https://youtu.be/qWoHw9TFrE](https://youtu.be/qWoHw9TFrE)
## V Conclusions
The objective of this work is to implement a general socially reactive navigation system capable of enabling a robot to proactively approach and interact with individuals or groups of humans, while taking into account the comfort of the humans. The system is capable of adapting personal and group spaces taking into account a group's arrangement and the velocity of the humans, as well as making minor adaptations as necessary to enable a robot to approach and interact with a human. The approach pose estimation can also take into account the dynamic nature of a human and determine whether a given approach pose is valid for a specific robot.
The group space adaptations were initially tested through the use of static datasets. The experiments demonstrated increased performance compared to the original algorithm, but also demonstrated that the implemented adaptations depend on the size and form of a group. The experiments show that the algorithm is capable of enabling a robot to approach groups it would otherwise have been incapable of approaching.
The socially reactive navigation system was then implemented in ROS, via use of the navigation package, and the adaptive spaces implemented utilizing the ROS costmap.
Experiments for static environments were done in both simulated and real life environments, and demonstrated that the algorithm could enable a robot to approach and interact with human beings in a socially acceptable manner, without running the risk of causing discomfort. The experiments for dynamic environments were limited to simulated environments, but were sufficient to make an initial demonstration of the advantages of the algorithm.
There are, nonetheless, several limitations to the algorithm. It is incapable of taking into account human-object interactions. It is also incapable of determining whether a given valid approach pose is better than another. The velocity adaptation method utilized is also simplistic and unlikely to deal with situations more complex than a linearly moving person. The algorithm has also proven to be under-optimized in several aspects, such as the new costmap layer updates, and can be too slow to properly handle real dynamic situations.
Fig. 8: Responses to the Godspeed-style Questionnaire
|
2304.04375 | Note on the Time Dilation of Charged Quantum Clocks | We derive the time dilation formula for charged quantum clocks in
electromagnetic fields. As a concrete example of non-inertial motion, we
consider a cyclotron motion in a uniform magnetic field. Applying the time
dilation formula to coherent state of the charged quantum clock, we evaluate
the time dilation quantum-mechanically. | Takeshi Chiba, Shunichiro Kinoshita | 2023-04-10T04:20:16Z | http://arxiv.org/abs/2304.04375v2 | # Note on the Time Dilation for Charged Quantum Clocks
###### Abstract
We derive the time dilation formula for charged quantum clocks in electromagnetic fields. As a concrete example of non-inertial motion, we consider a cyclotron motion in a uniform magnetic field. Applying the time dilation formula to coherent state of the charged quantum clock, we evaluate the time dilation quantum-mechanically.
## I Introduction
Motivated by tests of the weak equivalence principle in the quantum regime, in our previous study we derived a formula for the averaged proper time read by one clock conditioned on another clock reading a different proper time in a weak gravitational field [1]. The time dilation measured by these quantum clocks is found to have the same form as that in classical relativity. There, clocks are assumed to be in inertial motion and their classical trajectories are geodesics of the spacetime, that is, the clocks are always free-falling.
Then, it would also be interesting to study what would happen for clocks in non-inertial motion. Any non-inertial motion should be caused by an external force other than the gravitational interaction. In order to study the effect of non-inertial motion on the time dilation of quantum clocks, we consider charged quantum clocks interacting with external electromagnetic fields. The study of a quantum charged particle is also interesting in the light of quantum mechanics in a rotating frame, because there exists a close analogy between the motion in a rotating frame and the motion in a magnetic field [2]. Also, a new class of optical clocks based on highly charged ions has received interest in recent years as references for highest-accuracy clocks and precision tests of fundamental physics [3; 4]. Such an optical clock based on a highly charged ion was recently realized [4]. Our study may be applicable to such clocks.
The paper is organized as follows. In Sec. II, we derive the time dilation formula for charged particles in electromagnetic fields and weak gravitational fields as the average of a proper time observable for a quantum clock. In Sec. III, as an example of non-inertial motion, we consider the cyclotron motion in a uniform magnetic field and evaluate the quantum time dilation using a coherent state. In Appendix A, we summarize several results on the coherent state for the cyclotron motion in quantum mechanics.
## II Charged Quantum Clock Particles in Spacetime
### Classical Particles
We consider a system of \(N\) charged massive particles. Each particle whose mass and charge are \(m_{n}\) and \(q_{n}\) (\(n=1,\ldots,N\)) has a set of internal degrees of freedom, labeled by the configuration variables \(\chi_{n}\) and their conjugate momenta \(P_{\chi_{n}}\)[5]. These internal degrees of freedom are supposed to represent the quantum clock.
The action of such a system in a curved spacetime with the metric \(g_{\mu\nu}\) and an electromagnetic field \(A_{\mu}\) is given by
\[S=\sum_{n}\int d\tau_{n}\left(-m_{n}c^{2}+q_{n}A_{\mu}\frac{dx_{n}^{\mu}}{d \tau_{n}}+P_{\chi_{n}}\frac{d\chi_{n}}{d\tau_{n}}-H_{n}^{\rm clock}\right), \tag{1}\]
where \(\tau_{n}\) is the proper time of the \(n\)th particle and \(H_{n}^{\rm clock}=H_{n}^{\rm clock}(\chi_{n},P_{\chi_{n}})\) is a Hamiltonian for its internal degrees of freedom.
Let \(x_{n}^{\mu}\) denote the spacetime position of the \(n\)th particle. The trajectory of the \(n\)th particle \(x_{n}^{\mu}(t)\) is parameterized by an arbitrary external time parameter \(t\). Noting that \(cd\tau_{n}=\sqrt{-g_{\mu\nu}\dot{x}_{n}^{\mu}\dot{x}_{n}^{\nu}}dt\equiv\sqrt{- \dot{x}_{n}^{2}}dt\), where a dot denotes differentiation with respect to \(t\), the action is rewritten as
\[S=\int dt\sum_{n}\frac{1}{c}\sqrt{-\dot{x}_{n}^{2}}\left(-m_{n}c^{2}+q_{n}A_{ \mu}\frac{\dot{x}_{n}^{\mu}c}{\sqrt{-\dot{x}_{n}^{2}}}+P_{\chi_{n}}\frac{\dot {\chi}_{n}c}{\sqrt{-\dot{x}_{n}^{2}}}-H_{n}^{\rm clock}\right)=:\int dt\ L. \tag{2}\]
The momentum conjugate to \(x_{n}^{\mu}\) is given by
\[P_{n\mu}=\frac{\partial L}{\partial\dot{x}_{n}^{\mu}}=\frac{g_{\mu\nu}\dot{x}_{n} ^{\nu}}{c\sqrt{-\dot{x}_{n}^{2}}}\left(m_{n}c^{2}+H_{n}^{\rm clock}\right)+q_{n} A_{\mu}. \tag{3}\]
Then the Hamiltonian associated with the Lagrangian \(L\) is constrained to vanish:
\[H=\sum_{n}\left(P_{n\mu}\dot{x}_{n}^{\mu}+P_{\chi_{n}}\dot{\chi}_{n}\right)-L \approx 0. \tag{4}\]
In terms of the momentum, the constraints can be expressed in the form
\[C_{H_{n}}:=g^{\mu\nu}(P_{n\mu}-q_{n}A_{\mu})(P_{n\nu}-q_{n}A_{\nu})c^{2}+\left( m_{n}c^{2}+H_{n}^{\rm clock}\right)^{2}\approx 0. \tag{5}\]
Using the \((3+1)\) decomposition of the metric in terms of the lapse function \(\alpha\), the shift vector \(\beta^{i}\) and the three-metric \(\gamma_{ij}\) such that [6]
\[ds^{2}=-\alpha^{2}c^{2}dt^{2}+\gamma_{ij}(dx^{i}+\beta^{i}cdt)(dx^{j}+\beta^{j }cdt), \tag{6}\]
the constraint is factorized in the form
\[C_{H_{n}} = -\alpha^{-2}\left(P_{n0}-q_{n}A_{0}-\beta^{i}\left(P_{ni}-q_{n}A_ {i}\right)\right)^{2}c^{2}+\gamma^{ij}(P_{ni}-q_{n}A_{i})(P_{nj}-q_{n}A_{j})c^{ 2}+\left(m_{n}c^{2}+H_{n}^{\rm clock}\right)^{2} \tag{7}\] \[= -\alpha^{-2}C_{n}^{+}C_{n}^{-}\approx 0,\]
where \(C_{n}^{\pm}\) is defined by
\[C_{n}^{\pm} := \left(P_{n0}-q_{n}A_{0}-\beta^{i}\left(P_{ni}-q_{n}A_{i}\right) \right)c\pm h_{n}, \tag{8}\] \[h_{n} := \alpha\sqrt{\gamma^{ij}(P_{ni}-q_{n}A_{i})(P_{nj}-q_{n}A_{j})c^{ 2}+(m_{n}c^{2}+H_{n}^{\rm clock})^{2}}. \tag{9}\]
Note that we have set \(x^{0}=ct\). Hereafter we assume that the spacetime is stationary. The coordinates \(x_{n}^{\mu}\) and their conjugate momenta \(P_{n\mu}\) satisfy the fundamental Poisson brackets: \(\{x_{m}^{\mu},P_{n\nu}\}=\delta_{mn}\delta_{\nu}^{\mu}\). The canonical momentum \(P_{n\mu}\) generates translations in the spacetime coordinate \(x_{n}^{\mu}\). Therefore, if \(C_{n}^{\pm}\approx 0\), then \(\pm h_{n}-q_{n}A_{0}c-\beta^{i}c\left(P_{ni}-q_{n}A_{i}\right)\) is the generator of translation in the \(n\)th particle's time coordinate and is the Hamiltonian for both the external and internal degrees of freedom of the \(n\)th particle.
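As a consistency check, the factorization (7) follows directly from the inverse of the metric (6): with \(x^{0}=ct\), its components are \(g^{00}=-\alpha^{-2}\), \(g^{0i}=\alpha^{-2}\beta^{i}\) and \(g^{ij}=\gamma^{ij}-\alpha^{-2}\beta^{i}\beta^{j}\), so that, writing \(\pi_{n\mu}:=P_{n\mu}-q_{n}A_{\mu}\) for brevity,
\[g^{\mu\nu}\pi_{n\mu}\pi_{n\nu}=-\alpha^{-2}\left(\pi_{n0}-\beta^{i}\pi_{ni}\right)^{2}+\gamma^{ij}\pi_{ni}\pi_{nj},\]
and hence \(C_{H_{n}}=-\alpha^{-2}[(\pi_{n0}-\beta^{i}\pi_{ni})^{2}c^{2}-h_{n}^{2}]=-\alpha^{-2}C_{n}^{+}C_{n}^{-}\).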
### Quantization
We canonically quantize the system of \(N\) particles by promoting the phase space variables to operators acting on appropriate Hilbert spaces: \(x_{n}^{0}\) and \(P_{n0}\) become canonically conjugate self-adjoint operators acting on the Hilbert space \(\mathcal{H}_{n}^{0}\simeq L^{2}(\mathbb{R})\) associated with the \(n\)th particle's temporal degree of freedom; operators \(x_{n}^{i}\) and \(P_{ni}\) acting on the Hilbert space \(\mathcal{H}_{n}^{\rm ext}\simeq L^{2}(\mathbb{R}^{3})\) associated with the particle's external degrees of freedom; operators \(\chi_{n}\) and \(P_{\chi_{n}}\) acting on the Hilbert space \(\mathcal{H}_{n}^{\rm clock}\) associated with the particle's internal degrees of freedom. Then the Hilbert space describing the \(n\)th particle is \(\mathcal{H}_{n}\simeq\mathcal{H}_{n}^{0}\otimes\mathcal{H}_{n}^{\rm ext} \otimes\mathcal{H}_{n}^{\rm clock}\).
The constraint equations (7) now become operator equations restricting the physical state of the theory,
\[C_{n}^{+}C_{n}^{-}|\Psi\rangle=0,\ \ \ \ \forall n, \tag{10}\]
where \(|\Psi\rangle\rangle\in\mathcal{H}_{\rm phys}\) is a physical state of a clock \(C\) and a system \(S\) and lives in the physical Hilbert space \(\mathcal{H}_{\rm phys}\).
To specify \(\mathcal{H}_{\rm phys}\), the normalization of the physical state in \(\mathcal{H}_{\rm phys}\) is performed by projecting a physical state \(|\Psi\rangle\rangle\) onto a subspace in which the temporal degree of freedom of each particle (clock \(C\)) is in an eigenstate \(|t_{n}\rangle\) of the operator \(x_{n}^{0}\) associated with the eigenvalue \(t\in\mathbb{R}\) in the spectrum of \(x_{n}^{0}\): \(x_{n}^{0}|t_{n}\rangle=ct|t_{n}\rangle\). The state of \(S\) by conditioning \(|\Psi\rangle\rangle\) on \(C\) reading the time \(t\) is then given by
\[|\psi_{S}(t)\rangle=\langle t|\otimes I_{S}|\Psi\rangle\rangle, \tag{11}\]
where \(|t\rangle=\otimes_{n}|t_{n}\rangle\) and \(I_{S}\) is the identity on \(\mathcal{H}\simeq\bigotimes_{n}\mathcal{H}_{n}^{\rm ext}\otimes\mathcal{H}_{n}^ {\rm clock}\). We demand that the state \(|\psi_{S}(t)\rangle\) is normalized as \(\langle\psi_{S}(t)|\psi_{S}(t)\rangle=1\) for \(\forall t\in\mathbb{R}\) on a spacelike hypersurface defined by all \(N\) particles' temporal degree of freedom being in the state \(|t_{n}\rangle\). The physical state \(|\Psi\rangle\rangle\) is thus normalized with respect to the inner product [5]:
\[\langle\langle\Psi|\Psi\rangle\rangle_{PW}:=\langle\langle\Psi|t\rangle\langle t |\otimes I_{S}|\Psi\rangle\rangle=\langle\psi_{S}(t)|\psi_{S}(t)\rangle=1, \tag{12}\]
and the physical state \(|\Psi\rangle\rangle\) can be written as
\[|\Psi\rangle\rangle=\int dt|t\rangle\langle t|\otimes I_{S}|\Psi\rangle\rangle= \int dt|t\rangle|\psi_{S}(t)\rangle. \tag{13}\]
Hereafter, we consider physical states that satisfy \(C_{n}^{+}|\Psi\rangle\rangle=0\) for all \(n\in\mathbb{N}\). It can be shown that the conditioned state \(|\psi_{S}(t)\rangle\) satisfies the Schrodinger equation with \(t\) as a time parameter [5]:
\[i\hbar\frac{d}{dt}|\psi_{S}(t)\rangle=H_{S}|\psi_{S}(t)\rangle, \tag{14}\]
where \(H_{S}\) is given by
\[H_{S}=\sum_{n}\left(h_{n}-q_{n}A_{0}c-\beta^{i}c\left(P_{ni}-q_{n}A_{i}\right) \right)\otimes I_{S-n}\equiv\sum_{n}\widetilde{h_{n}}\otimes I_{S-n} \tag{15}\]
with \(I_{S-n}\) being the identity on \(\bigotimes_{m\neq n}\mathcal{H}_{m}^{\text{ext}}\otimes\mathcal{H}_{m}^{ \text{clock}}\). Therefore, \(|\psi_{S}(t)\rangle\) can be regarded as the time-dependent state of the \(N\)-particles with the Hamiltonian \(H_{S}\) evolved with the external time \(t\).
### Probabilistic Time Dilation
Consider two clock particles A and B with internal degrees of freedom. Each clock is defined to be the quadruple \(\{\mathcal{H}_{n}^{\text{clock}},\rho_{n},H_{n}^{\text{clock}},T_{n}\}\), where \(\rho_{n}\) is a fiducial state and \(T_{n}\) is a proper time observable for \(n\in\{\text{A},\text{B}\}\). The proper time observable is defined as a POVM (positive operator valued measure)
\[T_{n}:=\left\{E_{n}(\tau)\ \forall\tau\in G\ \text{s.t.}\int_{G}d\tau E_{n}( \tau)=I_{n}\right\}, \tag{16}\]
where \(E_{n}(\tau)=|\tau\rangle\langle\tau|\) is a positive operator on \(\mathcal{H}_{n}^{\text{clock}}\), \(G\) is the group generated by \(H_{n}^{\text{clock}}\), and \(|\tau\rangle\) is a clock state associated with a measurement of the clock yielding the time \(\tau\).
To probe time dilation effects between two clocks, we consider the probability that clock A reads the proper time \(\tau_{\text{A}}\) conditioned on clock B reading the proper time \(\tau_{\text{B}}\)[7; 8]. This conditional probability is given in terms of the physical state as
\[\text{Prob}[T_{\text{A}}=\tau_{\text{A}}|T_{\text{B}}=\tau_{\text{B}}]=\frac {\langle\langle\Psi|E_{\text{A}}(\tau_{\text{A}})E_{\text{B}}(\tau_{\text{B}} )|\Psi\rangle\rangle}{\langle\langle\Psi|E_{\text{B}}(\tau_{\text{B}})|\Psi \rangle\rangle}\,. \tag{17}\]
Consider the case where two clock particles A and B are moving in a curved spacetime. Suppose that initial conditioned state is unentangled, \(|\psi_{S}(0)\rangle=|\psi_{S_{\text{A}}}\rangle|\psi_{S_{\text{B}}}\rangle\), and that the external and internal degrees of freedom of both particles are unentangled, \(|\psi_{S_{n}}\rangle=|\psi_{n}^{\text{ext}}\rangle|\psi_{n}^{\text{clock}}\rangle\). Then, from Eq. (13), the physical state takes the form
\[|\Psi\rangle\rangle=\int dt|t\rangle|\psi_{S}(t)\rangle=\int dt\bigotimes_{n \in\{\text{A},\text{B}\}}e^{-i\widetilde{h}_{n}t/\hbar}|\psi_{n}^{\text{ext }}\rangle|\psi_{n}^{\text{clock}}\rangle\,, \tag{18}\]
where \(\widetilde{h}_{n}\) is defined in Eq. (15). Further suppose that \(\mathcal{H}_{n}^{\text{clock}}\simeq L^{2}(\mathbb{R})\) so that we may consider an ideal clock such that \(P_{n}=H_{n}^{\text{clock}}/c\) and \(cT_{n}\) are the momentum and position operators on \(\mathcal{H}_{n}^{\text{clock}}\). The canonical commutation relation yields \([cT_{n},P_{n}]=[T_{n},H_{n}^{\text{clock}}]=i\hbar\). Then, the clock states are orthogonal \(\langle\tau|\tau^{\prime}\rangle=\delta(\tau-\tau^{\prime})\) and satisfy the covariance relation \(|\tau+\tau^{\prime}\rangle=e^{-iH_{n}^{\text{clock}}\tau/\hbar}|\tau\rangle\). The conditional probability (17) becomes
\[\text{Prob}[T_{\text{A}}=\tau_{\text{A}}|T_{\text{B}}=\tau_{\text{B}}]=\frac {\int dt\ \text{tr}[E_{\text{A}}(\tau_{\text{A}})\rho_{\text{A}}(t)]\text{tr}[E_{ \text{B}}(\tau_{\text{B}})\rho_{\text{B}}(t)]}{\int dt\ \text{tr}[E_{\text{B}}(\tau_{\text{B}})\rho_{\text{B}}(t)]}, \tag{19}\]
where \(\rho_{n}(t)\) is the reduced state of the internal clock degrees of freedom defined as [5]
\[\rho_{n}(t)=\text{tr}_{\mathcal{H}_{S}\backslash\mathcal{H}_{n}^{\text{clock}}} \left(e^{-iH_{S}t/\hbar}|\psi_{S_{n}}\rangle\langle\psi_{S_{n}}|e^{iH_{S}t/ \hbar}\right) \tag{20}\]
with the trace over the complement of the clock Hilbert space.
We assume that the fiducial states of the internal clock degrees of freedom are the Gaussian wave packets centered at \(\tau=0\) with width \(\sigma\):
\[|\psi_{n}^{\rm clock}\rangle=\frac{1}{\pi^{1/4}\sigma^{1/2}}\int d\tau\ e^{- \frac{\tau^{2}}{2\sigma^{2}}}|\tau\rangle\,. \tag{21}\]
Note that in evaluating the conditional probability (19) by using Eq. (20) and Eq. (21), the terms in the Hamiltonian \(H_{S}\) (15) which involve both the clock Hamiltonian \(H_{n}^{\rm clock}\) and the external degrees of freedom survive. Therefore, as in our previous study [1], the conditional probability depends only on \(h_{n}\) defined in Eq. (9) and is independent of the terms in Hamiltonian \(H_{S}\) which depend only on the external degrees of freedom (such as \(A_{0}\) and \(\beta^{i}\)).
### Time Dilation
In order to find the coupling of the clock Hamiltonian \(H_{n}^{\rm clock}\) and the external degrees of freedom, we expand \(\widetilde{h_{n}}\) in the effective Hamiltonian (15) according to the power of \(H_{n}^{\rm clock}\) assuming \(H_{n}^{\rm clock}\ll m_{n}c^{2}\)
\[\widetilde{h_{n}} = \alpha\sqrt{\gamma^{ij}(P_{ni}-q_{n}A_{i})(P_{nj}-q_{n}A_{j})c^{2}+m_{n}^{2}c^{4}}-q_{n}A_{0}c-\beta^{i}c\,(P_{ni}-q_{n}A_{i}) \tag{22}\] \[+\frac{\alpha m_{n}^{2}c^{4}}{\sqrt{\gamma^{ij}(P_{ni}-q_{n}A_{i})(P_{nj}-q_{n}A_{j})c^{2}+m_{n}^{2}c^{4}}}\frac{H_{n}^{\rm clock}}{m_{n}c^{2}}+O((H_{n}^{\rm clock}/m_{n}c^{2})^{2}).\]
The term in the second line, which involves both the clock Hamiltonian \(H_{n}^{\rm clock}\) and the external degrees of freedom, is the one relevant for calculating the conditional probability. One may recognize that the coefficient of \(H_{n}^{\rm clock}\) is minus the kinetic term of the \(n\)-th particle in the Lagrangian (2), that is, \(m_{n}c\sqrt{-\dot{x}_{n}^{2}}=m_{n}c^{2}d\tau_{n}/dt\). This implies that, to leading order in the clock Hamiltonian, the average time dilation takes the same form as the classical time dilation formula. In other words, regardless of inertial or non-inertial motion, the time dilation is given by the difference of the proper time, i.e., the spacetime distance, along the trajectories of each particle.
As a concrete example, in the Newtonian approximation of spacetime, the metric is given by \(g_{00}=-\alpha^{2}=-(1+2\Phi({\bf x})/c^{2}),\gamma_{ij}=\delta_{ij}\), and \(\beta^{i}=0\), where \(\Phi({\bf x})\) is the Newtonian gravitational potential. \(\widetilde{h_{n}}\) is then further expanded in inverse powers of \(c^{2}\) as
\[\widetilde{h_{n}}=m_{n}c^{2}+H_{n}^{\rm clock}+H_{n}^{\rm ext}+H_{n}^{\rm int} +O(c^{-4}), \tag{23}\]
where the rest-mass energy term \(m_{n}c^{2}\) is a constant and can be disregarded in \(h_{n}\). The external Hamiltonian \(H_{n}^{\rm ext}\) and the interaction Hamiltonian \(H_{n}^{\rm int}\) are given by
\[H_{n}^{\rm ext} := \frac{\delta^{ij}(P_{ni}-q_{n}A_{ni})(P_{nj}-q_{n}A_{nj})}{2m_{n} }+m_{n}\Phi_{n}-q_{n}A_{n0}c\equiv\frac{({\bf P}_{n}-q_{n}{\bf A}_{n})^{2}}{2m _{n}}+m_{n}\Phi_{n}-q_{n}A_{n0}c, \tag{24}\] \[H_{n}^{\rm int} := -\frac{({\bf P}_{n}-q_{n}{\bf A}_{n})^{2}H_{n}^{\rm clock}}{2m_{n} ^{2}c^{2}}+\frac{\Phi_{n}H_{n}^{\rm clock}}{c^{2}}-\frac{1}{2m_{n}c^{2}}\left( \frac{({\bf P}_{n}-q_{n}{\bf A}_{n})^{2}}{2m_{n}}-m_{n}\Phi_{n}\right)^{2}+O( c^{-4})\,, \tag{25}\]
where \(A_{n\mu}:=A_{\mu}({\bf x}_{n}),\Phi_{n}:=\Phi({\bf x}_{n})\).
The reduced state of the internal clock becomes
\[\rho_{n}(t) = {\rm tr}_{{\cal H}_{S}\backslash{\cal H}_{n}^{\rm clock}}\left[e^{-iH_{S}t/\hbar}|\psi_{S_{n}}\rangle\langle\psi_{S_{n}}|e^{iH_{S}t/\hbar}\right] \tag{26}\] \[= \bar{\rho}_{n}(t)-it\ {\rm tr}_{\rm ext}\left([H_{n}^{\rm int},\bar{\rho}_{n}^{\rm ext}(t)\otimes\bar{\rho}_{n}(t)]+O((H_{n}^{\rm int}t)^{2})\right)\] \[= \bar{\rho}_{n}(t)+it\left(\frac{\langle({\bf P}_{n}-q_{n}{\bf A}_{n})^{2}\rangle}{2m_{n}^{2}c^{2}}-\frac{\langle\Phi_{n}\rangle}{c^{2}}\right)[H_{n}^{\rm clock},\bar{\rho}_{n}(t)]+O(c^{-4}),\]
where \(\bar{\rho}_{n}(t)=e^{-iH_{n}^{\rm clock}t/\hbar}\rho_{n}e^{iH_{n}^{\rm clock}t/\hbar}\) and \(\bar{\rho}_{n}^{\rm ext}(t)=e^{-iH_{n}^{\rm ext}t/\hbar}\rho_{n}^{\rm ext}e^{ iH_{n}^{\rm ext}t/\hbar}\). The conditional probability (17) is evaluated to leading relativistic order as
\[{\rm Prob}[T_{\rm A}=\tau_{\rm A}|T_{\rm B}=\tau_{\rm B}] = \frac{e^{-\frac{(\tau_{\rm A}-\tau_{\rm B})^{2}}{2\sigma^{2}}}}{ \sqrt{2\pi}\sigma}\left[1+\left(\frac{\langle({\bf P}_{\rm A}-q_{\rm A}{\bf A}_{ \rm A})^{2}\rangle}{4m_{\rm A}^{2}c^{2}}-\frac{\langle({\bf P}_{\rm B}-q_{\rm B }{\bf A}_{\rm B})^{2}\rangle}{4m_{\rm B}^{2}c^{2}}-\frac{\langle\Phi_{\rm A} \rangle}{2c^{2}}+\frac{\langle\Phi_{\rm B}\rangle}{2c^{2}}\right) \tag{27}\] \[\times\left(1-\frac{\tau_{\rm A}^{2}-\tau_{\rm B}^{2}}{\sigma^{2} }\right)\right],\]
where \(\langle H_{n}^{\rm ext}\rangle=\langle\psi_{n}^{\rm ext}|H_{n}^{\rm ext}|\psi_{n}^ {\rm ext}\rangle\). Then the average proper time read by clock A conditioned on clock B indicating the time \(\tau_{\rm B}\) is
\[\langle T_{\rm A}\rangle = \int d\tau\ {\rm Prob}[T_{\rm A}=\tau|T_{\rm B}=\tau_{\rm B}]\tau \tag{28}\] \[= \tau_{\rm B}\left[1-\left(\frac{\langle({\bf P}_{\rm A}-q_{\rm A }{\bf A}_{\rm A})^{2}\rangle}{2m_{\rm A}^{2}c^{2}}-\frac{\langle\Phi_{\rm A} \rangle}{c^{2}}\right)+\left(\frac{\langle({\bf P}_{\rm B}-q_{\rm B}{\bf A}_{ \rm B})^{2}\rangle}{2m_{\rm B}^{2}c^{2}}-\frac{\langle\Phi_{\rm B}\rangle}{c^ {2}}\right)\right]\,.\]
This is the quantum analog of the time dilation formula for charged particles in Newtonian gravity. Noting that the time evolution of the position is \(\dot{\bf x}_{n}=\frac{[{\bf x}_{n},H_{n}^{\rm ext}]}{i\hbar}=\frac{{\bf P}_{n}-q_{n}{\bf A}_{n}}{m_{n}}\) from the Heisenberg equation,1 one may recognize that this time dilation formula has the same form as the classical time dilation in Newtonian gravity. The time dilation of a clock, regardless of whether it is in inertial or non-inertial motion, is induced by its velocity and the gravitational potential.
Footnote 1: Note that the equation becomes \(\dot{\bf x}_{n}=\frac{{\bf P}_{n}-q_{n}{\bf A}_{n}}{m_{n}}-\mathbf{\beta}c\) in the presence of the shift vector.
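As a simple numerical cross-check of the expansion above, the following NumPy sketch integrates the conditional probability density of Eq. (27) against \(\tau\) and compares the result with the closed-form average of Eq. (28). The kinetic and potential expectation values used below are arbitrary illustrative numbers (already divided by \(m^{2}c^{2}\) and \(c^{2}\)), not values taken from the text.

```python
import numpy as np

sigma = 1.0          # width of the clock wave packets (arbitrary units)
tau_B = 0.3          # time read by clock B
kin_A, kin_B = 4e-3, 1e-3       # <(P - qA)^2>/(2 m^2 c^2) for clocks A and B (illustrative)
phi_A, phi_B = -2e-3, -0.5e-3   # <Phi>/c^2 for clocks A and B (illustrative)

# Bracketed correction in Eq. (27); note the extra factors of 1/2 relative to Eq. (28).
corr = 0.5 * (kin_A - kin_B) - 0.5 * (phi_A - phi_B)

tau = np.linspace(tau_B - 12 * sigma, tau_B + 12 * sigma, 200001)
gauss = np.exp(-(tau - tau_B) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
prob = gauss * (1 + corr * (1 - (tau ** 2 - tau_B ** 2) / sigma ** 2))

mean_T_A = np.trapz(tau * prob, tau)                        # <T_A> from Eq. (27)
formula  = tau_B * (1 - (kin_A - phi_A) + (kin_B - phi_B))  # Eq. (28)

print(mean_T_A, formula)   # the two values agree
```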
## III Time dilation in a uniform magnetic field
As an application of the time dilation formula of Eq. (28), we consider the motion of a charged particle in a uniform magnetic field \(B\) along the \(z\) direction. The quantum mechanics of the charged particle and the coherent state are discussed in detail in Appendix A. For the particle moving in the \(xy\)-plane in the flat spacetime, the Hamiltonian is given by
\[H_{n}^{\rm ext}=\frac{({\bf P}_{n}-q_{n}{\bf A}_{n})^{2}}{2m_{n}}. \tag{29}\]
The time dilation formula (28) is reduced to the difference of the Hamiltonian
\[\langle T_{\rm A}\rangle=\tau_{\rm B}\left(1-\frac{\langle H_{\rm A}^{\rm ext} \rangle}{m_{\rm A}c^{2}}+\frac{\langle H_{\rm B}^{\rm ext}\rangle}{m_{\rm B}c ^{2}}\right). \tag{30}\]
Since \(H_{n}^{\rm ext}\) does not depend on the external time \(t\) explicitly, the expectation value of \(H_{n}^{\rm ext}\) is conserved. Therefore, the time dilation does not depend on \(t\), in contrast to the gravitational time dilation [1]. In the following, we calculate the time dilation between a charged quantum clock A (with charge \(q_{\rm A}\)) and an uncharged (\(q_{\rm B}=0\)) quantum clock B for a coherent state. We also note that, as explained in Appendix A, the time dilation formula (30) does not change even if we move to a rotating frame.
### Time Dilation in Coherent State
It is known that the cyclotron motion of a charged particle in a uniform magnetic field can be quantum mechanically well-described by the coherent state. We consider the coherent state \(|\alpha,\beta\rangle\) defined by Eq. (14) for the charged clock A.2 Introducing the cyclotron frequency \(\omega_{\rm A}=q_{\rm A}B/m_{\rm A}\) and the radius of the cyclotron motion \(r_{0}\), the center of the cyclotron motion \((X_{0},Y_{0})\) is related to \(\beta\) as \(X_{0}-iY_{0}=\sqrt{\frac{2\hbar}{m_{\rm A}\omega_{\rm A}}}\beta\) and the relative position \((r_{0}\cos\theta_{0},r_{0}\sin\theta_{0})\) is related to \(\alpha\) as \(r_{0}e^{i\theta_{0}}=\sqrt{\frac{2\hbar}{m_{\rm A}\omega_{\rm A}}}\alpha\). Note that the uniform magnetic field \(B\) is given by the vector potential \({\bf A}=\frac{B}{2}(-y,x,0)\) in the symmetric gauge. From Eq. (12), the expectation value of the external Hamiltonian of the clock A becomes
Footnote 2: \(\alpha\) should not be confused with the lapse function in Eq. (6).
\[\langle\alpha,\beta|H_{\rm A}^{\rm ext}|\alpha,\beta\rangle=\hbar\omega_{\rm A }\left(|\alpha|^{2}+\frac{1}{2}\right)=\frac{1}{2}m_{\rm A}\omega_{\rm A}^{2} r_{0}^{2}+\frac{1}{2}\hbar\omega_{\rm A}\,. \tag{31}\]
On the other hand, we assume that the state of the uncharged clock B is a Gaussian state centered at \((x_{\rm B},y_{\rm B})=(x_{\rm B0},y_{\rm B0})\) with width \(\sigma_{\rm B}\), whose wave function is
\[\langle{\bf x}_{\rm B}|\psi_{\rm B}\rangle=\left(\pi\sigma_{\rm B}^{2}\right)^{ -1/2}\exp\left[-\frac{(x_{\rm B}-x_{\rm B0})^{2}+(y_{\rm B}-y_{\rm B0})^{2}}{2 \sigma_{\rm B}^{2}}\right]\,. \tag{32}\]
Then, the expectation value of the external Hamiltonian of the clock B becomes
\[\langle\psi_{\rm B}|H_{\rm B}^{\rm ext}|\psi_{\rm B}\rangle=\frac{\hbar^{2}}{2m_{ \rm B}\sigma_{\rm B}^{2}}\,. \tag{33}\]
Putting these together, the observed average time dilation between two clocks is given by
\[\langle T_{\rm A}\rangle = \tau_{\rm B}\left(1-\frac{\langle H_{\rm A}^{\rm ext}\rangle}{m_{ \rm A}c^{2}}+\frac{\langle H_{\rm B}^{\rm ext}\rangle}{m_{\rm B}c^{2}}\right) \tag{34}\] \[= \tau_{\rm B}\left(1-\frac{\omega_{\rm A}^{2}r_{0}^{2}}{2c^{2}}- \frac{\hbar\omega_{\rm A}}{2m_{\rm A}c^{2}}+\frac{\hbar^{2}}{2m_{\rm B}^{2}c^{2 }\sigma_{\rm B}^{2}}\right)\,.\]
### Superposition
Next, we consider two clocks A and B and suppose that initially clock A is in a superposition of two coherent state [9]:
\[|\psi_{\rm A}\rangle=\frac{1}{\sqrt{N}}\left(|\alpha,\beta\rangle+e^{i\phi}| \alpha^{\prime},\beta\rangle\right)\,. \tag{35}\]
The two coherent states are assumed to have the same center of the circle, namely the same \(\beta\), but different positions on the circle, as shown in Fig. 1:
\[\alpha = \sqrt{\frac{m_{\rm A}\omega_{\rm A}}{2\hbar}}r_{0}e^{i\theta_{0}}, \tag{36}\] \[\alpha^{\prime} = \sqrt{\frac{m_{\rm A}\omega_{\rm A}}{2\hbar}}r_{0}e^{-i\theta_{0}} =\alpha^{*}\,, \tag{37}\]
which means the angular separation is \(2\theta_{0}\) for \(0\leq\theta_{0}<\pi\). The normalization factor \(N\) is given by
\[N=2+2{\rm Re}\left(e^{-i\phi}\langle\alpha^{\prime},\beta|\alpha,\beta\rangle \right)=2+2{\rm Re}\left(e^{-i\phi}e^{\alpha^{2}-|\alpha|^{2}}\right). \tag{38}\]
Then, the average of \(H_{\rm A}^{\rm ext}\) is
\[\langle\psi_{\rm A}|H_{\rm A}^{\rm ext}|\psi_{\rm A}\rangle = \hbar\omega_{\rm A}\left(|\alpha|^{2}+\frac{1}{2}\right)+\frac{2\hbar\omega_{\rm A}}{N}{\rm Re}\left((\alpha^{2}-|\alpha|^{2})e^{-i\phi}e^{\alpha^{2}-|\alpha|^{2}}\right) \tag{39}\] \[= \frac{1}{2}m_{\rm A}\omega_{\rm A}^{2}r_{0}^{2}+\frac{1}{2}\hbar\omega_{\rm A}+2\sin\theta_{0}\frac{m_{\rm A}\omega_{\rm A}^{2}r_{0}^{2}}{N}{\rm Re}\left(ie^{i(\theta_{0}-\phi)}e^{\alpha^{2}-|\alpha|^{2}}\right).\]
Hence the time dilation between two clocks becomes
\[\langle T_{\rm A}\rangle=\tau_{\rm B}\left(1-\frac{\omega_{\rm A}^{2}r_{0}^{2}}{2 c^{2}}-\frac{\hbar\omega_{\rm A}}{2m_{\rm A}c^{2}}-2\sin\theta_{0}\frac{\omega_{ \rm A}^{2}r_{0}^{2}}{Nc^{2}}{\rm Re}\left(ie^{i(\theta_{0}-\phi)}e^{\alpha^{2} -|\alpha|^{2}}\right)+\frac{\hbar^{2}}{2m_{\rm B}^{2}c^{2}\sigma_{\rm B}^{2}} \right)\,. \tag{40}\]
The term proportional to \(\sin\theta_{0}\) arises from quantum interference due to the superposition and may be regarded as the quantum time dilation.
To make the effect of quantum time dilation manifest, as in [1] we split the time dilation formula (40) into \(K_{C}\) and \(K_{Q}\) as \(\langle T_{\rm A}\rangle=\tau_{\rm B}(1-K_{C}-K_{Q})\). \(K_{C}\) is given by the contribution of a statistical mixture of the coherent states of clock A and clock B, and \(K_{Q}\) is the term due to the interference effect
\[K_{C} = \frac{\omega_{\rm A}^{2}r_{0}^{2}}{2c^{2}}+\frac{\hbar\omega_{ \rm A}}{2m_{\rm A}c^{2}}-\frac{\hbar^{2}}{2m_{\rm B}^{2}c^{2}\sigma_{\rm B}^{2 }}, \tag{41}\] \[K_{Q} = 2\sin\theta_{0}\frac{\omega_{\rm A}^{2}r_{0}^{2}}{Nc^{2}}{\rm Re }\left(ie^{i(\theta_{0}-\phi)}e^{\alpha^{2}-|\alpha|^{2}}\right)\,. \tag{42}\]
A positive \(K_{Q}\) implies enhanced time dilation. In Fig. 2, \(K_{Q}\) normalized by the classical time dilation factor \(\omega_{\rm A}^{2}r_{0}^{2}/2c^{2}\) is shown. In this example, we assumed \(q_{\rm A}=13e\), \(m_{\rm A}=6.6\times 10^{-26}\)kg, \(B=1.0\)T, and \(r_{0}=1.0\times 10^{-7}\)m, so that the classical time dilation factor becomes \(\omega_{\rm A}^{2}r_{0}^{2}/2c^{2}=5.5\times 10^{-17}\). The quantum effect can either enhance or reduce the time dilation and can be as large as \(10\%\) of the classical time dilation.
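As a sanity check on the numbers quoted above, the short NumPy sketch below evaluates the classical factor \(\omega_{\rm A}^{2}r_{0}^{2}/2c^{2}\) for the stated parameters and the interference term \(K_{Q}\) of Eq. (42); the particular values of \(\theta_{0}\) and \(\phi\) used in the example call are arbitrary and only illustrative.

```python
import numpy as np

e, hbar, c = 1.602176634e-19, 1.054571817e-34, 2.99792458e8
qA, mA, B, r0 = 13 * e, 6.6e-26, 1.0, 1.0e-7     # parameters quoted in the text

omega = qA * B / mA                              # cyclotron frequency of clock A
K_classical = omega**2 * r0**2 / (2 * c**2)
print(f"classical factor = {K_classical:.2e}")   # ~5.5e-17, as quoted

def K_Q(theta0, phi):
    # Interference term of Eq. (42) for the superposition of Eqs. (35)-(37).
    alpha = np.sqrt(mA * omega / (2 * hbar)) * r0 * np.exp(1j * theta0)
    overlap = np.exp(alpha**2 - abs(alpha)**2)   # <alpha', beta | alpha, beta>
    N = 2 + 2 * np.real(np.exp(-1j * phi) * overlap)
    return 2 * np.sin(theta0) * omega**2 * r0**2 / (N * c**2) * \
           np.real(1j * np.exp(1j * (theta0 - phi)) * overlap)

# Ratio of the interference term to the classical factor for one illustrative angle pair.
print(K_Q(0.1, 0.0) / K_classical)
```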
## IV Summary
As an extension of the proper time observable proposed in [5] and applied to a weak gravitational field [1], we studied charged quantum clocks interacting with external electromagnetic fields. We derived a formula for the average proper time read by one clock conditioned on another clock reading a different proper time, Eq. (28), which has the same form as in classical relativity, consisting of a kinetic part (velocity-squared term) and a gravitational part (gravitational-redshift term). We found that the time dilation is given by the difference of the proper time, i.e., the spacetime distance, along the trajectories of each clock, regardless of whether the clock is in inertial or non-inertial motion.
When applied to a charged quantum clock in a uniform magnetic field, we considered the case in which the state of one clock is in a superposition of two coherent states. We found that the effect arising from quantum interference appears in the time dilation and can be as large as \(10\%\) of the classical time dilation.
Optical clocks based on highly charged ions have been considered as a new class of references for highest-accuracy clocks and precision tests of fundamental physics [3]. Moreover, such an optical clock based on a highly charged ion was realized recently [4]. Our study may be relevant in interpreting the measurements of the time dilation of a highly charged optical clock.
###### Acknowledgements.
This work is supported by JSPS Grant-in-Aid for Scientific Research Number 22K03640 (TC), Nos. 16K17704 and 21H05186 (SK), and in part by Nihon University.
## Appendix A Quantum Mechanics of a Charged Particle in a Uniform Magnetic Field
Here, we summarize the basic results on the quantum mechanics of a charged particle in a uniform magnetic field [10; 11].
### Hamiltonian and Relative Coordinate
Consider a particle with the mass \(m\) and the charge \(q\) moving in a uniform magnetic field \(B\). Take the \(z\)-axis in the direction of the magnetic field and assume that the particle moves in the \(xy\)-plane.
The Hamiltonian in the symmetric gauge
\[\mathbf{A}=\frac{1}{2}\mathbf{B}\times\mathbf{x}=\frac{B}{2}(-y,x,0) \tag{10}\]
is given by
\[H=\frac{(\mathbf{p}-q\mathbf{A})^{2}}{2m}=\frac{1}{2m}\left(p_{x}+\frac{m \omega}{2}y\right)^{2}+\frac{1}{2m}\left(p_{y}-\frac{m\omega}{2}x\right)^{2}\,, \tag{11}\]
where we have introduced the cyclotron frequency \(\omega=qB/m\).
Since the Heisenberg equation gives the time evolution of the position operator as \(\dot{x}_{i}=\frac{[x_{i},H]}{i\hbar}=\frac{p_{i}-qA_{i}}{m}\), in analogy with the classical cyclotron motion we introduce the position operators \(X\) and \(Y\) corresponding to the center of the circle
\[X=\frac{p_{y}+m\omega x/2}{m\omega},\quad Y=-\frac{p_{x}-m\omega y/2}{m\omega }\,, \tag{12}\]
and the operators \(\xi\) and \(\eta\) corresponding to the relative coordinates
\[\xi=x-X=-\frac{p_{y}-m\omega x/2}{m\omega},\qquad\eta=y-Y=\frac{p_{x}+m\omega y /2}{m\omega}\,. \tag{13}\]
Note that both \(X\) and \(Y\) commute with the Hamiltonian, \([X,H]=0=[Y,H]\), and hence they are conserved, but \(X\) and \(Y\) do not commute with each other, \([X,Y]=-i\hbar/m\omega\).
### Creation and Annihilation Operators
We introduce the following creation and annihilation operators
\[a = \sqrt{\frac{m\omega}{2\hbar}}\left(\xi+i\eta\right)=\sqrt{\frac{ m\omega}{2\hbar}}\left(\left(\frac{x}{2}+i\frac{p_{x}}{m\omega}\right)+i \left(\frac{y}{2}+i\frac{p_{y}}{m\omega}\right)\right), \tag{14}\] \[a^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\left(\xi-i\eta\right)=\sqrt{\frac{ m\omega}{2\hbar}}\left(\left(\frac{x}{2}-i\frac{p_{x}}{m\omega}\right)-i\left( \frac{y}{2}-i\frac{p_{y}}{m\omega}\right)\right),\] (15) \[b = \sqrt{\frac{m\omega}{2\hbar}}\left(X-iY\right)=\sqrt{\frac{m \omega}{2\hbar}}\left(\left(\frac{x}{2}+i\frac{p_{x}}{m\omega}\right)-i\left( \frac{y}{2}+i\frac{p_{y}}{m\omega}\right)\right),\] (16) \[b^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\left(X+iY\right)=\sqrt{\frac{m \omega}{2\hbar}}\left(\left(\frac{x}{2}-i\frac{p_{x}}{m\omega}\right)+i\left( \frac{y}{2}-i\frac{p_{y}}{m\omega}\right)\right), \tag{17}\]
where \(a\) and \(b\) commute with each other and obey the usual commutation relations
\[[a,a^{\dagger}]=1,\ \ \ \ \ [b,b^{\dagger}]=1\,. \tag{10}\]
Then, the Hamiltonian and the \(z\) component of the angular momentum \(L_{z}\) are written in terms of \(a\) and \(b\) in simple form as
\[H = \hbar\omega\left(a^{\dagger}a+\frac{1}{2}\right), \tag{11}\] \[L_{z} = xp_{y}-yp_{x}=\hbar(-a^{\dagger}a+b^{\dagger}b)\,. \tag{12}\]
From Eqs. (10)-(11), the number operator \(a^{\dagger}a\) corresponds to the squared distance from the center of the circle and \(b^{\dagger}b\) corresponds to the squared distance of the center from the origin of the coordinates.
We also note that the center of the circle and the relative coordinates are written in terms of creation and annihilation operators as
\[X = \frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}}(b+b^{\dagger}),\ \ \ \ \ Y=\frac{i}{2}\sqrt{\frac{2\hbar}{m\omega}}(b-b^{\dagger}), \tag{13}\] \[\xi = \frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}}(a+a^{\dagger}),\ \ \ \ \ \eta=\frac{i}{2}\sqrt{\frac{2\hbar}{m\omega}}(-a+a^{\dagger}). \tag{14}\]
### Coherent State
As in the case of one-dimensional harmonic oscillator, we introduce the coherent state \(|\alpha,\beta\rangle\) such that \(a|\alpha,\beta\rangle=\alpha|\alpha,\beta\rangle\) and \(b|\alpha,\beta\rangle=\beta|\alpha,\beta\rangle\), which is constructed by applying the operators \(e^{\alpha a^{\dagger}}\) and \(e^{\beta b^{\dagger}}\) on the ground state \(|0\rangle\) as
\[|\alpha,\beta\rangle=e^{-\frac{|\alpha|^{2}+|\beta|^{2}}{2}}e^{\alpha a^{ \dagger}}e^{\beta b^{\dagger}}|0\rangle\,. \tag{15}\]
Then, from Eq. (10) and Eq. (11), the eigenvalues \(\alpha\) and \(\beta\) corresponding to the relative coordinate \((r_{0}\cos\theta_{0},r_{0}\sin\theta_{0})\) and the center of the circle \((X_{0},Y_{0})\) are given by
\[\alpha = \sqrt{\frac{m\omega}{2\hbar}}r_{0}e^{i\theta_{0}}, \tag{16}\] \[\beta = \sqrt{\frac{m\omega}{2\hbar}}(X_{0}-iY_{0}). \tag{17}\]
The wave function of the coherent state is given by
\[\langle\mathbf{x}|\alpha,\beta\rangle= \sqrt{\frac{m\omega}{2\pi\hbar}}\exp\left\{-\frac{m\omega}{4 \hbar}\left[(x-r_{0}\cos\theta_{0}-X_{0})^{2}+(y-r_{0}\sin\theta_{0}-Y_{0})^{ 2}\right]\right\} \tag{18}\] \[\times\exp\left\{i\frac{m\omega}{2\hbar}\left[(r_{0}\sin\theta_{ 0}-Y_{0})x-(r_{0}\cos\theta_{0}-X_{0})y-r_{0}(X_{0}\sin\theta_{0}-Y_{0}\cos \theta_{0})\right]\right\}.\]
\(a(t)\) and \(b(t)\) evolve according to the Heisenberg equation as
\[i\hbar\dot{a}(t) = [a(t),H]=\hbar\omega a(t), \tag{19}\] \[i\hbar\dot{b}(t) = [b(t),H]=0\,. \tag{20}\]
Hence, we have \(a(t)=e^{-i\omega t}a\) and \(b(t)=b\). Then, from Eq. (14), the expectation values of \(\xi(t)\) and \(\eta(t)\) in the coherent state are given by
\[\langle\xi(t)\rangle = \frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}}\langle\alpha,\beta|(a(t )+a^{\dagger}(t))|\alpha,\beta\rangle=\frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}} (\alpha e^{-i\omega t}+\alpha^{*}e^{i\omega t})=r_{0}\cos(\theta_{0}-\omega t), \tag{21}\] \[\langle\eta(t)\rangle = \frac{i}{2}\sqrt{\frac{2\hbar}{m\omega}}\langle\alpha,\beta|(-a( t)+a^{\dagger}(t))|\alpha,\beta\rangle=\frac{1}{2}\sqrt{\frac{2\hbar}{m \omega}}(-\alpha e^{-i\omega t}+\alpha^{*}e^{i\omega t})=r_{0}\sin(\theta_{0 }-\omega t)\,. \tag{22}\]
This corresponds to the position of a charged particle orbiting clockwise with the angular velocity \(\omega\).3 The expectation values of \(X(t)\) and \(Y(t)\) do not depend on time: \(\langle X(t)\rangle=X_{0}\) and \(\langle Y(t)\rangle=Y_{0}\).
Footnote 3: For a negatively charged particle \(\omega=qB/m<0\), the particle orbits counterclockwise.
The expectation value of the Hamiltonian becomes
\[\langle\alpha,\beta|H|\alpha,\beta\rangle=\hbar\omega\left(|\alpha|^{2}+\frac{ 1}{2}\right)=\frac{1}{2}m\omega^{2}r_{0}^{2}+\frac{1}{2}\hbar\omega. \tag{100}\]
### Time Dilation in a Rotating Frame
We show that the time dilation Eq. (30) is invariant even if we move to a rotating frame.
Consider a frame \((x^{\prime},y^{\prime})\) which rotates with angular velocity \(\Omega\) about the \(z\) axis with respect to the inertial frame \((x,y)\). The two coordinate systems are related by
\[\begin{pmatrix}x^{\prime}\\ y^{\prime}\end{pmatrix}=\begin{pmatrix}\cos\Omega t&\sin\Omega t\\ -\sin\Omega t&\cos\Omega t\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}. \tag{101}\]
Then, the shift vector appears in the rotating frame
\[-c^{2}dt^{2}+dx^{2}+dy^{2}=-c^{2}dt^{2}+(dx^{\prime}-\Omega y^{\prime}dt)^{2}+ (dy^{\prime}+\Omega x^{\prime}dt)^{2}, \tag{102}\]
that is, \(\beta^{x^{\prime}}c=-\Omega y^{\prime}\) and \(\beta^{y^{\prime}}c=\Omega x^{\prime}\). In the presence of the shift vector, the (external) Hamiltonian becomes \(H=\frac{(\mathbf{P}-q\mathbf{A})^{2}}{2m}-\boldsymbol{\beta}c\cdot(\mathbf{P}- q\mathbf{A})\), so that the time evolution of the position vector is given by
\[\dot{\mathbf{x}}^{\prime}=\frac{[\mathbf{x}^{\prime},H]}{i\hbar}=\frac{ \mathbf{P}-q\mathbf{A}}{m}-\boldsymbol{\beta}c\,. \tag{103}\]
Moreover, from Eq. (101) and Eq. (102), we have
\[\dot{x}^{\prime} = \Omega y^{\prime}+\omega(\eta\cos\Omega t-\xi\sin\Omega t), \tag{104}\] \[\dot{y}^{\prime} = -\Omega x^{\prime}-\omega(\xi\cos\Omega t+\eta\sin\Omega t)\,. \tag{105}\]
Hence
\[\frac{(\mathbf{P}-q\mathbf{A})^{2}}{m^{2}} = (\dot{x}^{\prime}+\beta^{x^{\prime}}c)^{2}+(\dot{y}^{\prime}+ \beta^{y^{\prime}}c)^{2} \tag{106}\] \[= \omega^{2}\left(\xi^{2}+\eta^{2}\right)=\dot{x}^{2}+\dot{y}^{2}\,.\]
Therefore, the time dilation formula Eq. (30) holds in a rotating frame. This implies, in particular, that even if we move to a rotating frame with \(\Omega=-\omega\) so that a particle is at rest (classically), the time dilation does not change.
|
2305.13520 | Tied-Augment: Controlling Representation Similarity Improves Data
Augmentation | Data augmentation methods have played an important role in the recent advance
of deep learning models, and have become an indispensable component of
state-of-the-art models in semi-supervised, self-supervised, and supervised
training for vision. Despite incurring no additional latency at test time, data
augmentation often requires more epochs of training to be effective. For
example, even the simple flips-and-crops augmentation requires training for
more than 5 epochs to improve performance, whereas RandAugment requires more
than 90 epochs. We propose a general framework called Tied-Augment, which
improves the efficacy of data augmentation in a wide range of applications by
adding a simple term to the loss that can control the similarity of
representations under distortions. Tied-Augment can improve state-of-the-art
methods from data augmentation (e.g. RandAugment, mixup), optimization (e.g.
SAM), and semi-supervised learning (e.g. FixMatch). For example,
Tied-RandAugment can outperform RandAugment by 2.0% on ImageNet. Notably, using
Tied-Augment, data augmentation can be made to improve generalization even when
training for a few epochs and when fine-tuning. We open source our code at
https://github.com/ekurtulus/tied-augment/tree/main. | Emirhan Kurtulus, Zichao Li, Yann Dauphin, Ekin Dogus Cubuk | 2023-05-22T22:23:40Z | http://arxiv.org/abs/2305.13520v1 | # Tied-Augment: Controlling Representation Similarity Improves Data Augmentation
###### Abstract
Data augmentation methods have played an important role in the recent advance of deep learning models, and have become an indispensable component of state-of-the-art models in semi-supervised, self-supervised, and supervised training for vision. Despite incurring no additional latency at test time, data augmentation often requires more epochs of training to be effective. For example, even the simple flips-and-crops augmentation requires training for more than 5 epochs to improve performance, whereas RandAugment requires more than 90 epochs. We propose a general framework called Tied-Augment, which improves the efficacy of data augmentation in a wide range of applications by adding a simple term to the loss that can control the similarity of representations under distortions. Tied-Augment can improve state-of-the-art methods from data augmentation (e.g. RandAugment, mixup), optimization (e.g. SAM), and semi-supervised learning (e.g. FixMatch). For example, Tied-RandAugment can outperform RandAugment by 2.0% on ImageNet. Notably, using Tied-Augment, data augmentation can be made to improve generalization even when training for a few epochs and when fine-tuning. We open source our code at [https://github.com/ekurtulus/tied-augment/tree/main](https://github.com/ekurtulus/tied-augment/tree/main)
## 1 Introduction
Data augmentation is an integral part of training deep neural networks to improve their performance by modulating the diversity and affinity of data (Gontijo-Lopes et al., 2020). Although data augmentation offers significant benefits (Simard et al., 2003; Krizhevsky et al., 2017; Shorten & Khoshgoftaar, 2019; Szegedy et al., 2015), as the complexity of the augmentation increases, so does the minimum number of epochs required for its effectiveness (Cubuk et al., 2019). As neural networks and datasets get larger, machine learning models get trained for fewer epochs (for example, Dosovitskiy et al. (2020) pretrained for 7 epochs), typically due to computational limitations. In such cases, conventional data augmentation methods lose their effectiveness. Additionally, data augmentation is not as effective when finetuning pretrained models as it is when training from scratch.
In this work, we present a general framework that mitigates these problems and is applicable to a range of problems from supervised training to semi-supervised learning by amplifying the effectiveness of data augmentation through feature similarity modulation. Our framework, Tied-Augment, makes forward passes on two augmented views of the data with tied (shared) weights. In addition to the classification loss, we add a similarity term to enforce invariance between the features of the augmented views. We find that our framework can be used to improve the effectiveness of both simple flips-and-crops (Crop-Flip) and aggressive augmentations even for few-epoch training. As the effect of data augmentation is amplified, the sample efficiency of the data increases. Therefore, our framework works well even with small amounts of data, as shown by our experiments on CIFAR-4K (4k samples from CIFAR-10), Oxford-Flowers102, and Oxford-IIIT Pets.
Despite the simplicity of our framework, Tied-Augment empowers augmentation methods such as Crop-Flip and RandAugment (Cubuk et al., 2020) to improve generalization even when trained for a few epochs, which we demonstrate for a diverse set of datasets. For longer training, Tied-Augment leads to significant improvements over already-strong baselines such as RandAugment and mixup (Zhang et al., 2017). For example, Tied-RandAugment achieves a 2% improvement over RandAugment when training ResNet-50 for 360 epochs on ImageNet, without any architectural modifications or additional regularization.
Our contributions can be summarized as follows:
* We show that adding a simple loss term to modulate feature similarity can significantly improve the effectiveness of data augmentation, which we show for a diverse set of data augmentations such as Crop-Flip, RandAugment, and mixup.
* Unlike conventional methods of data augmentation, with our framework, data augmentation can improve performance even when training for only a single epoch for finetuning pretrained networks or training from scratch on a wide range of datasets with different architectures.
* We compare Tied-Augment to multi-stage self-supervised learning methods (first pretraining, then finetuning on ImageNet). Our proposed framework is designed to be as straightforward as traditional data augmentation techniques, while avoiding the need for additional components such as a memory bank, large batch sizes, contrastive data instances, extended training periods, or large model sizes. Despite this simplicity, Tied-Augment can outperform more complex self-supervised learning methods on ImageNet validation accuracy.
## 2 Background / Related Work
### Data Augmentation
Data augmentation has been a critical component of recent advances in deep vision models (He et al., 2022; Bai et al., 2022; Liu et al., 2021). We can divide data augmentation works into two categories: individual operations and optimal combinations of individual operations. In general, data augmentation operations are performed to expand the distribution of the input space and improve performance.
Random cropping and horizontal flips are widely used operations in image processing problems. This set of operations is usually extended by color operations (Szegedy et al., 2016, 2017). mixup (Zhang et al., 2017) is a method that uses a convex sum of images and their labels. This operation provides better generalization and robustness even in the presence of corrupted labels. Other operations include Cutout (DeVries and Taylor, 2017), a method that randomly masks out square regions within the image; Patch Gaussian (Lopes et al., 2019), an operation that combines Cutout with the addition of Gaussian noise by randomly adding noise to selected square regions; a cropping strategy for object detection (Liu et al., 2016) that generates smaller training samples by taking crops from an upscaled version of the original image; and Copy-Paste (Ghiasi et al., 2021), an augmentation method that inserts random objects onto the selected training sample.
### Self-supervised Learning
Self-supervised learning is a form of representation learning that usually makes use of pretext tasks to learn general representations (Ericsson et al., 2022). Generally, self-supervised learning methods follow a two-step paradigm. They first pretrain the network on a large dataset, then use it for fine-tuning on downstream tasks.
Clustering is the paradigm of mapping non-linear projections of augmented views onto a unit sphere of K classes (Bautista et al., 2016). This paradigm is notably widespread in image understanding problems (Caron et al., 2018; Asano et al., 2019; Caron et al., 2019; Gidaris et al., 2020). SwAV (Caron et al., 2020) is particularly noteworthy in this set of works. They cluster the data by enforcing consistency between the assigned clusters of the augmented views. Additionally, they propose the _multi-crop_ strategy, a random cropping strategy that uses not only two standard-resolution crops but also N low-resolution crops to take features of varying resolutions into account.
Contrastive instance discrimination learns representations by pulling the features of positive instances, meaning augmented views of the same image or images with the same class, closer together and pushing the features of negative instances away (Hadsell et al., 2006). Currently, this is one of the most widely used paradigms.
MoCo (He et al., 2020) maintains a dictionary of encodings and views the problem as query matching. SimSiam (Chen and He, 2021) proposes to encode two augmented views of the same image, one with an MLP (multi-layer perceptron) head, and increase feature similarity. BYOL (Grill et al., 2020) follows the same approach, but uses an online network together with a target network that tracks it via an exponential moving average. SimCLR (Chen et al., 2020) uses a network with an MLP head to encode two augmented views and maximizes similarity through contrastive loss (Hadsell et al., 2006). NNCLR (Dwibedi et al., 2021) improves on this approach by using clustering to maximize the number of correct negative instances. SupCon (Khosla et al., 2020) adapts this paradigm to supervised learning by following SimCLR and using contrastive loss, but selecting the positive and negative instances using the labels. SupCon showed that augmentation methods such as RandAugment with a supervised-contrastive loss can outperform the same data augmentation methods with a cross-entropy loss.
## 3 Tied-Augment
The Tied-Augment framework combines supervised and representation learning in a simple way. We propose to enforce pairwise feature similarity between given augmented views of the same image while using a supervised loss signal. As shown in Figure 2, our framework consists of three components:
* **Two stochastic data augmentation modules** (can be identical) produce two augmented views of the same image. These transformations can be chosen arbitrarily as long as they improve the performance of the baseline supervised model. However, in this work, we use the same augmentation for both branches for simplicity. Given two augmentations, we name the case after the more complex augmentation. For example, if RandAugment is used with Crop-Flip on the other branch, we name the case Tied-RandAugment. In Section 4 we provide a thorough analysis of the effects of the chosen data augmentation modules.
* **A neural network** generates features (pre-logits) and logits for a given image. There are no architectural constraints, as our framework is based on the pre-logit feature vector, which is used in all classification networks.
* **Pairwise feature similarity and supervised loss functions** enforce pairwise feature similarity/dissimilarity and the supervised loss signal, respectively. In this work, we use the L2 loss as the pairwise feature similarity function (we ablate this decision in Section 5) and, for simplicity, cross-entropy loss as the supervised loss. The contribution of the feature-similarity term to the loss is controlled by the hyperparameter Tied-weight.
The training of Tied-Augment works as follows. In each training iteration, we sample a batch of images of size \(N\), and generate two augmented views, resulting in \(2N\) images. For each image pair, we compute the feature similarity with L2 loss and for each image we calculate cross entropy loss. For given input \(x\), logits \(f(x)\), labels \(y\), features of the first augmented views \(v_{1}=v_{1}(x)\), features of the second augmented views \(v_{2}=v_{2}(x)\), supervised loss \(\ell\), and the feature similarity loss weight \(w\) the loss function of Tied-Augment is:
\[\mathcal{L}_{\text{Tied-Aug}}=\sum_{i}\ell(f(v_{i}(x)),y)+w\|v_{1}(x)-v_{2}(x) \|^{2} \tag{1}\]
In Algorithm 1, we provide an overview of our framework. In general, the views correspond directly to the feature representations \(v_{i}=h_{i}=h(\text{aug}_{i}(x))\) where \(h\) is the function that produces the feature representation and \(\text{aug}_{i}(\cdot)\) is the \(i\)-th augmentation function. However, we will also examine cases that require more elaborate views such as Tied-mixup.
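To make the objective in Eq. (1) concrete, the following is a minimal NumPy sketch of the Tied-Augment loss computation. The tiny linear-ReLU feature extractor, the Gaussian-noise stand-ins for the two augmentation branches, and the batch reduction are illustrative assumptions rather than the actual architecture or training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_feat, n_classes = 8, 32, 16, 10
W_feat = rng.normal(size=(d, d_feat)) / np.sqrt(d)        # shared (tied) weights
W_head = rng.normal(size=(d_feat, n_classes)) / np.sqrt(d_feat)

def features(x):                      # pre-logit representation h(x)
    return np.maximum(x @ W_feat, 0.0)

def logits(h):
    return h @ W_head

def cross_entropy(z, y):
    z = z - z.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def tied_augment_loss(x, y, aug1, aug2, tied_weight=1.0):
    h1, h2 = features(aug1(x)), features(aug2(x))         # two views, same weights
    sup = cross_entropy(logits(h1), y) + cross_entropy(logits(h2), y)
    sim = np.mean(np.sum((h1 - h2) ** 2, axis=1))         # pairwise L2 feature term
    return sup + tied_weight * sim

x = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
weak   = lambda x: x + 0.01 * rng.normal(size=x.shape)    # stand-in for Crop-Flip
strong = lambda x: x + 0.10 * rng.normal(size=x.shape)    # stand-in for RandAugment
print(tied_augment_loss(x, y, weak, strong, tied_weight=1.0))
```

Note that both views pass through the same weights; only the inputs differ, and the feature term is simply added to the usual supervised objective with the Tied-weight \(w\).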
### Tied-FixMatch
In this section, we apply the Tied-Augment framework to FixMatch (Sohn et al., 2020) as a case study to demonstrate the easy adaptability of our framework. We refer to this version as Tied-FixMatch. FixMatch is a semi-supervised learning algorithm that combines consistency regularization and pseudo-labeling. For the labeled portion of the dataset, FixMatch uses standard cross-entropy loss. For the unlabeled images, FixMatch generates two augmented views of the same image using a weak (Crop-Flip) and a strong (RandAugment) transformation. Then, the model's predictions for the weakly augmented image are used as pseudo-labels
Figure 1: Python code for Tied-Augment based on NumPy.
Figure 2: Tied-Augment framework.
for the strongly-augmented image. However, predictions whose confidence is below a threshold are masked out and not used to calculate the unsupervised loss. FixMatch uses a standard cross-entropy loss denoted \(\ell_{s}\) on the labeled images.
Considering that FixMatch already has a two-branch strategy for learning from unlabeled images, we can introduce Tied-Augment without any additional computational cost. In Tied-FixMatch, we change the objective to not only maximize consistency and minimize the pseudo-labeling loss, but also to minimize the pairwise feature distance between augmented views of the same unlabeled images. In doing so, we also mask instances with the confidence threshold and do not apply the pairwise feature similarity loss to them. Therefore, given features of the weakly-augmented unlabeled images \(h_{1}\), features of the strongly-augmented unlabeled images \(h_{2}\), and a similarity loss weight \(w\), the loss minimized by Tied-FixMatch is simply:
\[\ell_{s}+\lambda_{u}\ell_{u}+w\|h_{1}-h_{2}\|^{2} \tag{2}\]
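A minimal NumPy sketch of the unlabeled part of this objective is given below; the confidence-threshold value, the shapes, and the toy inputs are illustrative assumptions, and the supervised term \(\ell_{s}\) is omitted for brevity.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def tied_fixmatch_unlabeled_loss(logits_weak, logits_strong, h_weak, h_strong,
                                 threshold=0.95, lambda_u=1.0, tied_weight=1.0):
    p_weak = softmax(logits_weak)
    pseudo = p_weak.argmax(axis=1)                # pseudo-labels from the weak view
    mask = p_weak.max(axis=1) >= threshold        # keep only confident predictions
    if not mask.any():
        return 0.0
    log_p_strong = np.log(softmax(logits_strong) + 1e-12)
    ce = -log_p_strong[np.arange(len(pseudo)), pseudo]       # per-example pseudo-label CE
    sim = np.sum((h_weak - h_strong) ** 2, axis=1)           # per-example L2 feature term
    return lambda_u * np.mean(mask * ce) + tied_weight * np.mean(mask * sim)

# Toy usage with random logits and features; a low threshold is used here only so that
# some examples survive the mask.
rng = np.random.default_rng(0)
lw, ls = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
hw, hs = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
print(tied_fixmatch_unlabeled_loss(lw, ls, hw, hs, threshold=0.2))
```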
### Tied-mixup
Here, we consider the application of Tied-Augment to mixup (Zhang et al., 2017). mixup is a popular data augmentation technique that produces augmented examples by convex combination of pairs of training points
\[\hat{x} =\lambda x_{1}+(1-\lambda)x_{2}\] \[\hat{y} =\lambda y_{1}+(1-\lambda)y_{2}\]
where \(\lambda\sim\text{Beta}(\alpha,\alpha)\) is a mixing coefficient sampled from a Beta distribution with parameter \(\alpha\).
Unlike the previously considered augmentations, different mixup augmented views have different labels in general. Applying Tied-Augment to mixup requires defining a better correspondence between the two augmented views. We propose the following
\[\Omega_{M}(h)=w\|\lambda h(x_{1})+(1-\lambda)h(x_{2})-h(\hat{x})\|^{2}. \tag{3}\]
In order to produce features that are in the same space as the first view of mixed examples \(v_{1}=h(\hat{x})\), this approach mixes the features of the clean examples to produce the second view \(v_{2}=\lambda h(x_{1})+(1-\lambda)h(x_{2})\). In effect, this is encouraging the features of the model to be linear in-between training points.
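The following NumPy sketch illustrates the construction of the two views and the regularizer of Eq. (3); the toy feature extractor and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16)) / np.sqrt(32)
h = lambda x: np.maximum(x @ W, 0.0)           # stand-in feature extractor

def tied_mixup_regularizer(x1, x2, alpha=0.2, w=1.0):
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2           # standard mixup input
    v1 = h(x_mix)                               # features of the mixed examples
    v2 = lam * h(x1) + (1 - lam) * h(x2)        # mixture of the clean-example features
    return w * np.mean(np.sum((v1 - v2) ** 2, axis=1)), lam

x1, x2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
reg, lam = tied_mixup_regularizer(x1, x2)
print(lam, reg)   # encourages features to be linear in-between training points
```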
### Tied-SAM (Sharpness Aware Minimization)
Sharpness-Aware Minimization (SAM) (Foret et al., 2020) is a widely-used training strategy that consists of two steps. In the first step, SAM applies an adversarial perturbation that places the weights at a locally high point in the loss landscape. The gradient step taken from there in the second step then moves the weights toward a wider minimum. Tied-SAM augments this algorithm by boosting the adversarial move: in the first step, it pushes the features of the augmented views apart (by negating the Tied-weight). In doing so, it enables SAM to find a better adversarial location in the loss landscape. For the second step, we apply standard Tied-Augment to move to an even wider minimum.
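A rough PyTorch-style sketch of this two-step update is given below; `tied_loss` is an assumed helper implementing the Tied-Augment objective of Eq. (1) for a given similarity weight, and `rho` is SAM's neighborhood size. This is a sketch of the idea rather than our exact implementation.

```python
import torch

def tied_sam_step(model, optimizer, tied_loss, x1, x2, y, rho=0.05, w=1.0):
    # Step 1: ascent with pushed-apart features (negated tied weight).
    optimizer.zero_grad()
    tied_loss(model, x1, x2, y, -w).backward()
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = []
        for p in model.parameters():
            e = rho * p.grad / (norm + 1e-12) if p.grad is not None else None
            if e is not None:
                p.add_(e)                      # move to the adversarial point
            eps.append(e)
    # Step 2: descent with the standard Tied-Augment loss at the perturbed weights.
    optimizer.zero_grad()
    loss = tied_loss(model, x1, x2, y, +w)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                      # restore weights before the update
    optimizer.step()
    return loss.detach()
```

As in plain SAM, two backward passes are required; only the loss used at each of the two steps changes.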
### Understanding Tied-Augment
We can gain some insight into Tied-Augment by considering its application to Gaussian input noise augmentation. The additive regularization for Tied-GaussianNoise is given by
\[\Omega_{G}(f)=wE_{\epsilon}[\|h(\mathbf{x})-h(\mathbf{x}+\epsilon)\|^{2}]\]
where \(h\) produces the features of the network, \(\mathbf{x}\in\mathbb{R}^{n}\), and \(\epsilon\sim\mathcal{N}(0,\sigma)^{n}\). Consider the approximation of this term using the first-order Taylor expansion of \(h\) at \(\mathbf{x}\), \(\Omega_{G}(h)\approx w\sigma^{2}\|\nabla h(\mathbf{x})\|_{F}^{2}\). This additive regularization is part of the well known class of Tikhonov regularizers (Tikhonov and Arsenin, 1977; Bishop, 1995) that include weight decay. It encourages the feature mapping function to become more invariant to small corruptions in the input, which can be beneficial for generalization. For a more detailed analysis of Tied-Augment, please refer to Appendix 8.6.
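The following small NumPy check illustrates this first-order relation on a toy feature map: for small \(\sigma\), a Monte-Carlo estimate of \(E_{\epsilon}[\|h(\mathbf{x})-h(\mathbf{x}+\epsilon)\|^{2}]\) agrees with \(\sigma^{2}\|\nabla h(\mathbf{x})\|_{F}^{2}\). The toy map and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, sigma = 8, 5, 1e-3
W = rng.normal(size=(d_in, d_out))
h = lambda x: np.tanh(x @ W)                      # smooth toy feature map
x = rng.normal(size=d_in)

# Monte-Carlo estimate of E_eps ||h(x) - h(x + eps)||^2.
eps = sigma * rng.normal(size=(200000, d_in))
lhs = np.mean(np.sum((h(x) - h(x + eps)) ** 2, axis=1))

# Frobenius norm of the Jacobian at x: for tanh(xW), J[j, i] = (1 - tanh^2(z_j)) W[i, j].
J = W.T * (1.0 - np.tanh(x @ W) ** 2)[:, None]    # shape (d_out, d_in)
rhs = sigma ** 2 * np.sum(J ** 2)

print(lhs, rhs)   # the two agree up to higher-order terms in sigma and Monte-Carlo noise
```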
## 4 Experiments
To show the effectiveness of Tied-Augment, we experiment with training from-scratch on CIFAR-10, CIFAR-100, and ImageNet. We extend these tests with finetuning and few-epoch / low-data regimes to simulate more realistic scenarios, where the amount of domain-specific data or available compute is limited. Lastly, we show that Tied-Augment significantly improves the performance of state-of-the-art methods (e.g. mixup and SAM) and can be used for semi-supervised learning (e.g. FixMatch). For all models that use RandAugment, we show its configuration as "RandAugment(N=number of layers, M=magnitude, P=probability)". If probability is not given, it is set to the default of 1.0.
### CIFAR-10 and CIFAR-100, CIFAR-4K
CIFAR-10 and CIFAR-100 are widely studied datasets, and CIFAR-4K is a benchmark intended to simulate the low-data regime. All baselines and Tied-Augment models include random pad-and-crops and flips (CF). RandAugment baselines and Tied-RandAugment models also include Cutout (DeVries and Taylor, 2017). For RandAugment experiments, we copy the reported optimal number of layers (N) and magnitude (M) for both augmentation branches, decoupling the hyperparameter search space from augmentation selection. We did not find an additional improvement from tuning the RandAugment on the second branch (e.g. RandAugment(N=2, M=14) for one branch, and RandAugment(N=2,
M=19) for the second). We also experimented with Stacked-RandAugment (Tian et al., 2020) and SimAugment (Chen et al., 2020) on the second branch but saw no performance improvement over standard RandAugment. On both CIFAR-10 and CIFAR-100, we use the same data augmentation pairs for Wide-ResNet-28-10 and Wide-ResNet-28-2. All models are trained using the hyperparameters from RandAugment (Cubuk et al., 2020).
Additionally, we measure the efficacy of Tied-Augment on CIFAR-4K. We randomly sample 400 images from each class for training and leave the test set as is. We use the same hyperparameters as Cubuk et al. (2020) including training for 500 epochs. We use the same optimal setting of RandAugment(N=2, M=14) on both branches. As shown in Table 1, Tied-Augment improves both Crop-Flip and RandAugment by a significant amount, on all CIFAR datasets considered. We report all the hyperparameters in Appendix 8.2.
### Few-epoch training
Previous work has shown that data augmentation is only able to improve generalization when the model is trained for more than a certain number of epochs. Usually, more complex data augmentation protocols require more epochs. For example, Cubuk et al. (2019) reported that more than 90 epochs was required to be able to search and apply AutoAugment policies. Similarly, Lopes et al. (2019) reported that none of the tested augmentation transformations was helpful when trained for only 1 epoch, even for simple augmentations such as flips or crops. To test how much of this problem can be mitigated by Tied-Augment, we evaluate our method on CIFAR-10 and CIFAR-100 for {1, 2, 5, 10} epochs. For runs with epoch={1, 2, 5}, the learning rate and weight-decay were tuned to maximize the validation accuracy of the identity baseline (since in this regime identity baseline outperforms the Crop-Flip baseline). The learning rate and weight-decay hyperparameters for the 10 epoch models were tuned to maximize the validation set performance of the Crop-Flip baseline.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline & CF & Tied-CF & RA & Tied-RA \\ \hline
**CIFAR-10** & & & & & \\ WRN-28-2 & 94.9 & **95.5** & 95.8 & **96.9** \\ WRN-28-10 & 96.1 & **96.5** & 97.3 & **98.1** \\ \hline
**CIFAR-100** & & & & \\ WRN-28-2 & 75.4 & **76.9** & 79.3 & **80.4** \\ WRN-28-10 & 81.2 & **81.6** & 83.3 & **85.0** \\ \hline
**CIFAR-4K** & & & & \\ WRN28-2 & 82.0 & **82.5** & 85.3 & **87.8** \\ WRN28-10 & 83.5 & **84.5** & 86.8 & **90.2** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Test accuracy (%) on CIFAR-10, CIFAR-100, CIFAR-4K, and ImageNet datasets.** We compare Tied-Augment to Crop-Flip (CF) and RandAugment (RA) baselines. Reported results are the average of 5 independent runs. Standard deviation of the results for each experiment is smaller than or equal to 0.1%.
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline \#epochs & Identity & CF & Tied-CF & RA & Tied-RA \\ \hline
**Cars** & & & & \\
2 & 69.0 & 59.9 & **69.5** & 58.7 & **69.4** \\
5 & 80.9 & 81.6 & **84.7** & 81.4 & **84.6** \\
10 & 82.0 & 86.7 & **88.3** & 87.1 & **89.2** \\
25 & 82.0 & 88.9 & **89.4** & 90.4 & **91.5** \\
50 & 82.3 & 89.6 & **90.0** & 91.5 & **92.2** \\ \hline
**Flowers** & & & & \\
2 & 56.6 & 47.1 & **56.8** & 47.2 & **56.5** \\
5 & 88.3 & 86.4 & **88.7** & 84.7 & **88.7** \\
10 & 90.7 & 91.6 & **93.3** & 92.1 & **93.5** \\
25 & 91.8 & 93.9 & **94.1** & 93.5 & **94.3** \\
50 & 92.2 & 93.6 & **94.5** & 94.1 & **95.1** \\ \hline
**Pets** & & & & \\
2 & 91.4 & 91.4 & **92.1** & 91.4 & **92.0** \\
5 & 92.4 & 92.8 & **93.1** & 92.1 & **93.0** \\
10 & 92.5 & 93.1 & **93.3** & 92.9 & **93.2** \\
25 & 92.9 & 93.4 & **93.7** & 93.4 & **93.6** \\
50 & 92.8 & 93.5 & **93.8** & 93.5 & **93.7** \\ \hline
**Aircraft** & & & & \\
2 & 44.2 & 34.1 & **41.8** & 31.6 & **40.8** \\
5 & 58.2 & 51.1 & **58.3** & 50.6 & **58.1** \\
10 & 59.3 & 60.6 & **61.9** & 60.7 & **61.5** \\
25 & 61.2 & 68.8 & **69.9** & 72.3 & **74.6** \\
50 & 62.3 & 71.6 & **72.3** & 74.2 & **76.1** \\ \hline
**CIFAR-10** & & & & \\
2 & 95.7 & 95.2 & **95.9** & 95.1 & **95.9** \\
5 & 96.4 & 96.3 & **96.8** & 96.3 & **96.8** \\
10 & 96.5 & 96.8 & **97.1** & 96.8 & **97.2** \\
25 & 96.6 & 97.2 & **97.4** & 97.3 & **97.6** \\
50 & 96.6 & 97.2 & **97.4** & 97.6 & **97.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Finetuning experiments on Stanford-Cars, Oxford-Flowers102 (Flowers), Oxford-IIIT Pets (Pets), FGVC Aircraft (Aircraft), and CIFAR-10 datasets.** Reported results for the 2, 5 and 10 epoch experiments are the average of 10 independent runs, while the rest are the average of 5 independent runs. Baseline results are the maximum of standard training and the double augmentation branch with no similarity loss. The pretrained model is a standard ResNet-50; Tied-Augment is only used for finetuning. Best CF (CF vs. Tied-CF) and RA (RA vs. Tied-RA) results are bolded. The standard deviations of the accuracies are smaller than or equal to 0.5%, 0.4%, 0.2%, 0.1%, and 0.1% for 2, 5, 10, 25, and 50 epochs respectively.
Table 2: **Test accuracy (%) for few-epoch training on CIFAR datasets. Reported results are the average of 10 independent runs. For 1, 2, 5, 10 epochs, standard deviations are below 0.5, 0.3, 0.2, and 0.1 respectively.**
To ensure fairness by eliminating the possibility of doubled epochs introduced by the two forward passes of our framework, in all reported tasks the baseline performances are the maximum of standard training (no similarity loss and a single augmentation branch) and the double-augmentation-branch variant (with variable augmentation methods) with no similarity loss. Unlike our 200-epoch CIFAR experiments, we do not use the same augmentation for both branches but allow both the baseline and the Tied-Augment model to combine any one of the following augmentation methods: RandAugment(N=1, M=2), RandAugment(N=2, M=14), Crop-Flip, and identity. If one of the branches uses RandAugment, for instance RandAugment for one branch and identity for the other, then it is only compared to RandAugment runs.
In Table 2, we show that Tied-Augment can outperform the identity transformation in epoch regimes as small as 2. Unconventionally, Tied-Augment is capable of pushing RandAugment to the level of Crop-Flip and identity, and even outperforming them, in the {2, 5, 10} epoch regimes. For all the epoch regimes, Tied-Augment outperforms its baseline significantly, by up to 6.7%.
In addition to training networks from scratch for a limited number of epochs, finetuning for a few epochs is also an important problem given the ever-growing trend toward deeper neural networks. Therefore, we test our framework on finetuning tasks, where data augmentation is considerably less effective than in from-scratch training. For this purpose, we train an ImageNet-pretrained ResNet-50 (He et al., 2016) model on Stanford-Cars (Krause et al., 2013), Oxford Flowers (Nilsback and Zisserman, 2008), Oxford Pets (Parkhi et al., 2012), FGVC Aircraft (Maji et al., 2013), and CIFAR-10 (Krizhevsky et al., 2009). Table 3 compares the performance of our framework to the baseline models. It is evident that, as in our from-scratch experiments, Tied-Augment is able to outperform identity not only with a weak augmentation like Crop-Flip but also with RandAugment. On all the finetuning datasets we experimented with, Tied-Augment consistently and significantly improves the baseline, by up to 10.7%.
### Image classification on ImageNet
We train ResNet-50 and ResNet-200 architectures (He et al., 2016) on the ImageNet dataset (Deng et al., 2009) with RandAugment. Previous work had shown that aggressive data augmentation strategies such as AutoAugment or RandAugment do not improve generalization if trained only for 90 epochs. To see if Tied-Augment can fix this issue, we train with Tied-RandAugment on ResNet-50 for 90 epochs. To see the full benefit of aggressive data augmentation, we also train Tied-RandAugment models for 360 epochs. To see the impact of our approach on simple augmentations, we train the standard ResNet-50 with the standard Crop-Flip baseline and our Tied-CropFlip. Finally, to test the impact of Tied-Augment on a larger model, we train ResNet-200 for 180 epochs. We train ResNet-200 for fewer epochs to compensate for its larger compute requirement. We do not observe an improvement on the baseline ResNet-200 models when training for longer. All ResNet models use the standard training hyperparameters for ResNet, listed in Appendix Section 8.1.
In Table 4, we find that Tied-RandAugment is able to improve top-1 accuracy by almost 2% when trained for 90 epochs, and significantly reduces the number of epochs required for RandAugment to be effective, whereas regular RandAugment requires more than 90 epochs to improve generalization. When trained for 360 epochs, Tied-RandAugment still improves on RandAugment by 2%, totalling a 3.3% improvement over simple Crop-Flip. We also observe that Tied-CropFlip outperforms regular Crop-Flip in every setting.
To evaluate Tied-Augment on a different data augmentation method, we trained ResNet-50 networks with the same setup using mixup. We cross-validate the mixup coefficient \(\alpha\) within the values \(\{0.2,0.3,0.4\}\), and the similarity loss weight within \(\{1,50,100\}\). Our mixup baseline achieves a top-1 accuracy of 77.9%. When we apply our simple Tied-Augment framework to mixup, Tied-mixup achieves 78.8%, an almost 1% improvement over an already strong baseline.
Since the Tied-Augment loss has a supervised and an unsupervised term, we compare Tied-Augment to relevant self-supervised methods that utilize all the training labels of ImageNet in addition to self-supervised training on ImageNet samples. We find that even though Tied-RandAugment is trained for fewer epochs without the need for multiple stages of training, Tied-RandAugment outperforms other methods for both ResNet-50 and ResNet-200 (Table 5).
### Transferability of Tied-Augment Features
We finetune a Tied-ResNet50 on downstream datasets to measure the transferability of its features and compare it to BYOL (Grill et al., 2020), SimCLR-v2 (Chen et al., 2020), and SwAV (Caron et al., 2020). We follow the SSL-Transfer (Ericsson et al., 2021) framework for our finetuning experiments. Namely, we finetune for 5000 steps using a batch
\begin{table}
\begin{tabular}{l c|c c|c c}
**ImageNet** & \#epochs & CF & Tied-CF & RA & Tied-RA \\ \hline ResNet-50 & 90 & 76.3 & **77.0** & 76.3 & **78.2** \\ ResNet-50 & 360 & 76.3 & **76.9** & 77.6 & **79.6** \\ \hline ResNet-200 & 180 & 78.5 & **79.7** & 80.0 & **81.8** \\ \hline \end{tabular}
\end{table}
Table 4: **ImageNet results.** CF and RA refer to Crop-Flip and RandAugment, respectively. ResNet-200 baselines do not improve when trained for more than 180 epochs. Standard deviations for the reported results are smaller than or equal to 0.2%.
size of 64, SGD with Nesterov momentum (Sutskever et al., 2013), doing a grid search over learning rate and weight decay. We choose the learning rate from 4 logarithmically spaced values between 0.0001 and 0.1. Weight decay is chosen from 4 logarithmically spaced values between \(10^{-6}\) and \(10^{-3}\) as well as 0.
In Table 6, we compare the performance of Tied-Augment to self-supervised models and the supervised baseline. Our model outperforms SwAV (Caron et al., 2020) by 0.8% and the supervised baseline by 1.6%. This shows that the features learned by Tied-Augment are more transferable than those of its self-supervised and supervised counterparts. It is worth noting that a Tied-RandAugment model finetuned using Tied-CropFlip significantly improves an already strong performance (by 0.9%).
### Tied-FixMatch
To back up our claim that we offer a framework that can be used for a wide range of problems, we apply Tied-Augment to a semi-supervised learning algorithm: FixMatch (Sohn et al., 2020). We compare the performance of our framework to the baseline exactly following the hyperparameters of the original work, without changing the augmentation pair of the unsupervised branch or adding the similarity term to the supervised branch. We use Wide-ResNet-28-2 and Wide-ResNet-28-8 configurations for CIFAR-10 and CIFAR-100 respectively. For the unsupervised branch, we use Crop-Flip for the weak branch and RandAugment(N=2, M=10, probability=0.5) for the strong branch, while the supervised branch uses Crop-Flip. For our CIFAR-10 and CIFAR-100 experiments, we use 4000 and 10000 labeled examples, respectively, preserving the class balance. In Table 7, it is shown that Tied-FixMatch not only outperforms the baseline FixMatch but also outperforms its supervised counterpart which uses all of the 50000 labeled images. All hyperparameters are listed in Appendix 8.3.
### Composability of Tied-Augment
It is crucial for a framework to be composable with other methods while retaining their performance improvements. To show that Tied-Augment has this property, we experiment with Sharpness-Aware Minimization (Foret et al., 2020). For SAM experiments, we train a Wide-ResNet-28-10 following the hyperparameters of the original work for 200 epochs which are listed in Appendix 8.4. We replicate their results with RandAugment(N=2, M=14). In Table 8, we show that Tied-SAM outperforms the baseline SAM.
## 5 Ablations and Analysis
In this section, we analyze the components of the Tied-Augment framework and show their effectiveness. Additionally, we ablate our design choices.
### Deconstructing Tied-Augment
In Table 9, we deconstruct Tied-Augment framework and show the improvement from each component. For each task considered, we create the highest-performing Tied-Augment method by first starting with the simplest baseline (standard crop-flip). Then, we apply RandAugment. Even though RandAugment provides noteworthy performance benefits (e.g. 1.3% on ImageNet), it is not effective and even harmful for finetuning and few-epoch training. Since Tied-Augment requires two differently augmented views of a sample, some of its improvement comes from "augmenting the batch" (Hoffer et al., 2020; Fort et al., 2021) (row (3)). We find additional benefits from diversifying the augmentation policies used for the different views (row (4)). Finally, the largest improvement comes from "tying" the representations coming from the two branches, which gives us Tied-RandAugment (row (5)), which adds an additional 1.1%, 0.4%, and 15.2% accuracy on ImageNet, CIFAR-10, and Stanford-Cars (epochs), respectively, in addition to our improved diversely augmented batch approach.
We find that for few-epoch from-scratch and finetuning experiments, generally 2 or 5 epochs, a supervised signal from only one branch shows better performance. In other cases, however, we found that applying the cross-entropy loss to both batches \(b_{1}\) and \(b_{2}\) improves the results more.
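A minimal PyTorch-style sketch of the resulting objective is given below. It assumes a model that returns both features and logits, and uses an L2 tying term; the exact loss weighting and branch configuration for each experiment follow the settings described above.

```python
import torch.nn.functional as F

def tied_augment_loss(model, view1, view2, labels, tied_weight, both_branches=True):
    """Cross-entropy on one or both augmented views plus a feature-tying term (L2 here)."""
    feat1, logits1 = model(view1)  # assumed interface: model returns (features, logits)
    feat2, logits2 = model(view2)
    ce = F.cross_entropy(logits1, labels)
    if both_branches:  # supervise both views (helpful for longer training runs)
        ce = ce + F.cross_entropy(logits2, labels)
    tie = F.mse_loss(feat1, feat2)  # encourage similar features for the two views
    return ce + tied_weight * tie
```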
We then discuss the computational cost of Tied-Augment below. Tied-Augment requires a single forward pass and a single backward pass. If there is no I/O bottleneck and a high-end accelerator is used (e.g., an Nvidia A100), the runtime of a forward pass on \(b_{1}\) is roughly equal to that of a forward pass on \(b_{1}\) and \(b_{2}\). However, from the perspective of the number of computational operations, the required computation is double
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline & Epochs & Multi-stage & Top-1 \\ \hline
**ResNet-50** & & & \\ SimCLR & 1000 & ✓ & 76.0 \\ SimCLR v2 & 800 & ✓ & 76.3 \\ BYOL & 1000 & ✓ & 77.7 \\ SupCon & 350 & ✓ & 78.7 \\ Tied-RandAugment & 360 & ✗ & **79.6** \\ \hline
**ResNet-200** & & & \\ SupCon & 700 & ✓ & 81.4 \\ Tied-RandAugment & 360 & ✗ & **81.8** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Comparison of our method to self-supervised models.** Multi-stage denotes the need for separate pretraining and finetuning stages. Note that Tied-Augment methods do not require a pretraining stage. Performances of the self-supervised models are their 100% ImageNet finetuned results. Results reported are the average of 5 independent runs. The standard deviations are smaller than or equal to 0.2% for all reported results.
the forward pass of standard training. The cost of the backward pass on \(b_{1}\) and \(b_{2}\) is approximately the same as a backward pass on \(b_{1}\) on modern accelerators. Therefore, Tied-Augment only increases the computational cost by the additional forward pass; however, it is still computationally cheaper than double-step methods like SAM because it does not require two separate backward passes. For example, instead of a 100% increase in computational cost (as would be the case for SAM), we empirically observe an increase of roughly 30% on an Nvidia A100 on CIFAR-10.
### Similarity Function
One of the critical components of Tied-Augment is the similarity term. In Table 10, we report the results for the L1, L2, and cosine similarity functions. It is worth noting that in the reported results the weight of the cosine term is negative, unlike L1 and L2, because to maximize feature similarity the L1 and L2 distances must be minimized while the cosine similarity between the representations must be maximized. It is a known phenomenon that data augmentation can improve the invariance of model outputs to distortions (Gontijo-Lopes et al., 2020). Therefore, it is intuitive to also encourage representation invariance. Interestingly, we find that the opposite can also be true: enforcing feature dissimilarity can also improve performance on the highly overparametrized CIFAR datasets considered, although this is not the case for ImageNet with the L2 similarity function. For simplicity (halving the search space for the tied-weight) and maximum performance improvement on all considered datasets, we choose to only consider increasing invariance. It is worth noting that negative tied-weights for L1 and L2 (minimizing feature similarity) on the CIFAR datasets also outperform the baseline (tied-weight=0). For cosine similarity, a positive tied-weight can outperform the baseline for all datasets considered. We provide an analysis of the stability of the tied-weight in Section 5.4.
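A sketch of the three similarity terms is shown below. Note that in this sketch the cosine term is negated so that a positive weight always encourages invariance, whereas the reported cosine results instead use a negative tied-weight with the raw cosine similarity.

```python
import torch.nn.functional as F

def similarity_term(f1, f2, kind="l2"):
    """Similarity term between tied features; minimized (with positive weight) to increase invariance."""
    if kind == "l1":
        return (f1 - f2).abs().mean()
    if kind == "l2":
        return F.mse_loss(f1, f2)
    if kind == "cosine":
        # Negated so that minimizing this term maximizes the cosine similarity.
        return -F.cosine_similarity(f1, f2, dim=1).mean()
    raise ValueError(f"unknown similarity function: {kind}")
```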
### General Design Choices
In Tied-Augment framework, there are many design choices that are of interest. For example, given that we double the batch size, there are two ways of doing the forward pass: separate forward passes on the batches or a single forward pass on both of the batches concatenated. These two approaches are not functionally equivalent for networks with BatchNorm (BN) (Ioffe and Szegedy, 2015) due to the running statistics. We find that the performance difference between these cases is generally equal to or less than 0.1%. We consistently report the results of double separate forward passes.
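The two forward-pass options can be sketched as follows; with BatchNorm, the concatenated pass and the two separate passes update the running statistics differently, which is the source of the small (generally at most 0.1%) differences noted above. The helper below is only illustrative.

```python
import torch

def forward_two_views(model, b1, b2, concatenate=False):
    """Forward both augmented views either jointly or with two separate passes."""
    if concatenate:
        out = model(torch.cat([b1, b2], dim=0))  # BatchNorm sees both views in one batch
        return out[: b1.size(0)], out[b1.size(0):]
    return model(b1), model(b2)  # BatchNorm statistics updated once per view
```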
Another design choice to consider is the use of BN layers. For our experiments where we use two different RandAugment configurations (one weak, one stronger), we evaluated Split BatchNorms (Xie et al., 2020; Merchant et al., 2020) but did not find significant performance improvements. Thus we only report experiments that use standard BN layers.
Being invariant to different crops is a desirable prop
\begin{table}
\begin{tabular}{c|c c c c c c c c c c|c} \hline \hline & Aircraft & Cal-101 & Cars & CIFAR-10 & CIFAR-100 & DTD & Flowers & Food & Pets & SUN397 & Avg. \\ \hline SimCLR v2 & 78.7 & 82.9 & 79.8 & 96.2 & 79.1 & 70.2 & 94.3 & 82.2 & 83.2 & 61.1 & 80.8 \\ BYOL & 79.5 & 89.4 & 84.6 & 97.0 & 84.0 & 73.6 & 94.5 & 85.5 & 89.6 & 64.0 & 84.2 \\ SwAV & 83.1 & 89.9 & 86.8 & 96.8 & 84.4 & 75.2 & 95.5 & **87.2** & 89.1 & **66.2** & 85.4 \\ Supervised & 83.5 & 91.0 & 82.6 & 96.4 & 82.9 & 73.0 & 95.5 & 84.6 & 92.4 & 63.6 & 84.6 \\ Tied-RA & 84.7 & 92.6 & 89.9 & 96.9 & 83.9 & 75.8 & 96.7 & 84.3 & 93.5 & 63.9 & 86.2 \\ \hline Tied-RA + Tied-CF finetune & **88.1** & **93.3** & **90.2** & **97.2** & **85.2** & **76.2** & **97.3** & 86.4 & **93.9** & 64.5 & **87.1** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Finetuning experiments on downstream datasets comparing self-supervised learning to Tied-Augment pretrained model. All reported models are ResNet50. Supervised baseline is pretrained using only RandAugment. SimCLR-v2, BYOL, SwAV, and supervised baseline are from (Ericsson et al., 2021). Tied-RA stands for Tied-RandAugment. Tied-RA + Tied-CF finetune is the case where a Tied-RA pretrained ResNet50 is finetuned using Tied-CropFlip. All models are finetuned using crop-flip data augmentation.
\begin{table}
\begin{tabular}{l|c|c|c} & FixMatch baseline & Supervised baseline & Tied-FixMatch \\ \hline \#labels & 4k & 50k & 4k \\ \hline CIFAR-10 & 95.7 \(\pm\) 0.05 & 95.8 \(\pm\) 0.02 & **96.1**\(\pm\) 0.04 \\ CIFAR-100 & 77.4 \(\pm\) 0.12 & 77.6 \(\pm\) 0.04 & **77.9**\(\pm\) 0.08 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Application of Tied-Augment framework to FixMatch.** Similarity function is applied to the features between the unsupervised branches. The reported FixMatch baseline results are from (Sohn et al., 2020), supervised baseline results are from (Cubuk et al., 2020) and include RandAugment, and our results are the average of 5 runs.
\begin{table}
\begin{tabular}{l|c|c|c} & Supervised & SAM & \multirow{2}{*}{Tied-SAM} \\ & baseline & baseline \\ \hline CIFAR-10 & 97.3 \(\pm\) 0.03 & 97.9 \(\pm\) 0.1 & **98.3**\(\pm\) 0.1 \\ \hline CIFAR-100 & 83.3 \(\pm\) 0.05 & 86.2 \(\pm\) 0.1 & **86.5**\(\pm\) 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Sharpness-Aware minimization (SAM) experiments.** Baselines are replicated. Supervised baseline and SAM baseline both include RandAugment. The reported results are the average of 5 independent runs.
erty when targeting occlusion-invariance (Purushwalkam and Gupta, 2020). We also try using the same crop for both branches in our Tied-RandAugment experiments. This means taking a random (resized for ImageNet) crop from the image once and feeding the same crop into RandAugment on both augmentation branches. Surprisingly, this has little to no effect on performance. Therefore, for simplicity, we use different crops on the two augmentation branches for the CIFAR and finetuning experiments, and the same crop for the ImageNet experiments.
### Stability of tied-weight
In Figure 3, we present the stability of the introduced tied-weight hyperparameter. It is shown that even for a large range of values, the tied-weight is able to improve the performance of the model on the ImageNet dataset, indicating that Tied-Augment offers significant performance improvements without the need for an extensive hyperparameter search.
## 6 Conclusion
As dataset and model sizes increase, machine learning models are trained for fewer and fewer epochs. Traditionally this has made data augmentation less useful. We introduce Tied-Augment, a simple method for combining self-supervised learning and regular supervised learning to strengthen state-of-the-art methods such as mixup, SAM, FixMatch, and RandAugment by up to 2% on ImageNet. Tied-Augment can be implemented with only a few lines of additional code.
Tied-Augment can improve the effectiveness of standard data augmentation approaches such as Crop-Flip even when training for a few epochs. When training for longer, Tied-Augment achieves significant improvements over state-of-the-art augmentation methods.
Tied-Augment shows the promise of combining self-supervised approaches with regular supervised learning. An exciting future direction would be to evaluate Tied-Augment for large language model training which tends to be for a few epochs.
\begin{table}
\begin{tabular}{c|c|c} & similarity & Tied- \\ & function & Crop-Flip \\ \hline \multirow{3}{*}{CIFAR-10} & L1 & 96.3 & 97.8 \\ & Cosine & **96.5** & 98.0 \\ & L2 & **96.5** & **98.1** \\ \hline \multirow{3}{*}{CIFAR-100} & L1 & 81.3 & 84.8 \\ & Cosine & 81.5 & **85.0** \\ & L2 & **81.6** & **85.0** \\ \hline \multirow{3}{*}{ImageNet} & L1 & **76.9** & 78.7 \\ & Cosine & 76.7 & 78.8 \\ \cline{1-1} & L2 & **76.9** & **79.2** \\ \hline \end{tabular}
\end{table}
Table 10: **Ablation on the similarity function.** Tied-weights of all considered similarity functions have the signs so that they increase the feature similarity. Reported results are the average of 5 distinct runs. Imagenet Tied-RA models use (N=2, M=9) on both branches.
Figure 3: Tied-RandAugment accuracy with ResNet-50 on ImageNet as a function of tied-weight.
\begin{table}
\begin{tabular}{l r r r} \hline \multicolumn{4}{c}{**Different components of Tied-Augment**} \\ \hline & ImageNet & CIFAR-10 & Stanford-Cars \\ & & & (2 epochs) \\ \hline (1) Baseline (Flips and Crops) & 76.3 & 96.1 & 59.9 \\ (2) RandAugment & 77.6 & 97.3 & 58.7 \\ (3) Two views with same RandAugment policy & 78.0 & 97.6 & 52.4 \\ (4) Two views with different RandAugment policies & 78.5 & 97.7 & 54.2 \\ (5) Tied-Augment & **79.6** & **98.1** & **69.4** \\ \hline \end{tabular}
\end{table}
Table 9: **Ablation study for the improvements coming from Tied-Augment on ImageNet, CIFAR-10, and Stanford-Cars.** Relative to a baseline model, the addition of two augmented views of the same image improves performance (3). Creating the two augmented views with two distinct augmentation methods (generally one more aggressive RandAugment and one less aggressive RandAugment) further boosts performance (4). Finally, adding a feature similarity objective yields a significant performance increase (5).
## 7 Acknowledgments
We thank Omer Faruk Ursavas for his contributions to this project. The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources). We acknowledge the support of CuRe program from Google AI, Jonathan Caton, and the Google Cloud team. We acknowledge Johannes Gasteiger for his feedback on the manuscript, Jascha Sohl-dickstein for helpful discussions.
|
2302.03794 | Characterizing the Dark Count Rate of a Large-Format MKID Array | We present an empirical measurement of the dark count rate seen in a
large-format MKID array identical to those currently in use at observatories
such as Subaru on Maunakea. This work provides compelling evidence for their
utility in future experiments that require low-count rate, quiet environments
such as dark matter direct detection. Across the bandpass from 0.946-1.534 eV
(1310-808 nm) an average count rate of $(1.847\pm0.003)\times10^{-3}$
photons/pixel/s is measured. Breaking this bandpass into 5 equal-energy bins
based on the resolving power of the detectors we find the average dark count
rate seen in an MKID is $(6.26\pm0.04)\times10^{-4}$ photons/pixel/s from
0.946-1.063 eV and $(2.73\pm0.02)\times10^{-4}$ photons/pixel/s at
1.416-1.534eV. Using lower-noise readout electronics to read out a single MKID
pixel we demonstrate that the events measured while the detector is not
illuminated largely appear to be a combination of real photons, possible
fluorescence caused by cosmic rays, and phonon events in the array substrate.
We also find that using lower-noise readout electronics on a single MKID pixel
we measure a dark count rate of $(9.3\pm0.9)\times10^{-4}$ photons/pixel/s over
the same bandpass (0.946-1.534 eV). With the single-pixel readout we also
characterize the events when the detectors are not illuminated and show that
these responses in the MKID are distinct from photons from known light sources
such as a laser, likely coming from cosmic ray excitations. | Noah Swimmer, W. Hawkins Clay, Nicholas Zobrist, Benjamin A. Mazin | 2023-02-07T23:14:30Z | http://arxiv.org/abs/2302.03794v1 | # Characterizing the Dark Count Rate of a Large-Format MKID Array
###### Abstract
We present an empirical measurement of the dark count rate seen in a large-format MKID array identical to those currently in use at observatories such as Subaru on Maunakea. This work provides compelling evidence for their utility in future experiments that require low-count rate, quiet environments such as dark matter direct detection. Across the bandpass from 0.946-1.534 eV (1310-808 nm) an average count rate of \((1.847\pm 0.003)\times 10^{-3}\) photons/pixel/s is measured. Breaking this bandpass into 5 equal-energy bins based on the resolving power of the detectors we find the average dark count rate seen in an MKID is \((6.26\pm 0.04)\times 10^{-4}\) photons/pixel/s from 0.946-1.063 eV and \((2.73\pm 0.02)\times 10^{-4}\) photons/pixel/s at 1.416-1.534 eV. Using lower-noise readout electronics to read out a single MKID pixel we demonstrate that the events measured while the detector is not illuminated largely appear to be a combination of real photons, possible fluorescence caused by cosmic rays, and phonon events in the array substrate. We also find that using lower-noise readout electronics on a single MKID pixel we measure a dark count rate of \((9.3\pm 0.9)\times 10^{-4}\) photons/pixel/s over the same bandpass (0.946-1.534 eV). With the single-pixel readout we also characterize the events when the detectors are not illuminated and show that these responses in the MKID are distinct from photons from known light sources such as a laser, likely coming from cosmic ray excitations.
1Department of Physics, University of California, Santa Barbara, California, 93107, USA
\({}^{*}\)[email protected]
\({}^{\dagger}\)[http://mazinlab.org](http://mazinlab.org)
## 1 Introduction
Microwave Kinetic Inductance Detectors (MKIDs) are superconducting microresonators with the ability to measure individual photons [1]. These light sensitive detectors can measure photon arrival times with microsecond precision and their unique detection mechanism also allows them to measure the energy of each incident photon. Since they utilize a different readout scheme than conventional image sensors such as charge-coupled devices (CCDs), MKIDs have the additional benefit of not being susceptible to read noise (stemming from the conversion of electrons from a CCD potential well into a voltage signal before being quantized and processed) or dark current (when thermal electrons accumulate in the potential well of the CCD and are counted as part of the signal upon readout despite their different origin) [2, 3, 4, 5, 6]. This, along with their photon counting ability makes MKIDs exceptional detectors for astronomy in photon starved regimes [7, 8, 9].
Current-generation MKIDs have been designed to be sensitive to photons in different ranges such as for ultraviolet, optical, and near-infrared (UVOIR) astronomy and X-ray detection. They also offer straightforward ways to tune their sensitivity to higher- or lower-energy bandpasses. This ability opens opportunities for MKIDs to be used as detectors for new physics applications such as the search for dark matter. In 'photon-starved' regimes, where sources emit very few photons, it is of high importance to adequately characterize the performance of the detectors to ensure that each photon detected is 'real', i.e. light from the astronomical source being observed rather than from errant sources such as thermal blackbody radiation from inside a cryostat.
In this work we aim to characterize "dark counts" measured by a large-format MKID array. These are events that are registered as photons by the detector when it is not exposed to a light
source. These dark counts differ from those of conventional semiconductor detectors because of their origin. Semiconductor detectors register false counts due to dark current and read noise. Dark current is the generation of thermal electrons in the material that are captured by the detector's potential well and counted as part of the signal, while read noise is the noise added to the measured signal by charge-to-voltage conversion and signal processing such as analog-to-digital conversion. In MKIDs, false triggers may stem from noise in the room-temperature readout [4], blackbody photons from the environment, or more complex sources.
## 2 MKID photon measurement
Each MKID pixel - "pixel" and "detector" may be used interchangeably - is a superconducting LC-circuit which is excited at its resonant frequency by a probe tone using room temperature readout electronics. When an incident photon is absorbed by an MKID pixel, the energy from the photon breaks Cooper pairs in the resonator which causes the inductance of the resonator to increase. This increase in inductance is measured as a change in the phase of the microwave probe tone. The probe tone for each MKID pixel is typically sampled at 1 MHz, giving microsecond timing resolution [2, 4, 10].
For an MKID pixel to detect a photon, the photon event must cause the phase of the detector to increase beyond a minimum threshold. For a given pixel this threshold is calculated by measuring the phase (in radians) while not illuminated. This allows a phase noise to be measured, which we find does not typically exceed \(\sigma_{\phi}\sim\)0.15 radians in good-performing pixels. The threshold for the given resonator is then set to be 6\(\sigma_{\phi}\) away from the average phase value - typically 0 radians. In other terms, the threshold for each pixel is set to be
\[\phi_{\rm Threshold}=\hat{\phi}+6\sigma_{\phi}\approx 6\sigma_{\phi} \tag{1}\]
where the right hand approximation holds when \(\hat{\phi}\approx\)0. This is the method by which each pixel's photon detection threshold is calculated for both electronic readout systems described in this
Figure 1: A single photon event measured by an MKID pixel with readout sampling at 0.8 MHz. The x-axis shows time in microseconds and y-axis shows the detector phase response measured in radians.
paper (Sections 4 and 5). Since the phase noise is Gaussian and a \(6\sigma_{\phi}\) threshold is used, that means that there is less than a 1 in 500,000,000 chance that the phase will fluctuate above the threshold at any point when the phase is measured. The process for determining the threshold for a pixel is described in detail in [11]. Additionally, each pixel's phase response to photons of different energies within the bandpass of interest is measured when the array is calibrated (Section 3.3). For the MKID resonators used in this experiment we find that the typical phase response for the lowest-energy photons in the bandpass (0.946 eV, \(\lambda\)=1310 nm) is \(\phi_{0.946\mathrm{eV}}=-1.4\pm 0.2\) radians and the typical phase response for the highest-energy photons (1.534 eV, \(\lambda\)=808 nm) is \(\phi_{1.534\mathrm{eV}}=-2.2\pm 0.3\) radians. This demonstrates that the phase noise is unlikely to ever swing sufficiently high to cause a resonator to trigger on an event within the calibrated bandpass.
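A minimal sketch of this per-pixel threshold calculation (Equation 1) is shown below; it assumes a dark phase timestream in radians, and how the readout compares the resulting threshold against the negative-going photon pulses is not shown here.

```python
import numpy as np

def trigger_threshold(dark_phase, n_sigma=6.0):
    """Per-pixel photon trigger threshold from an unilluminated phase timestream (Eq. 1)."""
    phase_mean = np.mean(dark_phase)   # typically ~0 radians
    phase_sigma = np.std(dark_phase)   # typically below ~0.15 radians for good pixels
    return phase_mean + n_sigma * phase_sigma
```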
It is also instructive to calculate the signal-to-noise ratio (SNR) for photon pulses seen by each resonator. The resolving power \(\mathcal{R}=E/\Delta E\sim E/2.355\sigma_{\phi}\), where \(E\) is the energy of a photon event and is proportional to the phase pulse height, meaning that the SNR \(\sim E/\sigma_{\phi}\) (see Section 3.3). Rearranging, one finds that the SNR \(\sim 2.355\mathcal{R}\). This means that for a resonator with an \(\mathcal{R}=4.5\) - a typical value in this experiment - the SNR for a photon pulse will be about 10.6, again demonstrating that a resonator's phase response to a photon will be significantly higher than the phase noise itself.
An example of a single photon event is shown in Figure 1. When a photon is measured the readout records the time at which the photon struck and the height of the peak of the pulse. The height of the pulse (which is proportional to the change in the resonator's phase) is related to the energy of the incident photon. Therefore, a photon measured using the MKID readout intrinsically gathers information about both the time a photon arrived and its energy. By performing an energy calibration with lasers of known wavelength, these measured pulse heights can be converted to energies of the incident photons within the calibrated range.
Figure 2 shows several types of events measured by a single detector that can contaminate a dataset. The first occurs when noise in the system causes the readout to register a photon when there is clearly none present. This may occur when there is a small random spike in the phase data or a very low energy photon hits the detector. These "noise"-triggered events are removed in post-processing: their peak pulse heights fall outside the calibrated energy bandpass, so they are excluded when photons outside this energy range are cut. This cut will exclude any of the "noise" triggers that initially registered as a photon. The second is when
Figure 2: 2 classes of ‘bad’ photon events that can be triggered in the MKID readout. (Left) A phase time stream (phase measured in radians) taken when noise in the data caused the readout to trigger as if a photon hit the detector although clearly none did. (Right) A second photon arriving on the tail of a previous one. This contaminates the analysis of the first and, depending on the dead time, the readout may not trigger at all on the second.
multiple photons are caught riding on the tail of the initial photon registered by the readout. This can be mitigated by decreasing the detector 'dead time', the time after a photon is measured before the readout can register a second photon. In the digital readout (Section 4) the first photon is counted while the second (and beyond) photon is ignored completely. In the analog readout (Section 5) one can manually split up timestreams that contain multiple photon events, either keeping both photons if desired or, as in the digital readout, registering only the first.
## 3 Experiment overview
### Optics
The optical path to put light onto the MKID array is relatively simple. One of five lasers within the sensitivity range of the devices (808 nm, 920 nm, 980 nm, 1120 nm, 1310 nm) can be inserted into an integrating sphere which has an output to a multimode fiber at room temperature. The fiber is then inserted into a port in the dilution refrigerator where it is routed directly to a collimator at the same temperature as the MKID device. The fiber has transmission greater than 90% at all of the laser wavelengths above. The collimator is oriented so the collimated light shines directly onto a microlens array (MLA) which focuses the light onto the photosensitive inductor of each MKID pixel. The collimator creates a spot greater in size than the MKID array, meaning that the distribution of light from the lasers is spatially uniform across the MKID pixels. A schematic of the optical path within the fridge is shown in Figure 3, while the inset shows a spot of light focused on a single MKID pixel.
To prevent potential stray photons from the inside of the dilution refrigerator from hitting the MKID array, a special lid was created so that the fiber collimator mounts directly to the box
Figure 3: Schematic of the optical path from the laser at room temperature to the MKID array and of the idealized analog readout with parametric amplification. Laser light is carried down a multimode fiber that is inserted into the input port of a collimator. The collimated light is incident on a microlens array which then focuses the light onto each MKID pixel. The signal comes from synthesizer A, and the pump tone for the TWPA comes from synthesizer B. Each of the resistors to ground represents a 50\(\Omega\) termination. (Inset) A single MKID pixel showing the ideal focus spot of light onto the photosensitive inductor portion of the detector.
which houses the MKID array, preventing any unwanted photons from entering the window and hitting a detector.
### MKID array
The MKID array used in this experiment has 10 microwave feedlines, each with 2044 MKID resonators coupled to it for a total of 20,440 pixels. This device has the same design as the MKID arrays currently in use with the MKID Exoplanet Camera (MEC, [12]) which is currently commissioned at the Subaru Telescope on Maunakea. An image of a MEC-style array can be seen in Figure 4.
In the MEC instrument each of the 20,440 pixels can be read out simultaneously using the 2nd Generation MKID Digital Readout (see Section 4 for further details) [4], which provides exquisite spatial resolution while time-tagging incident photons with microsecond accuracy and is theoretically capable of resolving each photon's energy to within several percent [10], although the resolving power may be degraded by noise from hard-to-control experimental sources, including electrical noise in the room-temperature readout electronics, magnetic fields and Johnson noise in the dilution refrigerator, two-level system noise in the detectors, and phase measurement noise. The resolving power may also be affected by strong radio frequency interference (RFI) from sources such as 5 GHz Wi-Fi that are close in frequency space to the resonant frequencies of the MKIDs (4-8 GHz).
While mounted in an instrument, an MKID array has a lid to protect it from stray light and a microlens array (MLA) designed to focus light onto the photosensitive region of each MKID resonator. Figure 4 does not show the lid+MLA so that the array itself may be seen clearly.
### Array calibration
The resolving power of an MKID pixel for a given photon energy is given by \(\mathcal{R}=E/\Delta E\). Here, \(E\) is the energy of the incident photons - typically from a monochromatic laser - and \(\Delta E\) is measured by determining the full width at half maximum (FWHM) of the distribution of photon
Figure 4: A 10-feedline MEC-style MKID array with 20,440 pixels. The lid with microlens array has been removed so that the MKID array itself can be seen. The MKID digital readout (Section 4) combined with the dilution refrigerator setup (Figure 3) allow for up to 1 of the microwave feedlines (2044 MKID pixels) to be read out at a time. In an MKID instrument all 10 can be read out simultaneously.
energies recorded by the pixel. To measure the resolving power of each resonator, five lasers are shined across the array, one at a time. The responses of each resonator are recorded and used to find its resolving power at each of the different laser wavelengths [11].
819 MKID pixels were initially identified before any data collection. An energy calibration dataset was taken prior to and after data acquisition to assess the stability of each pixel's response to photons of a given energy.
We require that each resonator was successfully energy calibrated in both datasets. This means that each pixel was marked by the energy calibration software as being successfully calibrated at each of the 5 laser energies. This cut removed 40 pixels, leaving 779 (95.1%) of the original 819. This cut was made for two reasons. The first being that if the MKID Data Reduction Pipeline [11] is not able to identify a resonator in one of the two calibration datasets, we cannot confirm that it stayed stable through the duration of the data collection. This was responsible for removing 14 of the 40 pixels cut. The second is that even if a resonator was identified in both calibration datasets, it must be able to be successfully energy calibrated at all 5 laser energies. We have seen that resonators that only pass 3 or 4 of the laser energies are worse performing and typically see many low-energy 'noise triggered events' that make such pixels unreliable. This led to the other 26 of the 40 removed pixels being cut. The median \(\mathcal{R}\) values at each energy are shown for each calibration dataset in Table 1.
These 779 best performing pixels that remained after these cuts are those whose data will be used for the dark count analysis in Section 4.2. In Section 5 a single one of these resonators will be used to characterize the nature of the photons that are seen.
### Electronic readouts
This investigation used 2 separate readout systems. The first readout system is the MKID Digital Readout described in [4], which is used to read a substantial number of MKID pixels on one (or more) microwave feedline(s). The second system uses an analog readout scheme similar to that described in [3, 6] to read out a single MKID pixel. This system ensures lower readout noise and the ability to record photon phase timestreams. We will discuss the former in Section 4 and the latter in Section 5.
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline Energy (eV) & Wavelength (nm) & Median \(\mathcal{R}_{1}\) & Median \(\mathcal{R}_{2}\) \\ \hline \hline
0.946 & 1310 & 4.5 & 4.9 \\
1.107 & 1120 & 4.4 & 4.5 \\
1.265 & 980 & 4.4 & 4.4 \\
1.348 & 920 & 4.4 & 4.5 \\
1.534 & 808 & 4.6 & 4.5 \\ \hline \end{tabular}
\end{table}
Table 1: The median resolving power of all resonators measured at each calibration laser energy (wavelength). Columns 1 and 2 show the energy and wavelength of each laser used for calibration. Column 3 shows the median resolution values of all resonators taken prior to data collection while column 4 shows the same values taken 3 days after, following the conclusion of data collection.
## 4 MKID digital readout: many-detector measurement
### Experimental setup
The full-array readout in this experiment used the second generation MKID digital readout [4] which is divided between 3 temperature stages where most of the large electronics are at room temperature. The remainder were housed in a BlueFors Dilution Refrigerator which cools them to 4K or 100 mK, depending on the component. The internal schematic for the fridge is seen in Figure 3.
The room-temperature components are discussed in detail in [4] but are briefly described here for context and clarity. A 2nd-generation MKID readout board has a Xilinx Virtex-7 FPGA (Field Programmable Gate Array) that controls Analog-to-Digital Converter/Digital-to-Analog Converter (ADC/DAC) boards, which generate (DAC) and take in (ADC) the in-phase (\(I\)) and quadrature (\(Q\)) components of each probe tone. It also contains ROACH-2 (Reconfigurable Open Architecture Computing Hardware) boards developed by CASPER (Collaboration for Astronomy Signal Processing and Electronics Research), which include all of the core digital signal processing for the readout system [13, 14]. This includes the channelization of probe tones, filtering, and photon pulse detection. An RF/IF board is also used for conversion of probe tones from the IF band (-1 to 1 GHz) up to the RF band (4-6 or 6-8 GHz) so that up to 1022 MKID resonators can be read out at their resonant frequencies. Each readout cartridge contains two sets of readout boards - one to read out pixels with resonant frequencies from 4-6 GHz and the other to read out pixels with resonant frequencies from 6-8 GHz - to provide the capability to read out all of the 2044 MKID pixels on a single microwave feedline. For this experiment we chose to only read out the higher-frequency half of an MKID feedline from 6-8 GHz to reduce potential noise from boards running on the same cartridge and to best match the bandpass of the parametric amplifier.
Two 4-meter RF coaxial cables are then attached between the readout cartridge and the dilution fridge to send the signal to the MKID array and carry the output back to the readout cartridge to be read out.
Internally, the signal is sent from room temperature to 100 mK using cryogenic RF coax cables that are heat sunk at intermediate temperature stages to reduce heat flow to the MKIDs. At the 4K stage there is a 20 dB attenuator to attenuate Johnson noise from the room temperature input. A second 20 dB attenuator is added at the 100 mK stage to further reduce Johnson noise from the 4K stage and to account for the amplification of the signals on the output side of the array. The probe tones are then sent through the MKID device, exciting the resonators of a single MKID array microwave feedline. On the output side the signal is sent through a traveling wave parametric amplifier (TWPA1). This is a wideband (its maximal gain occurs between 6-8 GHz), high-power amplifier capable of reading out photon events with near quantum-limited amplifier noise [3, 15]. The signal is further amplified by a Low Noise Factory (LNF) High Electron Mobility Transistor (HEMT) amplifier at 4K. HEMT amplifiers are often used in cryogenic systems that require relatively high saturation power, significant dynamic range requirements, and a wide bandpass (for MKIDs, 4-8 GHz). The microwave signal is then sent back up to room temperature and routed back to the readout cartridge so the MKIDs may be measured.
Footnote 1: Further work is being conducted to characterize the performance of a TWPA while reading out many MKID pixels and will be the subject of a future publication.
Using the readout to monitor half of one MKID feedline enables up to 1022 pixels to be read out during this experiment. The number of functioning pixels compared to the total number of possible pixels is called the pixel yield. The current-generation 20,440 pixel arrays such as the one used in this experiment have yields of about 80%. Each pixel in an MKID array is unique and can be rendered non-functioning for different reasons such as a piece of dust or residue landing on a pixel and shorting it to ground. Variation in the thickness of the resonators or their chemical
composition can also cause them to move to unpredictable frequencies, colliding with another resonator in frequency space and making one or both unusable [12]. The pixel yield of about 80% leads to 819 of the possible 1022 resonators being read out. These pixels will form the basis of our analysis of the dark count rate measured by the MKIDs. We note that this is the first simultaneous readout of a large MKID array with a parametric amplifier that we are aware of.
### Data collection and reduction
Using the MKID second generation digital readout [4] each resonator from half of an MKID array feedline collected data with no light incident on the detectors for 86,750 seconds (24.1 hours in total) from 10-13 December 2020 while the dilution refrigerator was regulating the device temperature at 100 mK.
Notably, the data taken using the digital readout is useful because it allows hundreds to thousands of resonators to be read out simultaneously. This enables useful characterization of the data that can only be inferred via bulk properties of the array such as identification and rejection of cosmic ray hits (which contaminate the data) or 'flashes' across the device (whose origins are currently unexplored but whose effects are nevertheless contaminating and must be removed).
The 86,750 seconds of data taken using the MKID Digital Readout were processed and reduced using the MKID Science Data Pipeline [11]. The data were split into smaller chunks of time in order to create photon lists (the 'base' format for MKID instrument data, see [7] for further details) of manageable size.
During this reduction, a cosmic ray calibration is performed by making a time-based coincidence veto based on significantly more photons being detected across the array in a short burst above
Figure 5: A timestream showing the number of counts across the 819 resonators read out using the MKID Digital Readout over a 300-second span. The timestream is created by binning the photon arrival times using 10 \(\mu\)s bins (i.e. all photons measured by the 819 resonators are first sorted by time then grouped into bins of 10 \(\mu\)s, meaning the y-axis shows the number of photons that hit the measured resonators within that short time range). The quiescent count rate is approximately 0 counts/second, but cosmic rays hitting the array and other potential system noise cause ‘flashes’ where many pixels register a photon. The inset shows a single cosmic ray event (marked by the red arrow) for 200 \(\mu\)s before and 600 \(\mu\)s after the peak.
the quiescent count rate. This mitigates cosmic ray events, which occur when a high energy particle is absorbed by the MKID array. When an energetic particle is absorbed in the substrate, it may down-convert into a cloud of phonons that spreads out across the array, depositing energy into resonators as the phonons move away from the point of absorption. As energy is deposited into each resonator, that resonator will register a single spurious event. Since the energy from the energetic particle spreads so quickly over many resonators, this will cause a spike in the number of counts across the array for a short time before returning to the normal count rate of the observation in question. A full-array timestream with many cosmic events can be seen in Figure 5. To prevent any possible contamination from cosmic events, 5000 \(\mu\)s and 10000 \(\mu\)s are removed from before and after each event, respectively (the asymmetrical nature is due to there being a 'tailing off' behavior while the energy spreads through the array). This veto leads to a short time range surrounding each cosmic ray event being removed, ultimately removing \(\sim 0.3\%\) of the total time each resonator was taking data.
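A simplified sketch of this time-coincidence veto is given below. The burst-detection threshold here is only a placeholder; the actual pipeline [11] determines bursts relative to the quiescent array-wide count rate.

```python
import numpy as np

def cosmic_ray_veto(arrival_times_us, bin_us=10, burst_counts=10,
                    pre_us=5000, post_us=10000):
    """Return a boolean mask of photons that survive the cosmic ray veto.

    arrival_times_us: photon arrival times (microseconds) pooled over all pixels.
    burst_counts: counts per bin treated as a burst (placeholder value).
    """
    edges = np.arange(arrival_times_us.min(), arrival_times_us.max() + bin_us, bin_us)
    counts, _ = np.histogram(arrival_times_us, bins=edges)
    burst_starts = edges[:-1][counts >= burst_counts]
    keep = np.ones(arrival_times_us.size, dtype=bool)
    for t0 in burst_starts:
        # Remove photons in the asymmetric window around each burst.
        keep &= ~((arrival_times_us >= t0 - pre_us) & (arrival_times_us <= t0 + post_us))
    return keep
```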
For the last quality cut we remove any "hot" pixels remaining. Qualitatively, a pixel is considered "hot" if - after cosmic rays are removed - it counts significantly more photons than other pixels. These pixels that register photons more frequently than the average pixel may become "hot" for reasons such as having a very low phase threshold, being under- or overpowered, electrical noise in the readout electronics causing the probe tone for that individual resonator to become noisy, the resonator being adversely affected by local electrical or magnetic fields and shifting slightly in frequency, causing the resonator to go out of calibration, or other more complicated phenomena leading to a specific resonator triggering more frequently than its neighbors. The hot pixels cause the distribution of the total number of photons measured per resonator to be highly positively skewed. To properly characterize the shape of this distribution and catch hot pixels, a metric that is robust to outliers must be chosen. To this end, the median of the number of counts (\(\hat{c}\)) from each resonator was taken to be the 'expected' value of the distribution while the spread is measured using the astropy mad_std function ([https://docs.astropy.org/en/stable/api/astropy.stats.mad_std.html](https://docs.astropy.org/en/stable/api/astropy.stats.mad_std.html)) to calculate a robust standard deviation (\(\sigma_{c,MAD}\)) using the Median Absolute Deviation (MAD). As shown in equation 2 a pixel is considered hot if the number of counts \(c\) it sees is greater than the median number of counts measured by all pixels plus 15 times the MAD standard deviation.
\[c\geq\hat{c}+15\sigma_{c,MAD} \tag{2}\]
This cut leaves 590 of the 779 pixels that remained from Section 3.3. Ultimately this represents a cut of 24.3% of the pixels that were energy calibrated at all 5 laser energies (or 26.7% of the initial 819 pixels).
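A sketch of the hot-pixel cut of Equation 2, using astropy's MAD-based standard deviation, is shown below.

```python
import numpy as np
from astropy.stats import mad_std

def hot_pixel_mask(counts_per_pixel, n_sigma=15.0):
    """Flag pixels whose total counts exceed the median plus 15 MAD-based sigma (Eq. 2)."""
    c_hat = np.median(counts_per_pixel)
    sigma_mad = mad_std(counts_per_pixel)
    return counts_per_pixel >= c_hat + n_sigma * sigma_mad
```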
The median resolving power of the remaining 590 MKID pixels across all wavelengths is \(\mathcal{R}\sim 4.54\). For each pixel the photons which fell between the calibrated energy values and were not removed by the cosmic ray calibration were divided into equal-width energy bins and the count rate of photons per second was then calculated at each energy. This allowed the count rate across pixels to be measured, whose values can be seen in Figure 6 and Table 2.
### Analysis
In Section 4.2 the data collection and reduction was discussed in detail. After the final subset of pixels was determined the dark count rate could be calculated. To do this, the count rate in each pixel was measured and the error on each value calculated using Poisson statistics.
The energy bins were determined by the calibrated bandpass (0.946-1.534 eV, or 1310-808 nm) and the median resolving power \(\mathcal{R}\sim 5\). This resulted in 5 energy bins of 0.118 eV centered at 1.005, 1.123, 1.240, 1.358, and 1.476 eV.
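The binning and Poisson count-rate calculation can be sketched as follows; the exposure time here is the per-pixel live time remaining after the cleaning steps.

```python
import numpy as np

def binned_count_rates(photon_energies_ev, live_time_s,
                       e_lo=0.946, e_hi=1.534, n_bins=5):
    """Photon count rates per energy bin with Poisson (sqrt-N) uncertainties."""
    edges = np.linspace(e_lo, e_hi, n_bins + 1)       # 5 bins of 0.118 eV
    counts, _ = np.histogram(photon_energies_ev, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])          # 1.005, 1.123, ..., 1.476 eV
    rates = counts / live_time_s
    rate_errs = np.sqrt(counts) / live_time_s
    return centers, rates, rate_errs
```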
The measured count rate for the unilluminated MKID array ranges from \((2.73\pm 0.02)\times 10^{-4}\) photons/pixel/s at 1.476 eV to \((6.26\pm 0.04)\times 10^{-4}\) photons/pixel/s at 1.005 eV after cosmic ray
rejection and other cleaning steps, respectively. This corresponds to an MKID pixel seeing a low energy photon roughly every 1600\(\pm\)10 seconds and a high energy photon every 3660\(\pm\)30 seconds. Alternatively, over the full bandpass the count rate is measured at \((1.847\pm 0.006)\times 10^{-3}\) photons/pixel/s. In flux units, this is \((3.14\pm 0.01)\times 10^{-3}\) photons/pixel/s/eV in energy space or \((3.68\pm 0.01)\times 10^{-6}\) photons/pixel/s/nm in wavelength space.
Although the dark count rate measured by an MKID pixel is not directly analogous to the dark current measured by CCDs and EMCCDs (Electron-Multiplying Charge Coupled Devices) due to the differing origin of the events it is useful to compare the event rates seen by each. Above, we reported that the number of dark count events measured per MKID pixel is \((1.847\pm 0.006)\times 10^{-3}\) photons/pixel/s. State-of-the-art CCDs have measured dark count rates of \(1.66\times 10^{-3}\) electrons/pixel/s [16] while EMCCDs have measured dark current rates as low as \(1\times 10^{-3}\) electrons/pixel/s [17]. Assuming that the CCD and EMCCD gain (the ratio of photons needed to generate 1 electron in the CCD/EMCCD) is 1 - i.e. 1 electron in the detector corresponds to 1 photon event - then the dark current generation in these state-of-the-art CCDs and EMCCDs are comparable to the dark count rates measured by an MKID pixel.
#### 4.3.1 Comparison to raw data
In Section 4.2 the data reduction algorithm was discussed. This includes the removal of photons from cosmic ray events, cleaning of particularly noisy sections of time, and excluding 'hot' pixels that live in a state where they count significantly more photons than their physical neighbors. To show the improvement these cuts bring we also calculate the dark count rate with none of the removal steps performed. The results are shown in Table 2.
Figure 6: The average count rate values in photon counts per pixel per second in each energy bin with 1\(\sigma\) error bars measured from the ensemble of count rates from all remaining pixels.
Prior to performing any data cleaning, each resonator saw an average of 1493\(\pm\)2 counts across the calibrated bandpass. Afterwards this number was reduced to 159.5\(\pm\)0.5, an improvement of nearly a factor of 10. While we were able to reduce the number of counts by almost a factor of 10 for each energy band, the time cut was not particularly aggressive and only about 0.3% of the duration from each pixel was removed via the cosmic ray cuts. This shows that the majority of the counts seen come from spurious events that can be calibrated out without significant time needing to be removed from the dataset in question.
#### 4.3.2 Potential photon sources
The steps taken during the data reduction in Section 4.2 aimed to mitigate effects from noisy pixels and well-understood noise sources such as cosmic ray events. However there is still the possibility that there are more complex or uncalibrated sources that are not well characterized in this system. This can include things such as electrical noise from cryogenic amplifiers or room temperature readout electronics, secondary photons stemming from cosmic rays exciting electrons and causing fluorescence in the fiber optic cable, and simple blackbody radiation.
Work has been done to characterize the noise characteristics of the MKID Digital Readout [4] although it is not well understood how electrical noise in the system translates to spurious triggers on non-photon events. However, well-behaved MKID pixels are partially characterized by showing low noise, meaning they will be less susceptible to pixel-specific noise causing a false photon trigger.
In the event that a cosmic ray is absorbed at a point in the fiber optic cable or the rest of the optical path it may deposit its energy in that material, exciting electrons which will then release secondary photons from this particle being absorbed. In this case, individual photons may be generated in the optical path which can then be transmitted to the MKID detectors. This would be a 'true' photon detection from an unintended physical source.
The previous two sources of photon detections are both issues that may contaminate sensitive data in a photon-starved environment but are currently challenging if not impossible to mitigate. For the first, reducing electrical noise by preventing ground loops, using low-noise power sources, and working in an isolated environment will reduce false triggers from electronic noise but in practice this is nearly impossible to eradicate. We note however that large scale electrical noise typically affects all resonators simultaneously and can therefore be removed (using a similar time-coincidence veto as a cosmic ray) or causes single resonators to become 'hot' or 'cold' which
\begin{table}
\begin{tabular}{||c|c c|c c||} \hline & \multicolumn{2}{c|}{With Reduction Steps} & \multicolumn{2}{c||}{No Reduction Steps} \\ \hline Bin Center & Count Rate (\(\times 10^{-4}\)) & Total Counts & Count Rate (\(\times 10^{-3}\)) & Total Counts \\ (eV) & (photons/pixel/s) & (photons) & (photons/pixel/s) & (photons) \\ \hline \hline
1.005 & 6.26\(\pm\)0.04 & 54.1\(\pm\)0.3 & 5.0\(\pm\)0.1 & 436.7\(\pm\)0.9 \\
1.123 & 3.68\(\pm\)0.03 & 31.8\(\pm\)0.2 & 3.83\(\pm\)0.09 & 332.2\(\pm\)0.8 \\
1.240 & 3.06\(\pm\)0.02 & 26.5\(\pm\)0.2 & 3.17\(\pm\)0.08 & 275.2\(\pm\)0.7 \\
1.358 & 2.75\(\pm\)0.02 & 23.7\(\pm\)0.2 & 2.73\(\pm\)0.07 & 237.1\(\pm\)0.6 \\
1.476 & 2.73\(\pm\)0.02 & 23.6\(\pm\)0.2 & 2.44\(\pm\)0.07 & 211.6\(\pm\)0.6 \\ \hline \end{tabular}
\end{table}
Table 2: The average count rate per pixel and total number of counts per pixel for the data with the different calibration and cleaning steps (see also Figure 6) compared to the same quantities without. Without reduction steps, each pixel took data for 86,750 seconds. After cleaning, each pixel was left with 86,491 seconds of data.
may also be handled gracefully in the data reduction pipeline [11]. For the second, a secondary photon from a cosmic ray may be removed in the data reduction if its energy is sufficiently far outside the calibrated bandpass of the detector, but if its energy is within the bandpass then it will be impossible to remove as it is a single photon event and therefore not subject to the same time-coincidence veto from when a cosmic ray strikes the detector directly.
We will explore the possibility that the photons that the MKIDs are seeing while purportedly unilluminated are coming from a blackbody radiating within the dilution refrigerator. Although all precautions have been taken to prevent stray photons from hitting the detector, photons are incredibly difficult to insulate against so we examine the possibility that the photons measured in the dark environment come from a thermal source.
First, we generate blackbody spectra for each of the 4 potential temperature stages which may be generating blackbody photons that could possibly hit the MKIDs. These are the 100 mK stage where the array itself and fiber collimator are mounted and the array is directly exposed to, the walls of the 4 K and 50 K intermediate stages that are used to step down from room to operating temperature and that are nested around the 100 mK stage, and 300 K, which represents the blackbody radiation from the ambient environment or the inner face of the outermost temperature stage of the dilution refrigerator.
Using Planck's Law
\[B_{\lambda}(T)=\frac{2hc^{2}}{\lambda^{5}}\frac{1}{\exp\left(\frac{hc}{\lambda k_{B}T}\right)-1} \tag{3}\]
Figure 7: MKID spectrum (blue) compared to a 300 K blackbody spectrum (black) scaled to the central value of the MKID spectrum shown on a logarithmic scale. In this bandpass the blackbody spectrum varies over 7 orders of magnitude while the MKID spectrum remains relatively flat. The central point of the 300 K spectrum is normalized to the central point of the MKID spectrum. Error bars on the MKID spectrum are sufficiently small that they are contained within the points themselves.
we find that the flux density of 100 mK, 4 K, and 50 K blackbody radiation between \(\sim\)0.9 eV and \(\sim\)1.6 eV is sufficiently small that a blackbody at any of these temperatures would not be expected to produce photons from 0.9-1.6 eV over the duration of the experiment (86,750 seconds). Therefore, the blackbody spectra representing these temperatures are not included in the analysis.
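A sketch of the spectral radiance calculation of Equation 3 used for this comparison is below, in SI units; converting to the flux density units of Figure 7 requires the collecting area and solid angle of the optical system, which are omitted here.

```python
import numpy as np
from scipy import constants as const

def planck_spectral_radiance(wavelength_m, temperature_k):
    """Planck spectral radiance B_lambda(T) in W m^-2 m^-1 sr^-1 (Eq. 3)."""
    x = const.h * const.c / (wavelength_m * const.k * temperature_k)
    return (2.0 * const.h * const.c**2 / wavelength_m**5) / np.expm1(x)

# Example: compare the 300 K radiance at the band edges (808 nm and 1310 nm).
print(planck_spectral_radiance(808e-9, 300.0), planck_spectral_radiance(1310e-9, 300.0))
```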
The 300 K blackbody spectrum can be seen plotted against the spectrum measured by the MKID pixels in Figure 7. The spectra are shown in units of log\({}_{10}\)(Flux Density), where the flux density was measured/calculated in ergs/s/cm\({}^{2}\)/A. The plot is shown in log scale because while the MKID spectrum remains relatively flat across the bandpass, ranging from \((5.22\pm 0.04)\times 10^{-14}\) ergs/s/cm\({}^{2}\)/A, the 300 K blackbody spectrum varies over 7 orders of magnitude.
The massive discrepancy in the shapes of the two spectra shows that the photon hits that are still being measured by the MKID pixels are not generated by a 300 K blackbody. With this, and the fact that the 100 mK, 4 K, and 50 K stages will not generate any blackbody photons over the calibrated bandpass, it is possible to say that the remaining photons measured using the MKID digital readout do not come solely from blackbody sources in the environment.
## 5 MKID analog readout: single detector measurements
### Experimental setup
The analog readout utilizes the same internal electronics of the fridge, but externally the 6 foot SMA cables attach to a homodyne readout system consisting of two Anritsu MG37022A signal generators, a Weinschel Attenuator box 8310 Series, a National Instruments ADC/DAC, and an IQ mixer box. The function of these devices is the same as in the digital readout case. The schematic for the analog readout system, and the parametric amplifier, is shown in Figure 3. Unlike the digital readout, the analog readout supplies individual frequencies from an Anritsu synthesizer to probe single resonators on the array. The analog readout has less noise associated with it compared to the digital readout, which has to make compromises so that it is able to issue many probe tones while also dealing with limited dynamic range in the ADC attenuators and precision in its firmware computations.
The primary reason for taking a set of data in the dark with the analog readout is to analyze the nature of the photon pulses. Despite the fact that MKIDs are not susceptible to read noise and dark current due to their unique detection mechanism compared to conventional detectors such as CCDs [1, 4, 10], empirical evidence has shown that they do indeed still measure photon-like events when they are not illuminated that may be triggered by noise sources such as those noted in Section 3.2. Since the MKID Analog Readout saves photon pulse timestream data (as in Figures 1 and 2) it is possible to explore the nature of these events to determine if these 'dark counts' look the same as 'true' photon events, or if they are demonstrably different. By measuring several thousand 'dark' photons this way we aim to assess them and determine if we can assign them any explainable origin.
### Data collection and reduction
A second set of data was collected, in addition to that from the MKID Digital Readout, while the MKID was unilluminated; here a single MKID pixel was read out using an analog readout system designed for low-noise, single-pixel characterization.
We chose a single MKID resonator that was also read out using the digital readout that had above-average resolving powers at all calibration energies. In this configuration there is only one MKID pixel read out, so we are no longer able to leverage the bulk properties of the MKID array for cosmic ray rejection. However, the analog readout saves the phase timestream of the resonator surrounding each photon event which allows inspection of each pulse (see Figures 1 and 2) to
determine whether it is characteristic of a 'real' photon or whether noise caused the trigger.
In contrast to the digital readout which continuously takes data until the user decides to stop, the analog readout accepts the number of desired photon counts to measure before stopping. In this case the quiescent count rate of photon counts in the dark was first measured and found to be \(\sim\)0.03 Hz, which likely consists of predominantly cosmic ray events. With this in mind we chose to register 8000 photon counts and expected this should take roughly 3 days. The primary goal of this investigation is to see if the photons which are being triggered on look'real' or if they look like noise, although it is also possible to ascertain a dark count rate.
The analog readout system saves its data in a slightly different structure than the digital readout of Section 4. The digital readout saves time-tagged lists of photons along with the pixel location and height of the photon event but, due to computational constraints, does not save further information about the photons. In contrast, the analog readout keeps the recent phase data from each MKID pixel being read out in memory so that when a photon is measured, the readout system may save a timestream of that phase data from the time surrounding the photon event. An example of a phase timestream saved by the analog readout is shown in Figure 1. The duration and sample rate of this phase timestream are both parameters that can be tuned by the user. In this experiment the resonator's phase was sampled at 0.8 MHz and each phase timestream saved the 5000 \(\mu\)s surrounding the photon event (2500 \(\mu\)s before and after).
Analogously to the MKID Science Data pipeline, we first reject any photons which are outside of the calibrated bandpass (i.e. the peak of the phase is too high or too low) as well as any timestreams that contain more than 1 photon hit. The second criterion is the closest proxy we have to a time-coincidence veto in lieu of using bulk statistics from many pixels. This leaves 1118 photons, \(\sim 14\%\) of the total observed counts. In comparison, the cuts from the digital readout left us with \(\sim 10.7\%\) of the total observed counts (94262 remained of the initial 880855 from the analyzed pixels). The percentage for the analog readout is higher due to both the lower noise from the readout system, resulting in fewer counts below the calibrated region (i.e. triggers on noise), and the inability to make simultaneity cuts on cosmic ray events in the analog case.
### Analysis
By binning photons using the same energy bins as in the MKID Digital readout we are able to compare the improvement that is gained when reading out a single pixel using significantly less noisy readout electronics. The count rates within the calibrated bandpass are shown in Table 3. As in the digital readout analysis the errors on photon counts and ultimate count rates are calculated using Poisson statistics.
#### 5.3.1 Photon rise and fall times
As previously discussed, the analog readout system saves photon timestream data (such as those shown in Figures 1 and 2). This allows us to examine the characteristic rise and fall times of the MKID pixel's phase when a photon event is triggered. For a baseline measurement an 808 nm (1.534 eV) laser and a 1310 nm (0.946 eV) laser are each shined on the pixel until it has registered 20,000 photon events.
A photon event is characterized by a fast exponential rise time in the measured phase as a photon strikes the detector, depositing its energy and breaking Cooper pairs into quasiparticles followed by a slower exponential tail as the quasiparticles recombine. To fit the rise and the fall times for a given photon event, the timestream is first split into a rising portion from the start of the timestream to the peak and a falling portion from the peak to the end. The rise and fall times are then each calculated from their respective sections of the timestream by fitting an exponential of the form
\[\phi=Ae^{-(t/b)}+c \tag{4}\]
where \(\phi\) is the measured phase, \(t\) is the corresponding time within the photon timestream, \(b\) is the time constant (which we call the rise or the fall time depending on which part of the event we are fitting), and \(A\) and \(c\) are constants to account for any offset or scaling differences between pulses.
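As an illustration of this fitting procedure, the following Python sketch splits a phase timestream at its peak and fits Eq. (4) to each portion with scipy. The 0.8 MHz sample rate is taken from the text, while the peak-finding convention, initial guesses, and function names are assumptions for illustration rather than the authors' actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

FS = 0.8e6  # sample rate from the text (0.8 MHz)

def pulse_model(t, A, b, c):
    """Exponential of Eq. (4): phi = A * exp(-t / b) + c."""
    return A * np.exp(-t / b) + c

def fit_time_constant(segment):
    """Fit Eq. (4) to one segment of the phase timestream; returns b in microseconds."""
    t = np.arange(len(segment)) / FS * 1e6               # time axis in microseconds
    p0 = (segment[0] - segment[-1], 30.0, segment[-1])   # rough initial guess (assumed)
    (A, b, c), _ = curve_fit(pulse_model, t, segment, p0=p0, maxfev=10000)
    return b

def rise_and_fall_times(phase):
    """Split a photon timestream at its peak and fit the rising and falling portions.
    The peak is taken as the extremum of |phase|, an assumed convention."""
    peak = int(np.argmax(np.abs(phase)))
    rise = phase[:peak + 1][::-1]  # reverse the rising part so it decays toward quiescence
    fall = phase[peak:]
    return fit_time_constant(rise), fit_time_constant(fall)
```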
We fit the rise and fall times for all of the photons from both the 808 and 1310 nm lasers as well as the dark-count photons measured by the pixel while it was unilluminated. Figure 8 shows how the rise and fall times of the dark counts compare to those of known photons from the two lasers as a function of wavelength.
It can be clearly seen that the 1\(\sigma\) (68%) confidence intervals of the rise times of the dark count photons overlap with the 1\(\sigma\) CI of the rise times of the 808 and 1310 nm photons. This is relatively unsurprising, as the phase spikes so sharply when energy is deposited into the pixel that the rise is effectively instantaneous even with the microsecond timing resolution.
On the other hand, when the fall times of the dark counts are compared to those of the photons from the lasers one can see a stark distinction between the two distributions. The fall times from the laser photons are significantly faster than those of the dark counts for photons of similar energy - and therefore similar phase response. In theory the phase decay back to its quiescent value is governed by the quasiparticle recombination time, an effect solely due to properties of the superconducting material. Since the phase response is proportional to the number of quasiparticles generated in the material, photons that cause similar phase responses should have similar fall times since a roughly equivalent number of quasiparticles need to combine back into Cooper pairs. We can see in the bottom panel of Figure 8 that this is not the case in this experiment. Figure 9 shows an example of each of these photons. In the top panel, a photon that was measured when a laser was incident on the pixel can be seen with a measured fall time of 32.07 \(\mu\)s. Below that a second photon of the same energy can be seen, this time from when there was no light incident on the array. The fall time in this case is measured to be 79.89 \(\mu\)s, consistent with the difference in the distribution of the fall times from each population.
Although the dark count photons that remain after all previous cuts _qualitatively_ appear similar to the photons measured when lasers were being shined on the MKIDs, there is a quantitative difference between the two populations. The photons that are measured when a laser is being shined on the MKID show significantly shorter fall times than dark count photons measured by the same pixel. The explanation for this behavior stems from the sources of the different families
\begin{table}
\begin{tabular}{||c|c c|c c||} \hline & \multicolumn{2}{c|}{Analog Readout} & \multicolumn{2}{c||}{Digital Readout (No Reduction)} \\ \hline Bin Center & Count Rate (\(\times 10^{-3}\)) & Total Counts & Count Rate (\(\times 10^{-3}\)) & Total Counts \\ (eV) & (photons/pixel/s) & (photons) & (photons/pixel/s) & (photons) \\ \hline \hline
1.005 & 1.8\(\pm\)0.1 & 189\(\pm\)14 & 5.0\(\pm\)0.1 & 436.7\(\pm\)0.9 \\
1.123 & 1.4\(\pm\)0.1 & 148\(\pm\)12 & 3.83\(\pm\)0.09 & 332.2\(\pm\)0.8 \\
1.240 & 1.4\(\pm\)0.1 & 145\(\pm\)12 & 3.17\(\pm\)0.08 & 275.2\(\pm\)0.7 \\
1.358 & 1.4\(\pm\)0.1 & 141\(\pm\)12 & 2.73\(\pm\)0.07 & 237.1\(\pm\)0.6 \\
1.476 & 1.2\(\pm\)0.1 & 121\(\pm\)11 & 2.44\(\pm\)0.07 & 211.6\(\pm\)0.6 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the total counts in the calibrated bandpass between 0.946-1.534 eV (1310-808 nm) using the MKID analog readout scheme to the MKID digital readout system.
Figure 8: Comparison of the rise times (top) and fall times (bottom) as a function of photon wavelength between dark counts - shown in black - and photons from 808 and 1310 nm lasers - shown in blue and red, respectively. The error bars for each point are the 1\(\sigma\)-errors for each measurement. In the top panel it can be clearly seen that the rise times of dark count photons and laser photons overlap significantly. In the bottom panel, the fall times for the dark count events do not overlap the photon events from the lasers.
Figure 9: Comparing the calculated rise and fall times of photons of the same energy measured by the MKID resonator. (Top) An 808 nm photon measured with an 808 nm laser shining on the pixel. (Bottom) An 808 nm photon measured when there is no light incident on the pixel. This corroborates the significant difference in fall times shown in Figure 8.
of photons and how they deposit energy into and are subsequently measured by the pixel. The rise and fall times of a photon event correlate with how quickly energy is deposited into the resonator and how long it takes for that energy to dissipate and allow the resonator to return to its unexcited state. When energy is deposited into the resonator over a short time it will rapidly break many Cooper pairs which then begin to recombine right away, allowing the resonator to return to its unexcited state in a short time. If energy is deposited into the resonator over a longer time scale it will still break many Cooper pairs quickly, but as the energy remains in the system before dissipating it will prevent those Cooper pairs from recombining as fast, which leads to a longer fall time back to quiescence. Real photons from a laser or other light source fall into the first category; the photon is absorbed and all of its energy is immediately deposited into the MKID, leading to a fast rise and fall time. Dark counts from sources such as cosmic rays are in the second family. Photon events triggered by a cosmic ray occur when the energy from the incident energetic particle is down-converted into a cloud of phonons that then spread the absorbed energy through the array substrate. As these phonons move past MKID pixels they deposit some of that energy into the pixel over the time it takes for them to move through the resonator. This means that dark count photons from cosmic ray events arise from a non-instantaneous process and will consequently have longer fall times.
A separate potential explanation for the discrepancy in fall times between the laser photons and dark count photons is that while illuminating the array with a laser the photon flux is significantly higher than when the laser is off. This higher photon flux may lead to slight heating of the thin film of the MKID array. In turn, this would lead to shorter quasiparticle recombination times and ultimately shorter fall times for each photon event.
There is currently insufficient evidence to conclusively say that these are two completely different populations of photons, due to the current gap in understanding of the noise sources in the MKIDs and the inability to simultaneously read out an array using both the digital and analog readouts to correlate cosmic ray events (digital) with single photon traces (analog). We therefore cannot calibrate these photons out based solely on their longer-than-expected fall times. If the assumption is made that these counts do come from cosmic rays and a calibration cut is made, the number of counts would decrease significantly from 121 to 10 photons in the highest energy bin and from 189 to 27 photons in the lowest energy bin, with each bin seeing a reduction by about a factor of 8. Across the full bandpass the count rate would fall from \((7.1\pm 0.3)\times 10^{-3}\) photons/pixel/s to \((9.3\pm 0.9)\times 10^{-4}\) photons/pixel/s. In wavelength space this corresponds to \((1.42\pm 0.05)\times 10^{-5}\) photons/pixel/s/nm and \((1.9\pm 0.2)\times 10^{-6}\) photons/pixel/s/nm for the 'uncalibrated' and 'calibrated' rates, respectively. The final value with long-fall-time events removed is just slightly below the \((2.77\pm 0.02)\times 10^{-6}\) photons/pixel/s/nm from the digital readout in Section 4.3.
## 6 Discussion and conclusions
In this paper we show that across the calibrated MKID bandpass from 0.946 to 1.534 eV (1310-808 nm), the count rate seen by the detectors in a large format array is \((3.14\pm 0.01)\times 10^{-3}\) photons/pixel/s/eV or \((3.68\pm 0.01)\times 10^{-6}\) photons/pixel/s/nm. It was also demonstrated that by using a relatively light calibration cut for cosmic ray events we are able to reduce the number of spurious photon events by nearly a factor of 10 while removing less than 1% of the duration of the data collection.
Using the MKID analog readout system and recording the shape of photon pulses in a single pixel we first show that the count rate across the calibrated bandpass is \((1.42\pm 0.05)\times 10^{-5}\) photons/pixel/s/nm without any further data cleaning steps, demonstrating that a quieter system does lead to lower count rates in an unilluminated MKID device.
While investigating the shape of the photon pulses using the analog readout it was found that the exponential tail of the dark counts corresponds to a significantly longer fall time than that of photons generated by a laser at the same wavelengths. The long fall times are indicative of energy taking a long time to dissipate from the resonators, which points to the events causing these triggers happening in the substrate rather than the pixels themselves. An example of a known event that takes place in the substrate and causes contaminating photon events is a cosmic ray hitting the MKID array. Making MKID pixels atop membranes is an ongoing field of research that offers a straightforward way to minimize the potential for substrate absorptions to cause contaminating photon events. Because of the added complexity of making MKIDs on membranes, in addition to the existing difficulty in fabrication, this is not feasible for the large-format MKID arrays currently in use. However, in future experiments requiring far fewer (1 to \(\sim\)100) MKIDs, making them on membranes may be a reasonable path forward that will help mitigate contamination from substrate absorptions. Additionally, the current generation of MKID readout hardware does not allow for side-by-side simultaneous readout of many pixels while still capturing the photon phase timestreams, so there is at present no way to determine whether these long fall time photons are coincident with cosmic ray events. The next 3rd Generation MKID Digital Readout is currently under development [18] and promises to allow both capabilities at the same time. Further investigation of the source of these dark photons, and whether they are in fact generated by cosmic rays, will be carried out in future work with the 3rd Generation MKID Readout. We note here that if future work does find that these long fall time photons are from contaminating sources, it offers a straightforward way to calibrate them out of MKID datasets.
For a future dark matter detector experiment that would use a 100 pixel MKID array with 10 nm energy bins, the maximum dark count rate in the detector would be \(\sim(3.68\pm 0.01)\times 10^{-3}\) photons/s if the current style arrays and generation of MKID digital readout were used. With this said, any future MKID dark matter direct detection instrument will have several key upgrades to mitigate noise in the system. First, a new generation of MKID readout is currently under development which promises to be a significantly less noisy system than the one used at present. The continued development of Traveling Wave Parametric Amplifiers (TWPA [15, 3]) will also significantly reduce system noise compared to the more commonly used HEMT amplifiers. Finally, this instrument itself will use an array that has an anti-reflection (AR) coating on the MKID devices and will not have optics that allow visible light to enter the cryostat. Both of these upgrades will help prevent stray photons from entering the cryostat and causing spurious, unattributable counts on the detector.
## 7 Acknowledgements
NS gratefully acknowledges support from the Heising-Simons Foundation under grants #2020-1820 and #2021-3058, as well as support by NASA under grant #80NSSC19K0329 and from the NSF MRI Award #1625441. WHC is grateful for the support from NSTGRO Grant #80NSSC21K1290.
## 8 Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2310.03585 | Smoothing Methods for Automatic Differentiation Across Conditional
Branches | Programs involving discontinuities introduced by control flow constructs such
as conditional branches pose challenges to mathematical optimization methods
that assume a degree of smoothness in the objective function's response
surface. Smooth interpretation (SI) is a form of abstract interpretation that
approximates the convolution of a program's output with a Gaussian kernel, thus
smoothing its output in a principled manner. Here, we combine SI with automatic
differentiation (AD) to efficiently compute gradients of smoothed programs. In
contrast to AD across a regular program execution, these gradients also capture
the effects of alternative control flow paths. The combination of SI with AD
enables the direct gradient-based parameter synthesis for branching programs,
allowing for instance the calibration of simulation models or their combination
with neural network models in machine learning pipelines. We detail the effects
of the approximations made for tractability in SI and propose a novel Monte
Carlo estimator that avoids the underlying assumptions by estimating the
smoothed programs' gradients through a combination of AD and sampling. Using
DiscoGrad, our tool for automatically translating simple C++ programs to a
smooth differentiable form, we perform an extensive evaluation. We compare the
combination of SI with AD and our Monte Carlo estimator to existing
gradient-free and stochastic methods on four non-trivial and originally
discontinuous problems ranging from classical simulation-based optimization to
neural network-driven control. While the optimization progress with the
SI-based estimator depends on the complexity of the program's control flow, our
Monte Carlo estimator is competitive in all problems, exhibiting the fastest
convergence by a substantial margin in our highest-dimensional problem. | Justin N. Kreikemeyer, Philipp Andelfinger | 2023-10-05T15:08:37Z | http://arxiv.org/abs/2310.03585v2 | # Smoothing Methods for Automatic Differentiation Across Conditional Branches
###### Abstract
Programs involving discontinuities introduced by control flow constructs such as conditional branches pose challenges to mathematical optimization methods that assume a degree of smoothness in the objective function's response surface. Smooth interpretation (SI) is a form of abstract interpretation that approximates the convolution of a program's output with a Gaussian kernel, thus smoothing its output in a principled manner. Here, we combine SI with automatic differentiation (AD) to efficiently compute gradients of smoothed programs. In contrast to AD across a regular program execution, these gradients also capture the effects of alternative control flow paths. The combination of SI with AD enables the direct gradient-based parameter synthesis for branching programs, allowing for instance the calibration of simulation models or their combination with neural network models in machine learning pipelines. We detail the effects of the approximations made for tractability in SI and propose a novel Monte Carlo estimator that avoids the underlying assumptions by estimating the smoothed programs' gradients through a combination of AD and sampling. Using DiscoGrad, our tool for automatically translating simple C++ programs to a smooth differentiable form, we perform an extensive evaluation. We compare the combination of SI with AD and our Monte Carlo estimator to existing gradient-free and stochastic methods on four non-trivial and originally discontinuous problems ranging from classical simulation-based optimization to neural network-driven control. While the optimization progress with the SI-based estimator depends on the complexity of the programs' control flow, our Monte Carlo estimator is competitive in all problems, exhibiting the fastest convergence by a substantial margin in our highest-dimensional problem.
automatic differentiation, optimization, imperative programs, discontinuous control flow, parameter synthesis, gradient estimation, probabilistic execution, Monte Carlo approximation.
## 1 Introduction
Parameter synthesis through optimization is a central task in fields such as modeling and simulation, control theory, and machine learning. The difficulty of the optimization tasks increases with the number of parameters required to accurately model increasingly complex systems. In the machine learning field, the well-known backpropagation algorithm [1] is commonly used for gradient-based training of deep neural network models across enormous numbers of parameters. Gradient-based methods promise fast convergence to a local optimum, but require the existence and calculation of the optimization problem's partial derivatives. Automatic differentiation (AD) techniques [2, 3] automatically calculate and propagate these derivatives through the arithmetic of arbitrary computer programs.
However, while these pathwise derivatives agree with the definition of the partial derivative, they do not provide sufficient gradient information if the control flow of the problem depends on the parameters. As a characteristic example for branching control flow, consider the Heaviside step function \(H(x)\!=\!\mathbf{1}_{x\!\geq\!0}\), depicted in Fig. 1.
This piecewise function has derivative \(dH/dx\!=\!0\) everywhere, except at a discontinuity at \(x\!=\!0\), where it is infinite,
Figure 1: Graph of the Heaviside step function and its derivative estimated pathwise (e.g., through automatic differentiation) by IPA, and by a smoothing estimator.
i.e., its derivative is the Dirac-delta. Here, AD can correctly determine the derivative at any \(x\!\neq\!0\), but its value of zero prohibits the use of gradient descent and does not provide any information about the jump discontinuity. Evidently, this situation remains even if the derivative is averaged across different sample points in the parameter space (cf. Fig. 1, dotted line). More generally, from the literature on infinitesimal perturbation analysis (IPA) [4] it is known that for stochastic discontinuous programs, simple averaging of the pathwise derivatives leads to a biased gradient estimator. One solution is to calculate the gradient of a smooth approximation of \(H\) (cf. Fig. 1, dashed line).
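The gap between the pathwise and the smoothed derivative of \(H\) can be made concrete in a few lines of Python; this is an illustrative sketch only, with the smoothing width \(\sigma\) chosen arbitrarily:

```python
import numpy as np

def H(x):
    """Heaviside step: the pathwise derivative is 0 on either side of the jump."""
    return 1.0 if x >= 0 else 0.0

def pathwise_grad(x, sigma=0.3, samples=10_000, seed=0):
    """IPA-style estimate: average the (zero) per-sample derivatives of H around x."""
    rng = np.random.default_rng(seed)
    return np.mean([0.0 * xi for xi in x + sigma * rng.standard_normal(samples)])

def smoothed_grad(x, sigma=0.3):
    """Exact derivative of the smoothed program E[H(X)] with X ~ N(x, sigma^2):
    E[H(X)] = Phi(x / sigma), so d/dx is the pdf of N(0, sigma^2) evaluated at x."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(pathwise_grad(0.1), smoothed_grad(0.1))  # 0.0 vs. a usable descent signal
```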
The need for differentiating discontinuous functions currently arises in many practical applications, such as neurosymbolic programming [5], program synthesis [6], agent-based simulation [7], and inverse rendering [8]. Thus, the challenge of obtaining gradients over discontinuities has been tackled from several angles: interpolation [7], stochastic (Monte Carlo) estimation [9], and smoothing over discrete randomness [10].
Here, we explore two novel approaches for providing smoothed gradients of imperative programs involving branching control flow. Using our tool DiscoGrad, problems involving _parameter-dependent control flow_ formulated in the C++ language (cf. Section 4.5 for a brief description of supported language constructs) can be automatically differentiated to determine their smoothed gradients, enabling the use of gradient-based methods for local optimization. As observed in the training of neural networks with backpropagation, suitable gradient descent algorithms are capable of finding high-quality solutions, even for non-convex functions [11]. Accordingly, our methods assume neither smoothness nor convexity and we evaluate the performance of our proposed estimators on four high-dimensional, discontinuous optimization problems. Our main contributions are:
1. We show how the existing technique of Smooth Interpretation (SI) [12], a form of abstract interpretation, can be combined with AD to obtain gradients across discontinuities in Section 4.1.
2. We provide a clear description of the assumptions made in SI's probabilistic execution of a program and their effects on the output's fidelity in Section 4.2.
3. We propose a novel gradient estimator that avoids SI's assumptions by a combination of AD and Monte Carlo sampling in Section 4.4.
4. We present DiscoGrad1, our tool to automatically translate C++ programs to an efficiently smoothed, differentiable counterpart using our proposed smoothing methods and other existing gradient estimators in Section 4.5. Footnote 1: Available at [https://github.com/philipp-andelfinger/DiscoGrad](https://github.com/philipp-andelfinger/DiscoGrad).
5. We provide an extensive evaluation of the estimator's execution times, gradient fidelity and optimization progress against existing sampling-based schemes for local optimization such as REINFORCE [13] and non-gradient based, global optimization methods (genetic algorithm, simulated annealing) in Section 5.
In the following sections, we introduce AD, the smoothing of gradients, and SI (Section 2) and review the related literature (Section 3). Section 4 presents our main results. Finally, we carry out an extensive evaluation (Section 5), concluding with final remarks and future directions (Section 6).
## 2 Background
In the following, we outline the established work on differentiating programs, with a focus on programs involving branching control flow. Starting from automatic differentiation as the base technique for differentiating programs, we introduce stochastic smoothing as well as smooth interpretation.
### Automatic Differentiation
Automatic differentiation (AD) is a method to compute partial derivatives of computer programs [2, 3]. Treating a program \(\mathcal{P}\) as a composition of mathematical functions \(\mathcal{P}\!=\!f_{1}\circ f_{2}\circ\cdots\circ f_{n}\), AD repeatedly applies the chain rule \((f_{i}\circ f_{i+1})^{\prime}\!=\!(f_{i}^{\prime}\circ f_{i+1})\cdot f_{i+1}^{\prime}\) to calculate the program's derivative \(\mathcal{P}^{\prime}\) from the inputs \(f_{n}\). The well-known backpropagation algorithm [1] widely used in machine learning is a special case of AD.
The literature distinguishes two approaches: _Forward-mode_ AD propagates derivative information throughout the (forward) executions of the program by augmenting the involved variables \(v\) with a so-called tangent value \(\hat{v}\). After each invocation of a mathematical function, this value is updated according to the function's derivative and the chain rule. Thus, at any given point in the execution, the tangent value can be interpreted as the partial derivative \(\partial f_{j}(\mathbf{x})/\partial x_{i}\) of the operations up to the current point \(j\) wrt. the component \(x_{i}\) of the input vector \(\mathbf{x}\).
In contrast, _reverse-mode_ AD records the arithmetic operations and values involved in the program's forward execution in a so-called _tape_ and computes the partial derivatives in a subsequent step by traversing the tape in reverse. While forward-mode AD calculates the partial derivatives of all output variables wrt. a single input variable throughout a single program execution, reverse-mode AD calculates the derivatives of a single output wrt. all inputs based on a single traversal of the tape following the program's termination. The pathwise gradients computed by AD are exact to machine precision. This is in contrast to finite differences methods, whose fidelity depends on finding an appropriate step size.
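For illustration, the following sketch implements a bare-bones forward-mode AD via operator overloading on dual numbers; it is not DiscoGrad's implementation, only a minimal example of how tangents are propagated alongside primal values:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: primal v and tangent dv (derivative wrt. one chosen input)."""
    v: float
    dv: float

    def __add__(self, other):
        return Dual(self.v + other.v, self.dv + other.dv)

    def __mul__(self, other):
        return Dual(self.v * other.v, self.dv * other.v + self.v * other.dv)

def exp(x: Dual) -> Dual:
    """Chain rule for the exponential: d(exp(x)) = exp(x) * dx."""
    return Dual(math.exp(x.v), math.exp(x.v) * x.dv)

# Derivative of f(x, y) = exp(x * y) + x wrt. x at (1.5, 2.0): seed x's tangent with 1, y's with 0.
x, y = Dual(1.5, 1.0), Dual(2.0, 0.0)
out = exp(x * y) + x
print(out.v, out.dv)  # out.dv equals y * exp(x * y) + 1
```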
Many programs encountered in optimization problems involve input-dependent control flow, which typically introduces jump discontinuities. Unfortunately, as AD's purely arithmetic view of a single forward execution of a program cannot account for alternative control flow paths, it produces gradients of limited utility for optimization for such programs (cf. Fig. 1). For stochastic programs, averaging across pathwise derivatives, known as infinitesimal perturbation analysis (IPA), produces an unbiased estimator only if the program permits the exchange of expectation and differentiation [14]:
\[\nabla\mathbb{E}[\mathcal{P}(\mathbf{x})]=\mathbb{E}[\nabla\mathcal{P}( \mathbf{x})]\]
Programs involving input-dependent discontinuities typically violate this condition. For deterministic programs, this form of smoothing can also be used by perturbing the input vector
\(\mathbf{x}\) with random noise. Sampling-based estimators applicable to the discontinuous case are discussed in Sections 3.1 and 5.2. As described next, the expected value can alternatively be obtained by a symbolic probabilistic execution.
### Smooth Interpretation
Smooth interpretation (SI) [12] is a method to smooth the output of programs involving discontinuities, making them more amenable to numerical optimization using black-box approaches such as Nelder and Mead's method [15]. Building on abstract interpretation [16, 17] and probabilistic program semantics [18], SI executes a program \(\mathcal{P}\) according to a smoothed semantics that approximates the convolution of the program output with a Gaussian kernel \(f_{\mathbf{x},\Sigma}\):
\[\widetilde{\mathcal{P}}(\mathbf{x})\coloneqq\int_{\mathbf{y}\in\mathbb{R}^{n }}\mathcal{P}(\mathbf{y})f_{\mathbf{x},\Sigma}(\mathbf{y})d\mathbf{y}. \tag{1}\]
Here, \(\mathbf{x}\) is the program's \(n\)-dimensional input vector and \(\Sigma\) a diagonal covariance matrix determining the amount of smoothing.
In SI, each originally scalar input variable \(x_{i}\in\mathbf{x}\) is substituted by a Gaussian random variable \(X_{i}\) with mean \(\mu_{x_{i}}\!=\!x_{i}\) and a configurable standard deviation \(\sigma_{x_{i}}\), sometimes referred to as the smoothing factor. Now, the arithmetic operations specified in the program operate on and generate random variables. As an approximation, the distribution of any operation's output is in practice again represented by a Gaussian characterized by its mean and standard deviation. Thus, the output variables of a smooth interpretation are, just like the inputs, Gaussian random variables. The result of the interpretation, i.e., of the approximate convolution of the Gaussian with the program at the current input, are the expectations of the output variables.
A key aspect of SI is the handling of conditional branches, as encountered in the form of if-else statements. The smoothed semantics require both possible paths to be executed and weighted according to the distribution of the variables involved in the branching condition. This leads to two different distributions for some of the variables. Hence, each variable is represented as a mixture distribution, each element of which represents a Gaussian approximation of the distribution resulting from one of the program's (sequences of) branches. To limit the number of elements of each mixture distribution, a "Restrict" algorithm combines the results of branches in a way that minimizes the deviation from the original overall mixture distributions of the variables.
In general, the exact convolution of a program with a Gaussian is intractable. The approximations made by SI, e.g., assuming the program state to be a Gaussian mixture and limiting it to a finite size through Restrict, enable a practical application of the method and will be further explored in Section 4.2.
## 3 Related Work
Rooted in the field of non-smooth optimization [19], the (gradient-based) optimization of discontinuous programs has recently seen major interest across many domains, for example machine learning [20], computer graphics [8] and optimal control [9]. Besides gradient-free approaches such as genetic algorithms or the Nelder-Mead method [15], the state of the art in non-smooth optimization includes bundle methods, which augment the subgradient method through the exploitation of past subgradient information [21] and gradient sampling methods exploiting piecewise differentiability [22]. In contrast, we consider smoothed gradients of problems that, while piecewise differentiable, typically provide only zero-vector gradients (cf. Fig. 1). For these, pathwise estimates based on pathwise gradients, as in IPA [4], are insufficient.
In the following, we discuss existing work on differentiating _across_ branching control flow directly related to ours. Broader overviews of gradient estimation techniques are given in [23] and [24].
### Sampling-Based Gradient Estimation
Based on the conditional Monte Carlo method for variance reduction, _smoothed perturbation analysis_ (SPA) obtains an unbiased gradient estimate through conditional expectations [14]. By choosing suitable problem variables (called characterization) to condition on, the calculation of the expected value is effectively separated into continuous parts, allowing for the interchange of the expectation and differentiation operations (cf. the end of 2.1). While SPA has been widely applied to differentiate discontinuous problems such as certain discrete-event simulations, its applicability is limited by the need to manually determine a suitable characterization to condition on for the problem at hand. For an overview of the many variations of (S)PA refer to [23] (Section 9) and the references therein.
The _REINFORCE_ estimator, commonly employed in reinforcement learning, exploits the differentiation rule of the logarithm to eliminate the need for calculating gradients of the program [13]. The gradient is calculated from the plain program output, multiplied by the log-derivatives of the program-specific probability density (cf. Section 5.2). REINFORCE is thus also referred to as the log-derivative trick, likelihood ratio estimator or score function estimator. Similar to the information conditioned on in SPA, the probability density and its log-derivative are problem-specific.
Some problem-independent (black-box) estimators are given in [25], therein referred to as _gradient-free oracles_. The authors analyze the convergence of a scheme introduced in Chapter 3.4 of [26], which estimates the descent direction through directional derivatives, calculated by a randomized version of finite differences where each input dimension is perturbed simultaneously. We will make use of the first of these gradient-free oracles for comparison in the evaluation, where it is also briefly introduced in Section 5.2. Due to the lack of a commonly used name for this estimator, we abbreviate it as PGO (Polyak's Gradient-Free Oracle). A similar construction using non-directional finite differences is proposed in [9].
We note that sampling-based schemes for stochastic programs, like SPA and REINFORCE, can also be applied to the case of deterministic objective functions by introducing artificial perturbations to the program inputs. If these perturbations are sampled from a normal distribution, their estimates approach the gradient of the convolution integral from Eq. (1) as the number of samples approaches infinity. In other words, the gradient estimators, just like SI, approximate the convolution of the gradient with a Gaussian.
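As an example of such a scheme, the sketch below implements a Gaussian-smoothing score-function estimator of the kind just described; the toy program, sample count, and smoothing width are arbitrary choices for illustration:

```python
import numpy as np

def score_function_grad(program, x, sigma=0.2, samples=5000, seed=1):
    """Estimate grad_x E[P(X)] for X ~ N(x, sigma^2 I) via the score function:
    grad_x E[P(X)] = E[P(X) * (X - x) / sigma^2].
    Subtracting the sample-mean baseline reduces variance; reusing the same samples
    for the baseline introduces a small bias that vanishes with more samples."""
    rng = np.random.default_rng(seed)
    X = x + sigma * rng.standard_normal((samples, x.size))
    outputs = np.array([program(xi) for xi in X])
    return ((outputs - outputs.mean())[:, None] * (X - x)).mean(axis=0) / sigma**2

# A discontinuous toy program whose pathwise gradient is zero almost everywhere.
program = lambda v: float(v[0] + v[1] > 1.0)
print(score_function_grad(program, np.array([0.4, 0.4])))
```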
### Combination of Sampling and AD
Some recent works propose combinations of sampling-based methods with AD rather than finite differences.
In [10], an unbiased SPA estimator was derived for programs involving discrete randomness and integrated with the AD process in the Julia package StochasticAD. This allows for the automatic smooth differentiation of programs that sample from discrete probability distributions with parameters depending on the program inputs. In contrast to our work, this approach does not consider input-dependent discrete control flow.
A recent use of forward-mode AD is found in the "forward gradient", which is determined by sampling over directional derivatives [27]. While their work does not consider differentiation across discontinuities, our methods share the use of forward-mode AD. As we will see in Section 5.3, forward-mode AD incurs only a tolerable overhead in our benchmark problems, while allowing us to efficiently obtain intermediate partial derivatives and avoiding reverse-mode AD's linear dependence of the memory consumption on the program length.
Finally, a sampling-based method proposed in a recent preprint [28] achieves differentiability by applying a static degree of smoothing at each branch and systematically visiting all control flow paths whose probability is within machine precision. Their approach shares with SI the challenge of scaling to problems with non-trivial numbers of branches without introducing biases, the effects of which on SI's fidelity are detailed in Section 4.2 and quantified in Section 5.
### Differentiable Programming Languages and Neurosymbolic Programming
Differentiable programming languages offer semantics that allow for a sound calculation of gradients across entire, typically functional, programs. Abadi presented operational and denotational semantics for a functional language that includes a construct for reverse-mode AD [29]. Discontinuities are ruled out by assuming that constructs such as conditional branches are substituted by smooth approximations by the user.
Some recent languages treat discontinuities natively. Sherman et al. presented semantics for a functional language that covers non-differentiable functions, but requires continuity [30]. The functional language ADEV [31], which targets differentiable probabilistic programming, allows discontinuities to only depend on the program's stochasticity, not on the parameters. This is the same condition that is satisfied after applying the reparametrization trick [32]. A more general approach for handling discontinuities is taken in the functional language by Amorim et al. [33], which relies on distribution theory to soundly express the contribution of discontinuities to the gradient. The resulting integrals are approximated by Monte Carlo sampling. In contrast to these works focused on language semantics, we propose and study concrete gradient estimators that approximate gradients of smoothed imperative programs, specifically targeting the case where discontinuities depend on the parameters.
Among the use cases of differentiable programming languages is the gradient-based _synthesis of symbolic programs_, which offers an alternative to traditional combinatorial program search. The search over symbolic programs is achieved by interpreters that employ continuous relaxations to enable the computation of gradients of the program's output wrt. its parameters, which may represent numerical constants, instructions, or the registers to operate on [34, 35, 36, 6]. _Neurosymbolic programming_[37, 5] extends this idea towards programs combining symbolic and neural building blocks. In a recent work, a generalization of the REINFORCE estimator was used to differentiate across symbolic program executions in order to determine parameters leading to control flow paths that adhere to a safety criterion [38]. Although the focus of our own work is on the parameter synthesis for existing programs, the proposed gradient estimators may benefit synthesis approaches as well.
### Domain-Specific Approaches
Methods for gradient smoothing are proposed and applied in many contexts, motivated by a myriad of goals. Here, we discuss relevant publications from the popular fields of neurosymbolic programming, program synthesis, differentiable rendering and simulation-based optimization.
Some recent work achieves smoothing by weighted averaging of variable values across branches as a basis for combining neural networks with traditional algorithms [39, 40], for parameter estimation across agent-based simulations [7], and for antialiasing [41]. In contrast to SI and unbiased black-box estimators, these works lack a well-defined probabilistic semantics and thus do not offer a clear interpretation of the smoothed output.
An alternative approach common in simulation-based optimization is to sample a model's input-output relation to generate a _surrogate model_[42]. Depending on the type of surrogate, e.g., a neural network, the resulting model may be smooth and differentiable. A similar approach has been taken in _systems security_ for gradient-based fuzzing [43]. Surrogate models are typically fitted to input-output samples in a black-box fashion, after which gradient estimates are made without involvement of the original model. In contrast, the gradient estimators proposed in our work operate on the original program, making use of its internal structure.
Finally, the field of computer graphics has also shown broad interest in the differentiation of discontinuous programs. Differentiable rendering [8] aims at determining partial derivatives of pixel values with respect to scene parameters, enabling applications such as inverse rendering, i.e., determining scene parameters that best fit real-world image data. Modern rendering techniques are typically based on Monte Carlo sampling of light rays through three-dimensional scenes. The integrals approximated in this manner may carry discontinuities related to the visibility of objects in the scene. This problem can be solved either by explicitly sampling edges that cause the discontinuities [44], or more scalably by applying problem-specific reparametrizations to the objective function so that the position of discontinuities becomes independent of the parameters [45]. An overview of Monte Carlo techniques for differentiable rendering is given by Zeltner et al. [46].
In contrast to the above works, the gradient estimators proposed and evaluated in the remainder of our article target generic imperative programs without reliance on domain-specific problem properties.
## 4 Smooth Automatic Differentiation
Many optimization problems are naturally formulated as imperative programs. However, the existing work on smooth differentiation lacks a method that 1. focuses on imperative programs involving conditional branching on the input values and 2. makes use of exact pathwise derivatives as determined via AD. In this central section, we present two possible candidates. First, we provide a common framework for the problem of calculating the smoothed gradient of discontinuous programs where the conditional control flow depends on the input vector. By treating SI as a special case of this framework, its integration with AD becomes straightforward, providing our first (deterministic) estimator. However, this approach is encumbered by the strong assumptions of SI. We explore ways to relax these through information gained by AD, but find that the benefits of these improvements are dwarfed by the effect of SI's restricted representation of the probabilistic program states. Thus, our second (sampling-based) candidate is an AD-powered Monte Carlo approach operating under much lighter assumptions, often yielding significantly more accurate gradient estimates.
### Approach
In our derivations, we consider optimization problems expressed as imperative programs \(\mathcal{P}\): \(\mathbb{R}^{n}\!\rightarrow\!\mathbb{R}\) mapping \(n\) input variables to a single output value. Thus, \(\mathcal{P}\) is typically a piecewise function. We further assume that discontinuities only arise through branch and loop conditions, noting that common discontinuous functions such as the absolute value function can be rewritten using conditional branching.
As shown in [12], the smoothing of \(\mathcal{P}\) with a (multivariate) Gaussian kernel \(f_{\mathbf{x},\Sigma}\) can be expressed in terms of a convolution
\[\widetilde{\mathcal{P}}(\mathbf{x})\coloneqq\int_{\mathbf{y}\in\mathbb{R}^{n} }\mathcal{P}(\mathbf{y})f_{\mathbf{x},\Sigma}(\mathbf{y})d\mathbf{y}=\mathbb{ E}_{X\sim\mathcal{N}(\mathbf{x},\Sigma)}[\mathcal{P}(X)]\,, \tag{2}\]
where \(\mathbf{x}\) is the program's \(n\)-dimensional input vector and \(\Sigma\) a diagonal covariance matrix determining the amount of smoothing. This form of smoothing is sometimes also called the (generalized) Weierstrass transform. By the law of the unconscious statistician, the convolution Eq. (2) can be regarded as taking the expected value of the program's output distribution when executed on normally distributed random variables \(X\!\sim\!\mathcal{N}(\mathbf{x},\Sigma)\), see also [23], Section 4. It is important to note that in our case randomness is artificially introduced by moving from \(\mathbf{x}\) to \(X\), i.e., our derivations are also applicable to deterministic programs.
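Read as a Monte Carlo procedure, Eq. (2) amounts to averaging the program output over Gaussian perturbations of the input; a minimal sketch (sample count and seed are arbitrary):

```python
import numpy as np

def smoothed_output(program, x, sigma, samples=10_000, seed=2):
    """Monte Carlo approximation of Eq. (2): E_{X ~ N(x, sigma^2 I)}[P(X)]."""
    rng = np.random.default_rng(seed)
    X = x + sigma * rng.standard_normal((samples, x.size))
    return np.mean([program(xi) for xi in X])

# Example: smoothing a step function yields a sigmoid-like response in x.
step = lambda v: float(v[0] >= 0.0)
print(smoothed_output(step, np.array([0.1]), sigma=0.5))  # close to Phi(0.1 / 0.5) ~ 0.58
```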
A possible strategy to automatically calculate the smoothed gradient \(\widetilde{\nabla}\mathcal{P}(\mathbf{x})\!=\!\nabla\widetilde{\mathcal{P}}( \mathbf{x})\!=\!\nabla_{\mathbf{x}}\mathbb{E}[\mathcal{P}(X)]\) is to exchange the expectation and gradient operators, as is done in IPA. While this enables a close and automatic approximation, e.g., through a Monte Carlo approach and AD, the equality \(\nabla_{\mathbf{x}}\mathbb{E}[\mathcal{P}(X)]=\mathbb{E}[\nabla_{\mathbf{x}} \mathcal{P}(X)]\) only holds if \(\mathcal{P}\) is continuous (precise conditions are given in [47]).
In the case of imperative programs with control flow depending on \(\mathbf{x}\), \(\mathcal{P}\) is not generally continuous, requiring an alternative approach. We observe that the control flow partitions the program's (discontinuous) output \(\mathcal{P}(\mathbf{x})\) into several continuous parts. In the probabilistic execution context, we can isolate these continuous parts by conditioning the distribution of \(\mathcal{P}(X)\) on the execution path \(p\!\in\!\{1,\ldots,N\}\). More precisely, let the random variable \(\mathfrak{P}\) reflect which control flow path \(p\) was taken in the execution of \(\mathcal{P}\). Then, using the law of total expectation, we can decompose the integral Eq. (2) into a sum over expectations of its path-specific outputs:
\[\widetilde{\mathcal{P}}(\mathbf{x}) =\mathbb{E}_{X\sim\mathcal{N}(\mathbf{x},\Sigma)}[\mathcal{P}(X)] \tag{3}\] \[=\mathbb{E}_{\mathfrak{P}}[\mathbb{E}_{X}[\mathcal{P}(X)| \mathfrak{P}]]\] \[=\sum_{p=1}^{N}\mathbb{P}(\mathfrak{P}=p)\mathbb{E}_{X}[\mathcal{ P}(X)|\mathfrak{P}=p]\,.\]
In practice, \(\mathfrak{P}\) is defined in terms of the conjunction of branching conditions on \(X\) encountered along each control flow path \(p\). We note that the idea of using conditional expectations to obtain a smooth objective is very similar to SPA (cf. the overview in [24]). Here, however, the conditioning on \(\mathfrak{P}\) is not sufficient to exclusively rely on the IPA estimate of \(\mathbb{E}_{X}[\mathcal{P}(X)|\mathfrak{P}\!=\!p]\), because by the definition of \(\mathfrak{P}\), the probability \(\mathbb{P}(\mathfrak{P}=p)\) still depends on \(X\). Considering Eq. (3), the smoothed gradient of the program is given by:
\[\widetilde{\nabla}_{\mathbf{x}}\mathcal{P}(X) =\nabla_{\mathbf{x}}\mathbb{E}_{\mathfrak{P}}\big{[}\mathbb{E}_{X \sim\mathcal{N}(\mathbf{x},\Sigma)}[\mathcal{P}(X)|\mathfrak{P}]\big{]} \tag{4}\] \[=\nabla_{\mathbf{x}}\sum_{p=1}^{N}\mathbb{P}(\mathfrak{P}=p) \mathbb{E}_{X}[\mathcal{P}(X)|\mathfrak{P}=p]\] \[=\sum_{p=1}^{N}\big{(}\nabla_{\mathbf{x}}\mathbb{P}(\mathfrak{P}=p )\big{)}\mathbb{E}_{X}[\mathcal{P}(X)|\mathfrak{P}=p]\] \[+\sum_{p=1}^{N}\mathbb{P}(\mathfrak{P}=p)\big{(}\nabla_{\mathbf{x }}\mathbb{E}_{X}[\mathcal{P}(X)|\mathfrak{P}=p]\big{)}.\]
Taking into account the sum and chain rules of differentiation, this requires determining the gradients of \(\mathbb{P}(\mathfrak{P}\!=\!p)\) and \(\mathbb{E}[\mathcal{P}(X)|\mathfrak{P}=p]\) wrt. \(\mathbf{x}\), with the latter equivalent to the expectation of the pathwise gradient, which can be obtained by IPA and AD. The gradient of the probability of taking a certain path is more difficult to calculate automatically. In the general case where it does not depend on \(\mathbf{x}\), it is always \(\mathbf{0}\) and can be omitted (and Eq. (4) coincides with the IPA estimator). However, in our case the probability of taking a branch is influenced by the branching condition, which depends on \(\mathbf{x}\), yielding a non-zero gradient vector. The second problem with estimating Eq. (4) is the number of possible paths \(N\), which grows exponentially with the number of branches, leading to an exponential explosion of summation terms.
SI handles both problems by assuming that \(\mathcal{P}(X)\) follows a Gaussian mixture distribution of maximum size \(M\ll N\). The smoothed execution then involves propagating the first two moments \(\mathbf{x}\) and \(\Sigma\) of \(X\) and descendant variables through the program. Fig. 2 (left and center) showcases this on a simple example program. At branches, the mixture then naturally arises (for each variable) from the two new possible distributions of the then and else cases. The weights of the two new mixture elements are calculated by evaluating the cumulative normal density parametrized with the two known moments of the input distribution. To ensure that at most \(M\) control paths are
carried along, selected mixture elements are merged (cf. Section 4.3). Thus, both the calculations leading to \(\mathbb{P}(\mathfrak{P}=p)\) and \(\mathbb{E}[\mathcal{P}(X)|\mathfrak{P}=p]\), which coincide with the weight and mean of the mixture element corresponding to \(p\), are smooth functions of the program input. We make use of this property by differentiating SI's approximations of the per-path weight and output via AD. The right-hand side of Figure 2 shows the differentiation by the example of forward-mode AD, which tracks the derivatives wrt. the first moments \(\mu_{X}\) of \(X\). We now briefly consider some interesting opportunities opened up by this integration of AD.
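To make the branch handling in Fig. 2 concrete, the sketch below computes the weight of the then-path for a branch "if (x < c)" applied to a Gaussian program state, together with its derivative wrt. the input mean, which is exactly the quantity forward-mode AD propagates through the CDF evaluation; the numerical values are arbitrary:

```python
import math

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def branch_split(mu, sigma, c):
    """SI-style split at `if (x < c)` for a program state X ~ N(mu, sigma^2).
    Returns the then/else weights and their derivatives wrt. the input mean mu."""
    z = (c - mu) / sigma
    w_then = std_normal_cdf(z)
    dw_then_dmu = -std_normal_pdf(z) / sigma  # chain rule through z = (c - mu) / sigma
    return (w_then, 1.0 - w_then), (dw_then_dmu, -dw_then_dmu)

print(branch_split(mu=0.2, sigma=0.5, c=0.0))
```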
### Relaxing SI's Assumptions
The method of SI [12] imposes several major assumptions upon programs, trading off fidelity for execution speed:
1. _No interdependencies_: The inputs to any operation are assumed to be independent normal distributions.
2. _Everything is Gaussian_: The output of any operation is assumed to follow a normal distribution, which is not true in the general case, even assuming 1). As a consequence, within a particular branch, the distribution of a variable \(X_{i}\) (and its dependent variables) contains values excluded by the branching condition \(X_{i}\leq c\).
3. _No truncation at branches_: When splitting the state at a branching point, the resulting two distributions are approximated by scaling the weight of the Gaussian mixture elements. Thus, their means and standard deviations remain unaltered, where in reality the results are the lower and upper tails of Gaussian distributions, i.e., truncated Gaussians.
4. _Fixed-size state representation_: The program state across all possible control flow paths, whose number is exponential in the number of branches, is approximated by a fixed-size Gaussian mixture.
While these assumptions permit a reasonable approximation of small programs with mostly affine operations, they cause substantial deviations for larger programs. When integrating AD as described above, the resulting gradient estimates can thus become inaccurate and noisy (cf. Section 5). In the following, we briefly sketch how the additional information available via AD makes it possible to relax assumptions 1 to 3, but show that the effects of 4 dominate the error.
Improvements regarding assumptions 1 and 2 concern operations of the form \(C=h(A,B)\). We restrict this example to binary operations, as the n-ary case is analogous. As shown in Figure 2, SI determines the mean \(\mu_{C}\) and variance \(\sigma_{C}^{2}\) of \(C\) from \(A\sim\mathcal{N}(\mu_{A},\sigma_{A}^{2})\) and \(B\sim\mathcal{N}(\mu_{B},\sigma_{B}^{2})\):
\[\mu_{C} =h(\mu_{A},\mu_{B}) \tag{5}\] \[\sigma_{C}^{2} =\left(\frac{\partial\mu_{C}}{\partial\mu_{A}}\right)^{2}\sigma_ {A}^{2}+\left(\frac{\partial\mu_{C}}{\partial\mu_{B}}\right)^{2}\sigma_{B}^{2}\]
Figure 2: Example program (left) execution showcasing the probabilistic semantics of SI (center) and their integration with forward-mode AD (right). Only relevant AD operations are shown. The tangents \(\dot{v}\) denote the (generally partial) derivative \(\partial v/\partial\hat{\mu}_{X}\) wrt. the mean \(\hat{\mu}_{X}\) of the normally distributed random variable \(X\sim\mathcal{N}(\hat{\mu}_{X},\hat{\sigma}_{X}^{2})\). The hat \(\hat{\ }\) symbol indicates an input value. Upon initialization, the (here scalar) input vector \(\mathbf{x}\) is taken as the mean \(\hat{\mu}_{X}\); \(\varphi\) and \(\Phi\) denote the normal distribution’s probability and cumulative density functions, respectively. Note that in this example the pathwise derivative is 0, but through the combination of SI and AD, the derivative wrt. the branching condition is obtained.
This is a standard result of uncertainty propagation (UP) [48] and exact if \(h\) is an affine function and \(A\) and \(B\) are independent. Linear dependencies (correlations) between \(A\) and \(B\) can be accounted for by using the information obtained by AD (cf. also the explanation in Fig. 2):
\[\sigma_{C}^{2}=\sum_{i=1}^{n}\left(\frac{\partial\mu_{C}}{\partial\mu_{X_{i}}} \right)^{2}\sigma_{X_{i}}^{2}, \tag{6}\]
where \(n\) is the number of inputs and the gradient is provided by AD (see Appendix B for a derivation). Intuitively, this calculates the variance of \(C\) from the variance of the inputs \(X\), based on the transformations captured by the gradient since the start of the program and leading to \(C\), which implicitly accounts for the covariance between \(A\) and \(B\). Closer and automatically differentiable approximations of the distributions resulting from non-affine operations could be determined using higher-order Taylor approximations [41], at the cost of additional computational overhead.
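A toy example illustrates the difference between Eq. (5) and Eq. (6): with \(A=2X_{1}\), \(B=X_{1}\), and \(C=A-B\), treating \(A\) and \(B\) as independent overestimates the spread of \(C\), while propagating the AD derivatives from the input captures the correlation (the concrete numbers below are arbitrary):

```python
import math

sigma_x1 = 0.1                 # input X1 ~ N(1.0, 0.1^2)

# Intermediate variables A = 2*X1, B = X1, output C = A - B (so C = X1 and sigma_C should be 0.1).
dA_dx1, dB_dx1 = 2.0, 1.0      # partial derivatives wrt. the input, as forward-mode AD yields them
dC_dx1 = dA_dx1 - dB_dx1

sigma_A = abs(dA_dx1) * sigma_x1
sigma_B = abs(dB_dx1) * sigma_x1

# Eq. (5): treats A and B as independent (dC/dA = 1, dC/dB = -1) and overestimates the variance.
var_C_eq5 = 1.0**2 * sigma_A**2 + (-1.0)**2 * sigma_B**2

# Eq. (6): propagates the input variance through AD's end-to-end derivative, which
# implicitly accounts for the perfect correlation between A and B.
var_C_eq6 = dC_dx1**2 * sigma_x1**2

print(math.sqrt(var_C_eq5), math.sqrt(var_C_eq6))  # ~0.224 vs. the correct 0.1
```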
Lifting Assumption 2 poses the largest challenge, as the assumption of Gaussian distributions for all variables is the main enabler of an easy integration with AD. To exclude invalid intervals from the variables' distributions on each path state, interval bounds could be carried along with the mean and variance for each mixture element [38]. Moving beyond Gaussian distributions would require the tracking and updating of higher-order moments or explicit representations of the distributions' shapes.
Assumption 3 can be lifted by calculating the truncated Gaussian distributions resulting from a branch and approximating them by their first two moments. This requires obtaining the dependence of every variable on the branching condition to determine the point of truncation for each variable, which may be approximated by first-order dependencies determined via AD.
To explore the effects of the enhancements on SI's fidelity, we carried out preliminary experiments with correlation-preserving variance calculation via UP and approximating the truncation at branches. However, we found that the error incurred by restricting the state representation to a small maximum number of paths (cf. Assumption 4) dwarfs the benefits of these enhancements. Figure 3 showcases this on a simple synthetic program (cf. Appendix C) by comparing the combination of the original SI with AD, its combination with UP (cf. Eq. (6)), and an unbiased stochastic approximation of the exact convolution. While UP improves the fidelity of the derivative to the convolution, restricting the number of tracked paths introduces significant jumps and deviations from the reference. The erratic results are caused by the decision which paths to merge, which is not smooth with respect to the program inputs: for a small change in the input value, an entirely different set of merging decisions may be made. As a consequence of this observation, and since the state restriction dominates SI's computational cost, we abstain from exploring the above enhancements further and instead focus on the Restrict [12] algorithm.
### State Restriction Strategies
Tracking the effects of every possible branch in a program execution quickly leads to a state explosion that renders the execution of non-trivial programs intractable. Thus, SI employs an algorithm to merge branches, enabling the restriction of the state size to a user-defined limit \(M\). The restriction is achieved by identifying and subsequently merging two elements of the Gaussian mixture, such that the cost defined as the deviation from the original overall distribution is minimized, which involves calculations of new means and standard deviations. As noted in [12], this algorithm is optimal in the sense that it minimizes the deviation from the overall original mixture. However, the algorithm is computationally intensive, requiring an iteration across the variables of all combinations of path states to determine a pair of paths to merge. Further, merging two dissimilar paths can result in high-variance mixture elements and unreachable program states even according to a strict probabilistic semantics. For example, refer to line 7 in Fig. 2. If the two states were merged, this would result in \(Z\sim\mathcal{N}(0.5,0.5)\) assuming that \(w_{1}=w_{2}=0.5\). In other words, \(Z\)'s two possible crisp integer values are merged into a single Gaussian, which can severely affect subsequent operations. Here, we explore three alternate heuristics with different tradeoffs in fidelity and computational cost.
In the Restrict algorithm, the cost of merging two variables is determined by the difference in the mixture element's moments and by the paths' weights. To avoid merging dissimilar states, we can select paths to merge solely based on the moments and ignore the path weights. We refer to this strategy as Ignore Weights (IW).
On the other hand, by only considering the paths' weights, the expensive pair-wise comparisons among the paths' states can be avoided. In this strategy, which we refer to as Weights Only (WO), the weight is used as a proxy for each variable's contribution to the gradient. Assuming sufficiently similar mean values across paths, the weight is a good indicator of a mixture element's contribution to the final expected value
Figure 3: Comparison of gradient fidelity wrt. the convolution with the original SI proposal with 32 tracked control flow paths and with correlation-preserving variance calculation using AD (uncertainty propagation, UP). When reducing the number of tracked paths to a small subset, as would be required for larger programs, the assumption of a fixed size mixture dominates the error. The non-smooth merging of mixture elements causes the gradient to jump or even assume the wrong sign, which is problematic for gradient descent.
Eq. (4). Most importantly, since every variable on a given path shares the same weight, a merge decision is reduced to determining the pair of paths with the lowest weights. A more radical strategy that aims for improved performance and low variance at the same time is to discard (Di) paths with the lowest weights entirely, avoiding the merging of variables. The disadvantage of this strategy is that value and gradient information from discarded paths is lost entirely, whereas merging of path states retains the available gradient information across all paths, albeit in an aggregated form.
Although IW, WO, and Di are suboptimal in the theoretical sense, we will see in our evaluation (cf. Section 5) that the decrease in overhead and/or variance obtained through these strategies can lead to faster optimization progress than the strategy proposed by Chaudhuri et al., which we abbreviate as Ch.
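For concreteness, the sketch below shows how the Discard (Di) and Weights Only (WO) heuristics could operate on a mixture over paths, here reduced to a single scalar variable per path; the merge uses simple moment matching and is meant only to illustrate the strategies, not DiscoGrad's implementation:

```python
import heapq

def restrict_discard(paths, max_paths):
    """Di: keep only the max_paths most probable paths and renormalize their weights.
    Each path is a tuple (weight, mean, variance) of a single tracked variable."""
    kept = heapq.nlargest(max_paths, paths, key=lambda p: p[0])
    total = sum(w for w, _, _ in kept)
    return [(w / total, m, v) for w, m, v in kept]

def restrict_weights_only(paths, max_paths):
    """WO: repeatedly merge the two lowest-weight paths into their moment-matched Gaussian."""
    paths = sorted(paths, key=lambda p: p[0], reverse=True)
    while len(paths) > max_paths:
        (w1, m1, v1), (w2, m2, v2) = paths.pop(), paths.pop()
        w = w1 + w2
        m = (w1 * m1 + w2 * m2) / w
        v = (w1 * (v1 + m1**2) + w2 * (v2 + m2**2)) / w - m**2  # variance of the two-component mixture
        paths.append((w, m, v))
        paths.sort(key=lambda p: p[0], reverse=True)
    return paths

mixture = [(0.5, 0.0, 1.0), (0.3, 2.0, 0.5), (0.15, -1.0, 0.2), (0.05, 5.0, 0.1)]
print(restrict_discard(mixture, 2))
print(restrict_weights_only(mixture, 2))
```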
### Monte Carlo Approach to Smooth Differentiation
The assumption of Gaussian distributions and restricting the state to a small subset of possible paths (cf. assumption 4) cause the results of SI to deviate from the exact convolution of the program's output with a Gaussian kernel. As a consequence, SI's estimations of the program's output and its gradient may be significantly biased. In the following, we present an alternative approximation of the probabilistic program semantics based on Monte Carlo sampling and AD.
The key idea is to revisit the decomposition of the program's convolution with a Gaussian into a sum over the control flow paths. From Eq. (4) it follows that the partial derivative wrt. dimension \(k\) of the mean \(\mathbf{x}\) for a program with \(N\) control flow paths is given by:
\[\begin{split}\frac{\partial\mathcal{P}(X)}{\partial x_{k}}& =\sum_{p=1}^{N}\frac{\partial\mathbb{E}[Y_{p}]\,w_{p}}{\partial x _{k}}\\ &=\sum_{p=1}^{N}\frac{\partial\mathbb{E}[Y_{p}]}{\partial x_{k}}w _{p}+\sum_{p=1}^{N}\mathbb{E}[Y_{p}]\,\frac{\partial w_{p}}{\partial x_{k}}. \end{split} \tag{7}\]
For readability, we abbreviate the probability of taking the control flow path \(p\) with the weight \(w_{p}\!\equiv\!\mathbb{P}(\mathfrak{P}\!=\!p)\) and the expected value of the output distribution conditioned on \(p\) as \(\mathbb{E}[Y_{p}]\equiv\!\mathbb{E}[Y|\mathfrak{P}\!=\!p]\).
We now consider the approximation using Monte Carlo sampling, i.e., through repeated execution of the program on inputs drawn from the input distribution. By definition, the output distribution \(Y_{p}\) of an individual path \(p\) for fixed \(\mathbf{x}\) does not depend on branch conditions. Averaging across the pathwise derivatives of samples restricted to path \(p\) using the indicator function \(\mathbf{1}\) leads to the following unbiased estimator:
\[\frac{\partial\mathbb{E}[Y_{p}]}{\partial x_{k}}=\lim_{S\to\infty}\frac{1}{n_ {p}}\sum_{s=1}^{S}\frac{\partial y_{s}}{\partial x_{k,s}}\mathbf{1}_{\mathfrak{ P}(\mathbf{x}_{s})=p}, \tag{8}\]
where \(S\) is the number of samples, the \(x_{k,s}\) are sampled from \(X_{k}\!\sim\!\mathcal{N}(x_{k},\sigma^{2})\), \(y_{s}\) and \(\mathfrak{P}(\mathbf{x}_{s})\) are the output and chosen control flow path when running \(\mathcal{P}\) on the sample \(\mathbf{x}_{s}\), and \(n_{p}\) is the number of samples on path \(p\).
Each sample takes exactly one of the \(N\) paths, and the weight \(w_{p}\) of a path \(p\) is its probability of being taken, i.e., as \(S\) tends to infinity, \(n_{p}/S\) approaches \(w_{p}\). Hence, a sampling-based form of the first summation in Eq. (7) is simply:
\[\begin{split}\sum_{p=1}^{N}\frac{\partial\mathbb{E}[Y_{p}]}{ \partial x_{k}}w_{p}&=\lim_{S\to\infty}\sum_{p=1}^{N}\frac{1}{n_ {p}}\sum_{s=1}^{S}\frac{\partial y_{s}}{\partial x_{k,s}}w_{p}\mathbf{1}_{ \mathfrak{P}(\mathbf{x}_{s})=p}\\ &=\lim_{S\to\infty}\frac{1}{S}\sum_{s=1}^{S}\frac{\partial y_{s} }{\partial x_{k,s}}.\end{split} \tag{9}\]
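As a toy numerical check of this pathwise term (our own example, not taken from the paper's evaluation), consider the branchless program \(\mathcal{P}(x)=\mathbb{E}[X^{2}]\) with \(X\sim\mathcal{N}(x,\sigma^{2})\), whose smoothed derivative is \(2x\); averaging the per-sample pathwise derivatives \(2x_{s}\) recovers it:

```cpp
#include <cstdio>
#include <random>

int main() {
  const double x = 1.5, sigma = 0.25;
  const int S = 100000;
  std::mt19937 rng(42);
  std::normal_distribution<double> noise(0.0, sigma);
  double grad = 0.0;
  for (int s = 0; s < S; ++s) {
    const double xs = x + noise(rng);   // sample from N(x, sigma^2)
    grad += 2.0 * xs;                   // exact pathwise derivative of y = xs * xs
  }
  grad /= S;
  std::printf("estimate: %.3f, analytic: %.3f\n", grad, 2.0 * x);  // both close to 3.0
}
```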
The remaining difficulty lies in the term \(\partial w_{p}/\partial x_{k}\) from Eq. (7), which depends on the distribution of the branch conditions and their sensitivities to the program inputs. Given a branch statement of the form "if (\(C\leq d\))", with \(d\) a constant, we refer to the random variable \(B\coloneqq C-d\) as the branch condition. The branch condition is true, i.e., the branch is taken, with probability \(\mathbb{P}(B\leq 0)\), which is the cumulative distribution function of \(B\) evaluated at 0. In SI, this probability is estimated based on the parameters of the assumed Gaussian distribution of \(B\), and its derivative can thus be determined transparently via AD. To determine the derivative in the general case, where \(B\) may depend arbitrarily on \(\mathbf{x}\), let \(g\) be the function that describes the dependence of \(B\) on \(\mathbf{x}\), i.e., \(g(\mathbf{x})\coloneqq B\). Using the chain rule, we have:
\[\frac{\partial F_{g(\mathbf{x})}}{\partial x_{k}}=\frac{\partial F_{g(\mathbf{x })}}{\partial g(\mathbf{x})}\frac{\partial g}{\partial x_{k}}. \tag{10}\]
Here, \(\partial F_{g(\mathbf{x})}/\partial g(\mathbf{x})=f_{g(\mathbf{x})}=f_{B}\) is the PDF of the condition, which we can estimate based on the samples, e.g., via kernel density estimation. The second term \(\frac{\partial g}{\partial x_{k}}\) is the derivative of the branch condition wrt. \(x_{k}\). In a sampling-based regime, the value of this term at \(0\) can be approximated by averaging the exact AD derivatives for realizations in a neighborhood \([-\delta,\delta]\). Using the approximation of the product from Eq. (10) and denoting the sampling-based estimate of the branch conditions' PDF as \(\tilde{f}_{B}\), we arrive at the following Monte Carlo estimator for the derivative of the path weight of the "true" case at a branch encountered on path \(p\):
\[\frac{\partial w_{p}}{\partial x_{k}}\approx\frac{-\tilde{f}_{B}(0)}{S}\sum_{s=1 }^{S}\frac{\partial g}{\partial x_{k,s}}(\mathbf{x}_{s})\mathbf{1}_{\mathfrak{ P}(\mathbf{x}_{s})=p,|g(\mathbf{x}_{s})|<\delta} \tag{11}\]
and analogously with positive sign in the "false" case. We refer to this estimator as the DiscoGrad Gradient Oracle (DGO).
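One ingredient of this estimator is the density \(\tilde{f}_{B}(0)\), which can for instance be obtained via a Gaussian kernel density estimate over the sampled branch condition values; a minimal sketch (the bandwidth choice is left open here):

```cpp
#include <cmath>
#include <vector>

// Gaussian kernel density estimate of the branch condition's density at 0,
// given the sampled condition values b_s and a bandwidth h.
double kdeAtZero(const std::vector<double>& b, double h) {
  const double kNorm = 1.0 / std::sqrt(2.0 * 3.14159265358979323846);
  double sum = 0.0;
  for (double bs : b) {
    const double u = bs / h;
    sum += kNorm * std::exp(-0.5 * u * u);  // standard normal kernel
  }
  return sum / (b.size() * h);
}
```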
The derivation above assumes that the program behaves deterministically across the samples. However, since the smoothed gradients permit simple averaging, stochasticity can be introduced across estimation passes.
Each sample \(\mathbf{x}_{s}\) may encounter several branches along each dimension \(k\). For sufficiently large \(\delta\), each \(x_{k,s}\) may thus appear in the summation of Eq. (11) for multiple branches. This situation is not accounted for since the contribution of each individual sample only captures the output's derivative on its encountered control flow path without explicitly considering the (transitive) effects of taking alternative paths. We treat this case heuristically by assigning an affected sample the path weight derivative wrt. the relevant dimension of the branch with the most equal distribution of samples between \([-\delta,0]\) and \((0,\delta]\).
For the problems considered in our evaluation, we found that it sufficed to set \(\delta\) to \(\infty\), indicating that the benefit of collecting more samples per dimension and branch outweighs the bias in the pathwise derivative introduced by choosing a larger neighborhood. Even so, deeply nested branches can lead to only small numbers of samples being observed at each branch. This problem can be mitigated by translating nested branches to sequential branches, which was straightforward for the problems considered in Section 5.
### DiscoGrad: Smooth Differentiation of C++ Programs
DiscoGrad is a tool that translates programs written in a subset of C++ to a smooth representation in order to execute them according to an approximate probabilistic semantics and to estimate the smooth programs' gradient. The tool comprises two main parts: on the one hand, a set of header-based back-ends that implement AD, SI, and AD-guided Monte Carlo gradient estimation; on the other hand, source-to-source transformations implemented via the LLVM compiler toolchain that generate estimator-specific code making use of the respective back-end. To allow for a meaningful evaluation of execution times, the code was carefully profiled and optimized using standard techniques such as early returns, avoiding unnecessary copy operations, and minimizing dynamic memory allocation.
Fig. 4(a) depicts a basic DiscoGrad program that implements the Heaviside function. In the main function, instances of DiscoGrad and DiscoGradFunc are created as interfaces to the chosen estimator's back-end. In this example, the user function _DiscoGrad_f() initializes smooth variables of type sdouble using an input mean value and variance, branches on the smooth variable x, and returns the resulting expectation of y. The listed program is the input to the smoothing transformation, which is applied to any user function prepended with the string _DiscoGrad_. After compilation, the smoothed program outputs the expectation returned by the user function along with its gradient.
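Since the figure itself is not reproduced here, the following hypothetical reconstruction indicates what such a user program might look like; beyond the identifiers named above, the header name, constructor and accessor signatures, and the estimate call are assumptions for illustration only, not necessarily DiscoGrad's actual API:

```cpp
#include "discograd.hpp"  // assumed header exposing DiscoGrad, DiscoGradFunc, and sdouble

// The _DiscoGrad_ prefix marks the function for the smoothing transformation.
double _DiscoGrad_f(DiscoGrad& dg, double mean, double variance) {
  sdouble x(mean, variance);  // assumed: smooth variable initialized with mean and variance
  sdouble y = 0.0;
  if (x <= 0.0)               // smoothed branch: both cases contribute to the result
    y = 0.0;
  else
    y = 1.0;
  return expectation(y);      // assumed accessor returning the expectation of y
}

int main(int argc, char** argv) {
  DiscoGrad dg(argc, argv);        // assumed: selects and configures the estimator back-end
  DiscoGradFunc f(_DiscoGrad_f);   // assumed wrapper around the user function
  dg.estimate(f);                  // assumed: outputs the expectation and its gradient
}
```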
At present, besides crisp code, which remains unmodified, DiscoGrad's smoothing transformation supports mathematical operations and assignments on any combination of crisp and smooth variables, conditional branching, loops, functions on smooth variables, references and pointers to smooth variables as well as simple uses of containers. Among the features currently not implemented are smooth versions of the ternary operator, switch statements, and global variables. DiscoGrad's features and limitations are documented in our repository2, where the full source code and the programs used in the evaluation in Section 5.1 can be accessed.
Footnote 2: [https://github.com/philipp-andelfinger/DiscoGrad](https://github.com/philipp-andelfinger/DiscoGrad)
#### 4.5.1 AD Implementation
DiscoGrad includes an implementation of forward-mode AD based on operator overloading. At first glance, reverse-mode AD might seem preferable, since it covers the common case of differentiating programs mapping large numbers of inputs to a single output in a single reverse pass. However, forward-mode AD provides two key benefits in our problem setting: firstly, its memory consumption is independent of the number of operations carried out by the program. In contrast, reverse-mode AD maintains a tape in memory that grows linearly with the number of operations. Furthermore, in combination with SI, the memory consumption for the tape multiplies with the number of tracked control flow paths. Secondly, forward-mode AD allows us to determine a variable's derivatives with respect to the inputs at any time throughout a program's execution without further cost, in contrast to the reverse passes that would be needed with reverse-mode AD. This allows us to efficiently determine the derivatives of branch conditions with respect to the inputs in our implementation of the Monte Carlo estimator described in Section 4.4.
In our implementation, a variable's tangents (partial derivatives) with respect to the inputs are carried along as arrays, allowing for compiler vectorization of the tangent operations3. To exploit the frequent case of variables carrying at most one non-zero tangent, full tangent arrays are allocated lazily only when required. A pool of tangent arrays is maintained to avoid frequent explicit memory allocations and deallocations when variables are created or destroyed. While naive forward-mode AD incurs a slowdown factor equivalent to the input dimension, we will see in Section 5.3 that these simple implementation-level design decisions suffice to reduce the AD overhead to a more tolerable level.
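A stripped-down sketch of this idea (our own illustration, not the actual implementation, which additionally pools the tangent storage) keeps the tangents in a vector that stays empty until the value actually depends on a differentiable input:

```cpp
#include <cstddef>
#include <vector>

// Forward-mode AD value with lazily allocated tangent storage.
struct FwdValue {
  double val = 0.0;
  std::vector<double> tang;  // empty until the value depends on an input

  static FwdValue input(double v, std::size_t k, std::size_t numInputs) {
    FwdValue r;
    r.val = v;
    r.tang.assign(numInputs, 0.0);
    r.tang[k] = 1.0;  // seed the tangent of the k-th differentiable input
    return r;
  }
  double deriv(std::size_t k) const { return k < tang.size() ? tang[k] : 0.0; }
};

// Product rule; the loop over the tangent entries is what a compiler can vectorize.
FwdValue operator*(const FwdValue& a, const FwdValue& b) {
  FwdValue r;
  r.val = a.val * b.val;
  const std::size_t n = a.tang.size() > b.tang.size() ? a.tang.size() : b.tang.size();
  if (n > 0) {
    r.tang.resize(n);
    for (std::size_t k = 0; k < n; ++k)
      r.tang[k] = a.deriv(k) * b.val + a.val * b.deriv(k);
  }
  return r;
}
```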
Figure 4: Example DiscoGrad program (a) and the smoothed versions of the contained branch for SI (b) and DGO (c).
#### 4.5.2 SI Implementation
Executing a program according to the probabilistic semantics of SI deviates from a regular "crisp" execution in two main regards: firstly, when encountering a branching statement, both the then and the else case are visited. Doing so repeatedly generates an exponential number of _path states_ representing the variables' values resulting from different branch sequences, which is restricted to a configurable maximum number to constrain the memory consumption and execution time. Secondly, the mathematical, logical, and comparison operations of the original program are widened to operate on all present path states. On each path state, operations originally carried out on scalars are executed on Gaussian distributions represented by their first two moments. DiscoGrad takes a similar approach to the original implementation of SI in the EULER tool [49], but allows for the use of smoothed variables with C++ features such as containers and references, integrates SI with AD, and implements the state restriction strategies presented in Section 4.3.
The key idea is for the source-to-source transformation to flatten the program's control flow across all branches so that all branch bodies are visited and to delegate the management of the variable states in the different control flow paths to our back-end library. In the back-end, the program state is managed by an instance of type SiStack, which holds the path states that are active at the programs' scopes, with the state in the currently visited scope (e.g., the then-body of an if-else statement) at the top (cf. Fig. 4(b)). The type sdouble (smooth double) overloads the operations defined for double, carries them out for all active paths in the current scope, and substitutes originally crisp mathematical operations with operations on the moments of Gaussians. Each path state's weight is a floating-point variable subject to differentiation via our AD implementation. Similarly, the mean and optionally also the variance of each Gaussian value representing a mixture element are differentiable variables.
Given an if-else statement with a condition l <= r where at least one of l, r evaluates to type sdouble, DiscoGrad determines on each active path state \(p\) the conditional probability \(\mathbb{P}_{p}(l-r\leq 0)\) of entering the branch. The other inequality operators are handled analogously. For each existing path state, two new path states representing the then and else cases are created with the new weights determined by multiplying the conditional probability and its complement with the original path's weight. Paths with weights below a configurable threshold (set to \(10^{-20}\) in the evaluation) are discarded due to their negligible impact on the program's output and to avoid arithmetic underflow. Smooth loop constructs are supported by exiting a loop once no path state with sufficient probability enters another iteration.
At a branch, DiscoGrad first halves the number of active paths using one of the state restriction strategies Ch, IW, WO, and Di described in Section 4.3 to make it possible to generate new paths. As we will see in our evaluation, the overhead incurred by this step is decisive for the overall execution time under SI. For the computationally expensive Ch strategy, we cache previously computed merge costs among the path states as long as they remain unchanged and maintain a priority queue to efficiently select the next pair of states to be merged.
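The core of this branch handling can be sketched as follows (our own simplified illustration: the per-path variable values are omitted, and the branch condition's Gaussian moments are assumed to be attached to each path state):

```cpp
#include <cmath>
#include <vector>

// A path state reduced to its weight and the Gaussian moments (mu, sigma) of the
// pending branch condition l - r on that path; further variables are omitted.
struct SiPathState { double weight, mu, sigma; };

// P(B <= 0) for B ~ N(mu, sigma^2); a crisp condition degenerates to 0 or 1.
double probTrue(double mu, double sigma) {
  if (sigma == 0.0) return mu <= 0.0 ? 1.0 : 0.0;
  return 0.5 * std::erfc(mu / (sigma * std::sqrt(2.0)));
}

// Split every active path state into a weighted then/else successor and drop
// states whose weight falls below the configurable threshold (e.g., 1e-20).
std::vector<SiPathState> branch(const std::vector<SiPathState>& states, double minWeight) {
  std::vector<SiPathState> out;
  for (const SiPathState& s : states) {
    const double p = probTrue(s.mu, s.sigma);
    SiPathState thenState = s, elseState = s;  // copies of the per-path state
    thenState.weight = s.weight * p;
    elseState.weight = s.weight * (1.0 - p);
    if (thenState.weight > minWeight) out.push_back(thenState);
    if (elseState.weight > minWeight) out.push_back(elseState);
  }
  return out;
}
```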
#### 4.5.3 DiscoGrad Oracle (DGO) Implementation
Our Monte Carlo estimator DGO (cf. Section 4.4) estimates the gradient of the smoothed program based on a series of runs on perturbed inputs. Since the individual samples follow the control flow of the original crisp program, an implementation of DGO is vastly more lightweight compared to SI. Here, the type sdouble maps to a single scalar floating-point number differentiable via our AD implementation, without any probabilistic semantics.
The source-to-source transformation simply precedes each if-else statement with a condition of the form l <= r by a call that passes \(l-r\) to our back-end, and analogously for the remaining inequality operators (cf. Fig. 4(c)). For each sample, the condition values in a neighborhood \([-\delta,\delta]\) and their derivatives are gathered in order to estimate the weight derivative according to Section 4.4. The mean of the conditions' gradients is computed on the fly via AD, as is the overall pathwise gradient of the sample. Having collected the branch conditions, we estimate the probability density of the condition values at each branch using kernel density estimation with a Gaussian kernel.
Subsequently, we iterate over the branches encountered by each sample to assign the sample the path weight derivatives along its path. Finally, the pathwise gradients and the path weight derivatives are combined according to the summation in Eq. (7) to yield the DGO estimate of the gradient.
## 5 Evaluation
Bringing everything together, we perform an extensive empirical evaluation of the proposed gradient estimators with respect to their execution times and fidelity, comparing them to two other estimators. We combine them with the Adam gradient descent procedure to solve optimization problems and also compare against global optimization techniques.
In all experiments, we distinguish two types of replications. A single run of a program at a given solution is a _microreplication_. For stochastic programs, several microreplications are carried out, averaging the partial derivatives across microreplications. A _macroreplication_ is a replication of an entire experiment (optimization, parameter sweep) starting from an initial solution and spanning a series of microreplications. When multiple macroreplications are executed, all estimators are configured with the same sequence of starting solutions across macroreplications.
All measurements of execution times and optimization progress were carried out on a machine equipped with a 16-core AMD Ryzen 9 7950X processor and 64 GiB RAM running Debian GNU/Linux 11, executing at most 16 processes in parallel.
### Selected Non-Smooth Optimization Problems
Based on well-known optimization problems from the literature and practical applications, we implemented four evaluation problems dominated by discontinuous control flow. An overview of the problems is provided in Table 1.
#### 5.1.1 Traffic Lights
The TRAFFIC dxd problem is a deterministic macrosimulation of a _road network_ using a simple form of the cell transmission
model [50] covering a two-dimensional grid of four-way intersections with dimensions \(d\times d\) over \(d\) time steps. Vehicles are represented as _populations_ (vehicle counts) per lane. At each time step, \(d\) new _vehicles_ are created at the northern and/or western border of the grid, each with a general movement direction to the opposite border interrupted by rare predetermined turns. The traffic flow at each intersection is organized by a signal that alternates between green and red phases of two time steps each for the horizontal and vertical lanes, allowing at most one vehicle per step and lane to advance to the next intersection. The parameters of this problem are the \(d^{2}\) traffic light _phase offsets_ for each intersection and the objective is to maximize the total _number of intersections passed_ by the vehicles throughout the simulation. Here, discontinuities arise from the discrete switching of the traffic signals.
#### 5.1.2 Air Conditioning
The second problem, taking inspiration from [38] and later referred to as AC, considers the _optimal control_ of an air conditioning unit by a _neural network_. An insulated room with a single window is simulated over \(10\) time steps. Over time, the _temperature_ of the room gradually approaches the _outside temperature_ according to the room's _insulation_. With a probability of \(5\%\) per step, the window is opened, decreasing the insulation drastically. The task of the AC is to keep the temperature of the room as close to a chosen _target_ as possible by deciding on its on/off state and the cooling power (\(2\) neural network outputs), given the target, previous, and new temperature together with its previous action (\(5\) neural network inputs). Each time the AC activates, an _energy penalty_ is incurred. For each simulation, the initial, target, and outside temperature, as well as the insulation are chosen randomly to force the network to generalize. Considering the feedback from the previous time step's inputs and outputs, this problem can be viewed as training a recurrent neural network. The problem parameters are the \(82\) weights of the one-layer-deep network and the objective is the minimization of the loss function defined as the sum of the average mean squared error over time and the energy penalties. Discontinuities arise both from the on/off state of the AC and the discrete randomness of the window opening event.
#### 5.1.3 Hotel Booking
The third problem is a revenue maximization problem from the SimOpt benchmark suite [51, 52] and considers the _optimal booking_ of a hotel. The hotel offers 100 rooms via 56 "products" reflecting the guests' arrival days within a week, the lengths of their stays, and two different rates: a rack rate and a discount rate. Over the course of one week, guests arrive and request products according to per-product Poisson processes with predefined rates. The number of bookings is restricted by a per-product _booking limit_. Whenever a requested product's booking limit is greater than zero, the guest can be accommodated, and the booking limits for all products are decremented to account for the reduced room availability on the days covered by the product. Here, the goal is to maximize the revenue by adjusting the products' interdependent booking limits to the guests' arrivals. The parameters are the 56 booking limits, each with an upper bound of 100. Discontinuities are caused by the discrete decisions whether a guest's request for a product can be satisfied.
#### 5.1.4 Disease Spread
The final problem is an _agent-based_ SIR (susceptible, infected, recovered) simulation of _disease spread_ similar to [7]. Over \(25\) time steps, a population of \(200\) individuals moves on an undirected _graph topology_ generated as a random geometric graph with \(100\) nodes (locations) and average degree \(\approx 7\). Initially, agents are infected with a certain probability. Upon infection, a recovery time is scheduled with a delay drawn from an exponential distribution. Throughout the simulation, agents move along predetermined paths along the edges in the graph, being infected by their neighbors at the same location and recovering at their scheduled time. The probability of being infected depends on a per-location coefficient. The \(102\) parameters of this problem are the initial infection probability, the mean recovery time relative to the simulation end time, and the location-specific infection probabilities. The objective is to fit a previously generated progression of the epidemic by minimizing the mean squared error between the distribution of agent states at each location and the states recorded in the reference trajectory. Discontinuities arise due to the discrete randomness in the infections, which occur via Bernoulli trials, as well as the discrete recovery event.
### Gradient Estimators
The objective of the evaluation is to determine the utility of our gradient estimators DGSI and DGO for solving the optimization problems defined in the previous section by gradient descent. We chose the popular Adam optimizer due to its well-known applicability to noisy gradient estimates [53]. As points of reference, we employ the PGO [25] and REINFORCE (RF) [13] estimators. Smooth gradients are calculated by sampling over a set of normally distributed random perturbations of the program parameters, making REINFORCE also applicable to deterministic problems such as TRAFFIC (cf. Appendix A for a complete derivation). More formally, we use the following reference estimators to obtain the smoothed
| **Problem** | **Randomness** | **Parameters** | **Objective Function** |
| --- | --- | --- | --- |
| TRAFFIC dxd | Deterministic | \(d^{2}\) traffic light offsets with \(d\in\{2,5,10,20,40\}\) | Traffic flow |
| AC | Stochastic | 82 neural network weights | Loss determined by deviation from target temperature and energy cost |
| HOTEL | Stochastic | 56 products' booking limits | Revenue |
| EPIDEMICS | Stochastic | Recovery rate, initial infection probability, 100 location-specific infection probabilities | Mean squared error wrt. reference over steps and locations |

Table 1: Overview of the benchmark problems.
gradient \(\widetilde{\nabla}_{\mathbf{x}}\):
\[\widetilde{\nabla}_{\mathbf{x}}^{\text{pgo}}\,\mathcal{P}(\mathbf{x})=\lim_{S\rightarrow\infty}\frac{1}{S}\sum_{s=1}^{S}\frac{\mathcal{P}(\mathbf{x}+\sigma\mathbf{u}_{s})-\mathcal{P}(\mathbf{x})}{\sigma}\mathbf{u}_{s} \tag{12}\]
and
\[\widetilde{\nabla}_{\mathbf{x}}^{\text{rf}}\,\mathcal{P}(\mathbf{x})=\lim_{S\rightarrow\infty}\frac{1}{S}\sum_{s=1}^{S}\frac{\mathcal{P}(\mathbf{x}+\sigma\mathbf{u}_{s})}{\sigma}\mathbf{u}_{s}, \tag{13}\]
where \(\mathbf{u}_{s}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) are iid. variates of the standard multivariate normal distribution and \(\sigma\) is the smoothing factor (i.e., standard deviation). Notice how this formulation of REINFORCE, where the stochasticity is introduced by random perturbations, is very similar to PGO.
We note that, for stochastic problems, REINFORCE may also directly exploit the problem-specific stochasticity, but only if the log-derivative is known. As our problems allow arbitrary probability distributions, the log-derivative is difficult to obtain in general. Further, our proposed mechanisms and PGO work fully automatically, i.e., in a black-box setting. We thus evaluate REINFORCE in the same setting. Additionally, PGO can be combined with a random search as described in [25, 26]. The random gradient-free search algorithm described therein can be viewed as only taking one sample from PGO per step of descent. In this evaluation, we limit our scope to performing _gradient_ descent, leaving the random search and possible interesting integrations with the Adam optimizer as future work (cf. Section 6).
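For reference, a direct transcription of the PGO estimate from Eq. (12) for a scalar-output program \(\mathcal{P}\) might look as follows (a sketch with a finite number of samples \(S\); dropping the \(\mathcal{P}(\mathbf{x})\) baseline term turns it into the REINFORCE variant of Eq. (13)):

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// PGO gradient estimate with S samples: average of (P(x + sigma*u) - P(x)) / sigma * u
// over standard normal perturbation directions u.
std::vector<double> pgoGradient(const std::function<double(const std::vector<double>&)>& P,
                                const std::vector<double>& x, double sigma, int S,
                                unsigned seed = 42) {
  std::mt19937 rng(seed);
  std::normal_distribution<double> stdNormal(0.0, 1.0);
  std::vector<double> grad(x.size(), 0.0);
  const double Px = P(x);  // baseline evaluation at the unperturbed parameters
  for (int s = 0; s < S; ++s) {
    std::vector<double> u(x.size()), xp(x);
    for (std::size_t k = 0; k < x.size(); ++k) {
      u[k] = stdNormal(rng);
      xp[k] += sigma * u[k];
    }
    const double diff = (P(xp) - Px) / sigma;
    for (std::size_t k = 0; k < x.size(); ++k) grad[k] += diff * u[k];
  }
  for (double& g : grad) g /= S;
  return grad;
}
```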
This leaves us with four optimization procedures. For brevity, we only show the estimator and number of samples (or tracked control flow paths), as they are all combined with Adam, for example "PGO/100" for PGO estimator with 100 samples or "DGSI/Di/4" for our SI implementation with four tracked control flow paths, using the Discard restriction strategy (cf. Table 2).
In the following, we perform three types of evaluation. First, we evaluate the scalability in terms of computation time and memory (Section 5.3) and the gradient error (Section 5.4). Testing the practical impacts of the former two, we conclude with an evaluation of the optimization performance (Section 5.5). For the evaluation of optimization performance we also compare against two popular global optimization algorithms that would typically be applied to solve our non-smooth and non-convex problems: a standard _genetic algorithm_ (GA) with elitism (as provided by the pyeasyga module4) and _simulated annealing_ (SA) (by porting the version from the Ensmallen library5 to Python). Our measurements of execution time and optimization progress over time exclude process startup times to avoid disadvantaging the existing baseline approaches without AD, which are typically faster and thus more strongly impacted by startup times.
Footnote 4: [https://github.com/remomosowm/pyeasyga](https://github.com/remomosowm/pyeasyga)
Footnote 5: [https://ensmallen.org/docs.html#simulated-annealing-sa](https://ensmallen.org/docs.html#simulated-annealing-sa)
### Execution Time
All of the AD-based gradient estimators incur an overhead over a crisp program execution without AD. Here, we evaluate the scaling of the estimators' wall-clock execution time with the number of samples or paths. Each measurement was repeated one hundred times, resulting in 95% confidence intervals smaller than 7% of any of the shown averages.
Figure 5 shows the slowdown of different configurations of the estimators over a single crisp program execution without AD, normalized to the number of paths or samples and rounded to two significant digits. Each value can be interpreted as the slowdown per path or sample.
The overhead for the IPA estimator comprises only the cost of AD and the negligible cost of random number generation for the perturbations. In the EPIDEMICS, HOTEL, and TRAFFIC problem, we see the benefit of the simple sparsity optimization in our AD implementation. In these problems, similar to the extreme case of the Heaviside function shown in Fig. 1, the program's smoothed gradient depends mostly on the branches taken, with only limited arithmetic on variables that directly depend on the input parameters. As a consequence, most variables do not carry a tangent value and the slowdown factor remains far below the input dimension, which would be the expected slowdown with naive forward-mode AD. When increasing the number of samples, the reuse of tangent vectors (cf. Section 4.5.1) allows the overhead per sample to gradually diminish. In contrast, in the AC problem, the output has a non-zero pathwise gradient with respect to the neural network coefficients. Hence, most intermediate variables carry tangents with respect to all inputs and the slowdown becomes more pronounced. Still, due to vectorization of the AD operations, the slowdown for IPA with 1 000 samples is only about a quarter of the input dimension of 82.
Our Monte Carlo estimator DGO involves a configurable number of AD-enabled program executions, additional bookkeeping at branches, and kernel density estimations. We see that accordingly, the slowdown is somewhat larger than IPA's. However, we observe sublinear scaling behavior when increasing the number of samples. Since branch condition values are stored and operated on per branch, the impact of the base costs for the per-branch operations diminishes with larger numbers of samples. As with IPA, the AC problem, which involves more input-dependent arithmetic operations, entails higher overhead.
SI incurs the cost for AD and carrying along Gaussian distributions across several control flow paths, as well as for restricting the state according to the chosen strategy. Depending on the program and state restriction strategy, the number of control flow paths can fluctuate and may not always saturate the configured upper bound. Hence, we normalize to the
| **Name** | **Estimator** | **References** |
| --- | --- | --- |
| DGSI | DiscoGrad implementation of our combination of SI with AD, using the | Section 4.1 ff. and [12] |
| DGSI/Di | Discard restriction strategy (discard branches based on weights), | |
| DGSI/WO | Weights Only restriction strategy (merge lowest-weighted branches), | |
| DGSI/IW | Ignore Weights restriction strategy (merge only based on moments). | |
| DGO | DiscoGrad implementation of our Monte Carlo estimator. | Section 4.4 |
| PGO | Polyak's Gradient-Free Oracle. | Eq. (12), [25, 26] |
| RF | DiscoGrad version of REINFORCE. | Eq. (13), [13] |

Table 2: Overview of evaluated gradient estimators. All estimators are combined with the Adam optimizer.
_effective_ number of control flow paths throughout the program's execution, which we define as the average number of paths active when encountering a branch. Since the state restriction is applied before branches and dominates SI's execution time, normalizing to the number of paths at that point captures the main cost of the different SI estimators. As expected, the slowdown of SI is much larger compared to the other estimators. The restriction strategies based on merging paths (Ch, IW, WO) are up to two orders of magnitude more expensive than "discard" (Di). The cost of selecting the next paths to be merged is negligible for WO, while it is quadratic in the number of paths for Ch and IW. For all three of Ch, IW, and WO, the cost for the subsequent merging of paths is linear in the number of variables. The results for the EPIDEMICS problem, which uses the largest number of variables of the considered problems, show the resulting enormous overhead of the merging-based strategies, with only modestly better scaling using WO. The DGSI estimators based on merging incurred the lowest overhead in the HOTEL and AC problems, which is a consequence of their comparatively smaller number of variables and branches.
Overall, substantial slowdown is observed with all of the smoothing estimators, ranging from a factor of about two to several orders of magnitude compared to a single crisp execution. In particular, the overhead of DGSI with the restriction strategies based on merging paths is likely to put many real-world applications out of reach. Whether each estimator's overhead can be justified depends on the fidelity of the calculated gradients and the resulting progress in parameter synthesis problems, which we evaluate in the next sections.
### Gradient Fidelity
In this section, we provide empirical measurements of the estimated smoothed gradient fidelity in terms of the mean absolute error, which is defined in the dimension \(k\) for the input vector \(\mathbf{x}\) as:
\[MAE(g,k)=\frac{1}{S}\sum_{s=1}^{S}\left|\widetilde{\nabla}_{k}^{g}\,\mathcal{ P}(\mathbf{x}_{s})-\widetilde{\nabla}_{k}\,\mathcal{P}(\mathbf{x}_{s})\right|, \tag{14}\]
where \(\mathbf{x}_{1},\ldots,\mathbf{x}_{S}\) is a sequence of sample points and \(\widetilde{\nabla}_{k}^{g}\) indicates the \(k\)-th component of the smoothed gradient estimate of the estimator \(g\in\{\text{DGSI, DGO, PGO, RF}\}\), i.e., the partial derivative \(\partial\mathcal{P}(\mathbf{x}_{s})/\partial x_{k,s}\). To retrieve the MAE in dimension \(k\), we uniformly sample from these partial derivatives in a problem-specific range in dimension \(k\) around an optimal value for \(\mathbf{x}\), as determined by optimization.
Evaluating this error is challenging, as for large problems it is expensive to calculate the exact smoothed gradient baseline \(\widetilde{\nabla}\mathcal{P}(\mathbf{x}_{s})\). Thus, we use a large number of samples (\(5\times 10^{5}\)) of the unbiased PGO to produce a baseline with maximum \(95\%\) confidence intervals of 0.003 (AC), 0.014 (TRAFFIC 2x2), 0.033 (TRAFFIC 5x5), 0.425 (HOTEL) and 7.3 (EPIDEMICS). The wide maximum confidence intervals for the HOTEL and EPIDEMICS baselines can be attributed to some large partial derivatives in these problems (cf. Fig. 8). To reduce the computation time, we evaluate all problems in a deterministic setting by configuring a fixed seed and only consider the first \(\leq 25\) partial derivatives.
Fig. 6 shows an overview of the MAE wrt. the respective dimensions of the AC, TRAFFIC, HOTEL and EPIDEMICS problems. The MAE, as indicated by the cell color intensity, is the average of the MAE defined in Eq. (14) over the first \(k=1,\ldots,25\) dimensions of each problem. The first result is that, with some exceptions, the error of all sampling-based estimators decreases with the number of samples. However, the estimators differ in how many samples they need to obtain the same fidelity, with our application of the REINFORCE estimator requiring orders of magnitude more samples than PGO and DGO. Overall, our DGO estimator slightly outperforms the PGO estimation in terms of sample efficiency, although in some scenarios a bias prohibits further improvement with the number of samples. The DGSI estimation is also very close to the baseline in many cases. Especially on smaller problems such as TRAFFIC 2x2, DGSI exhibits a competitive error, delivering good results even with only 8 tracked paths. As expected, the Di restriction strategy is less accurate than the more expensive Ch. Additionally, the error varies drastically between problems for all estimators.
A more granular view of these findings is depicted in Figures 7, 8 and 9, which show a comparison of selected partial derivatives as line plots. The vertical bar in each plot represents the optimal value, around which the samples of the partial derivative were taken. In Fig. 7 it can be seen that for relatively low-dimensional problems, DGO and DGSI deliver almost perfect results, and the optimum could be identified at the parameter values where the partial derivatives cross 0. From Fig. 8 it is evident that some problem dimensions pose greater challenges to smoothed derivative estimation than others. For example, the estimates wrt. the location-specific
Figure 5: Slowdown of the different gradient estimators compared to a crisp execution without AD, normalized to reflect the slowdown per sample (Monte Carlo estimators) or path (DGSI).
infection probabilities \(x_{4}\) and \(x_{6}\) are much noisier than wrt. the relative recovery time \(x_{1}\); the estimates wrt. some of the dimensions seem to be biased for DGO and DGSI. Additionally, DGSI delivers a gradient that is significantly smaller than the baseline and sometimes (erroneously) zero. This can be attributed to the fixed size of the state tracked by SI, which necessarily results in a loss of smoothed derivative information for sufficiently large problems. Interestingly, in the AC problem, the gradients delivered by DGSI are also much smaller than the baseline, but can still capture the trend very well (cf. Fig. 9). On this problem, DGO is vastly less noisy than PGO, which can be attributed to the use of the exact pathwise derivative.
To conclude, we observe that with some exceptions where the gradient estimates are biased, DGO delivers accurate results. Where it can exploit the pathwise derivative, the results exhibit less variance than the baseline PGO estimate. DGSI is competitive in terms of the MAE on smaller problems with low dimensionality, beating every other estimate on TRAFFIC 2x2, and obtains less noisy derivatives than the sampling-based estimators in these cases. On larger problems, it incurs a bias, but can often still capture the underlying trend. The effects of the observed differences in fidelity on the estimators' utility for gradient descent are evaluated in the next section.
### Optimization Performance
The optimization progress using the gradient-based approaches hinges on a suitable choice of the input standard deviation and the learning rate of the Adam optimizer. Varying these two hyperparameters affects the degree to which the computed
Figure 8: Partial derivatives wrt. the recovery rate (\(x_{0}\)), initial infection probability (\(x_{1}\)) and location-specific infection probabilities \(x_{4}\) and \(x_{6}\) of the EPIDEMICS model, as calculated by different gradient oracles. Using the PGO estimator with \(500\,000\) samples as an unbiased baseline, the fidelity varies drastically among estimators, but also across the problem dimensions. In particular, DGSI produces derivatives that are much too small.
Figure 6: Error in the gradient estimate over different problems and estimator parameterizations (lower is better). Each cell reflects the mean absolute error (MAE) wrt. the baseline (PGO/\(500\,000\)), averaged over the first \(\leq 25\) model dimensions. The lowest error is provided by PGO and DGO, with the best result for REINFORCE being orders of magnitude higher. For DGSI, a consistent decrease of the error with the number of paths can generally not be observed. In some cases, this is also true for DGO, signaling a potential bias in the estimate.
Figure 7: Partial derivatives wrt. the signal offsets of the TRAFFIC 5x5 problem, as calculated by different gradient estimators. Here and in the following figures, the centered vertical bar indicates the optimal value around which the samples were taken. In this fairly easy problem, all estimators deliver accurate partial derivatives.
Figure 9: Partial derivatives wrt. neural network parameters of the AC problem, as calculated by different gradient oracles. In this problem, the DGO and DGSI estimators can profit from their accurate pathwise gradient, while PGO exhibits a lot of noise. The DGSI estimator is severely biased and thus plotted on a separate scale (right), but is able to capture the trends correctly.
gradients accord with the original function on the one hand, and the ability to escape local minima on the other. Here, we identified for each problem one combination \((\sigma_{0},\eta_{0})\) of standard deviation and learning rate where good progress was made with all estimators. An automated hyperparameter sweep was then carried out covering three levels for each hyperparameter, i.e., all combinations of \(\sigma_{0}\cdot(\frac{1}{2},1,2)\) and \(\eta_{0}\cdot(\frac{1}{2},1,2)\). As additional hyperparameters, we further varied the number of samples used by the stochastic gradient estimators, and for DGSI the number of paths and the path restriction strategy. Across all problems and optimization methods, we carried out 3 310 macroreplications, resulting in a total CPU time of about 3 085 hours.
Where not stated otherwise, we show for each estimator the results of the hyperparameter combination that yielded the best solution at the end of the time budget. At each solution determined via a smoothing estimator, we carried out an additional evaluation using the _crisp_ program. The plots show the results of the crisp evaluation to ensure comparability of the solution qualities even if the smoothed program deviates from the crisp one. After observing a premature convergence of simulated annealing (SA) to low-quality solutions, we decreased its hyperparameter \(\lambda\), which determines the relative decrease in temperature per step, from its default value of \(10^{-3}\) to \(10^{-5}\). Nevertheless, due to a lack of significant progress, the results for the SA are excluded from most plots.
Our plots show the optimization progress over wall-clock time and optimization steps. Each data point in our results is the average of five macroreplications carried out for the respective combination of problem, estimator, and hyperparameters. While the progress over time is the main concern for practical purposes, the progress over steps indicates the strides made when disregarding differences in execution time. For the gradient-based estimators and SA, one step represents an evaluation of the objective function at one solution across the configured number of paths or samples for the smoothing estimators. For the genetic algorithm (GA), one step represents an update from one generation to the next, which involves 50 function evaluations, one for each population-member.
Figure 10 shows the optimization progress for the HOTEL problem. Apart from SA and REINFORCE, all methods converge to a similar revenue of about 53 200 within the time budget of thirty minutes. The fastest convergence is achieved by the GA, albeit to a slightly worse solution than the best-performing methods. Comparing the stochastic estimators, PGO performed best with 1 000 samples, in contrast to 100 samples with DGO. Figure 10b, which shows the first 500 optimization steps, indicates that DGO's higher variance with only 100 samples leads to less progress per step compared to PGO/1000. DGSI performed best with the "discard" (Di) restriction strategy and with four paths, also converging to roughly the same solution quality as DGO and PGO. Inspecting the solutions, we observed that all three of these methods arrived at similar final parameter combinations.
In the EPIDEMICS problem (cf. Figure 11), PGO/100 and particularly the GA outperform the other methods. We have seen in Section 5.4 that DGO and PGO both struggled to accurately estimate the gradient for this problem, in which there is a complex interplay between the initial infection probability, the recovery rate, and the per-location infection probabilities. Here, GA converges extremely quickly, identifying a solution that is reached by PGO/100 only at the end of the time budget. Considering the progress over the first 50 steps, we see that DGO makes larger strides than all other gradient estimators, indicating that its slower progress over time is a result of its higher execution time rather than lower-fidelity gradient estimates. With DGSI, the convergence both over time and steps is too slow to be competitive.
Of our problems, AC is the only one in which some of the partial derivatives are non-zero in the crisp case. Hence, the classical IPA estimator can be applied, albeit without capturing the discontinuities generated by the decision whether cooling is activated in a time step. As Figure 12 shows, all methods apart from SA were able to reduce the cost function to below 2.4, with the best solutions obtained by PGO/100 and DGSI with the IW strategy and eight paths. IPA/1000 made very little initial progress, but approaches the other methods' results at the end of the time budget. Studying the AC controller's behavior, all of the solutions obtained by the listed methods activate the cooling whenever the temperature is higher than the target. However, the best two solutions identified by PGO and DGSI result in a more careful selection of the degree of cooling according to the current insulation and the energy cost. Here, in contrast to the other problems, DGSI benefits from the existence of non-zero pathwise gradients, for which AD delivers exact values.
Figure 10: Optimization progress of the best-performing parametrization of each estimator for the HOTEL problem over optimization steps.
Figure 14: Optimization progress of the best-performing parametrization of each estimator for the TRAFFIC 20x20 problem.
Figure 12: Optimization progress of the best-performing parametrization of each estimator for the AC problem.
Figure 13: Optimization progress of the best-performing parametrization of each estimator for the TRAFFIC 10x10 problem.
Figure 11: Optimization progress of the best-performing parametrization of each estimator for the EPIDEMICS problem.
Finally, we consider the TRAFFIC problem at the three scales of 10x10, 20x20, and 40x40 resulting in 100, 400, and 1 600 decision variables. Figure 13 shows the results for the 10x10 grid. In accordance with the fidelity results from Section 5.4, where we have seen that DGO produces highly accurate gradients for this problem, the fastest convergence over time by far is achieved by DGO/100, which still holds when plotted over optimization steps. Here, GA was not able to improve beyond the initial solution. In contrast to its results in the other problems, SA is able to make substantial progress within the time budget, while still not being competitive with the best-performing methods. REINFORCE is once again outperformed by PGO/100. DGSI with the Ch strategy behaved somewhat similarly to SA, but having completed fewer than 300 steps was unable to obtain a competitive solution.
Similar trends are observed in Figure 14 for our largest problem TRAFFIC 20x20. We omit DGSI and REINFORCE, which did not make substantial progress. Here, DGO/100 provides the fastest convergence and the best solution, slightly better than PGO/100 and PGO/1000. DGO/1000 exhibits vastly faster convergence over steps, but finished only about 150 steps within the time budget.
Since PGO and DGO consistently outperformed the other methods in the TRAFFIC problem, we limited the computationally intensive experiment on the 40x40 grid to these two estimators with their respective best-performing hyperparameter combination from the 20x20 experiment. Figure 15 shows that for this problem with 1 600 decision variables, DGO/100 vastly outperforms both PGO/100 and PGO/1000 over time, while DGO/1000 performs best over optimization steps. Here, DGO benefits from its use of AD to separate the effects of the individual
Figure 16: Overview of the optimization progress over the time budgets for the HOTEL AC and EPIDEMICS problems.
Figure 17: Overview of the optimization progress over the time budgets for the TRAFFIC problem.
Figure 15: Optimization progress of the DGO and PGO estimators for the TRAFFIC 40x40 problem.
input dimensions, whereas PGO must rely on the scalar program output alone.
The optimization progress measurements are summarized in the heatmaps shown in Figures 16 and 17, which indicate each method's progress relative to the best improvement over the initial solution made by any method. For the traffic problems, we only show the methods that made significant progress for at least one problem size.
In summary, DGSI has proven to swiftly determine high-quality solutions for the HOTEL and AC problems, in which the effects of choosing alternate branches are less extreme than in the other problems. Particularly good results were seen for AC, which is the only one of the considered problems in which the pathwise gradient is non-zero. Generally, the stochastic estimators PGO and DGO delivered the most reliably high-quality solutions within the time budget. While our AD-based estimator DGO showed outstanding performance particularly in the TRAFFIC problem with large numbers of input dimensions, the existing estimator PGO has the benefit of being applicable to existing programs without instrumentation.
## 6 Conclusion
Although an evaluation of gradient estimators targeting a problem domain as broad as parameter synthesis must necessarily be limited in scope, we identify several trends in our results.
The objective functions of the considered optimization problems are discontinuous and non-convex. Nevertheless, local search based on gradient descent consistently outperformed global search via a genetic algorithm or simulated annealing. Our results indicate that both for stochastic and non-stochastic problems, likely due to the combination of noisy estimates with Adam, the local search is able to escape local minima sufficiently to swiftly identify high-quality solutions. Future work dedicated to more extensive benchmarking could consider more sophisticated global optimization methods such as recent trust region-based algorithms [54] or metaheuristics [55].
Each of the studied gradient estimators comes with its own tradeoffs. The estimators that combine smooth interpretation and automatic differentiation (DGSI) incur a substantial cost in execution time, depending on the state restriction strategy and the number of control flow paths carried along. Notably, we saw that even if the fundamental approximations made by smooth interpretation were lifted, the need to combine or discard intermediate state severely impacts the gradient fidelity. Accordingly, DGSI excels at problems with only limited branching or where the effects of branches are relatively constrained, i.e., non-chaotic problems. Future efforts to improve the gradient estimation via smooth interpretation should thus focus on robust state restriction strategies.
Our proposed estimator using automatic differentiation and Monte Carlo sampling (DGO) is vastly less computationally expensive and avoids the various approximations made in smooth interpretation. A key limitation is the need to obtain a sufficient number of samples at each branch, which may require many replications in the presence of deeply nested branches. In the considered problems, where nested branching could be reformulated into sequential branching, the fidelity of DGO was always among the best of the considered estimators. DGO's combination with Adam provided competitive convergence behavior for all problems, vastly outperforming its closest competitor in our highest-dimensional problem. Our tool DiscoGrad offers efficient implementations of DGSI and DGO that automate gradient estimation for programs written in C++.
Considering the existing estimators, a remarkable observation is that Polyak's Gradient-Free Oracle (PGO), which does not require AD, exhibited low execution times and provided good results in all experiments. We thus consider PGO and the closely related gradient-free algorithm Nesterov Random Search [25, 56] promising alternatives to global search across high-dimensional parameter spaces, even for non-convex problems.
In our experiments, we carried out a hyperparameter sweep to identify suitable combinations of learning rates, degrees of smoothing, and numbers of samples. Since these hyperparameters interact, our future work will include an exploration of scheduling algorithms that jointly select combinations of these hyperparameters and adjust them throughout the optimization process.
## Acknowledgments
We extend our thanks to Marian Zuska for implementing the HOTEL optimization problem in DiscoGrad. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant no. 497901036.
|
2306.16108 | Is ChatGPT a Biomedical Expert? -- Exploring the Zero-Shot Performance
of Current GPT Models in Biomedical Tasks | We assessed the performance of commercial Large Language Models (LLMs)
GPT-3.5-Turbo and GPT-4 on tasks from the 2023 BioASQ challenge. In Task 11b
Phase B, which is focused on answer generation, both models demonstrated
competitive abilities with leading systems. Remarkably, they achieved this with
simple zero-shot learning, grounded with relevant snippets. Even without
relevant snippets, their performance was decent, though not on par with the
best systems. Interestingly, the older and cheaper GPT-3.5-Turbo system was
able to compete with GPT-4 in the grounded Q&A setting on factoid and list
answers. In Task 11b Phase A, focusing on retrieval, query expansion through
zero-shot learning improved performance, but the models fell short compared to
other systems. The code needed to rerun these experiments is available through
GitHub. | Samy Ateia, Udo Kruschwitz | 2023-06-28T11:24:48Z | http://arxiv.org/abs/2306.16108v2 | # Is ChatGPT a Biomedical Expert?1
###### Abstract
We assessed the performance of commercial Large Language Models (LLMs) GPT-3.5-Turbo and GPT-4 on tasks from the 2023 BioASQ challenge. In Task 11b Phase B, which is focused on answer generation, both models demonstrated competitive abilities with leading systems. Remarkably, they achieved this with simple zero-shot learning, grounded with relevant snippets. Even without relevant snippets, their performance was decent, though not on par with the best systems. Interestingly, the older and cheaper GPT-3.5-Turbo system was able to compete with GPT-4 in the grounded Q&A setting on Factoid and List answers. In Task 11b Phase A, focusing on retrieval, query expansion through zero-shot learning improved performance, but the models fell short compared to other systems. The code needed to rerun these experiments is available through GitHub2.
Zero-Shot Learning, LLMs, BioASQ, GPT-4, NER, Question Answering
## 1 Introduction
The recently released ChatGPT models GPT-3.5-Turbo and GPT-4 [1], with their unprecedented zero-shot performance on a variety of tasks, have sparked a surge in the development and application of LLMs. By participating in the eleventh CLEF BioASQ challenge [2], we wanted to explore how well these systems perform in specialized domains and whether they can compete with expert fine-tuned systems.
### BioASQ Challenge
BioASQ is a series of large-scale biomedical challenges associated with the CLEF 2023 conference. Its 11th iteration comprises three tasks [2]:
1. Synergy On Biomedical Semantic QA For Developing Issues
2. Biomedical Semantic QA
3. MedProcNER On MEDical PROCedure Named Entity Recognition
This paper focuses on the second and third tasks, the two tasks we participated in. The Biomedical Semantic QA task (Task B) is subdivided into Phase A (document retrieval and snippet extraction) and Phase B (Question Answering) [3].
We will start with a brief overview of some related work in Section 2 before outlining the experimental setup in Section 3. Section 4 presents our methodology followed by a discussion of Results in Section 5. Finally, we will also touch on ethical issues (Section 6) and offer some conclusions (Section 7).
## 2 Related Work
To motivate our approach and contextualise our contribution we will briefly discuss related work on recently released generative pre-trained transformer models, have a look at few-shot and zero-shot learning and touch on professional search, i.e. search in a professional context.
### GPT Models
Recently released generative pre-trained transformer (GPT) models GPT-4 and GPT-3.5-turbo are based on the transformer architecture [4] and pre-trained on the next token prediction task. These models are additionally fine-tuned with reinforcement learning from human feedback, which greatly improves their ability to follow instructions and the perceived utility of their generations [5]. OpenAI states that GPT-3.5-turbo is additionally optimized for chats, but does not disclose the exact training procedure used1.
Footnote 1: [https://platform.openai.com/docs/model-index-for-researchers](https://platform.openai.com/docs/model-index-for-researchers)
GPT-4 is the most recent and best performing model of OpenAI, which is, as of this writing, only programmatically accessible through closed beta API access2. It exhibits human-level performance on various professional and academic benchmarks and can process images as well as text [1].
Footnote 2: [https://openai.com/product/gpt-4](https://openai.com/product/gpt-4)
### Few and Zero-Shot Learning
These models improve over the earlier GPT-3 model which showed that in certain tasks sufficiently big LLMs can compete with fine-tuned transformer models using only few-shot learning, which greatly reduces the need for extensive training data [6].
In the **few-shot learning** setting, the GPT models are prompted with a text that contains a few examples of the tasks at hand, for example, multiple question-answer pairs, and at the end only the current question for which an answer should be generated by the model. The model then ideally completes this text by writing the correct answer.
In the **zero-shot learning** setting, the model is not supplied with any examples but rather only a direct question or abstract task description and is ideally able to generate a useful completion that answers the question or solves the task [7].
Zero-shot and few-shot learning is especially interesting for applications in specialized domains with no or sparse training data available. Prior work in the biomedical domain has shown that language models pre-trained on in-domain data outperform models pre-trained on open domain data [8][9][10]. In this work, we want to explore whether these new GPT models, that are extensively trained on vast amounts of open domain data, can compete with specialized fine-tuned models that are expected to participate in the challenge.
Even though these models are proprietary and neither the architecture nor the specific training process is known, several open-source alternatives have been developed such as OPT [11], BLOOM [12], or Pythia [13]. Projects based on these and other open source models are constantly improving, and some are already nearly reaching GPT-3.5-turbo level performance [14]. We therefore believe that studying these commercial models is valuable for establishing a baseline in zero-shot performance for upcoming open-source alternatives. These alternatives could potentially challenge state-of-the-art (SOTA) systems across a wide range of natural language processing (NLP) tasks.
### Professional Search
Professional search is search conducted in a work context [15]. This is an everyday activity for many professionals that comes with specific requirements which are different from the requirements of generic Web search [16]. The BioASQ challenge can be framed as a form of professional search in which the searchers are biomedical experts aiming to find answers to domain-specific questions.
Automatic query expansion plays a key part in many professional search contexts including search by healthcare information professionals, patent agents and recruitment professionals [17] as well as in conducting systematic reviews [18]. What is ultimately being submitted to the search system can turn out to be a fairly complex search strategy, a query involving domain-specific information based around Boolean operators. This is one of the motivations for us to explore automatic query expansion in our methodology.
## 3 Experimental Setup
We describe the experimental setup of the two BioASQ tasks that we participated in, Task 11 B and MedProcNER. For Task 11 B, a benchmark dataset of biomedical training and test questions in English, along with reference answers, was used; it was created from questions posed by biomedical experts [19].
### Task 11 B: Biomedical Semantic QA
For **Phase A**, the participating systems receive a list of biomedical questions such as "_Which protein is targeted by Herceptin?_" and should retrieve a list of up to 10 most relevant articles from the PubMed Annual Baseline Repository for 20233. Additionally, the systems should also create a list of at most 10 most relevant snippets extracted from the previously retrieved article titles or abstracts. Participating systems are compared based on the Mean Average Precision (MAP) metric.
Footnote 3: [https://lhncbc.nlm.nih.gov/ii/information/MBR.html](https://lhncbc.nlm.nih.gov/ii/information/MBR.html)
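For concreteness, the following is a minimal sketch of how average precision over a ranked list of PMIDs could be computed; the official BioASQ evaluation may differ in details such as the exact denominator and truncation, so this is only an illustration of the metric, not the official scorer.

```
def average_precision(retrieved, relevant, cutoff=10):
    # Simplified average precision for one ranked list of document IDs.
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(retrieved[:cutoff], start=1):
        if doc_id in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), cutoff) if relevant else 0.0


def mean_average_precision(runs):
    # runs: list of (retrieved_list, relevant_set) pairs, one pair per question.
    return sum(average_precision(r, g) for r, g in runs) / len(runs)
```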
In **Phase B**, the participating systems receive the same questions as in Phase A, along with a set of gold (correct) articles and snippets selected by biomedical experts. They should then generate an _ideal_ paragraph sized (at most 200 words) answer based on these snippets. The questions are also tagged with either _Yes/No_, _Factoid_, _Summary_, or _List_ type indicating the format for an additional _exact_ answer that should be created by these systems.
* _Yes/no_ questions require the exact answer to be either "yes" or "no".
* _Factoid_ questions require the exact answer to be a list of up to 5 entity names or other short expressions ordered by decreasing confidence.
* _List_ questions require the exact answer to be a list of up to 100 entity names or similar short expressions.
* _Summary_ questions do not require an additional exact answer, only the _ideal_ answer needs to be returned.
### MedProcNER: MEDical PROCedure Named Entity Recognition
The **MedProcNER** task [20] focuses on the detection and mapping of medical procedures in Spanish texts. It consists of three subtasks:
* In subtask 1, systems have to identify medical procedures from Spanish clinical reports.
* In subtask 2, systems have to map the medical procedures identified in subtask 1 to SNOMED CT codes [21].
* In subtask 3, systems have to assign SNOMED CT codes to the full clinical report for later indexing.
## 4 Methodology
### Model
We accessed GPT-3.5-turbo and GPT-4 through the OpenAI API4. We used a simple system message to set the behavior of the model, which can be seen in Listing 1.
Footnote 4: [https://platform.openai.com/docs/guides/chat/introduction](https://platform.openai.com/docs/guides/chat/introduction)
```
You are BioASQ-GPT, an AI expert in question answering, research, and information retrieval in the biomedical domain.
```
Listing 1: System Message
This system message was then followed by task specific zero-shot prompts, including necessary information such as the questions, snippets, or retrieved article titles. More details on these prompts can be found in the subsection corresponding to the particular task. Prompt engineering has developed into a very active field and at this point we should note that there is scope for plenty of future work exploring more systematically the best way of prompting the system for the task at hand.
We experimented with a subset of the BioASQ training and development data to explore the system's behavior and evaluate the performance of individual modules.
Additional parameters sent in the API request to the models were _temperature_, which controls the randomness of the completion; _frequency_penalty_, which discourages repetition of words or phrases; and _presence_penalty_, which has a similar effect. We set _temperature_ to 0 for all requests to obtain reproducible results over multiple runs.
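As an illustration, a single request with these settings could be issued roughly as follows. This sketch is based on the openai Python package as it existed in early 2023, the system message is the one from Listing 1, and the prompt passed in is a placeholder rather than any of the exact prompts we used.

```
import openai

openai.api_key = "..."  # in practice read from an environment variable

SYSTEM_MESSAGE = ("You are BioASQ-GPT, an AI expert in question answering, research, "
                  "and information retrieval in the biomedical domain.")

def ask_model(prompt, model="gpt-4", frequency_penalty=0.0, presence_penalty=0.0):
    # Send one zero-shot prompt and return the generated text.
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": prompt},
        ],
        temperature=0,                      # reproducibility, up to residual randomness
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty,
    )
    return response["choices"][0]["message"]["content"]
```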
As these models are currently **non-deterministic**, even with temperature set to 0, there is a residual randomness in the generations, which can lead to slightly different results in each run5. We also conducted a limited test to roughly estimate the variance of the results by repeating five runs over the same 50 questions.
Footnote 5: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
### Task 11 B
#### 4.2.1 Phase A
Our approach used zero-shot learning for query expansion, query reformulation and reranking directly with the models. For document retrieval, we queried the eUtils API with a _maxdate_ cutoff corresponding to the creation date of the relevant 2023 PubMed snapshot. The Entrez Programming Utilities (eUtils) API is a set of web applications provided by the National Center for Biotechnology Information (NCBI), which offers programmatic access to the various databases and functionalities of the NCBI resources, such as PubMed. We also used the sort by relevance option of PubMed and retrieved only the top 50 results for a given query.
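A minimal sketch of such a PubMed request through the eUtils ESearch endpoint is shown below; the cutoff date is an illustrative placeholder, and error handling is omitted.

```
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query, max_results=50, maxdate="2023/01/01"):
    # Return up to `max_results` PMIDs for `query`, sorted by PubMed relevance.
    params = {
        "db": "pubmed",
        "term": query,
        "retmax": max_results,
        "sort": "relevance",
        "datetype": "pdat",        # restrict by publication date
        "mindate": "1800/01/01",
        "maxdate": maxdate,        # cutoff approximating the 2023 snapshot (illustrative)
        "retmode": "json",
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]
```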
We acknowledge that querying the live PubMed database with the corresponding date cutoff is not the same as searching through the downloaded static snapshot or using the search interface provided by the BioASQ organizers. Articles could be deleted or modified in PubMed, which could affect the reproducibility and comparability of the results with other systems. To estimate the impact of this approach, we looked up all articles that were included in the gold set provided in Phase B of the task after the challenge concluded and found that one out of the 899 referenced articles was no longer retrievable in PubMed6.
Footnote 6: Article from batch 4 that is no longer accessible in PubMed: [http://www.ncbi.nlm.nih.gov/pubmed/36459075](http://www.ncbi.nlm.nih.gov/pubmed/36459075)
We were most interested in the impact of the query expansion step and therefore conducted one run with and one without query expansion for both models, where we instead sent the question directly as a query to PubMed.
The exact steps were:
1. Query expansion
2. Search on PubMed
3. Query refinement only if no documents were found and one additional search on PubMed
4. Reranking of top 50 articles based on title string
All of these steps were executed automatically in Python without manual intervention; the exact code used is available on GitHub7. The zero-shot learning prompt used for query expansion can be seen in Listing 2, where the placeholder _[question]_ was replaced by the question that was currently being processed by the system. For query expansion, we set _frequency_penalty_ to 0.5 and _presence_penalty_ to 0.1.
Footnote 7: [https://github.com/SamyAteia/bioasq](https://github.com/SamyAteia/bioasq)
Some example query expansions for this prompt can be seen in Listing 3. Interestingly, these models seem to not only know what Boolean syntax is accepted by PubMed but also important
internal fields such as _MeSH Terms_ and the syntax on how to query on these fields, but these were not often used in the expanded queries.8
Footnote 8: The identification of suitable MeSH terms in structured queries for systematic reviews has been explored in detail elsewhere, e.g. [22, 23]
```
Question: What are the outcomes of ubiquitination?
Expanded Query: ("ubiquitination" OR "ubiquitin modification" OR "ubiquitin conjugation" OR "ubiquitin pathway") AND ("outcomes" OR "effects" OR "consequences")
Question: What is the incidence of Leigh syndrome?
Expanded Query: ("Leigh syndrome"[MeSH Terms] OR "Leigh syndrome"[All Fields]) AND ("incidence"[MeSH Terms] OR "incidence"[All Fields] OR "prevalence"[MeSH Terms] OR "prevalence"[All Fields])
```
Listing 3: Query Expansion Examples
For the optional query reformulation step, we used the prompt in Listing 4. This step was introduced after it became clear that some queries constructed by the models were overly specific and returned no results. The placeholder _[question]_ in the prompt was replaced by the question that was currently processed by the system, and the placeholder _[original_query]_ was replaced by the original expanded query that returned no results. For query reformulation, we set _frequency_penalty_ to 0.6 and _presence_penalty_ to 0.2. An example of a query reformulation that generated a slightly broader query that then led to some results can be seen in Listing 5. Additionally, terms added to the query are highlighted in gray.
```
{"role": "user", "content": f"""Given that the following search query for PubMed has returned no documents, please generate a broader query that retains the original question's context and relevance. Assume that phrases are not stemmed; therefore, generate useful variations. Return only the query that can directly be used without any explanation text. Focus on maintaining the query's precision and relevance to the original question. Original question: '{ question}', Original query: '{original_query}'}
```
Listing 4: Query Reformulation Prompt
For the final reranking step, we took the titles of the top 50 articles from PubMed's relevance-sorted results and prompted the model to rerank these articles given the original question and return the top 10. The prompt used for the reranking can be seen in Listing 6, where _[articles_str]_ is replaced by the list of returned article titles, _[question]_ is replaced by the question that is currently being processed by the system, and _[nr_of_articles]_ is replaced by 10, or by a smaller number if PubMed returned fewer relevant articles. For reranking, we set _frequency_penalty_ to 0.3 and _presence_penalty_ to 0.1.
```
{"role": "user", "content": f"{articles_str}\n\n Given these articles and the question: '{question }'. Rerank the articles based on their relevance to the question and return the top { nr_of_articles} most relevant articles as a comma separated list of their index ids. Don't explain your answer, return only this list, for example: '1, 2, 3, 4' "}
```
Listing 6: Reranking Prompt
The returned list was then mapped back to the articles retrieved from PubMed, and these were returned as the required output of Phase A.
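Mapping the model output back to articles amounts to parsing the comma-separated index list, roughly as in the following sketch; the 1-based indexing is an assumption about how the titles were numbered in the prompt, and malformed outputs would need additional guards.

```
def parse_reranking(model_output, articles, n_articles=10):
    # Map an answer like "3, 1, 7, ..." back to the retrieved article records.
    reranked = []
    for token in model_output.split(","):
        token = token.strip()
        if token.isdigit():
            idx = int(token) - 1          # indices in the prompt assumed to start at 1
            if 0 <= idx < len(articles):
                reranked.append(articles[idx])
    return reranked[:n_articles]
```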
We also explored the extraction of snippets for the Phase A task but abandoned it, as it required sending all abstracts of the 10 returned papers for processing to the model, which was especially expensive for the GPT-4 model because API usage is priced on token counts, and we were exploring these models on a limited budget.
#### 4.2.2 Phase B
In Phase B, we used the gold (correct) snippets from the test set and sent them along with the question and description of the answer format to the model.
We also conducted a test where this grounding information in the form of relevant snippets was omitted and just the question and description of the answer format were sent to the models.
The prompts utilized for generating these answer types are listed as follows: for ideal answers, refer to Listing 7; for Yes/No answers, see Listing 8; for List answers, Listing 9; and for Factoid responses, see Listing 10.
In all these prompts, _{question['body']_} is replaced by the question that is currently processed by the system, and _{snippets}_ is replaced by the snippets provided by the test set.
For all answer types, we set _frequency_penalty_ to \(0.5\). _Presence_penalty_ was set to \(0.3\) for Yes/No answers, to \(0.1\) for both List and Factoid answers, and to \(0.7\) for the ideal answer type.
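For illustration, a grounded yes/no request with the parameters above could be assembled as in the sketch below; the user prompt wording is a simplified stand-in for the actual Listing 8, not a quotation of it.

```
import openai

BIOASQ_SYSTEM = ("You are BioASQ-GPT, an AI expert in question answering, research, "
                 "and information retrieval in the biomedical domain.")

def answer_yes_no(question, snippets, model="gpt-4"):
    # Grounded yes/no exact answer; the prompt text is illustrative only.
    context = "\n".join(snippets)
    prompt = (f"{context}\n\nBased only on these snippets, answer the question "
              f"'{question}' with a single lowercase word: 'yes' or 'no'.")
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system", "content": BIOASQ_SYSTEM},
                  {"role": "user", "content": prompt}],
        temperature=0,
        frequency_penalty=0.5,   # value used for all answer types
        presence_penalty=0.3,    # value used for yes/no answers
    )
    return response["choices"][0]["message"]["content"]
```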
### MedProcNER
For the MedProcNER task, we translated all prompt templates, including the system prompt, to Spanish using and comparing DeepL9 and ChatGPT10. For subtask 1, instead of the zero-shot prompting used before, we explored a few-shot prompting approach, in which we included three examples from the training set in the request sent to the OpenAI API. We also compared the performance of GPT-3.5-turbo and GPT-4.
The relevant Python code part that constructed the prompt can be seen in Listing 11. The _examples_ list mentioned therein contained three examples taken from the training set.
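A sketch of how such a few-shot message list can be assembled is given below; the Spanish instruction text and the output format are illustrative stand-ins for the actual (translated) prompt of Listing 11.

```
def build_few_shot_messages(examples, clinical_text, system_prompt):
    # examples: list of (report_text, procedure_mentions) pairs from the training set.
    instruction = "Extrae los procedimientos médicos mencionados en el siguiente texto."
    messages = [{"role": "system", "content": system_prompt}]
    for report, procedures in examples:
        messages.append({"role": "user", "content": f"{instruction}\n\nTexto: {report}"})
        messages.append({"role": "assistant", "content": "; ".join(procedures)})
    messages.append({"role": "user", "content": f"{instruction}\n\nTexto: {clinical_text}"})
    return messages
```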
For subtask 2, we used the gazetteer file provided by the MedProcNER task organizers. We filtered the file for all SNOMED CT codes tagged as procedures, stemmed their terms, and used Levenshtein-distance-based fuzzy matching to find an entry for a given procedure mention. The detailed code used for all tasks is available in the aforementioned GitHub repository.
For subtask 3, we just joined all SNOMED CT codes identified in subtask 2 for one document.
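A simplified version of the lookup for subtasks 2 and 3 is sketched below; the concrete libraries shown (NLTK's Spanish Snowball stemmer and rapidfuzz) are illustrative choices, as the description above only fixes the general approach of stemming plus Levenshtein-based fuzzy matching over the gazetteer.

```
from nltk.stem.snowball import SnowballStemmer
from rapidfuzz import process, fuzz

stemmer = SnowballStemmer("spanish")

def stem_term(term):
    # Lowercase and stem every token of a gazetteer term or extracted mention.
    return " ".join(stemmer.stem(tok) for tok in term.lower().split())

def link_procedure(mention, gazetteer, threshold=85):
    # gazetteer: dict mapping stemmed procedure term -> SNOMED CT code (subtask 2).
    match = process.extractOne(stem_term(mention), list(gazetteer), scorer=fuzz.ratio)
    if match is not None and match[1] >= threshold:
        return gazetteer[match[0]]
    return None

def index_document(mentions, gazetteer):
    # Subtask 3: collect all SNOMED CT codes linked for one clinical report.
    codes = {link_procedure(m, gazetteer) for m in mentions}
    return sorted(c for c in codes if c is not None)
```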
## 5 Results
The systems participating in the Biomedical Semantic Q&A task were evaluated in four batches. Results are reported for every batch. For readability, we only included the results of our systems and the top performing systems. The full result tables are publicly available on the BioASQ website11
Footnote 11: [http://participants-area.bioasq.org/results/11b/phaseA/](http://participants-area.bioasq.org/results/11b/phaseA/)
### Task 11 B Phase A
We participated with 4 systems in Task 11 B Phase A; the systems' names and their properties are listed as follows:
* UR-gpt4-zero-ret corresponds to GPT-4 with query expansion.
* UR-gpt3.5-turbo-zero corresponds to GPT-3.5-turbo with query expansion.
* UR-gpt4-simple corresponds to GPT-4 without query expansion.
* UR-gpt3.5-t-simple corresponds to GPT-3.5-turbo without query expansion.
The following Table 1 shows the results of our systems participating in the 4 batches. MAP was the official metric to compare the systems. N stands for the number of participating systems in each batch.
One observation is that GPT-4 achieved better results than GPT-3.5-turbo in all batches except batch 3; it seems to perform better both with query expansion and in the rerank-only setting without query expansion. Query expansion consistently improves the results for both models in all batches. It greatly improves recall in all batches, and in most batches precision is also slightly increased, except in batch 1, where it leads to decreased precision for GPT-3.5-turbo but an overall improved F1 score.
In general, our approach performs worse than most systems. This could be due to the fact that we do not do any embedding based neural retrieval, but instead only rely on the keywords created by the models in the query expansion step and the relevancy ranking provided by PubMed. The reranking window of only 50 article titles might also be too small, or the information provided by the titles is not sufficient for a more effective reranking. A thorough ablation study in future work could help explain the contribution of these individual factors to the overall system performance.
Using only query expansion in the retrieval phase and not having to do any embedding calculations during indexing does come with advantages for applying such an approach to existing or huge search use-cases where efficient reindexing with more advanced embedding based approaches might not be feasible. On the other hand, the used models do take several seconds to create results for both reranking and query expansion, which could limit their usefulness in classical enterprise-search use-cases if sub-second response times are expected.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Batch & Position & System & Precision & Recall & F-Measure & **MAP** & GMAP \\ \hline & 1 & Top Competitor & 0.2118 & 0.6047 & 0.2774 & 0.4590 & 0.0267 \\ & 19 & UR-gpt4-zero-ret & 0.1664 & 0.3352 & 0.1955 & 0.2657 & 0.0009 \\ Batch 1 & 21 & UR-gpt3.5-turbo-zero & 0.1488 & 0.2847 & 0.1782 & 0.2145 & 0.0009 \\ N = 33 & 24 & UR-gpt4-simple & 0.1654 & 0.2508 & 0.1799 & 0.1809 & 0.0005 \\ & 25 & UR-gpt3.5-t-simple & 0.1600 & 0.2290 & 0.1734 & 0.1769 & 0.0003 \\ \hline & 1 & Top Competitor & 0.1027 & 0.5149 & 0.1618 & 0.3852 & 0.0104 \\ Batch 2 & 20 & UR-gpt4-simple & 0.0945 & 0.3011 & 0.1277 & 0.1905 & 0.0011 \\ N = 33 & 21 & UR-gpt3.5-turbo-zero & 0.1153 & 0.2977 & 0.1455 & 0.1736 & 0.0008 \\ \hline & 1 & Top Competitor & 0.0800 & 0.4776 & 0.1320 & 0.3185 & 0.0049 \\ & 21 & UR-gpt3.5-turbo-zero & 0.1295 & 0.3258 & 0.1646 & 0.2048 & 0.0008 \\ Batch 3 & 22 & UR-gpt4-zero-ret & 0.1086 & 0.2289 & 0.1303 & 0.1930 & 0.0003 \\ N = 35 & 23 & UR-gpt4-simple & 0.1089 & 0.2102 & 0.1238 & 0.1727 & 0.0002 \\ & 24 & UR-gpt3.5-t-simple & 0.1078 & 0.1981 & 0.1217 & 0.1553 & 0.0002 \\ \hline & 1 & Top Competitor & 0.0933 & 0.4292 & 0.1425 & 0.3224 & 0.0030 \\ & 18 & UR-gpt4-zero-ret & 0.0791 & 0.1728 & 0.0933 & 0.1251 & 0.0002 \\ Batch 4 & 19 & UR-gpt3.5-turbo-zero & 0.0922 & 0.1956 & 0.1025 & 0.1139 & 0.0002 \\ N = 27 & 20 & UR-gpt4-simple & 0.0785 & 0.1563 & 0.0864 & 0.1010 & 0.0002 \\ & 21 & UR-gpt3.5-t-simple & 0.0752 & 0.1319 & 0.0810 & 0.0912 & 0.0001 \\ \hline \end{tabular}
\end{table}
Table 1: Task 11 B Phase A, Batches 1-4
### Task 11 B Phase B
We participated with 4 systems in Task 11 B Phase B, the systems' names and their properties are listed as follows:
* _UR-gpt4-zero-ret_ corresponds to GPT-4 grounded with snippets.
* _UR-gpt3.5-turbo-zero_ corresponds to GPT-3.5-turbo grounded with snippets.
* _UR-gpt4-simple_ corresponds to GPT-4 answering directly without reading snippets.
* _UR-gpt3.5-t-simple_ corresponds to GPT-3.5-turbo answering directly without reading snippets.
We were not able to complete all runs in batches 1 and 2, which is why some results are missing. We report the results for each answer format (Yes/No, Factoid, List) separately in the following tables. For readability, we again only included the results of our systems and the top-performing systems, the full result tables are publicly available on the BioASQ website12.
Footnote 12: [http://participants-area.bioasq.org/results/11b/phaseB/](http://participants-area.bioasq.org/results/11b/phaseB/)
In the Yes/No question format, our results indicate that GPT-4 surpasses GPT-3.5-turbo in both the grounded and ungrounded settings. For batches 1 and 3, the ungrounded GPT-4 system _UR-gpt4-simple_ even showed a tendency to perform better than the grounded variant of GPT-3.5-turbo _UR-gpt3.5-turbo-zero_ as can be seen in Table 2.
In the Factoid question format, both grounded GPT-4 and grounded GPT-3.5-turbo achieved an MRR score of 0.5789 in batch 1, taking first and second place over all other systems. In the remaining batches, GPT-3.5-turbo stayed consistently in the top 6 systems, while GPT-4 only reached 11th and 13th place in batches 3 and 4. This mixed performance comparison between GPT-3.5-turbo and GPT-4 was also observed in the List question format, where GPT-3.5-turbo achieved 1st
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Batch & Position & System & Accuracy & F1 Yes & F1 No & **Macro F1** \\ \hline \multirow{4}{*}{Batch1} & 1 & Top Competitor & 0.9583 & 0.9697 & 0.9333 & 0.9515 \\ & 8 & UR-gpt4-zero-ret & 0.9167 & 0.9412 & 0.8571 & 0.8992 \\ N = 33 & 9 & UR-gpt4-simple & 0.9167 & 0.9412 & 0.8571 & 0.8992 \\ & 13 & UR-gpt3.5-turbo-zero & 0.8750 & 0.9091 & 0.8000 & 0.8545 \\ \hline \multirow{4}{*}{Batch2} & 1 & Top Competitor & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ & 7 & UR-gpt4-zero-ret & 0.9583 & 0.9655 & 0.9474 & 0.9564 \\ N = 42 & 12 & UR-gpt3.5-turbo-zero & 0.9167 & 0.9333 & 0.8889 & 0.9111 \\ \hline \multirow{4}{*}{Batch3} & 1 & Top Competitor & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ & 9 & UR-gpt4-zero-ret & 0.9167 & 0.9375 & 0.8750 & 0.9063 \\ \cline{1-1} & 12 & UR-gpt4-simple & 0.8750 & 0.9032 & 0.8235 & 0.8634 \\ \cline{1-1} & 14 & UR-gpt3.5-turbo-zero & 0.8750 & 0.9091 & 0.8000 & 0.8545 \\ \cline{1-1} & 21 & UR-gpt3.5-t-simple & 0.7917 & 0.8485 & 0.6667 & 0.7576 \\ \hline \multirow{4}{*}{Batch4} & 1 & Top Competitor & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ & 7 & UR-gpt4-zero-ret & 0.9286 & 0.8889 & 0.9474 & 0.9181 \\ \cline{1-1} & 14 & UR-gpt3.5-turbo-zero & 0.9286 & 0.8571 & 0.9524 & 0.9048 \\ \cline{1-1} & 19 & UR-gpt4-simple & 0.7857 & 0.7273 & 0.8235 & 0.7754 \\ \cline{1-1} & 29 & UR-gpt3.5-t-simple & 0.4286 & 0.5000 & 0.3333 & 0.4167 \\ \hline \end{tabular}
\end{table}
Table 2: Task 11 B Phase B, Yes/No Questions Batches 1-4
place in batch 2 but was behind GPT-4 in batches 3 and 4. The results for the Factoid question format are shown in Table 3 and the results for the List question format are shown in Table 4.
While GPT-4 seems to perform consistently better than GPT-3.5-turbo in the Yes/No question format, there is no clear winner in the more extractive Factoid and List formats.
Both models without grounding information from snippets were not able to compete with the top models but were often placed slightly below the average performing systems, which
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Batch & Position & System & Strict Acc. & Lenient Acc. & **MRR** \\ \hline & 1 & Top Competitor & 0.7861 & 0.6668 & 0.7027 \\ Batch1 & **2** & UR-gpt3.5-turbo-zero & 0.6742 & 0.7249 & 0.6917 \\ N = 33 & 8 & UR-gpt4-zero-ret & 0.6472 & 0.6530 & 0.6495 \\ & 19 & UR-gpt4-simple & 0.4000 & 0.4014 & 0.3939 \\ \hline & **1** & UR-gpt3.5-turbo-zero & 0.4598 & 0.4671 & 0.4316 \\ Batch2 & 2 & Next Competitor & 0.5099 & 0.3577 & 0.3980 \\ N = 42 & 4 & UR-gpt4-zero-ret & 0.3742 & 0.4369 & 0.3828 \\ \hline & 1 & Top Competitor & 0.6519 & 0.6058 & 0.6049 \\ & 3 & UR-gpt4-zero-ret & 0.5518 & 0.6597 & 0.5736 \\ Batch3 & 9 & UR-gpt3.5-turbo-zero & 0.5600 & 0.5140 & 0.5101 \\ N = 47 & 24 & UR-gpt3.5-t-simple & 0.2690 & 0.2385 & 0.2333 \\ & 25 & UR-gpt4-simple & 0.2519 & 0.2343 & 0.2305 \\ \hline & 1 & Top Competitor & 0.7139 & 0.8061 & 0.7440 \\ Batch4 & 2 & UR-gpt4-zero-ret & 0.6902 & 0.7818 & 0.7191 \\ N = 52 & 10 & UR-gpt3.5-turbo-zero & 0.6090 & 0.6710 & 0.6196 \\ & 21 & UR-gpt4-simple & 0.4440 & 0.4214 & 0.4127 \\ & 26 & UR-gpt3.5-t-simple & 0.3944 & 0.3362 & 0.3470 \\ \hline \end{tabular}
\end{table}
Table 4: Task 11 B Phase B, List Questions Batches 1-4
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Batch & Position & System & Strict Acc. & Lenient Acc. & **MRR** \\ \hline & **1** & UR-gpt4-zero-ret & 0.5789 & 0.5789 & 0.5789 \\ Batch1 & **2** & UR-gpt3.5-turbo-zero & 0.5263 & 0.6316 & 0.5789 \\ N = 33 & 3 & Next Competitor & 0.5263 & 0.6316 & 0.5570 \\ & 22 & UR-gpt4-simple & 0.2105 & 0.2632 & 0.2368 \\ \hline & 1 & Top Competitor & 0.5455 & 0.6364 & 0.5909 \\ Batch2 & 2 & Next Competitor & 0.5455 & 0.6364 & 0.5909 \\ N = 42 & 3 & UR-gpt3.5-turbo-zero & 0.5455 & 0.5909 & 0.5682 \\ & 4 & UR-gpt4-zero-ret & 0.5455 & 0.5909 & 0.5682 \\ \hline & 1 & Top Competitor & 0.4615 & 0.6538 & 0.5205 \\ & 5 & UR-gpt3.5-turbo-zero & 0.5000 & 0.5000 & 0.5000 \\ Batch3 & 11 & UR-gpt4-zero-ret & 0.4615 & 0.5000 & 0.4808 \\ N = 47 & 22 & UR-gpt4-simple & 0.2692 & 0.4615 & 0.3654 \\ & 27 & UR-gpt3.5-t-simple & 0.3077 & 0.3077 & 0.3077 \\ \hline & 1 & Top Competitor & 0.6452 & 0.8710 & 0.7323 \\ Batch4 & 6 & UR-gpt3.5-turbo-zero & 0.6452 & 0.6452 & 0.6452 \\ N = 52 & 13 & UR-gpt4-zero-ret & 0.5161 & 0.6129 & 0.5645 \\ & 30 & UR-gpt3.5-t-simple & 0.2581 & 0.2903 & 0.2742 \\ & 33 & UR-gpt4-simple & 0.2258 & 0.2581 & 0.2366 \\ \hline \end{tabular}
\end{table}
Table 3: Task 11 B Phase B, Factoid Questions Batches 1-4
is still surprisingly good as in this setting the models need to rely only on the open-domain knowledge acquired during training for answering these questions.
### Task MedProcNER
In the MedProcNER task, GPT-4 performed better than GPT-3.5-turbo, but was not able to compete with the best performing system. The results are shown in Table 5. Our simple gazetteer based entity linking and indexing approach performed poorly compared to the top-performing system. At the time of this writing, the performance of other systems involved in the task has not been published yet.
Even though the few-shot NER approach did not compete with the top-performing system in the MedProcNER task, it still indicates that GPT-4 can be used for specialized domains in multilingual tasks while only using a minimal amount of training data.
### Discussion and Future Work
The results from our participation in the BioASQ challenge indicate that the current commercial GPT models GPT-3.5-turbo and GPT-4 can compete with other, presumably fine-tuned, leading systems in question answering in the biomedical domain, while only being zero-shot prompted with relevant snippets. Even without relevant snippets, relying only on the biomedical knowledge acquired during their pre-training, these models performed better than some of the other systems participating in the task.
One big challenge in using zero-shot learning with these GPT models is prompt engineering. It still seems to be more of an art than a science and requires considerable testing [24]. During system development, it became clear that the expanded queries in Task 11 B Phase A were sometimes too specific and did not return results. We tried to prompt the models to create broader queries using fewer exact-phrase terms (which are not stemmed in PubMed), but the overall system performance on our development set declined. We therefore experimented with using GPT-4 to come up with a better prompt by supplying it with the original prompt and the 5 worst-performing and 5 best-performing queries. The new prompt actually increased the performance of the system. This self-prompting approach might be interesting to investigate further in future work.
Nevertheless, the zero-shot learning approach makes the usage of these models very accessible, as it does not require thorough data preparation, knowledge about classical deep learning techniques, or advanced programming skills.
A prominent problem with these GPT models is so-called hallucinations [25], i.e., unsupported or factually wrong statements in the responses. These problems might be especially
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Task** & **Top Performing System F1** & **GPT-3.5-turbo F1** & **GPT-4 F1** \\ \hline NER & 0.7985 & 0.3002 & 0.4814 \\ \hline EL & 0.5707 & 0.1264 & 0.1976 \\ \hline Indexing & 0.6242 & 0.1785 & 0.2695 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of F1 scores of different systems for NER, Entity Linking, and Indexing tasks.
observable in the ideal answer setting. In future work, we want to conduct a thorough investigation of the factuality of the ideal answers and especially compare the grounded and ungrounded settings. This could provide error rate estimates that might be useful for generative search systems in specialized domains.
As noted earlier, these commercial models are not completely deterministic, even when the temperature parameter is set to 0. OpenAI states in their documentation:
"OpenAI models are non-deterministic, meaning that identical inputs can yield different outputs. Setting the temperature parameter to 0 will make the outputs mostly deterministic, but a small amount of variability may remain."13
Footnote 13: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
We had concerns about the potential cascading effect of such residual non-determinism, especially in the context of query expansion. To estimate this variability, we performed a limited test by repeating the retrieval task from Task 11 B Phase A over 50 questions taken from the training set five times with the same model. Our test results showed minimal variance across metrics such as MAP, precision, recall, and F-measure, indicating that while variability exists, its impact is currently minimal, with broader investigations pending for future work.
This residual non-determinism in the model output also led to some instability in the system when we fully relied on the model returning the right output format for further processing. For example, in the Yes/No question format, the evaluation system of the BioASQ organizers expects the answers to be all lowercase, either "yes" or "no". The models often returned variants such as "Yes" or "Yes." even if explicitly prompted not to do so. This necessitated an additional normalization post-processing step.
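The normalization itself is a small post-processing function along the following lines (a minimal sketch; handling of entirely unexpected outputs may need to be stricter):

```
def normalize_yes_no(raw_answer):
    # Map outputs like "Yes.", " yes", or "No," to the expected "yes"/"no" strings.
    cleaned = raw_answer.strip().lower().strip(".!,\"' ")
    if cleaned.startswith("yes"):
        return "yes"
    if cleaned.startswith("no"):
        return "no"
    return cleaned  # leave anything unexpected for manual inspection
```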
In the MedProcNER task, where we used few-shot learning, it seemed that the examples greatly assisted the model in returning the correct output format. We suspect that giving even just a few examples is a more effective way to guide the models towards the expected output format than explicitly describing the format in a zero-shot learning prompt.
Even if the models were outputting the right format, the overall system was still unstable due to the instability of the OpenAI API. In every run, there were at least 2-3 requests that failed due to internal server errors or the model being overloaded with requests. Thus, retry loops must be incorporated when accessing such external services.
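In practice this means wrapping every request in a retry loop, for example with exponential backoff as sketched below; the exception classes are those exposed by the openai package at the time of the challenge and are listed here under that assumption.

```
import time
import openai

def chat_with_retries(max_retries=5, **request_kwargs):
    # Call the chat completions endpoint, retrying on transient API failures.
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(**request_kwargs)
        except (openai.error.APIError, openai.error.RateLimitError,
                openai.error.ServiceUnavailableError, openai.error.Timeout):
            time.sleep(2 ** attempt)  # exponential backoff before the next attempt
    raise RuntimeError("OpenAI API request failed after all retries")
```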
As usage of these models is priced based on token count, some use-cases might not be financially feasible yet. Only running one evaluation batch with GPT-4 can cost around $10 in model usage. At the same time, the GPT-4 model was still much slower in answering requests than GPT-3.5-turbo. These two factors led us to not participate in the snippet generation task, as this task is especially demanding regarding both the amount of tokens to be processed in the prompt and generated as a response. In general, the economic barrier to using these commercial models may hinder some researchers due to the cost of usage. Also, over-reliance on these models might stifle innovation in other research areas.
We also conducted a limited test with grounding the query expansion by suggesting semantically related terms from the word embeddings supplied by the BioASQ organizers, but these terms led to queries that performed worse than just ungrounded ones. We did not investigate this approach thoroughly and leave it open for future work.
Some of our results might indicate that the performance gap between presumably smaller (GPT-3.5-turbo) and more complex models (GPT-4) is narrower in the grounded extractive Q&A setting, because GPT-3.5-turbo sometimes performed better than GPT-4 in answering Factoid or List questions in some of the batches. It would be interesting to see how model performance in this setting scales with model size, and to test whether the use of much smaller generative models is feasible. Some related work in other use-cases already showed promising results in this direction [26][27]. This might open up new possibilities for using these models in enterprise search settings where confidential data must remain on-premise [28].
## 6 Ethical Considerations
The use of large language models like GPT-3.5-Turbo and GPT-4 in biomedical tasks presents several ethical considerations.
First, we must address data privacy. While these models do not retain specific training examples, there is a remote possibility of them generating outputs resembling sensitive data, or sensitive data included in a prompt might be repeated and further processed in downstream tasks. This issue has to be addressed when employing these models in a real world biomedical context.
Second, as these models may produce factually incorrect outputs or "hallucinations" [25], rigorous fact-checking mechanisms must be applied, especially when used in a biomedical context to prevent the spread of harmful misinformation.
Lastly, large language models operate as black-box algorithms, raising issues of interpretability, transparency, and accountability [29].
In conclusion, the potential of large language models in biomedical tasks is significant, but the ethical implications of their deployment need careful attention.
## 7 Conclusion
We showed that in-context learning, both zero- and few-shot, with recent LLMs trained with human feedback can compete with presumably fine-tuned state-of-the-art systems on some domain-specific question answering tasks. Zero- and few-shot learning can greatly simplify and speed up the development of complex NLP or IR systems, which might be especially useful for research and prototyping. It also opens up the possibility of improving use-cases where fine-tuning is not feasible due to a lack of available training data.
Prompt engineering for these models poses challenges, and grounding the answer generation with the right context information is an interesting problem for current and future generative search systems research. Even though the currently offered GPT models have severe limitations regarding cost of usage, speed, and factuality, we see promising research towards making these types of models more affordable and accessible and improving their overall performance and factuality.
## Acknowledgments
We want to thank the organizers of the BioASQ challenge for setting up this challenge and supporting us during our participation. We are also grateful for the feedback and recommendations of the anonymous reviewers.
|
2310.02843 | Incorporating Target Vehicle Trajectories Predicted by Deep Learning
Into Model Predictive Controlled Vehicles | Model Predictive Control (MPC) has been widely applied to the motion planning
of autonomous vehicles. An MPC-controlled vehicle is required to predict its
own trajectories in a finite prediction horizon according to its model. Beyond
this, the vehicle should also incorporate the prediction of the trajectory of
its nearby vehicles, or target vehicles (TVs) into its decision-making. The
conventional trajectory prediction methods, such as the constant-speed-based
ones, are too trivial to accurately capture the potential collision risks. In
this report, we propose a novel MPC-based motion planning method for an
autonomous vehicle with a set of risk-aware constraints. These constraints
incorporate the predicted trajectory of a TV learned using a
deep-learning-based method. A recurrent neural network (RNN) is used to predict
the TV's future trajectory based on its historical data. Then, the predicted TV
trajectory is incorporated into the optimization of the MPC of the ego vehicle
to generate collision-free motion. Simulation studies are conducted to showcase
the prediction accuracy of the RNN model and the collision-free trajectories
generated by the MPC. | Ni Dang, Zengjie Zhang, Jizheng Liu, Marion Leibold, Martin Buss | 2023-10-04T14:20:50Z | http://arxiv.org/abs/2310.02843v1 | Incorporating Target Vehicle Trajectories Predicted by Deep Learning Into Model Predictive Controlled Vehicles
###### Abstract
Model Predictive Control (MPC) has been widely applied to the motion planning of autonomous vehicles. An MPC-controlled vehicle is required to predict its own trajectories in a finite prediction horizon according to its model. Beyond this, the vehicle should also incorporate the prediction of the trajectory of its nearby vehicles, or target vehicles (TVs) into its decision-making. The conventional trajectory prediction methods, such as the constant-speed-based ones, are too trivial to accurately capture the potential collision risks. In this report, we propose a novel MPC-based motion planning method for an autonomous vehicle with a set of risk-aware constraints. These constraints incorporate the predicted trajectory of a TV learned using a deep-learning-based method. A recurrent neural network (RNN) is used to predict the TV's future trajectory based on its historical data. Then, the predicted TV trajectory is incorporated into the optimization of the MPC of the ego vehicle to generate collision-free motion. Simulation studies are conducted to showcase the prediction accuracy of the RNN model and the collision-free trajectories generated by the MPC.
## 1 Introduction
Model Predictive Control (MPC) has attracted increasing attention in autonomous driving due to its capability of incorporating traffic rules, the physical limitations of vehicles, and the collision avoidance requirements into driving control. MPC iteratively solves an optimization problem and gets a feasible trajectory that is subject to these constraints. An MPC-controlled ego vehicle (EV) is said to be able to interact with a target vehicle (TV) if it can predict the future behaviors of the TV and incorporate the predicted behaviors into its decision-making, such that the risk of potential collisions is avoided. Therefore, predicting the future behaviors of a TV is an important topic to realize risk-aware autonomous driving. Trajectory prediction of an autonomous vehicle is conventionally conducted by assuming a constant speed, i.e., the TV is moving while maintaining its current speed [1, 2, 3]. However, these assumptions ignore the influence of the real-time control inputs of the TV on its future trajectory, especially when it is required to perform a different driving task in a short future horizon. To solve this problem, a more realistic prediction method that does not only consider the current state of the TV but also its historical data should be proposed to achieve precise prediction.
Other than the constant-velocity-based trajectory prediction method, learning-based methods have been used to predict the trajectory of the target vehicles based on their historical trajectories. In [4], deep learning (DL) methods have been successfully used for predicting the behaviors of vehicles. Since the vehicle trajectories can be recognized as the sequences of vehicle positions, recurrent neural network (RNN) is most used due to their capability of handling data sequences. In [5], an RNN model with long-short-term memory units is used for trajectory prediction. In [6], the
technology of meta-induction learning is used to incorporate the interaction in a multi-vehicle system. A survey of deep learning methods to solve the vehicle trajectory prediction problem can be referred to in [7]. In this report, we use deep learning to predict the TV's trajectory and encode it into the safety constraint of an MPC. As a result, the MPC incorporates the interaction between the EV and the TV and thus produces risk-aware collision-free motion. To facilitate the interface between deep learning and MPC, calibration of the training data is performed. We also adjust the offset of the predicted TV trajectory to avoid the prediction errors caused by this offset. The rest of the paper is organized as follows. Sec. 2 introduces the MPC formulation for autonomous vehicles. Sec. 3 presents the deep learning-based prediction method. Simulation studies that validate the efficacy of the proposed method are shown in Sec. 4. We conclude our work and discuss the future work in Sec. 5
## 2 MPC Incorporating the Predicted TV Trajectory
This section presents the MPC-based motion planning framework incorporating the predicted TV trajectory. A two-vehicle system that contains an EV and a TV is considered. The EV is described using the following linearized and discretized kinematic bicycle model [1],
\[\xi_{t+1}=\xi_{0}+Tf^{\mathrm{c}}\left(\xi_{0},0\right)+A\left(\xi_{t}-\xi_{0 }\right)+Bu_{t},\ t\in\mathbb{N}, \tag{1}\]
where \(f^{\mathrm{c}}\) is a nonlinear continuous kinematical bicycle model introduced in [8, 9, 10, 11], \(\xi_{t}=(x_{t},y_{t},\psi_{t},v_{t})^{\intercal}\) is the state vector that consists of the longitudinal position \(x_{t}\) and lateral position \(y_{t}\), the velocity \(v_{t}\) and inertial heading \(\psi_{t}\) of the EV at time \(t\), \(\xi_{0}\) is the initial state of the system, \(u_{t}=(a_{t},\delta_{t})^{\intercal}\) is the control input that comprises the acceleration \(a_{t}\) and steering angle \(\delta_{t}\) at time \(t\), \(T\) is the sampling time, and \(A\) and \(B\) are the linearized system matrices calculated according to [12]. Then, an MPC controller iteratively solves the following optimal control problem at any current time \(t\),
\[\min_{\mathbf{u}} \sum_{k=0}^{N-1}(\left\|\xi_{k}-\xi_{k}^{\mathrm{ref}}\right\|_{Q }^{2}+\left\|u_{k}\right\|_{R}^{2}+\left\|\xi_{N}-\xi_{N}^{\mathrm{ref}} \right\|_{S}^{2})\] (2a) s. t. \[\xi_{k+1}=f^{\mathrm{d}}(\xi_{0},\xi_{k},u_{k}),\ k=0,1,\cdots,N, \tag{2b}\] \[\xi_{k}\in\Xi,\ k=0,1,\cdots,N,\] (2c) \[u_{k}\in\mathcal{U},\ k=0,1,\cdots,N-1,\] (2d) \[\xi_{k}\in\Xi_{k}^{\mathrm{safe}},k=1,2,\cdots,N, \tag{2e}\]
where \(k\) counts from current time \(t\) on, \(N\) is the prediction horizon, \(\xi_{k}^{\mathrm{ref}}\) is the reference trajectory to be tracked by the EV, \(\mathbf{u}=(u_{0},u_{1},\cdots,u_{N-1})^{\intercal}\) is the control input sequence to be solved, \(Q\in\mathbb{R}^{4\times 4}\), \(R\in\mathbb{R}^{2\times 2}\) and \(S\in\mathbb{R}^{4\times 4}\) are the weighting matrices, \(f^{\mathrm{d}}\) is the EV model defined in (1), \(\Xi\) is the safety set to describe the road boundaries, the limitations of the EV, and the traffic rules, \(\mathcal{U}\) is the feasible control set, and \(\xi_{k}\in\Xi_{k}^{\mathrm{safe}}\) is the safety set for the collision avoidance with the TV.
For any current time \(t\), the safety constraint \(\xi_{k}\in\Xi_{k}^{\mathrm{safe}}\) ensures that the EV avoids potential collisions with the TV at prediction time step \(k\). In this paper, we define \(\Xi_{k}^{\mathrm{safe}}\) via an elliptical region around the TV. The center of the ellipse is the geometric center of the TV, and the size of the ellipse is sufficiently large to cover the size of the TV. Let \(x_{k}^{\mathrm{TV}}\) and \(y_{k}^{\mathrm{TV}}\) denote the longitudinal and lateral positions of the predicted TV trajectory at step \(k\). Then, the distances between the vehicles in the two directions at prediction step \(k\) are \(\Delta x_{k}=x_{k}-x_{k}^{\mathrm{TV}}\) and \(\Delta y_{k}=y_{k}-y_{k}^{\mathrm{TV}}\), and the safety set \(\Xi_{k}^{\mathrm{safe}}\) is represented as
\[\Xi_{k}^{\mathrm{safe}}=\left\{\left(\Delta x_{k},\Delta y_{k}\right)\,\middle|\,\frac{\Delta x_{k}^{2}}{a^{2}}+\frac{\Delta y_{k}^{2}}{b^{2}}\geq 1\right\}. \tag{3}\]
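For illustration, checking whether a predicted relative position satisfies this elliptical constraint reduces to a one-line test, sketched below with the semi-axis values used later in the simulation section; in the MPC itself the condition enters as a hard inequality constraint of the optimization rather than a post-hoc check.

```
def outside_safety_ellipse(x_ev, y_ev, x_tv, y_tv, a=7.0, b=2.2):
    # True if the EV position lies outside the elliptical safety region around the TV.
    dx, dy = x_ev - x_tv, y_ev - y_tv
    return (dx / a) ** 2 + (dy / b) ** 2 >= 1.0
```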
We will introduce how to predict the TV trajectory \((x_{k}^{\mathrm{TV}},y_{k}^{\mathrm{TV}})\), \(k=0,1,\cdots,N\), in the next section.
## 3 Trajectory Prediction
In this section, we present how to predict the lane-changing trajectories of a vehicle using its historical trajectories. We first generate the lane-changing data of a vehicle using a polynomial interpolation method. Then, we use the generated data to train a deep neural network that is applied to predicting the future trajectory of a vehicle based on its historical trajectory.
### Data Generation
Predicting future trajectories using historical trajectories renders a regression problem. The primary step is to create the data set used to train a certain prediction model. The data set contains a cluster of historical trajectories of a vehicle as data samples. The ground truth label of each data sample is its corresponding future trajectory. The trajectories are retrieved from a typical type of lane-changing path.
#### 3.1.1 Lane-Changing Path Generation
Lane-changing requires that the vehicle smoothly switches from an original lane to a target lane while maintaining a constant longitudinal velocity \(v\). Therefore, we use piece-wise polynomial splines to represent the lane-changing path of a vehicle. A typical lane-changing path consists of the following three stages.
* **Preparation stage (I):** the vehicle prepares the lane-changing, moving along the original lane and maintaining a constant speed \(v\). This stage lasts for 2 s.
* **Changing stage (II):** the vehicle changes the lane, starting from the original lane at the original speed \(v\) and ending at the target lane at a target speed \(v\). This stage lasts for 4 s.
* **Finishing stage (III):** the vehicle completes the lane-changing, moving along the target lane at speed \(v\). This stage lasts for 2 s.
In this sense, the trajectories of the vehicle at stages I and III are straight lines. In stage II, the trajectory of the vehicle, represented as a sequence of planar coordinates \((x,y)\), is interpolated using a third-order polynomial function
\[y(x) =y_{0}+3(y_{T}-y_{0})\left(\frac{x-x_{0}}{x_{T}-x_{0}}\right)^{2}-2(y_{T}- y_{0})\left(\frac{x-x_{0}}{x_{T}-x_{0}}\right)^{3} \tag{4}\]
where \((x_{0},y_{0})\) and \((x_{T},y_{T})\) are the coordinates of the starting point and the ending point of the vehicle in stage II. The longitudinal coordinate \(x\) is sampled at a constant sampling time \(\Delta t=0.1\) s. The generated trajectory is sufficiently smooth with terminal conditions \(\dot{x}_{0}=\dot{x}_{T}=v\) and \(\dot{y}_{0}=\dot{y}_{T}=0\).
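A sketch of how one such three-stage lane-changing path can be sampled from the polynomial in (4) is shown below; the stage durations and sampling time follow the description above, while the lateral lane offset is an illustrative value.

```
import numpy as np

def lane_change_path(v, y0=0.0, yT=3.5, dt=0.1, t1=2.0, t2=4.0, t3=2.0):
    # Sample a three-stage lane-changing path at constant longitudinal speed v.
    t = np.arange(0.0, t1 + t2 + t3 + dt, dt)
    x = v * t                                        # constant longitudinal velocity
    y = np.full_like(t, y0)
    in_change = (t >= t1) & (t <= t1 + t2)           # stage II
    s = (x[in_change] - v * t1) / (v * t2)           # normalized progress (x - x0)/(xT - x0)
    y[in_change] = y0 + 3 * (yT - y0) * s**2 - 2 * (yT - y0) * s**3
    y[t > t1 + t2] = yT                              # stage III
    return np.stack([x, y], axis=1)                  # array of (x, y) samples
```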
To ensure the diversity of the data set, we create various lane-changing paths using different velocities \(v\). We take the value of \(v\) from 10 m/s to 40 m/s with a constant increment of 0.1 m/s, ultimately obtaining 301 paths with different velocity profiles. Note that the sizes of the paths, namely the numbers of sampled coordinates per path, differ because the velocities \(v\) differ while the sampling time \(\Delta t\) is the same.
#### 3.1.2 Segmentation
From the generated lane-changing paths of the vehicle, we create the training and test data. Each sample of the data set is the historical trajectory of the vehicle before a certain time instant, and its ground truth label is the future trajectory starting from this instant. We define that all historical and future trajectories have the same size \(M=30\). In this sense, for every generated lane-changing path, we create a data item by taking a segment that contains \(2M\) successive sampled coordinates. The first half of the segment forms a sample of the data, and the other half serves as the ground truth label. The first coordinate of the future trajectory, denoted \((x_{s},y_{s})\), is referred to as the splitting point. For a path sized \(N\), \(N>2M\), we obtain \(N-2M\) segments, i.e., \(N-2M\) labeled data samples.
#### 3.1.3 Calibration
Different data samples have different splitting points which bring different offsets to the longitudinal coordinate of the samples. To eliminate the influence of these offsets on the model training, we subtract \(x_{s}\) from all the longitudinal coordinates of the samples, which is referred to as calibration. After calibration, the splitting points of all data samples have zero longitudinal coordinates \(x_{s}=0\).
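The segmentation and calibration steps can be sketched together as follows, assuming each path is a NumPy array of (x, y) coordinates such as the one produced by the path-generation sketch above and using the window size M = 30 stated earlier.

```
def segment_and_calibrate(path, M=30):
    # Split one path into (history, future) pairs of length M, calibrated at the split point.
    samples = []
    for start in range(len(path) - 2 * M):
        segment = path[start:start + 2 * M].copy()
        x_split = segment[M, 0]            # longitudinal coordinate of the splitting point
        segment[:, 0] -= x_split           # calibration: shift so the splitting point has x = 0
        samples.append((segment[:M], segment[M:]))
    return samples
```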
#### 3.1.4 Data Splitting
Having performed segmentation and calibration, we obtain \(L=6622\) data samples. Each sample contains a historical trajectory, with the corresponding future trajectory being its ground truth label. We split the data into a training set and a test set with a ratio of \(6:4\). We also randomly shuffle the data to avoid the influence of the continuity of the vehicle motion.
### Prediction Model
In this paper, we use a recurrent neural network (RNN) model to predict future trajectories. The RNN is composed of a sequence input layer, an encoder layer, a latent feature layer, a decoder layer, and an output layer, and is implemented using the MATLAB @Deep Learning Toolbox. The details of the network structure are introduced as follows.
#### 3.2.1 Sequence Input Layer
This is a typical input layer used to feed the historical trajectories into the RNN. In MATLAB @, it is created using function _sequenceInputLayer_. The input size is 2, namely the number of planar dimensions. This layer is attached to a normalized layer in the output end.
#### 3.2.2 Encoder Layer
This layer is a gated recurrent unit (GRU) layer used to encode the dependencies between the successive coordinates of the historical trajectories. In MATLAB @, it is created using function _gruLayer_ with layer size 64. This layer is also attached to a normalized layer in the output end.
#### 3.2.3 Latent Feature Layer
This layer is a fully connected layer used to automatically extract the features from the encoded sequential data. It is created using function _fullyConnectedLayer_ with layer size 64. Its output passes through a layer of Linear rectification functions (ReLU).
#### 3.2.4 Decoder Layer
Similar to the encoder layer, this layer is also a GRU layer. It is used to decode the sequential features to sequential data that are used to generate the prediction. Its size is set as 128.
#### 3.2.5 Output Layer
This layer is used to map the decoded sequential data to the predicted trajectories. It is constructed by a fully connected layer sized 2 and a regression layer.
## 4 Simulation Studies
In this section, we use a three-lane straight highway scenario to evaluate the MPC incorporating the predicted TV trajectory using deep learning. The scenario considers three parallel horizontal lanes, where the EV and the TV start in the middle and the bottom lanes, respectively. The EV is required to drive in the middle lane at a constant speed, and the TV needs to change to the middle lane, thus leading to possible collisions. We first evaluate the prediction precision of the trained RNN model. Then, we validate the efficacy of the MPC with the predicted TV trajectory.
### Evaluation of the Prediction Model
In this subsection, we evaluate the prediction accuracy of the RNN model. The loss function is based on mean squared errors (MSE) between the outputs of the RNN and the ground truth labels of the samples. Specifically, subtraction is performed between them in an element-wise manner. Then, the squared element-wise errors are summed up before being divided by the total number of elements. The training data set, which contains 3973 samples, is used to train the model. The optimization of the MSE loss is solved using the Adam optimizer [13]. The training is performed on a Thinkpad laptop with an Intel(R) Core(TM) i7-10750H CPU at 2.60 GHz. The entire training process takes 30 epochs and 900 iterations with a learning rate of 0.01. A large number of epochs is not needed since trajectory prediction is a relatively lightweight learning task.
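For clarity, the loss and evaluation metric amount to the following element-wise computation (a minimal NumPy sketch):

```
import numpy as np

def rmse(predicted, target):
    # Rooted mean squared error over all coordinate elements of a trajectory batch.
    err = np.asarray(predicted) - np.asarray(target)   # element-wise subtraction
    return float(np.sqrt(np.mean(err ** 2)))           # mean over all elements, then the root
```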
The rooted MSE (RMSE) as the iteration number increases is illustrated in Fig. 1. It can be seen that the RMSE decreases as the training proceeds, reaching a final score of 15.92. The RMSE becomes stable at around iteration 300, which indicates the quick learning speed of the RNN model. This also reflects that trajectory prediction is a comparatively easy task for an RNN.
Then, we test the prediction accuracy of the trained RNN model using the test data set that contains 2649 samples. We calculate the RMSE score for each predicted sample. The RMSE scores of all test samples are shown in the histogram
chart in Fig. 2. It can be seen that the RMSE scores of the prediction vary from 0 to 20. This range is very close to the ultimate training score of the model, 15.92. The overall RMSE of the test is 10.91. This indicates the accuracy of the trained RNN model for trajectory prediction.
### Incorporating the Prediction Model to MPC
We use the predicted TV trajectory provided by the trained RNN model to generate the TV future trajectory (\(x_{k}^{\text{TV}},y_{k}^{\text{TV}}\)), \(k=0,1,\cdots,N\), and encode it to the safe set (3). Note that the starting point of the predicted TV trajectory may not be aligned with the current position of the TV due to the prediction error of the RNN model. This misalignment, however, can be eliminated by adding an offset to the predicted trajectory such that its starting point matches the current position of the TV.
The initial state of the EV is \(\mathbf{\xi}_{0}^{\text{EV}}=[28,7.875,0,20]^{\intercal}\). The EV is intended to reach a reference speed of 20 m/s. Then, we design an MPC for the EV as (2) incorporating the interaction safety constraint (3). The parameters of the MPC are set as \(N=10\), \(T=0.2\) s, \(y\in[l^{\text{veh}},3w^{\text{lane}}-l^{\text{veh}}]\), \(\psi\in[-1.2,1.2]\) rad, \(v\in[0,70]\) m/s, \(a\in[-9,6]\) m/s\({}^{2}\), and \(\delta\in[-0.52,0.52]\) rad. The parameters of the elliptical safety set \(\Xi_{k}^{\text{safe}}\) are set to \(a=7\) m and \(b=2.2\) m. The weighting matrices of the cost function are \(Q=\text{diag}(0,0.1,0.001,1)\), \(R=\text{diag}(3,0.5)\) and \(S=\text{diag}(0,0.1,0.001,1)\). The initial state of the TV is \(\mathbf{\xi}_{0}^{\text{TV}}=[36,2.625,0,18]^{\intercal}\). The trajectory of the TV is generated using an MPC similar to that of the EV, but with a reference speed of 20 m/s, the center lane as the target lane, and without the safety constraint (2e).
The generated trajectories of the EV and the TV within 11 simulation steps are shown in Fig. 3, in blue and red, respectively. Their positions in the simulation steps 1, 4, 7, 11 are displayed as colored squares, shadow to dark in the order of time. The predicted TV trajectories at all simulation steps are also shown in the figure as dotted gray lines. From Fig. 3, we can see that the EV decelerates to avoid potential collisions with the TV as the TV starts to change its lane and then block the way of the EV in the center lane. When the TV finishes the lane-changing and accelerates for its own control target, the EV also starts to accelerate to reach its desired speed since it realizes that the risk of potential collisions is mitigated. This indicates that the EV is able to recognize the TV's lane-changing intention and be aware of the risk of collisions before the TV reaches the center lane. Therefore, the capability of the proposed MPC to mitigate the upcoming risks is addressed. Besides, from the figure, we can also see that the predicted TV trajectories become more and more consistent with the ground truth TV trajectory. This is because more data in the historical trajectory leads to higher prediction precision.
## 5 Conclusion
In this paper, we use deep learning to predict the trajectory of the TV and incorporate it into the safety constraint of the MPC of the EV. As a result, the EV is able to incorporate the risk of collisions with the TV into its motion
Figure 1: The training performance of the RNN model: the convergence of RMSE as the iteration increases.
Figure 2: The RMSE scores of the trained RNN model on the test samples, shown as a histogram.
planning such that it can make reasonable and risk-aware decisions in autonomous driving. Technical points such as data calibration and offset compensation are used to ensure that the MPC incorporates the more realistic predicted trajectory. An interesting case that is not investigated in this paper is that both vehicles can interact with each other, which renders a two-player game. Our method in this paper will be extended to this case in future work.
|
2304.09889 | Localization of binary neutron star mergers with a single Cosmic
Explorer | Next-generation ground-based gravitational-wave detectors, such as Cosmic
Explorer (CE), are expected to be sensitive to gravitational-wave signals with
frequencies as low as 5 Hz, allowing signals to spend a significant amount of
time in the detector frequency band. As a result, the effects caused by the
rotation of the Earth become increasingly important for such signals.
Additionally, the length of the arms of these detectors can be comparable to
the wavelength of detectable gravitational waves, which introduces
frequency-dependent effects that are not significant in current-generation
detectors. These effects are expected to improve the ability to localize
compact binary coalescences in the sky even when using only one detector. This
study aims to understand how much these effects can help in localization. We
present the first comprehensive Bayesian parameter estimation framework that
accounts for all these effects using \textsc{Bilby}, a commonly used Bayesian
parameter estimation tool. We focus on sky localization constraints for binary
neutron star events with an optimal signal-to-noise ratio of 1000 with one
detector at the projected CE sensitivity. We find that these effects help
localize sources using one detector with sky areas as low as 10 square degrees.
Moreover, we explore and discuss how ignoring these effects in the parameter
estimation can lead to biases in the inference. | Pratyusava Baral, Soichiro Morisaki, Ignacio Magaña Hernandez, Jolien D. E. Creighton | 2023-04-19T18:00:02Z | http://arxiv.org/abs/2304.09889v4 | # Localization of binary neutron star mergers with a single Cosmic Explorer
###### Abstract
Next-generation ground-based gravitational-wave detectors, such as Cosmic Explorer (CE), are expected to be sensitive to gravitational-wave signals with frequencies as low as 5 Hz, allowing signals to spend a significant amount of time in the detector frequency band. As a result, the effects caused by the rotation of the Earth become increasingly important for such signals. Additionally, the length of the arms of these detectors can be comparable to the wavelength of detectable gravitational waves, which introduces frequency-dependent effects that are not significant in current-generation detectors. These effects are expected to improve the ability to localize compact binary coalescences in the sky even when using only one detector. This study aims to understand how much these effects can help in localization. We present the first comprehensive Bayesian parameter estimation framework that accounts for all these effects using Bilby, a commonly used Bayesian parameter estimation tool. We focus on sky localization constraints for binary neutron star events with an optimal signal-to-noise ratio of 1000 with one detector at the projected CE sensitivity. We find that these effects help localize sources using one detector with sky areas as low as 10 square degrees. Moreover, we explore and discuss how ignoring these effects in the parameter estimation can lead to biases in the inference.
## I Introduction
The LIGO-Virgo-KAGRA (LVK) [1; 11; 12] collaboration has confidently detected around 90 compact binary coalescences (CBCs), which include binary black hole (BBH) [7; 37], binary neutron star (BNS) [3; 5] and neutron star black hole (NSBH) [8] mergers. One BNS, known as GW170817, had an observed electromagnetic (EM) counterpart, opening the door to the unexplored world of multimessenger astronomy with GWs [2; 4], allowing us to test our understanding of gravity, cosmology, and astrophysics [6; 9; 10].
Given the success of current-generation GW detectors, several new ground-based next-generation (3G/XG) GW detectors have been proposed, including the Cosmic Explorer (CE) [44] and the Einstein Telescope (ET) [41], which are expected to be operational post-2030. Over the next decade, technological advancements are expected to significantly enhance the sensitivity of ground-based detectors, enabling them to detect frequencies as low as a few hertz. This would enable us to detect \(\mathcal{O}(10^{5}-10^{6})\) CBCs [18] and, in particular, signals with extremely high signal-to-noise ratios (SNRs) of the order of \(\mathcal{O}(1000)\) within one year of observation.
Increased sensitivity at lower frequencies means that loud gravitational-wave signals from BNSs will last in the detector band for about an hour, allowing the source to move across the sky as the Earth rotates. Long detector arms compel us to calculate the travel time of a GW across the detector beyond the static limit, where the wavelength of a gravitational wave is assumed to be much longer than the arms of the detector [42; 43]. These effects make the antenna response time- and frequency-dependent, which breaks certain degeneracies that otherwise exist between extrinsic parameters (those relating to the relative position and orientation of the detector and source). This enables us to localize sources using only one detector. Locating a source in the sky is extremely important to facilitate EM follow-up. Given the length of the signal, it might be feasible to localize the source before the merger, which is essential for observing prompt afterglows [46]. However, in this paper, we work with the full bandwidth of signals lasting up to the merger.
A few localization studies using Fisher matrices in XG detectors exist in the literature [58; 22]. Such an approximation, though accurate in some regions of the parameter space, may not generally be valid even at high SNRs [56]. For a single detector, we expect multimodalities in the right ascension (RA) and declination (dec), which are completely neglected by Fisher-matrix estimates, making them inadequate. Recent work by Nitz & Canton [36] and Smith et al. [48] performs Bayesian parameter estimation (PE) for BNS mergers in XG detectors. To make the problem computationally feasible, the former work constructs a heterodyned likelihood taking into account all effects due to the rotation of the Earth to study early-warning capabilities. The latter work constructs reduced-order models taking into account only the amplitude modulations due to Earth rotation, using BNS signals lasting 90 minutes in-band from 5 Hz to 2048 Hz for a network of two Cosmic Explorer detectors and a single Einstein Telescope (a proposed triangular ground-based detector). Both of these studies ignore the high-frequency effects due to the size of the detector. It is not clear what role ignoring some of these effects plays in parameter recovery, so we include all of them in our analysis. For space-based detectors like the Laser Interferometer Space Antenna (LISA) [12], similar studies have been performed [33]. The physics of finite-size effects remains the same, and the Earth-rotation effect is replaced by similar effects due to the revolution of LISA around the Sun. However, the implementation varies, as LISA operates in a very different frequency range and the detector shapes and sizes are also vastly different.
This work presents a proof-of-concept localization study using comprehensive Bayesian parameter estimation for BNS mergers at an SNR of 1000, using simulated data with a single
2307.10888 | Non-asymptotic statistical test of the diffusion coefficient of
stochastic differential equations | We develop several statistical tests of the determinant of the diffusion
coefficient of a stochastic differential equation, based on discrete
observations on a time interval $[0,T]$ sampled with a time step $\Delta$. Our
main contribution is to control the test Type I and Type II errors in a non
asymptotic setting, i.e. when the number of observations and the time step are
fixed. The test statistics are calculated from the process increments. In
dimension 1, the density of the test statistic is explicit. In dimension 2, the
test statistic has no explicit density but upper and lower bounds are proved.
We also propose a multiple testing procedure in dimension greater than 2. Every
test is proved to be of a given non-asymptotic level and separability
conditions to control their power are also provided. A numerical study
illustrates the properties of the tests for stochastic processes with known or
estimated drifts. | Anna Melnykova, Patricia Reynaud-Bouret, Adeline Samson | 2023-07-20T14:06:17Z | http://arxiv.org/abs/2307.10888v2 | # Non-asymptotic statistical test of the diffusion coefficient of stochastic differential equations
Anna Melnykova1, Patricia Reynaud-Bouret2, Adeline Samson 3
Footnote 1: Avignon Université, Laboratoire de Mathématiques d’Avignon (EA 2151), E-mail: [email protected]
Footnote 2: Université Côte d’Azur, CNRS, LJAD, France E-mail: [email protected]
Footnote 3: Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France, E-mail: [email protected]
**Abstract.** We develop several statistical tests of the determinant of the diffusion coefficient of a stochastic differential equation, based on discrete observations on a time interval \([0,T]\) sampled with a time step \(\Delta\). Our main contribution is to control the test Type I and Type II errors in a non asymptotic setting, i.e. when the number of observations and the time step are fixed. The test statistics are calculated from the process increments. In dimension 1, the density of the test statistic is explicit. In dimension 2, the test statistic has no explicit density but upper and lower bounds are proved. We also propose a multiple testing procedure in dimension greater than 2. Every test is proved to be of a given non-asymptotic level and separability conditions to control their power are also provided. A numerical study illustrates the properties of the tests for stochastic processes with known or estimated drifts.
**AMS classification.** 60B20, 60H10, 62F03
**Keywords.** Statistical tests, non-asymptotic settings, stochastic differential equations.
## 1 Introduction
Stochastic diffusion is a classical tool for modeling physical, biological or ecological dynamics. An open question is how stochasticity should be introduced into the stochastic dynamic process, on what coordinate and at what
scale. For example, diffusions have been widely used to model neuronal activity, either of a single neuron (Ditlevsen and Samson, 2014; Hopfner et al., 2016; Leon and Samson, 2018), or of a large neural network (Ditlevsen and Locherbach, 2017; Ableidinger et al., 2017). Although the intrinsic stochasticity of neurons is well established, where and on what scale this stochasticity should be introduced (on ion channels or membrane potential or both) is still a matter of debate (Goldwyn and Shea-Brown, 2011). Examples also exist in other applications, for example in the modeling of oscillatory systems or movement behavior in ecology. From a statistical point of view, this corresponds to testing the noise level of a multivariate diffusion process. The aim of this paper is to answer this question. We propose to do this by testing whether the determinant of the diffusion coefficient is smaller than a certain value or not.
Let us formally introduce the stochastic process. Consider a filtered probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq 0},\mathds{P})\). Let \(X\) be a \(d-\)dimensional process solution of the following Stochastic Differential Equation (SDE):
\[dX_{t}=b_{t}dt+\Sigma dW_{t},\quad X_{0}=x_{0},\quad t>0, \tag{1}\]
with a drift function \(b_{t}:\mathds{R}\to\mathds{R}^{d}\), a diffusion matrix \(\Sigma\in\mathds{R}^{d\times d}\), and \(W\) a \(d\)-dimensional Brownian motion. In this paper, for simplicity's sake, we assume a diagonal \(\Sigma\). We consider discrete observations of \(X\) on a time interval \([0,T]\) with a regular time step \(\Delta\), denoted \(\left\{X_{i\Delta}\right\}_{i=0,\ldots,n}\).
The objective is to construct a statistical test procedure to decide between the two following hypotheses :
\[H_{0}:\det\Sigma\Sigma^{T}=\det\Sigma_{0}\Sigma_{0}^{T}\] \[H_{1}:\det\Sigma\Sigma^{T}>\det\Sigma_{0}\Sigma_{0}^{T}.\]
Our test consists in rejecting the null hypothesis when an estimator of \(\det\Sigma\Sigma^{T}\), chosen as the testing statistic, is greater than a certain critical value. The main issue in constructing the test procedure is the choice of the critical value guaranteeing that the test is exactly at the desired level \(\alpha\). In addition, to understand the performance of the constructed procedure, we want to find conditions leading to non-asymptotic control of the type II error.
When working with real data, observations are sampled on a fixed time interval \([0,T]\) with a fixed time step \(\Delta\). The framework is therefore non-asymptotic in the sense that we have to control the type I and type II errors of the test procedure for fixed \(n\) and \(\Delta\). Controlling the type I and type II errors of a statistical test in a non-asymptotic setting is difficult.

Here, it is all the more difficult because the non-asymptotic framework is also a problem for SDE inference. Indeed, estimators of drift and diffusion coefficients have been shown to be consistent in different asymptotic settings (either \(T\) fixed and \(n\) going to infinity, or \(T\) going to infinity), but few results are available in a non-asymptotic setting. Here, we face both difficulties.
Several tests have been proposed on the matrix \(\Sigma\Sigma^{T}\) of a diffusion process, but in the asymptotic setting where \(\Delta\) goes to zero and \(n\) goes to infinity (Dette and Podolskij, 2008; Jacod et al., 2008; Podolskij and Rosenbaum, 2012). The test statistic therefore has an asymptotic distribution from which we can construct a statistical test with a given asymptotic level \(\alpha\) through a rejection region. Among others, we can cite Dette and Podolskij (2008), which proposes to test the parametric form of volatility with empirical processes of integrated volatility. Podolskij and Rosenbaum (2012) construct a test statistic and derive its asymptotic behavior to test the local volatility hypothesis. Their test statistic is a function of the increments of the stochastic process. Jacod et al. (2008) and Jacod and Podolskij (2013) test the rank of the matrix \(\Sigma\Sigma^{T}\). In Jacod et al. (2008), they consider continuous-time observations of \(X\) and construct a test statistic based on the process perturbed by a random noise. Random perturbation of the increment matrix enables a ratio statistic based on the multilinearity property of the determinant to be applied. Random perturbation ensures that the denominator of the ratio never vanishes. They prove that the limit of the ratio statistic identifies the rank of the volatility. They also study the asymptotic distribution of this statistic. In Jacod and Podolskij (2013), they extend their work to the case of discrete observations \(\left\{X_{i\Delta}\right\}_{i\in\mathds{N}}\). They also prove its asymptotic distribution when \(\Delta\) goes to zero. Fissler and Podolskij (2017) consider testing the maximal rank of the volatility process for a continuous diffusion observed with noise, using a pre-averaging approach with weighted averages of process increments that eliminate the influence of noise. Reiss and Winkelmann (2021) extend their work to time-varying covariance matrices, again in an asymptotic setting.
In all these cases, the distribution of the test statistic is not explicit and only asymptotic distributions have been obtained by applying asymptotic convergence theorems when \(\Delta\) goes to zero.
As already mentioned, our framework is different: we assume that the time step \(\Delta\) is fixed, which places us in a non-asymptotic setting. So we want to construct a test procedure that guarantees a given level \(\alpha\) in the non-asymptotic setting with \(\Delta\) and \(n\) fixed. This is a major difference with the works cited above. Although statistical tests reveal good properties in the asymptotic setting, they are generally difficult to apply in a non-asymptotic
setting. For example, in some cases, even if the rank of \(\Sigma\Sigma^{T}\) is strictly less than \(d\), the corresponding empirical covariance matrix may be numerically full rank, i.e. in the non-asymptotic setting. This problem is circumvented in the asymptotic setting in Jacod and Podolskij (2013) by adding a random perturbation and studying the convergence of determinant ratio statistics. But if we want to work in the non-asymptotic setting, we need to use other estimators and probabilistic tools.
We have chosen to test the determinant of \(\Sigma\Sigma^{t}\) rather than the rank. The test statistic is therefore the determinant of the diffusion increments matrix. In the asymptotic case, the influence of the drift is negligible, since it is of order \(O(\Delta)\). In the non-asymptotic case, drift must be taken into account. We therefore propose to center the statistics by estimating the drift using a parametric estimator. We then study the distribution of the test statistic. Under the assumption that the drift does not depend on \(X_{t}\) itself (model (1)), the increments are independent. This makes it possible to derive the analytic distribution of the statistic in some simple cases, and in other cases to prove lower and upper bounds of the distribution using concentration inequalities. This drift assumption is rather restrictive, as it is not satisfied by autonomous diffusion processes, but it has also been formulated in Jacod et al. (2008) and Jacod and Podolskij (2013). The extension to a drift depending on \(X\) is discussed at the end of the paper.
Our first main contribution is to construct procedures for testing \(H_{0}\) versus \(H_{1}\) that satisfy non-asymptotic performance properties. In particular, we propose a choice of critical values based either on the explicit distribution of the test statistic (for one-dimensional SDE with known drift) or on the lower bounds of the test statistic. In particular, for each \(\alpha\) in \([0,1]\), these tests are of level \(\alpha\), i.e. they have a probability of Type I error at most equal to \(\alpha\). For particular models, they are even of size \(\alpha\), the probability of Type I error being exactly \(\alpha\) since they are based on the exact non-asymptotic distribution of the test statistic.
Our second main contribution consists in deriving non-asymptotic conditions on the alternative hypothesis which guarantee that the probability of Type II error is at most equal to a prescribed constant \(\beta\). This can be done for one-dimension SDE with necessary and sufficient conditions, when the drift is fully known or even known up to a linear parameter. For two-dimension SDE, the distribution is not exact and we use concentration inequalities to prove upper bounds on the test statistic. The separability condition can then be deduced. When the drift parameter is unknown, the test procedure is adapted. Power deteriorates slightly, however, when the parameter is estimated on the first half of the sample. For a dimension
greater than 2, this is much more difficult, and we are unable to prove the lower and upper bounds of the test statistics. Instead, we propose an approach based on multiple one-dimensional tests and prove that we control the level of the overall procedure. This procedure gives very good results in practice.
This paper is organized as follows. First, we consider the case of a one-dimensional diffusion process in Section 2. We calculate the exact distribution for the non-centered and centered statistics, then deduce the critical value and study conditions to control the Type II error. We show that, from a non-asymptotic point of view, the centering of the test statistics has a considerable influence on the test separation rates. We also extend this result to the case of unknown drift. In Section 3, we deal with a two-dimensional process with known drift. We consider the centered statistic and prove lower and upper bounds of its distribution. We then propose critical values and conditions such that Type I and II errors are controlled. Section 4 presents the multiple testing approach. Next, Section 5 presents a numerical study to illustrate the properties of the testing procedure on different SDEs. We conclude with a discussion and perspectives.
## 2 Test for a one-dimensional SDE
We start with a simple one-dimensional Brownian motion with drift:
\[dX_{t}=b_{t}dt+\sigma dW_{t},\quad X_{0}=x_{0},\quad t>0, \tag{2}\]
where \(b_{t}:\mathds{R}\to\mathds{R}\) is the drift function that depends on time \(t\), \(\sigma\in\mathds{R}\) is a constant diffusion coefficient and \(W\) is a one-dimensional Brownian motion. Process \((X_{t})_{t\geq 0}\) is discretely observed on a time interval \([0,T]\) at equidistant time step \(\Delta\), \(t_{0}=0,t_{1}=\Delta,\ldots,t_{n}=n\Delta=T\). Our aim is to construct a statistical test to decide between the two following hypotheses:
\[H_{0}:\sigma^{2}=\sigma_{0}^{2}\quad\text{versus}\quad H_{1}:\sigma^{2}> \sigma_{0}^{2},\]
where \(\sigma_{0}^{2}\) is a pre-chosen positive constant.
In Section 2.1, we consider an exact testing procedure by calculating the exact distribution of the test statistic. We then introduce a centered version of the test statistic in Section 2.2. Finally, we deal with the case where the drift is unknown and estimated in Section 2.3.
For each test, we present the test statistic and its exact distribution. We then construct the test by calculating the critical values that control
the type I error. Finally, we study the type II error of the test by deriving non-asymptotic and optimal conditions on the alternative hypothesis. We will use the notations \(\mathds{P}_{\sigma_{0}}\) and \(\mathds{P}_{\sigma}\) to distinguish the probability under the null hypothesis or the alternative hypothesis.
NotationsIn the following, we denote \(\mathcal{N}(\mu,\omega^{2})\) a normal distribution with mean \(\mu\) and variance \(\omega^{2}\), \(\chi^{2}_{n}(0)\) a chi-squared random distribution with \(n\) degrees of freedom, \(\chi^{2}_{n}(\lambda)\) a chi-squared random distribution with \(n\) degrees of freedom and a non-centrality parameter \(\lambda\). Let us also denote the quantiles \(q_{\mathcal{N},\beta}\), \(q_{\chi^{2}_{n},\beta}\) and \(q_{\chi^{2}_{n}(\lambda),\beta}\) of order \(\beta\) of the distributions \(\mathcal{N}(0,1)\), \(\chi^{2}_{n}(0)\) and \(\chi^{2}_{n}(\lambda)\), respectively. Further, the symbol "\(\sim\)" is used throughout the paper as an alias for "follows a certain probability distribution".
### Non-centered statistics
We consider the normalized increments of process \(X\) defined as:
\[\xi_{i}:=\frac{X_{i\Delta}-X_{(i-1)\Delta}}{\sqrt{\Delta}},\quad i=1,\ldots,n. \tag{3}\]
Let \(\xi=(\xi_{1},\ldots,\xi_{n})\). Note that the \(\{\xi_{i}\}\) are independent in \(i\), since the increments do not overlap. We then define the test statistic:
\[S=\frac{1}{n}\sum_{i=1}^{n}\xi_{i}^{2}=\frac{1}{n}\|\xi\|^{2}. \tag{4}\]
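As a quick illustration, the increments (3) and the statistic (4) are straightforward to compute from a discretely observed path. The following sketch uses Python/NumPy (our tooling choice, not the paper's); the synthetic example simulates model (2) with a constant drift.

```python
import numpy as np

def non_centered_statistic(X, delta):
    """Normalized increments xi_i of eq. (3) and statistic S of eq. (4)
    for observations X = (X_0, ..., X_n) sampled with time step delta."""
    xi = np.diff(X) / np.sqrt(delta)
    return np.mean(xi ** 2)

# Synthetic example: model (2) with constant drift b = 1 and sigma = 0.3.
rng = np.random.default_rng(0)
n, delta, b, sigma = 100, 0.1, 1.0, 0.3
increments = b * delta + sigma * np.sqrt(delta) * rng.standard_normal(n)
X = np.concatenate([[0.0], np.cumsum(increments)])
print(non_centered_statistic(X, delta))   # close to sigma^2 + delta * b^2 on average
```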
We calculate the distribution of \(\xi_{i}\), \(\|\xi\|^{2}\) and \(S\) in the next lemma:
**Lemma 1**.: _Let \(\xi_{i}\) be the random variables defined by (3). We have_
1. \(\xi_{i}\sim\mathcal{N}\left(\frac{\int_{(i-1)\Delta}^{i\Delta}b_{s}ds}{\sqrt{ \Delta}},\sigma^{2}\right)\)_._
2. \(\|\xi\|^{2}\sim\sigma^{2}\chi^{2}_{n}(\lambda(\sigma))\)_, with a non-centrality parameter_ \(\lambda(\sigma)\) _equal to:_ \[\lambda(\sigma)=\frac{\sum_{i=1}^{n}\left(\int_{(i-1)\Delta}^{i\Delta}b_{s}ds \right)^{2}}{\sigma^{2}\Delta}.\]
3. \(S\sim\frac{\sigma^{2}}{n}\chi^{2}_{n}(\lambda(\sigma))\)_. Its cumulative distribution function is_ \(\forall t>0\)__ \[\mathds{P}_{\sigma^{2}}\left(S\leq t\right)=1-Q_{n/2}\left(\sqrt{\lambda(\sigma )},\sqrt{\frac{nt}{\sigma^{2}}}\right),\]
_where_ \(Q_{m}(u,v)\) _is a Marcum Q-function, defined as:_ \[Q_{m}(u,v) = \exp\left(-\frac{u^{2}+v^{2}}{2}\right)\sum_{k=1-m}^{\infty}\left(\frac{u}{v}\right)^{k}I_{k}(uv),\] (5) _where_ \(I_{k}\) _is a modified Bessel function of the first kind of order_ \(k\)_._
**Remark**.:
1. _If the function_ \(b_{s}\) _is constant, the non-centrality parameter_ \(\lambda(\sigma)=n\Delta b^{2}/\sigma^{2}\) _is of order_ \(O(n\Delta)\)_. In the asymptotic setting_ \(T\) _fixed, it is a constant. In the asymptotic setting_ \(\Delta\) _fixed and_ \(n\rightarrow\infty\)_, it converges to_ \(\infty\)_._
2. _Note that expression (_5_) is not explicit, even though several packages or approximations exist_ _(_Gil et al._,_ 2014_)__. We will show in the next section that centering the statistic gives results that are easier to use._
The following proposition directly follows Lemma 1:
**Proposition 1** (1d-Test with non-centered statistics).: _Let \(\alpha\in]0;1[\) be a fixed constant. Let \(S\) be the test statistic defined by (4) and let us define the test \(\Upsilon\) which rejects \(H_{0}\) if_
\[S\geq z_{1-\alpha}=:\frac{\sigma_{0}^{2}}{n}q_{\chi_{n}^{2}(\lambda(\sigma_{0 })),1-\alpha}.\]
_Then, the test \(\Upsilon\) is of Type I error \(\alpha\) and therefore it is of level \(\alpha\)._
_Further, let \(\beta\in]0;1[\) be a constant such that \(1-\beta\geq\alpha\). For all \(\sigma^{2}>0\) such that_
\[\sigma^{2}\geq\sigma_{0}^{2}\frac{q_{\chi_{n}^{2}(\lambda(\sigma_{0})),1- \alpha}}{q_{\chi_{n}^{2}(\lambda(\sigma)),\beta}}, \tag{6}\]
_the test \(\Upsilon\) satisfies_
\[\mathds{P}_{\sigma^{2}}\left(\Upsilon\text{ accepts }H_{0}\right)\leq\beta.\]
_Condition (6) is sufficient and necessary._
Proof.: Since \(S\) is distributed according to a non-centered chi-squared distribution, it is straightforward to obtain
\[\mathds{P}_{\sigma_{0}^{2}}\left(S\geq\frac{\sigma_{0}^{2}}{n}q_{\chi_{n}^{2} (\lambda(\sigma_{0})),1-\alpha}\right)=\alpha.\]
For the Type II error, we have
\[\mathds{P}\left(S\leq z_{1-\alpha}\right)=\mathds{P}\left(\chi_{n}^{2}( \lambda(\sigma))\leq\frac{\sigma_{0}^{2}}{\sigma^{2}}q_{\chi_{n}^{2}(\lambda( \sigma_{0})),1-\alpha}\right).\]
It implies that \(\mathds{P}_{\sigma^{2}}\left(S\leq z_{1-\alpha}\right)\leq\beta\) as soon as \(\frac{\sigma_{0}^{2}}{\sigma^{2}}q_{\chi_{n}^{2}(\lambda(\sigma_{0})),1-\alpha }\leq q_{\chi_{n}^{2}(\lambda(\sigma)),\beta}\). Type II error is bounded by a \(\beta\) when (6) holds.
We want to understand the influence of \(n\) and \(\Delta\) on the threshold \(z_{1-\alpha}\) and the separability condition (6). However, they are implicitly defined as they depend on \(\sigma_{0}^{2}\) and \(\sigma^{2}\) via the non-centrality parameters \(\lambda(\sigma_{0})\) and \(\lambda(\sigma)\). In what follows, we consider the simplified case of a constant drift \(b\) and provide a quantile approximation to detail the effect of \(n\), \(\Delta\) and deduce more intuitive conditions on \(\sigma\).
In the following, \(\Box\) denotes a positive quantity that is upper and lower bounded by positive constants. Its value can change from line to line and even within the same equation. In the same spirit, \(\Box_{\beta}\) designates a quantity that is upper and lower bounded by positive functions of \(\beta\).
Thanks to Lemma 7 in the appendix, we have that for \(\alpha<1/\sqrt{2\pi}\),
\[n-1+\lambda(\sigma_{0})+\log(1/\alpha)\leq q_{\chi_{n}^{2}(\lambda(\sigma_{0} )),1-\alpha}\leq n+\Box\sqrt{n\log(1/\alpha)}+\Box\log(1/\alpha)+\Box\lambda( \sigma_{0}).\]
So the critical value satisfies
\[\sigma_{0}^{2}\frac{n-1}{n}+\Box\sigma_{0}^{2}\frac{\log(1/\alpha)}{n}+\Box \Delta b^{2}\leq z_{1-\alpha}\leq\sigma_{0}^{2}+\Box\sigma_{0}^{2}\frac{\sqrt{ \log(1/\alpha)}}{\sqrt{n}}+\Box\sigma_{0}^{2}\frac{\log(1/\alpha)}{n}+\Box \Delta b^{2}.\]
Let us now describe the behavior for the two asymptotic settings:
1. \(T\) fixed, \(n\to\infty\) and \(\Delta=T/n\to 0\). With the previous inequalities, we know that the critical value \(z_{1-\alpha}\underset{n\to\infty}{\longrightarrow}\sigma_{0}^{2}\).
2. \(\Delta\) fixed, \(n\to\infty\) and \(T=\Delta n\to\infty\). Then the critical value does not converge towards \(\sigma_{0}^{2}\). There is a bias of the order of \(\Delta b^{2}\) up to a multiplicative constant.
Study on the separability condition (6). We have shown that the Type II error is less than \(\beta\) if and only if
\[\sigma^{2}\geq\bar{\sigma}_{\alpha,\beta}^{2}=\sigma_{0}^{2}\frac{q_{\chi_{n }^{2}(\lambda(\sigma_{0})),1-\alpha}}{q_{\chi_{n}^{2}(\lambda(\sigma)),\beta}}.\]
We can approximate this bound thanks to Lemma 7 for \(\alpha<1/\sqrt{2\pi}\) and \(\beta<0.5\). On one hand
\[\bar{\sigma}_{\alpha,\beta}^{2}\leq\sigma_{0}^{2}\frac{n+\Box\sqrt{\left(n+ \frac{n\Delta b^{2}}{\sigma_{0}^{2}}\right)\log(1/\alpha)}+\Box\log(1/\alpha) +\frac{n\Delta b^{2}}{\sigma_{0}^{2}}}{n+\frac{n\Delta b^{2}}{\sigma^{2}}- \Box_{\beta}\sqrt{n}-\Box_{\beta}\sqrt{\frac{n\Delta b^{2}}{\sigma_{0}^{2}}}}.\]
On the other hand
\[\bar{\sigma}_{\alpha,\beta}^{2}\geq\sigma_{0}^{2}\frac{n-1+\log(1/\alpha)+\frac{n \Delta b^{2}}{\sigma_{0}^{2}}}{n+\frac{n\Delta b^{2}}{\sigma^{2}}+\Box\sqrt{n}}.\]
By introducing \(u=1+\frac{\Delta b^{2}}{\sigma^{2}}\) and \(u_{0}=1+\frac{\Delta b^{2}}{\sigma_{0}^{2}}\), we get that Equation (6) is therefore implied by
\[\sigma^{2}u\geq\sigma_{0}^{2}u_{0}\frac{1+\Box_{\alpha}(nu_{0})^{-1/2}}{1- \Box_{\beta}n^{-1/2}(1+\sqrt{u_{0}-1})u^{-1}}.\]
But \(u\geq 1\) hence \(u^{-1}\leq 1\) and since we are under \(H_{1}:\sigma^{2}\geq\sigma_{0}^{2}\), we have \(u_{0}\geq u\). Hence (6) is implied by
\[\sigma^{2}u\geq\sigma_{0}^{2}u_{0}\frac{1+\Box_{\alpha}(nu_{0})^{-1/2}}{1- \Box_{\beta}n^{-1/2}(1+\sqrt{\frac{\Delta b^{2}}{\sigma_{0}^{2}}})}.\]
This is equivalent to
\[\sigma^{2}+\Delta b^{2}\geq(\sigma_{0}^{2}+\Delta b^{2})\left(1+\frac{\Box_{ \alpha,\beta}}{\sqrt{n}}\left[\frac{\sigma_{0}}{\sqrt{\sigma_{0}^{2}+\Delta b ^{2}}}+1+\sqrt{\frac{\Delta b^{2}}{\sigma_{0}^{2}}}\right]\right)\]
or finally to
\[\sigma^{2}\geq\sigma_{0}^{2}+\frac{\Box_{\alpha,\beta}}{\sqrt{n}}\left[\sigma _{0}\sqrt{\sigma_{0}^{2}+\Delta b^{2}}+(\Delta b^{2}+\sigma_{0}^{2})\left(1+ \sqrt{\frac{\Delta b^{2}}{\sigma_{0}^{2}}}\right)\right].\]
This is a sufficient condition for having a Type II error of at most \(\beta\). We lose the necessary condition because of the lower bound on the chi-squared quantile in the numerator, where the term in \(\sqrt{n}\) disappears. This is why we cannot prove that it is a sufficient and necessary condition up to a constant. However, we believe it is the true rate, in the sense that the Gaussian concentration inequality has recently been proved to be tight for two-sided quantiles of convex Lipschitz functions (Valettas, 2019). We have just not been able to pass from two-sided to one-sided bounds. Let us now describe the behavior for the two asymptotic settings:
1. \(T\) fixed, \(n\rightarrow\infty\) and \(\Delta=T/n\to 0\). We have \(\Delta=T/n\). We recover a rate (at least for the upper bound) equal to \(\sigma_{0}^{2}(1+\frac{\Box_{\alpha,\beta}}{\sqrt{n}})\).
2. \(\Delta\) fixed, \(n\rightarrow\infty\) and \(T=\Delta n\rightarrow\infty\). In this case the limit deteriorates and converges at the same \(\sqrt{n}\) rate but with a multiplicative constant that worsens for large \(\Delta\). If \(\Delta b^{2}/\sigma_{0}^{2}\geq 1\), the upper bound is at least in \(\sigma_{0}^{2}(1+\frac{\Box_{\alpha,\beta}}{\sqrt{n}}\frac{\Delta b^{2}}{\sigma_{0}^{2}})\) and up to \(\sigma_{0}^{2}\left(1+\frac{\Box_{\alpha,\beta}}{\sqrt{n}}\left(\frac{\Delta b^{2}}{\sigma_{0}^{2}}\right)^{3/2}\right)\), depending on whether one is looking only at the numerator or if we take into account the concentration of the denominator in Equation (6). In both cases, it means that the multiplicative factor is increasing with \(\Delta\) and we lose the \(\sqrt{n}\) rate of separability of the two conditions when \(\Delta\) is "large".
### Centered statistics with known drift
In this section, we propose a new statistic to remove the dependency on the drift and avoid the rate lost in the separability condition. To do so, we introduce a centered test statistic. For \(i=1,\ldots,n\), let us denote
\[\dot{\xi}_{i}=\xi_{i}-\frac{1}{\sqrt{\Delta}}\int_{(i-1)\Delta}^{i\Delta}b_{s }ds=\frac{X_{i\Delta}-X_{(i-1)\Delta}-\int_{(i-1)\Delta}^{i\Delta}b_{s}ds}{ \sqrt{\Delta}}, \tag{7}\]
such that \(\dot{\xi}_{i}\sim\mathcal{N}(0,\sigma^{2})\). Then, we define the statistics \(\dot{S}\) as follows:
\[\dot{S}=\frac{1}{n}\sum_{i=1}^{n}\dot{\xi}_{i}^{2}. \tag{8}\]
Note that \(\dot{S}\) follows a rescaled centered chi-squared distribution with \(n\) degrees of freedom
\[\dot{S}\sim\frac{\sigma^{2}}{n}\chi_{n}^{2}(0).\]
**Proposition 2** (1d-Test with centered statistics and known drift).: _Let \(\alpha\in\ ]0;1[\) be a fixed constant. Let \(\dot{S}\) be the statistic defined in (8) and let us define the test \(\dot{\Upsilon}\) which rejects \(H_{0}\) if_
\[\dot{S}\geq\dot{z}_{1-\alpha}=:\frac{\sigma_{0}^{2}}{n}q_{\chi_{n}^{2},1- \alpha}. \tag{9}\]
_Then, the test \(\dot{\Upsilon}\) is of Type I error \(\alpha\) and therefore it is of level \(\alpha\)._
_Let \(\beta\in]0;1[\) be a constant such that \(1-\beta\geq\alpha\). For all \(\sigma^{2}\) such that_
\[\sigma^{2}\geq\frac{q_{\chi_{n}^{2},1-\alpha}}{q_{\chi_{n}^{2},\beta}}\sigma_ {0}^{2}, \tag{10}\]
the test \(\dot{\Upsilon}\) satisfies_
\[\mathds{P}_{\sigma^{2}}\left(\dot{\Upsilon}\text{ accepts }H_{0}\right)\leq\beta.\]
_It is again a necessary and sufficient condition._
Proof.: Since \(\dot{S}\) is distributed according to the centered chi-squared law with \(n\) degrees of freedom, it is straightforward to show that
\[\mathds{P}_{\sigma_{0}^{2}}\left(\dot{S}\geq\dot{z}_{1-\alpha}\right)=\alpha. \tag{11}\]
For the power of the test, we first note that
\[\mathds{P}_{\sigma^{2}}\left(\dot{S}\leq\dot{z}_{1-\alpha}\right)=\mathds{P}_ {\sigma^{2}}\left(\chi_{n}^{2}(0)\leq\frac{n}{\sigma^{2}}\frac{\sigma_{0}^{2}} {n}q_{\chi_{n}^{2},1-\alpha}\right),\]
It implies that \(\mathds{P}_{\sigma^{2}}\left(\dot{S}\leq\dot{z}_{1-\alpha}\right)\leq\beta\) as soon as \(\frac{\sigma_{0}^{2}}{\sigma^{2}}q_{\chi_{n}^{2},1-\alpha}\leq q_{\chi_{n}^{2 },\beta}\). Thus Type II error is bounded by a fixed risk level \(\beta\in]0;1[\) when (10) holds.
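The corresponding decision rule only involves central chi-squared quantiles. A minimal Python/SciPy sketch (illustrative names; `drift_integrals[i]` again stands for \(\int_{(i-1)\Delta}^{i\Delta}b_{s}ds\)):

```python
import numpy as np
from scipy import stats

def centered_test_known_drift(X, delta, sigma0_sq, drift_integrals, alpha=0.05):
    """Proposition 2: centered statistic S_dot of eq. (8) compared with
    (sigma0^2 / n) times the (1 - alpha) quantile of chi^2_n."""
    xi_dot = (np.diff(X) - np.asarray(drift_integrals)) / np.sqrt(delta)
    n = xi_dot.size
    S_dot = np.mean(xi_dot ** 2)
    z = sigma0_sq / n * stats.chi2.ppf(1 - alpha, df=n)
    return S_dot >= z, S_dot, z
```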
Study of the threshold \(\dot{z}_{1-\alpha}=\frac{\sigma_{0}^{2}}{n}q_{\chi_{n}^{2},1-\alpha}\). We use Lemma 7 again to prove that
\[\sigma_{0}^{2}\left(1+\frac{\square_{\alpha}}{n}\right)\leq\dot{z}_{1-\alpha} \leq\sigma_{0}^{2}\left(1+\frac{\square_{\alpha}}{\sqrt{n}}\right).\]
This approximation does not depend on \(\Delta\), only on the sample size \(n\). The order is thus the same for the setting \(T\) fixed, \(n\to\infty,\Delta=T/n\to 0\), and the setting \(\Delta\) fixed, \(n\to\infty,T=\Delta n\to\infty\).
Study of condition (10). We have
\[\sigma^{2}\geq\frac{q_{\chi_{n}^{2},1-\alpha}}{q_{\chi_{n}^{2},\beta}}\sigma _{0}^{2}.\]
The same study as Section 2.2 leads again to a discrepancy between the upper and lower bound. However if we look only at the upper bound, a necessary condition for (10) to hold is
\[\sigma^{2}\geq\sigma_{0}^{2}\left(1+\frac{\square_{\alpha,\beta}}{\sqrt{n}} \right).\]
Therefore we see here that whatever the asymptotic regime, the multiplicative constant in front of the separability rate does not explode when \(\Delta\) increases since it does not depend on it. Of course our reasoning to compare
the centered and non-centered procedures relies purely on the upper bound. But as mentioned earlier, because of recent results in Gaussian concentration (Valettas, 2019), we believe that the upper bounds are tighter than the lower bounds, even if we have not been able to prove it. This difference between the behavior of the centered and non-centered procedures for large \(\Delta\) is confirmed by the simulations in Section 5.
### Centered statistics with unknown drift
The drift is rarely known and has to be estimated from the discrete observations \(\{X_{i\Delta}\}_{i=0,\ldots,n}\). We present in this section an adaptation of the previous test to the specific case of a parametric drift depending on a linear parameter:
\[dX_{t}=\theta f_{t}dt+\sigma dW_{t},\quad X_{0}=x_{0},\quad t>0, \tag{12}\]
where \(\theta\in\mathbb{R}\) is an unknown scalar parameter and \(f_{t}:\mathbb{R}\rightarrow\mathbb{R}\) is a known function. A standard estimator of \(\theta\) is the mean square estimator:
\[\hat{\theta}=\arg\min_{\theta}\sum_{i=1}^{n}\left(X_{i\Delta}-X_{(i-1)\Delta}- \theta\int_{(i-1)\Delta}^{i\Delta}f_{s}ds\right)^{2}. \tag{13}\]
This estimator has an explicit form and is normally distributed even when \(\Delta\) is fixed.
**Lemma 2**.: _Let \(\hat{\theta}\) be defined by (13). Then, the following holds:_
* \(\hat{\theta}=\frac{\sum_{i=1}^{n}\left(X_{i\Delta}-X_{(i-1)\Delta}\right)\int _{(i-1)\Delta}^{i\Delta}f_{s}ds}{\sum_{i=1}^{n}\left(\int_{(i-1)\Delta}^{i \Delta}f_{s}ds\right)^{2}}\)_._
* \(\hat{\theta}\sim\mathcal{N}(\theta,\sigma_{\theta}^{2})\) _with_ \(\sigma_{\theta}^{2}=\frac{\Delta\sigma^{2}}{\sum_{i=1}^{n}\left(\int_{(i-1) \Delta}^{i\Delta}f_{s}ds\right)^{2}}\)_._
Proof is given in Appendix.
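In code, the estimator of Lemma 2 is a one-line least-squares formula. A small Python/NumPy sketch (our naming; `f_integrals[i]` stands for \(\int_{(i-1)\Delta}^{i\Delta}f_{s}ds\)):

```python
import numpy as np

def estimate_theta(X, f_integrals):
    """Mean-square estimator of the linear drift parameter (Lemma 2):
    theta_hat = sum_i (X_{i Delta} - X_{(i-1) Delta}) F_i / sum_i F_i^2,
    with F_i the per-interval integral of f."""
    dX = np.diff(X)
    F = np.asarray(f_integrals)
    return np.sum(dX * F) / np.sum(F ** 2)
```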
Now, let \(\hat{\xi}_{i}\) be the increments centered around the estimated drift:
\[\hat{\xi}_{i}=\xi_{i}-\frac{\hat{\theta}}{\sqrt{\Delta}}\int_{(i-1)\Delta}^{i \Delta}f_{s}ds=\frac{X_{i\Delta}-X_{(i-1)\Delta}-\hat{\theta}\int_{(i-1)\Delta }^{i\Delta}f_{s}ds}{\sqrt{\Delta}}. \tag{14}\]
We study the distribution of the vector \(\hat{\xi}=(\hat{\xi}_{1},\ldots,\hat{\xi}_{n})\).
**Lemma 3**.: _Let us introduce \(i=1,\ldots,n\):_
\[Z_{i}=\frac{1}{\sqrt{\Delta}}\int_{(i-1)\Delta}^{i\Delta}f_{s}ds,\]
_and \(Z=(Z_{1},\ldots,Z_{n})^{t}\). Let \(H\) be the projection matrix:_
\[H:=Z(Z^{t}Z)^{-1}Z^{t}.\]
_Let \(C\) be a matrix such that \((C^{t}C)^{+}=(I-H)\), where \(A^{+}\) denotes a Moore-Penrose inverse of a matrix \(A\). Then_
* \(\hat{\xi}\sim\mathcal{N}(0,\sigma^{2}(I-H)),\)__
* \(\frac{1}{\sigma^{2}}\|C^{t}\hat{\xi}\|^{2}\sim\chi_{n-1}^{2}(0).\)__
Proof is given in Appendix. In practice, as the matrix \(I-H\) has rank \(n-1\), we use the singular value decomposition (SVD) of \(I-H\). SVD produces two unitary matrices \(U\) and \(V\), and a diagonal matrix \(D\) with \(n-1\) non zero values such that \(I-H=UDV^{t}\). Then we take \(C=UD^{-1/2}\).
We also define a new statistic:
\[\tilde{S}=\frac{1}{n-1}\|C^{t}\hat{\xi}\|^{2}, \tag{15}\]
such that
\[\frac{n-1}{\sigma^{2}}\tilde{S}\sim\chi_{n-1}^{2}(0).\]
We can now define the test procedure.
**Proposition 3** (1d-Test with centered statistics and unknown drift).: _Let \(\alpha\in]0;1[\) be a fixed constant. Let \(\tilde{S}\) be the test statistic defined by (15) and let us define the test \(\tilde{\Upsilon}\) which rejects \(H_{0}\) if_
\[\tilde{S}\geq\hat{z}_{\alpha}=\frac{\sigma_{0}^{2}}{n-1}q_{\chi_{n-1}^{2},1- \alpha}.\]
_Then, the test \(\tilde{\Upsilon}\) is of Type I error \(\alpha\) and therefore it is of level \(\alpha\)._
_Let \(\beta\in]0;1[\) be a constant such that \(1-\beta\geq\alpha\). For all \(\sigma^{2}\) such that_
\[\sigma^{2}\geq\sigma_{0}^{2}\frac{q_{\chi_{n-1}^{2},1-\alpha}}{q_{\chi_{n-1}^ {2},\beta}}. \tag{16}\]
_the test \(\tilde{\Upsilon}\) satisfies_
\[\mathds{P}_{\sigma^{2}}\left(\tilde{\Upsilon}\text{ accepts }H_{0}\right)\leq\beta.\]
_This is a necessary and sufficient condition._
The proof is analogous to that of Proposition 2. The condition for the Type II error is essentially the same as for the previous test; in particular, it does not depend on \(\Delta\) either.
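Putting Lemma 3 and Proposition 3 together, the whole procedure can be sketched in a few lines of Python/NumPy/SciPy (our tooling; function and variable names are illustrative). Because \(I-H\) is an orthogonal projection, its \(n-1\) nonzero singular values equal one, so taking \(C=UD^{-1/2}\) on those directions amounts to keeping the corresponding columns of \(U\).

```python
import numpy as np
from scipy import stats

def centered_test_estimated_drift(X, delta, sigma0_sq, f_integrals, alpha=0.05):
    """Proposition 3: centered statistic S_tilde of eq. (15) with the linear
    drift parameter estimated by least squares (Lemma 2)."""
    dX = np.diff(X)
    F = np.asarray(f_integrals)
    theta_hat = np.sum(dX * F) / np.sum(F ** 2)          # estimator of Lemma 2
    xi_hat = (dX - theta_hat * F) / np.sqrt(delta)       # centered increments, eq. (14)
    n = dX.size
    Z = F / np.sqrt(delta)                               # Z_i of Lemma 3
    H = np.outer(Z, Z) / np.dot(Z, Z)                    # projection onto span(Z)
    U, d, _ = np.linalg.svd(np.eye(n) - H)
    C = U[:, : n - 1] * d[: n - 1] ** -0.5               # C = U D^{-1/2} on the rank-(n-1) part
    S_tilde = np.sum((C.T @ xi_hat) ** 2) / (n - 1)
    z = sigma0_sq / (n - 1) * stats.chi2.ppf(1 - alpha, df=n - 1)
    return S_tilde >= z, S_tilde, z
```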
**Remark**.:
1. _This procedure could be generalized to the case of a multidimensional vector_ \(\theta\) _when for example the drift_ \(b_{t}\) _is defined as_ \(b_{t}=\sum_{1}^{p}\theta_{k}f_{kt}\) _for a set of_ \(p\) _known functions_ \((f_{kt})_{k=1,\ldots,p}\)_._
2. _A non-linear drift \(b_{t}=f(t,\theta)\) could be considered, with estimators obtained through contrasts for example. But we would lose the exact level of the test._
To conclude the study of the one-dimensional case, we proved that centering the statistics is important in a non-asymptotic setting, since it allows us to find separation rates that are of parametric rate \(1/\sqrt{n}\) in both settings (\(T\) fixed, \(\Delta\to 0\) and \(\Delta\) fixed, \(T\to\infty\)). This is not the case if the centering is not done and if \(\Delta\) is fixed and \(T\to\infty\).
## 3 Test for a two-dimensional SDE
Now let us turn to a two-dimensional SDE \(X=(X_{t}^{1},X_{t}^{2})\), defined as:
\[dX_{t}=b_{t}dt+\Sigma dW_{t},\quad X_{0}=x_{0},\quad t>0, \tag{17}\]
where \(b_{t}=(b_{t,1},b_{t,2})^{T}\) is a known drift and \(\Sigma\) is a diagonal diffusion matrix with constant coefficients \(\sigma_{1}\) and \(\sigma_{2}\) on the main diagonal and \(W\) is a 2-dimensional Brownian motion. The goal is to construct a statistical test of the following hypothesis:
\[H_{0}:\det\Sigma\Sigma^{T}=\det\Sigma_{0}\Sigma_{0}^{T}\] \[H_{1}:\det\Sigma\Sigma^{T}>\det\Sigma_{0}\Sigma_{0}^{T}.\]
As we assume \(\Sigma\) diagonal, it is equivalent to testing
\[H_{0}:\sigma_{1}^{2}\sigma_{2}^{2}=\sigma_{1,0}^{2}\sigma_{2,0}^{2},\quad \text{versus}\quad H_{1}:\sigma_{1}^{2}\sigma_{2}^{2}>\sigma_{1,0}^{2}\sigma_ {2,0}^{2}.\]
We define the 2-dimensional centered increments with shifted indices to allow independent variables for \(j=1,2,i=1,\ldots,n/2\):
\[\dot{\xi}_{ij}:=\frac{X_{(2i+j-2)\Delta}-X_{(2i+j-3)\Delta}-\int_{(2i+j-3) \Delta}^{(2i+j-2)\Delta}b_{s}ds}{\sqrt{\Delta}}. \tag{18}\]
**Lemma 4**.: _The vectors \(\dot{\xi}_{ij}\) are independent in \(i\) and \(j\). Moreover \(\forall j\in\{1,2\}\), \(i\in\{1,\ldots,n/2\}\):_
\[\dot{\xi}_{ij}\sim\mathcal{N}\left(0,\Sigma\Sigma^{T}\right).\]
Note that the independence in \(i\) and \(j\) is not true when the drift depends on the process \(X\) itself. Let us define the determinant of the following 2x2 matrices \(\dot{s}_{i}=\det[(\dot{\xi}_{i1})^{2},(\dot{\xi}_{i2})^{2}]=\dot{\xi}_{i11}^{2} \dot{\xi}_{i22}^{2}-\dot{\xi}_{i12}^{2}\dot{\xi}_{i21}^{2}\). The first terms are
\[\dot{s}_{1} = \det\left[\left(\frac{X_{\Delta}-X_{0}-\int_{0}^{\Delta}b_{s}ds}{ \sqrt{\Delta}}\right)^{2},\left(\frac{X_{2\Delta}-X_{\Delta}-\int_{\Delta}^{2 \Delta}b_{s}ds}{\sqrt{\Delta}}\right)^{2}\right]\] \[\dot{s}_{2} = \det\left[\left(\frac{X_{3\Delta}-X_{2\Delta}-\int_{2\Delta}^{3 \Delta}b_{s}ds}{\sqrt{\Delta}}\right)^{2},\left(\frac{X_{4\Delta}-X_{3\Delta} -\int_{3\Delta}^{4\Delta}b_{s}ds}{\sqrt{\Delta}}\right)^{2}\right],\]
and so on. The statistic is thus the sum of independent variables:
\[\dot{S}=\frac{1}{n/2}\sum_{i=1}^{n/2}\dot{s}_{i}. \tag{19}\]
We start by some preliminary results on \(\dot{S}\) in Section 3.1. Then, we study the type I and type II errors of the test in Section 3.2. The two previous sections consider the drift known. The case of an unknown drift is presented in Section 3.3.
### Preliminary results on the test statistic \(\dot{S}\)
First, the distribution of \(\dot{s}_{i}\) is studied. Thanks to the centered statistics, its cumulative distribution function is explicitly known, as detailed in the following proposition (proof is given in Appendix).
**Proposition 4**.:
1. _The density function of_ \(\dot{s}_{i}\) _is given by:_ \[g_{\dot{s}_{i}}(x)=\frac{1}{2\sqrt{\sigma_{1}^{2}\sigma_{2}^{2}}}\frac{e^{- \sqrt{\frac{x}{\sigma_{1}^{2}\sigma_{2}^{2}}}}}{\sqrt{x}}.\] (20)
2. _Its expectation and variance are defined by:_ \[\mathds{E}\left[\dot{s}_{i}\right] = 2\sigma_{1}^{2}\sigma_{2}^{2},\] (21) \[Var\left[\dot{s}_{i}\right] = 20\sigma_{1}^{4}\sigma_{2}^{4}.\] (22)
3. _The following holds for all_ \(i\)_,_ \(\forall x\)_:_ \[\mathds{P}\left(\dot{s}_{i}\leq x\right)=1-e^{-\sqrt{\frac{x}{\sigma_{1}^{2} \sigma_{2}^{2}}}}.\]
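Proposition 4 is easy to check by Monte Carlo. In the sketch below (Python/NumPy, our tooling), \(\dot{s}_{i}\) is computed as the squared determinant of the \(2\times 2\) matrix whose columns are the two centred increment vectors, \(\det[\dot{\xi}_{i1},\dot{\xi}_{i2}]^{2}\); this is the reading of the definition above that matches the expectation (21), the variance (22), and the distribution function of item 3.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma2 = 0.5, 1.0
m = 100_000   # number of non-overlapping increment pairs, i.e. n/2 in the paper's notation

# Centred increments xi_dot_{ij} ~ N(0, diag(sigma1^2, sigma2^2)), independent in i and j.
xi1 = np.column_stack([sigma1 * rng.standard_normal(m), sigma2 * rng.standard_normal(m)])
xi2 = np.column_stack([sigma1 * rng.standard_normal(m), sigma2 * rng.standard_normal(m)])

# s_dot_i as the squared determinant of the matrix with columns xi_dot_{i1}, xi_dot_{i2}.
s = (xi1[:, 0] * xi2[:, 1] - xi1[:, 1] * xi2[:, 0]) ** 2

print(s.mean(), 2 * sigma1**2 * sigma2**2)        # empirical vs. theoretical expectation (21)
print(s.var(), 20 * sigma1**4 * sigma2**4)        # empirical vs. theoretical variance (22)
x = 0.3
print((s <= x).mean(), 1 - np.exp(-np.sqrt(x / (sigma1**2 * sigma2**2))))  # CDF of item 3
```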
The following theorem shows that the lower deviations of \(\dot{S}\) admit a sub-Gaussian bound, owing to the fact that \(\dot{S}>0\), while the upper deviations are bounded using Chebyshev's inequality. The proof is given in Appendix.
**Theorem 1**.: _Let \(\dot{S}\) be defined by (19)._
1. _For any_ \(t\in\mathbb{R}\)_, we have the lower bound_ \[\mathds{P}\left(\dot{S}-\mathds{E}\left[\dot{S}\right]\leq-t\right)\leq\exp \left(-\frac{nt^{2}}{192\sigma_{1}^{4}\sigma_{2}^{4}}\right).\] (23)
2. _For any_ \(t\in\mathbb{R}\)_, we have the upper bound_ \[\mathds{P}\left(\dot{S}-\mathds{E}\left[\dot{S}\right]\geq t\right)\leq\frac{ 1}{n/2}\frac{20\sigma_{1}^{4}\sigma_{2}^{4}}{t^{2}}.\] (24)
Note that the lower bound (23) decays exponentially fast as \(t\) grows. In comparison, the upper bound (24) decays at a much slower rate.
### Control of Type I and Type II errors
Using Theorem 1 we can define the rejection zone for the test statistic \(\dot{S}\).
**Theorem 2** (2-dimensional test with centered statistics).: _Let \(\alpha\in]0,1[\) be a fixed constant and let \(\dot{S}\) be the test statistic defined in (19). Let us define a test \(\dot{\Upsilon}\) which rejects \(H_{0}:\det\Sigma\Sigma^{T}=\det\Sigma_{0}\Sigma_{0}^{T}\) if_
\[\dot{S}\geq\dot{z}_{\alpha}=2\det\Sigma_{0}\Sigma_{0}^{T}\left(\sqrt{\frac{10 }{n\alpha}}+1\right).\]
_Then \(\dot{\Upsilon}\) is a test of type I error \(\alpha\) and therefore it is of level \(\alpha\). Let \(\beta\in]0,1[\) such that \(1-\beta\geq\alpha\). If \(n>48(-\log\beta)\) and if_
\[\det\Sigma\Sigma^{T}\geq\frac{\det\Sigma_{0}\Sigma_{0}^{T}\left(\sqrt{\frac{1 0}{n\alpha}}+1\right)}{1-4\sqrt{-\frac{3}{n}\log\beta}}, \tag{25}\]
_then the test \(\dot{\Upsilon}\) satisfies_
\[\mathds{P}_{\sigma}\left(\dot{\Upsilon}\quad accepts\quad H_{0}\right)\leq\beta.\]
Proof.: We start with the Type I error. We apply Theorem 1 to control the probability of exceeding a given threshold \(\dot{z}_{\alpha}\):
\[\mathds{P}_{\sigma_{0}}\left(\dot{S}\geq\dot{z}_{\alpha}\right)= \mathds{P}_{\sigma_{0}}\left(\dot{S}-\mathds{E}\left[\dot{S}\right]\geq\dot{z}_ {\alpha}-\mathds{E}\left[\dot{S}\right]\right)\leq\frac{1}{n/2}\frac{20(\det \Sigma_{0}\Sigma_{0}^{T})^{2}}{\left(\dot{z}_{\alpha}-2\det\Sigma_{0}\Sigma_{0} ^{T}\right)^{2}}.\]
We want to limit the risk of the Type I error to \(\alpha\). We have to solve the following inequality:
\[\frac{1}{n/2}\frac{20(\det\Sigma_{0}\Sigma_{0}^{T})^{2}}{\left( \dot{z}_{\alpha}-2\det\Sigma_{0}\Sigma_{0}^{T}\right)^{2}}\leq\alpha.\]
Thus
\[\dot{z}_{\alpha}\geq 2\det\Sigma_{0}\Sigma_{0}^{T}\left(\sqrt{\frac{10}{n \alpha}}+1\right).\]
It remains to control the power of the test. Under \(H_{1}\), \(\mathds{E}[\dot{S}]=2\det\Sigma\Sigma^{T}\). We are looking for conditions on \(\det\Sigma\Sigma^{T}\), such that \(\mathds{P}_{\sigma}\left(\dot{S}\leq\dot{z}_{\alpha}\right)\leq\beta\). Then, by Theorem 1:
\[\mathds{P}_{\sigma}\left(\dot{S}\leq\dot{z}_{\alpha}\right) = \mathds{P}_{\sigma}\left(\dot{S}-2\det\Sigma\Sigma^{T}\leq\dot{z} _{\alpha}-2\det\Sigma\Sigma^{T}\right)\] \[\leq\exp\left(-\frac{n\left(\dot{z}_{\alpha}-2\det\Sigma\Sigma^{ T}\right)^{2}}{2*96(\det\Sigma\Sigma^{T})^{2}}\right).\]
Now, the right part of the expression is bounded by a fixed risk level \(\beta\) if
\[\det\Sigma\Sigma^{T}\geq\frac{\dot{z}_{\alpha}}{2-4\sqrt{-\frac{1 2}{n}\log\beta}}.\]
Replacing \(\dot{z}_{\alpha}\) by its definition, we obtain the result. For certain values of \(n\) and \(\log\beta\), the lower bound in condition (25) could take negative values; this is not the case as soon as \(n>48(-\log\beta)\).
**Remark**.: _Theorem 2 is valid under condition \(n>48(-\log\beta)\). For example, for \(\beta=0.05\), one needs at least \(150\) observations._
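Theorem 2 translates directly into a decision rule. A minimal Python/NumPy sketch (illustrative names), using the same squared-determinant reading of \(\dot{s}_{i}\) as above and taking the centred, non-overlapping increment pairs as input:

```python
import numpy as np

def two_dim_test(xi1, xi2, det_sigma0_sq, alpha=0.05, beta=0.05):
    """Theorem 2: reject H0 (det Sigma Sigma^T = det_sigma0_sq) when the statistic
    S_dot of eq. (19) exceeds z_alpha = 2 * det_sigma0_sq * (sqrt(10 / (n alpha)) + 1).
    xi1, xi2: arrays of shape (n/2, 2) holding the centred increment pairs of eq. (18)."""
    s = (xi1[:, 0] * xi2[:, 1] - xi1[:, 1] * xi2[:, 0]) ** 2
    n = 2 * s.size                                   # total number of increments
    S_dot = s.mean()
    z = 2 * det_sigma0_sq * (np.sqrt(10 / (n * alpha)) + 1)
    if n <= 48 * (-np.log(beta)):
        print("warning: n too small for the power guarantee of Theorem 2")
    return S_dot >= z, S_dot, z
```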
Study of condition (25).Let us approximate condition (25):
\[\det\Sigma\Sigma^{T}\geq\det\Sigma_{0}\Sigma_{0}^{T}\left(1+\frac{1}{\sqrt{n} }\left(\sqrt{\frac{10}{\alpha}}+4\sqrt{-3\log\beta}\right)+\frac{4}{n}\sqrt{ \frac{-30\log\beta}{\alpha}}\right)\]
This does not depend on the setting \(T\) fixed, \(n\to\infty\) or \(\Delta\) fixed, \(n\to\infty\). For both cases, the separation rate has order \(1/\sqrt{n}\).
### Test with unknown drift
As it is not realistic to assume the drift fully known, we consider the case of a drift depending on a linear parameter vector \(\theta=(\theta_{1},\theta_{2})^{t}\) and a vector of drift functions \(f_{t}=(f_{t,1},f_{t,2})^{t}\):
\[dX_{t}=\theta^{t}f_{t}dt+\Sigma dW_{t}.\]
If the parameter \(\theta\) is estimated on the same sample as the one used for testing, the centered increments used to define the test statistics are not independent. Instead, we propose to split the sample into two sub-samples \((X_{1},\ldots,X_{n_{e}})\) and \((X_{n_{e}+1},\ldots,X_{n})\).
Standard estimators of \(\theta_{k}\), \(k=1,2\) are the mean square estimators calculated on \((X_{1},\ldots,X_{n_{e}})\) and their distribution is known, by following the same steps as in one-dimension (Lemma 2). Then we prove the next lemma:
**Lemma 5**.: _Let us define the estimators of \(\theta_{l}\), for \(l=1,2\)_
\[\hat{\theta}_{l}=\arg\min_{\theta_{1}}\sum_{i=1}^{n_{e}}\left(X_{i\Delta,l}-X _{(i-1)\Delta,l}-\theta_{l}\int_{(i-1)\Delta}^{i\Delta}f_{s,l}ds\right)^{2}.\]
_Their distributions are_
\[\hat{\theta}_{l}\sim\mathcal{N}(\theta_{l},\sigma_{\theta,l}^{2})\quad\text{ with}\quad\sigma_{\theta,l}^{2}=\frac{\Delta\sigma_{l}^{2}}{\sum_{k=1}^{n_{e}}\left( \int_{(k-1)\Delta}^{k\Delta}f_{s,l}ds\right)^{2}}.\]
The estimators \(\hat{\theta}_{1},\hat{\theta}_{2}\) are calculated from the first sub-sample \((X_{1},\ldots,X_{n_{e}})\) and are thus independent of the second sub-sample \((X_{n_{e}+1},\ldots,X_{n})\). This allows us to define independent increments centered around the estimated value of the drift, for \(l=1,2\), \(j=1,2\) and \(i=\frac{n_{e}+3}{2},\ldots,\frac{n}{2}\):
\[\tilde{\xi}_{ij,l} = \frac{X_{(2i+j-2)\Delta,l}-X_{(2i+j-3)\Delta,l}}{\sqrt{\Delta}}- \frac{\hat{\theta}_{l}}{\sqrt{\Delta}}\int_{(2i+j-2)\Delta}^{(2i+j-3)\Delta}f_ {s,l}ds. \tag{26}\]
For \(l=1,2\), we have \(\tilde{\xi}_{ij,l}=\xi_{ij,l}+\frac{\hat{\theta}_{l}-\theta_{l}}{\sqrt{ \Delta}}\int_{(2i+j-2)\Delta}^{(2i+j-3)\Delta}f_{s,1}ds\) and we prove the following Lemma.
**Lemma 6**.: _The distributions of the increments are, for \(l=1,2\), \(j=1,2\) and \(i=\frac{n_{e}+3}{2},\ldots,\frac{n}{2}\),_
\[\tilde{\xi}_{ij,l}\sim\mathcal{N}(0,\sigma_{l}^{2}(1+h_{ij,l}))\quad\text{with}\quad h_{ij,l}=\frac{\left(\int_{(2i+j-3)\Delta}^{(2i+j-2)\Delta}f_{s,l}ds\right)^{2}}{\sum_{k=1}^{n_{e}}\left(\int_{(k-1)\Delta}^{k\Delta}f_{s,l}ds\right)^{2}}.\]
Let us define the determinant of the following 2x2 matrices \(\tilde{s}_{i}=\det[(\tilde{\xi}_{i1})^{2},(\tilde{\xi}_{i2})^{2}]\)\(=\tilde{\xi}_{i1,1}^{2}\tilde{\xi}_{i2,2}^{2}-\tilde{\xi}_{i1,2}^{2}\tilde{\xi}_{i2,1}^{2}\). Conditionally on \(\hat{\theta}\), its expectation and variance are approximated by:
\[\mathds{E}\left[\tilde{s}_{i}\right] = 2\sigma_{1}^{2}\sigma_{2}^{2}(1-h_{ii,1})(1-h_{ii,2}), \tag{27}\] \[Var\left[\tilde{s}_{i}\right] = 20\sigma_{1}^{4}\sigma_{2}^{4}(1-h_{ii,1})^{2}(1-h_{ii,2})^{2}. \tag{28}\]
We can then apply the same methodology developed for the known drift case. Let us define the statistic
\[\tilde{S}=\frac{2}{n-n_{e}-1}\sum_{i=\frac{n_{e}+3}{2}}^{\frac{n}{2}}\tilde{s }_{i}. \tag{29}\]
Proposition 4 and Theorem 1 can be easily extended to this case. We can then define the rejection zone for the test statistic \(\tilde{S}\).
**Theorem 3** (2-dimensional test with centered statistics and unknown drift).: _Let \(\alpha\in]0,1[\) be a fixed constant and let \(\tilde{S}\) be the test statistic defined in (29). Let us define a test \(\tilde{\Upsilon}\) which rejects \(H_{0}:\det\Sigma\Sigma^{T}=\det\Sigma_{0}\Sigma_{0}^{T}\) if_
\[\tilde{S}\geq\tilde{z}_{\alpha}=2\det\Sigma_{0}\Sigma_{0}^{T}\left(\sqrt{ \frac{10}{n\alpha}}+1\right).\]
_Then \(\tilde{\Upsilon}\) is a test of type I error \(\alpha\) and therefore it is of level \(\alpha\). Let \(\beta\in]0,1[\) such that \(1-\beta\geq\alpha\). If \(n>48(-\log\beta)\) and if_
\[\det\Sigma\Sigma^{T}\geq\frac{\det\Sigma_{0}\Sigma_{0}^{T}\left(\sqrt{\frac{1 0}{n\alpha}}+1\right)}{1-4\sqrt{-\frac{3}{n}\log\beta}}, \tag{30}\]
_then the test \(\tilde{\Upsilon}\) satisfies_
\[\mathds{P}_{\sigma}\left(\tilde{\Upsilon}\quad accepts\quad H_{0}\right)\leq\beta.\]
As in dimension 1, this could also be extended to the case of a drift defined as a linear combination of known functions (\(b_{t}=\sum_{k=1}^{p}\theta_{k}^{t}f_{kt}\)).
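A sketch of the sample-splitting procedure behind Theorem 3, again in Python/NumPy with illustrative names and the squared-determinant reading of \(\tilde{s}_{i}\); the drift parameters are estimated coordinate-wise on the first sub-sample only, so that the increment pairs built on the second sub-sample remain independent of \(\hat{\theta}\):

```python
import numpy as np

def two_dim_test_estimated_drift(X, delta, f_integrals, det_sigma0_sq, alpha=0.05, n_e=None):
    """Theorem 3 (sketch): estimate theta on the first n_e increments, build centred
    increment pairs on the remaining ones, and compare S_tilde of eq. (29) with
    z_alpha = 2 * det_sigma0_sq * (sqrt(10 / (n alpha)) + 1).
    X: array of shape (n+1, 2); f_integrals: shape (n, 2) per-interval integrals of f."""
    dX = np.diff(X, axis=0)
    F = np.asarray(f_integrals)
    n = dX.shape[0]
    n_e = n // 2 if n_e is None else n_e
    theta_hat = np.sum(dX[:n_e] * F[:n_e], axis=0) / np.sum(F[:n_e] ** 2, axis=0)
    xi = (dX[n_e:] - theta_hat * F[n_e:]) / np.sqrt(delta)
    if xi.shape[0] % 2:          # keep an even number of increments to form pairs
        xi = xi[:-1]
    xi1, xi2 = xi[0::2], xi[1::2]
    s = (xi1[:, 0] * xi2[:, 1] - xi1[:, 1] * xi2[:, 0]) ** 2
    S_tilde = s.mean()
    z = 2 * det_sigma0_sq * (np.sqrt(10 / (n * alpha)) + 1)
    return S_tilde >= z, S_tilde, z
```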
## 4 Test in dimension \(d\geq 2\) with a multiple testing approach
The previous tests are difficult to adapt to the case \(d>2\) because we lose the equivalent of Proposition 4. An alternative is to consider several
tests \(\delta_{j,\alpha}\), one for each component \(j=1,\ldots,d\), and then correct them for multiplicity. This multiple procedure is not equivalent to the test of \(H_{0}:\det(\Sigma)=\sigma_{0,1}^{2}\cdots\sigma_{0,d}^{2}\) versus \(H_{1}:\det(\Sigma)>\sigma_{0,1}^{2}\cdots\sigma_{0,d}^{2}\). However, it is of major interest when the primary objective is to identify on which SDE coordinate the noise acts (for example in neuroscience).
More precisely, let us consider the test \(\delta_{j,\alpha}\) testing \(H_{0,j}:\sigma_{j}^{2}=\sigma_{0,j}^{2}\) versus \(H_{1,j}:\sigma_{j}^{2}>\sigma_{0,j}^{2}\) at level \(\alpha\). In particular, we can use any of the tests developed in Section 2, coordinate by coordinate, such as the ones with centered statistics, which have been shown to have better performance.
Note that if each hypothesis is identified with the set of probability distributions under which it holds, and if the model assumes that \(\sigma_{j}\geq\sigma_{0,j}\) for all \(j\), we have that
\[H_{0}=\bigcap_{j=1,...,d}H_{0,j}\text{ and }\bigcup_{j=1,...,d}H_{1,j}=H_{1}.\]
So we can build a test of \(H_{0}\) versus \(H_{1}\) by saying that we reject \(H_{0}\) if there exists a test \(\delta_{j,\alpha/d}\) that rejects. Note that we use the level \(\alpha/d\). This comes from the Bonferroni bound (Roquain, 2011):
\[\mathbb{P}_{H_{0}}(\exists j=1,...d,\quad\delta_{j,\alpha/d} \text{ rejects }H_{0,j}) \leq \sum_{j=1..d}\mathbb{P}_{H_{0}}(\delta_{j,\alpha/d}\text{ rejects }H_{0,j})\] \[\leq d\alpha/d=\alpha.\]
Thus this multiple testing approach controls the first type error.
In addition to being a test of the same level, this aggregation of individual tests gives us extra information: the indices \(j\) for which the test \(\delta_{j,\alpha/d}\) rejects, that is, the coordinates \(j\) on which the noise is large.
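A minimal sketch of this aggregation (Python/SciPy, illustrative names), using the centered known-drift test of Proposition 2 on each coordinate at the Bonferroni-corrected level \(\alpha/d\):

```python
import numpy as np
from scipy import stats

def multiple_test(X, delta, sigma0_sq, drift_integrals, alpha=0.05):
    """Section 4: run the one-dimensional centered test (Proposition 2) on each
    coordinate at level alpha/d and reject the global H0 if any coordinate rejects.
    X: shape (n+1, d); drift_integrals: shape (n, d); sigma0_sq: length-d array."""
    xi_dot = (np.diff(X, axis=0) - np.asarray(drift_integrals)) / np.sqrt(delta)
    n, d = xi_dot.shape
    S = np.mean(xi_dot ** 2, axis=0)                        # per-coordinate statistics
    z = np.asarray(sigma0_sq) / n * stats.chi2.ppf(1 - alpha / d, df=n)
    rejected = np.flatnonzero(S >= z)                       # coordinates with large noise
    return rejected.size > 0, rejected
```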
## 5 Numerical experiments
In this section, we illustrate the numerical properties of the tests in dimensions 1 and 2. We focus on studying their power and the impact of the design by letting \(n\) and \(\Delta\) vary. In dimension 1, we consider three test statistics: the non-centered statistic \(S\) (Section 2.1), the centered statistic \(\dot{S}\) with the drift explicitly known (Section 2.2), and the centered statistic \(\tilde{S}\) with the drift estimated from the discrete observations (Section 2.3). In dimension 2, we consider the test statistic with the drift known (Section 3.2) or estimated (Section 3.3), and the multiple testing approach (Section 4).
### One dimensional process with known drift
Let us consider the following toy SDE, a randomly perturbed sinusoidal function, defined as follows:
\[dX_{t}=\theta\sin(t)dt+\sigma dW_{t},\quad X_{0}=0, \tag{31}\]
where \(\theta\in\mathds{R}\) and \(\sigma\in\mathds{R}\). The parameter \(\theta\) is fixed to \(1\) in all simulations.
To study the power of the test procedures, processes are simulated under \(H_{1}\) for a given value of \(\sigma^{2}\) and the test is applied to each process. Different values of \(\sigma^{2}\) are considered, varying from \(0\) to \(0.36\) with a step of \(0.001\). For each value of \(\sigma^{2}\), \(N=5000\) processes are simulated with the Euler-Maruyama scheme with a time step of \(0.01\), for different values of the time horizon \(T\), and subsampled with different discretization steps \(\Delta\). These processes are denoted \(X_{\sigma}\). The power of a test procedure \(\Psi\) is then estimated as the proportion of processes for which \(H_{0}\) is rejected and is denoted \(\Pi(\Psi)\):
\[\Pi(\Psi)=\frac{\text{\# processes for which $H_{0}$ is rejected according to test $\Psi$}}{N}. \tag{32}\]
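A sketch of this Monte Carlo protocol for model (31) and the centered known-drift test (Python/NumPy/SciPy, our tooling; function names are illustrative). The path is simulated with an Euler-Maruyama step of 0.01, subsampled at step \(\Delta\), and the per-interval drift integrals \(\theta\int\sin(s)\,ds\) are computed in closed form:

```python
import numpy as np
from scipy import stats

def simulate_path(theta, sigma, T, dt, rng):
    """Euler-Maruyama simulation of dX = theta*sin(t)*dt + sigma*dW on [0, T]."""
    t = np.arange(0.0, T, dt)
    dX = theta * np.sin(t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(t.size)
    return np.concatenate([[0.0], np.cumsum(dX)])

def estimated_power(sigma, sigma0=0.1, theta=1.0, T=1.0, dt=0.01, Delta=0.1,
                    alpha=0.05, N=1000, seed=0):
    """Proportion of simulated paths for which the centered known-drift test rejects H0."""
    rng = np.random.default_rng(seed)
    step = int(round(Delta / dt))
    n_rej = 0
    for _ in range(N):
        X = simulate_path(theta, sigma, T, dt, rng)[::step]            # subsample at step Delta
        times = Delta * np.arange(X.size)
        drift_int = theta * (np.cos(times[:-1]) - np.cos(times[1:]))   # integral of theta*sin(s) ds
        xi_dot = (np.diff(X) - drift_int) / np.sqrt(Delta)
        n = xi_dot.size
        S_dot = np.mean(xi_dot ** 2)
        n_rej += S_dot >= sigma0**2 / n * stats.chi2.ppf(1 - alpha, df=n)
    return n_rej / N
```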
The power functions \(\Pi(\Upsilon)\), \(\Pi(\dot{\Upsilon})\) and \(\Pi(\tilde{\Upsilon})\) are computed in three settings: \(T=1,\Delta=0.1\) and \(n=10\); \(T=1,\Delta=0.01\) and \(n=100\); and \(T=10,\Delta=0.1\) and \(n=100\). Note that the decision rules are given in Propositions 1, 2 and 3, respectively.
All three power functions are plotted in Figure 1. The performances of the centered statistics in tests \(\dot{\Upsilon}\) and \(\tilde{\Upsilon}\) are almost identical and depend mostly on the number of available observations. The performance of the non-centered test \(\Upsilon\) is sensitive to the step size \(\Delta\).
In particular, the power functions \(\Pi(\dot{\Upsilon})\) and \(\Pi(\tilde{\Upsilon})\) are identical when \(T=10,\Delta=0.1\) and \(T=1,\Delta=0.01\). This is in accordance with the concluding remark of Section 2.2: the performance of the test depends neither on the time horizon nor on the step size, only on the number of observations. For the non-centered statistic \(\Pi(\Upsilon)\), however, this is not the case: as the law of the statistic depends on the drift, the performance of the test depends both on the number of observations and on the discretization step.
### 2-dimensional process with known drift
To illustrate how the method works in dimension two, we use a randomly perturbed sinusoid \(X_{t}=(X_{t,1},X_{t,2})\):
\[\begin{split} dX_{t,1}&=\theta_{1}\sin(t)dt+ \sigma_{1}dW_{t,1},\\ dX_{t,2}&=\theta_{2}\cos(t)dt+\sigma_{2}dW_{t,2}. \end{split} \tag{33}\]
Figure 1: Power functions of the test of \(H_{0}:\sigma^{2}=0.1^{2}\) against \(H_{1}:\sigma^{2}>0.1^{2}\) as a function of \(\sigma^{2}\). Processes \(X_{\sigma}\) are simulated for \(\sigma^{2}\) varying between 0 and 0.36. Three tests are considered: the one-dimensional non-centered test with statistic \(S\) and known drift (Section 2.1) in dashed blue line, with centered statistic \(\dot{S}\) (Section 2.2) in dotted green line, and with centered statistic and estimated drift \(\tilde{S}\) (Section 2.3) in solid red line. Three designs are considered: \(\Delta=0.01,T=1\) (left), \(\Delta=0.1,T=1\) (middle) and \(\Delta=0.1,T=10\) (right).
Parameters used for simulations are \(X_{0,1}=X_{0,2}=0,\theta_{1}=\theta_{2}=1\), \(\sigma_{2}=1\). We generate \(N=5000\) processes under \(H_{1}\) with \(\sigma_{1}^{2}\) varying between 0 and 0.36 (with a step 0.001) in order to study the power of the test. We use 3 different scenarios: with \(T=1,\Delta=0.01\); \(T=1,\Delta=0.1\) and \(T=10,\Delta=0.1\).
We define the power function as in (32) for the 2-dimensional tests with known drift (Section 3.2) or estimated (Section 3.3), and for the multiple testing procedure (Section 4) with either known drift, or estimated.
Results are presented in Figure 2. For 2-dimensional tests, the power is influenced by the number of observations \(n\). When \(\Delta=0.1,T=10\), the powers are almost identical to the case \(\Delta=0.01,T=1\). This is in
accordance with the remark following Theorem 2: the separation rate of the two hypotheses depends only on the number of observations. Unsurprisingly, in the scenario with very few observations (\(\Delta=0.1,T=1\)), the hypotheses fail to separate even when \(\sigma^{2}\gg\sigma_{0}^{2}\). When the parameters of the drift are estimated, the power of the test is slightly smaller. This is expected, as the test statistic is built only on half of the sample (the first half being used to estimate the parameters).
The multiple testing procedure gives better results. For both known and estimated drift, the multiple test gives a perfect separation already at \(\sigma=0.14\) (for the settings \(\Delta=0.01,T=1\) and \(\Delta=0.1,T=10\)), while for the two-dimensional test the separation occurs closer to \(\sigma=0.25\).
## 6 Conclusions
We develop various tests of the diffusion coefficients of SDEs. In dimension one, we propose a test statistic that has an explicit distribution, even when the (linear) drift parameter is unknown. The tests are of exact level \(\alpha\). We also prove separability conditions to achieve a given power. The test with an unknown parameter can be applied to a non-parametric drift estimated by a projection on a functional basis, e.g. on a spline basis. It can therefore be used to test the diffusion coefficient of a one-dimensional SDE even when the drift is unknown.
In dimension 2, we propose a test statistic with a non-explicit distribution. However, thanks to concentration inequalities, we obtain a test procedure with a non-asymptotic level. When the drift parameter is unknown, the test procedure is adapted by estimating the parameters on the first half of the sample and then applying the test statistic to the data from the second half of the sample. We therefore lose power when the parameters are estimated, as the simulations also illustrate.
We therefore propose an alternative, which is also suitable for a dimension \(d\) greater than 2. This alternative uses a one-dimensional test on each coordinate and corrects the procedure by a multiple testing approach. This allows us to control the type I error of the global test. Since the one-dimensional tests have the exact level even when the linear drift parameters are estimated, the multiple testing procedure tests the diffusion coefficient at the exact level, even when the drift is estimated on a functional basis (a spline basis, for example).
Further work would involve considering SDEs whose drift depends on the process itself. The main difficulty lies in the fact that the increments
are then non-independent. Proving the upper and lower bounds of the test statistics would require further concentration inequalities.
## Acknowledgments
A.S. was supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003) and by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01) funded by the French program Investissement d'avenir.
P.R.-B. was supported by the French government, through UCA\({}^{Jedi}\) and 3IA Cote d'Azur Investissements d'Avenir managed by the National Research Agency (ANR-15 IDEX-01 and ANR-19-P3IA-0002), directly by the ANR project ChaMaNe (ANR-19-CE40-0024-02), and by the interdisciplinary Institute for Modeling in Neuroscience and Cognition (NeuroMod).
|
2308.09584 | Parsec scales of carbon chain and complex organic molecules in AFGL 2591
and IRAS 20126 | (Abridged) There is a diverse chemical inventory in protostellar regions
leading to the classification of extreme types of systems. Warm carbon chain
chemistry sources, for one, are the warm and dense regions near a protostar
containing unsaturated carbon chain molecules. Since the presentation of this
definition in 2008, there is a growing field to detect and characterise these
sources. The details are lesser known in relation to hot cores and in high-mass
star-forming regions -- regions of great importance in galactic evolution. To
investigate the prevalence of carbon chain species and their environment in
high-mass star-forming regions, we have conducted targeted spectral surveys of
two sources in the direction of Cygnus X -- AFGL 2591 and IRAS 20126+4104 --
with the Green Bank Telescope and the IRAM 30m Telescope. We have constructed a
Local Thermodynamic Equilibrium (LTE) model using the observed molecular
spectra to determine the physical environment in which these molecules
originate. We map both the observed spatial distribution and the physical
parameters found from the LTE model. We also determine the formation routes of
these molecules in each source using the three-phase NAUTILUS chemical
evolution code. We detect several lines of propyne, CH$_3$CCH, and
cyclopropenylidene, $c$-C$_3$H$_2$ as tracers of carbon chain chemistry, as
well as several lines of formaldehyde, H$_2$CO, and methanol, CH$_3$OH, as a
precursor and a tracer of complex organic molecule chemistry, respectively. We
find excitation temperatures of 20-30 K for the carbon chains and 8-85 K for
the complex organics. The CH$_3$CCH abundances are reproduced by a warm-up
model, consistent with warm carbon chain chemistry, while the observed CH$_3$OH
abundances require a shock mechanism sputtering the molecules into the gas
phase. | P. Freeman, S. Bottinelli, R. Plume, E. Caux, C. Monaghan, B. Mookerjea | 2023-08-18T14:23:44Z | http://arxiv.org/abs/2308.09584v1 | # Parsec scales of carbon chain and complex organic molecules in AFGL 2591 and IRAS 20126
###### Abstract
Context:There is a diverse chemical inventory in protostellar regions leading to the classification of extreme types of systems. Warm carbon chain chemistry sources, for one, are the warm and dense regions near a protostar containing unsaturated carbon chain molecules. Since the presentation of this definition in 2008, there is a growing field to detect and characterise these sources. The details are lesser known in relation to hot cores and in high-mass star-forming regions - regions of great importance in galactic evolution.
Aims:To investigate the prevalence of carbon chain species and their environment in high-mass star-forming regions, we have conducted targeted spectral surveys of two sources in the direction of Cygnus X - AFGL 2591 and IRAS 20126+4104.
Methods: We observed these sources in frequency ranges around 85, 96, and 290 GHz with the Green Bank Telescope and the IRAM 30m Telescope. We have constructed a Local Thermodynamic Equilibrium (LTE) model using the observed molecular spectra to determine the physical environment in which these molecules originate. We map both the observed spatial distribution and the physical parameters found from the LTE model. We also determine the formation routes of these molecules in each source using the three-phase NAUTILUS chemical evolution code.
Results:We detect several lines of propyne, CH\({}_{3}\)CCH, and cyclopropenylidene, \(c\)-C\({}_{3}\)H\({}_{2}\) as tracers of carbon chain chemistry, as well as several lines of formaldehyde, H\({}_{2}\)CO, and methanol, CH\({}_{3}\)OH, as a precursor and a tracer of complex organic molecule chemistry, respectively. We find excitation temperatures of 20-30 K for the carbon chains and 8-85 K for the complex organics. The observed abundances, used as input for the chemical evolution code, are 10\({}^{-7}\) to 10\({}^{-10}\) for both CH\({}_{3}\)CCH and CH\({}_{3}\)OH. The CH\({}_{3}\)CCH abundances are reproduced by a warm-up model, consistent with warm carbon chain chemistry, while the observed CH\({}_{3}\)OH abundances require a shock mechanism sputtering the molecules into the gas phase.
Conclusions: Single-dish observations are useful for studying the envelope-scale chemistry of star-forming regions, including mechanisms such as warm carbon chain chemistry. As well, LTE models lend themselves well to the wide-band maps obtained from these telescopes. The physical and chemical environment determined for complex hydrocarbons and complex organics furthers our understanding of high-mass star formation.
## 1 Introduction
Molecular clouds are the sites of star formation, where gas collapses into dense cores before forming protostars. The chemical makeup of molecular clouds therefore leads to that of protostellar systems. Observing the chemical complexity of star formation and modelling its evolution provides an invaluable link between the interstellar medium and planetary composition. In the last few decades, the diverse chemical makeup of star-forming regions has been revealed (Herbst & van Dishoeck, 2009; Sakai & Yamamoto, 2013; Jorgensen et al., 2020).
There are two major carbon-based groups commonly studied in these regions: interstellar complex organic molecules (ICOMs) and carbon chain molecules (CCMs). Complex, in interstellar terms, means any molecule with six or more atoms. COMs and CCMs are important to observe as their formation is sensitive to environmental conditions, thus can supplement existing information about interstellar objects traced by simpler, often studied, molecules such as CO, HCO+, or HCN.
COMs, which are carbon-bearing saturated complex molecules, have been observed in abundance in the hot (T \(>\) 100 K) and dense (\(n_{\rm H_{2}}\)\(>\) 10\({}^{6}\) cm\({}^{-3}\)) cores of star-forming regions (see Herbst & van Dishoeck, 2009 for a review of these molecules). The hot cores of high-mass star formation are well known (high mass stars have M \(>\) 8 M\({}_{\odot}\); Blake et al., 1987; Caselli et al., 1993; Macdonald, G. H. et al., 1996; Helmich & van Dishoeck, 1997) as are their compact (\(\leq\) 100 AU in radius) equivalent in low-mass regions, dubbed hot corinos (Cazaux et al., 2003; Ceccarelli, 2004; Bottinelli et al., 2004; Bottinelli et al., 2007). In star-forming regions complex molecules are formed on and released from interstellar grain surfaces. This chemistry is initiated by hydrogenation of CO, depleted onto grain surfaces from the gas phase, to make CH\({}_{3}\)OH. COM formation in the gas phase is dominated by ion-neutral reactions, though the formation of saturated molecules in cold phases is inefficient (Herbst & van Dishoeck, 2009).
CCMs, on the other hand, are unsaturated hydrocarbons (e.g. C\({}_{n}\)H, HC\({}_{n}\)N) that are well-known in cold (T \(\leq\) 10 K) molecular
clouds and starless cores such as Sagittarius B2, the Taurus Molecular Cloud, and Lupus-1A (Avery et al., 1976; Broten et al., 1978; Kroto et al., 1978; Little et al., 1978; Sakai et al., 2010). The cold starless core phase forms CCMs in the gas phase efficiently prior to the uptake of C in CO, a stable molecule. CCMs, then, were thought to have a short lifetime - present in the early stages of cloud core evolution but becoming deficient in later-stage star-forming regions.
However, another formation route for CCMs is 'Warm Carbon Chain Chemistry' (WCCC) (Sakai et al., 2008; Aikawa et al., 2008; Sakai and Yamamoto, 2013). If atomic carbon depletes onto interstellar grains early in the star formation process, successive hydrogenation of C on the grain surface forms CH\({}_{4}\). As the gas warms in the star formation process and CH\({}_{4}\) is liberated from dust grains, it acts as the precursor for CCM formation. The condition keeping carbon in its atomic form prior to depleting onto grain surfaces is thought to be a short timescale for collapse (Sakai et al., 2008) or UV radiation (Spezzano et al., 2016; Higuchi et al., 2018). A WCCC source is characterised by two conditions: that there is an abundance of various carbon-chain molecules, and these molecules are concentrated in the warm and dense part around a protostar (Sakai and Yamamoto, 2013). Following the detection of the low-mass star-forming region L1527 as a WCCC source (Sakai et al., 2008), WCCC characteristics have also been seen in a high-mass star-forming region, a giant H II region, and a starless core (Mookerjea et al., 2012; Saul et al., 2015; Wu et al., 2019).
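For orientation, the gas-phase route usually invoked for WCCC (e.g. Sakai and Yamamoto, 2013) can be sketched as starting from the desorbed CH\({}_{4}\) reacting with C\({}^{+}\),
\[\mathrm{CH_{4}+C^{+}\rightarrow C_{2}H_{3}^{+}+H,\qquad CH_{4}+C^{+}\rightarrow C_{2}H_{2}^{+}+H_{2},}\]
followed by electron recombination and further ion-neutral and neutral-neutral reactions that build the longer carbon chains; the reactions quoted here are a commonly cited summary rather than the full network.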
Hot corinos and warm carbon chain chemistry (WCCC) sources now classify two extreme types of protostellar systems, due to this chemical differentiation. However, many sources have characteristics of both types - a hybrid source. The second WCCC source reported by Sakai et al. (2009), IRAS 15398\(-\)3359 in Lupus, hosts a hot corino (Okoda et al., 2023). iCOMs are emitted on a scale 10-100\(\times\) smaller than the CCM emission. Okoda et al. (2023) suggest the hybrid nature supports both the grain mantle composition and the physical environment influencing the chemistry. L483, a Class 0 protostar, shows WCCC characteristics on a 1000 AU scale and also hot corino characteristics on (\(<\)) 100 AU scales (Sakai et al., 2009; Oya et al., 2017). The Bok globules B335 and CB 68, which are useful environments to study isolated protostellar sources, show hot corino emission on tens of AU scales and also carbon chain emission on hundreds to a thousand AU scale (Imai et al., 2016; Imai et al., 2022). Higuchi et al. (2018) find numerous intermediate sources in the Perseus Molecular Cloud, though find high CCH/CH\({}_{3}\)OH ratios - characteristics of WCCC - towards isolated or edge-of-the-cloud sources owing to both differences in timescales and effects of UV radiation.
Carbon chain chemistry, traced by C\({}_{4}\)H (Lindberg et al., 2016; Graninger et al., 2016), CCH (Higuchi et al., 2018; Bouvier et al., 2020), and cyanopolyynes (Taniguchi et al., 2018, 2021), may be compared to complex organic chemistry, usually traced by CH\({}_{3}\)OH, through parameters such as line density, column density, spatial distribution, or temperature. These studies identify that CCMs or their tracers are often in cooler, more extended gas, such as the envelope around hot cores, produced from CH\({}_{4}\) after it is sublimated from dust grain surfaces. There are dissimilarities between CCMs and COMs in spatial distribution, line profiles, and column densities. Lindberg et al. (2016), referencing Rodgers and Charmley (2001), are careful to note that certain classes of CCMs may have different correlations to methanol.
In this paper, we examine tracers of warm carbon chain and complex organic chemistry in the high-mass star-forming regions AFGL 2591 and IRAS 20126. Section 2 describes the molecules and sources. Section 3 describes the observations and Sect. 4 the detected lines and Local Thermodynamic Equilibrium (LTE) model results. Section 5 discusses the results in wider context, and Sect. 6 provides a summary.
## 2 Objects of study
### Warm carbon chain molecules - propyne and cyclopropenylidene
Propyne, or methyl acetylene, CH\({}_{3}\)CCH, is part of the methylpolyyne family (Irvine et al., 1981). It is a symmetric rotor with many transitions closely spaced in frequency. Numerous lines are thus captured in a small bandwidth, and are useful for analysis of the physical conditions, especially temperature, in LTE (Askne et al., 1984; Kuiper et al., 1984; Bergin et al., 1994). It was first detected in the giant molecular clouds Sgr B2 and Orion A, two sites with active star formation (Snyder and Buhl, 1973; Lovas et al., 1976). It continues to be detected in low- and high-mass star-forming regions and is used as a tracer of chemical complexity (Cazaux et al., 2003; Taniguchi et al., 2018; Giannetti et al., 2017; Santos et al., 2022).
CH\({}_{3}\)CCH can form both on grain surfaces and in the gas phase, with the former route directing observed abundances (Calcutt et al., 2019). On grain surfaces, CH\({}_{3}\)CCH develops from the successive hydrogenation of C\({}_{3}\)(Hickson et al., 2016; Andron et al., 2018). In the gas phase, the primary route is through ion-neutral reactions (Schiff and Bohme, 1979; Taniguchi et al., 2019) with contributions from neutral-neutral reactions (Turner et al., 1999). Recently, studies note the importance of CH\({}_{4}\), forming on then desorbing from grain surfaces in the warm-up phase, and producing larger hydrocarbons through gas-phase ion-neutral reactions (Sakai and Yamamoto, 2013; Calcutt et al., 2019; Taniguchi et al., 2019). These hydrocarbons dissociatively recombine to form CH\({}_{3}\)CCH.
Thaddeus et al. (1985) identified cyclopropenylidene, c-C\({}_{3}\)H\({}_{2}\), as the first astronomically observed organic ring molecule. The follow-up by Vrtilek et al. (1987) reports detections in Sgr B2, Orion, and TMC-1. c-C\({}_{3}\)H\({}_{2}\) is formed in dense and 'chemically young' gas, where carbon is in the atomic form (Spezzano et al., 2016). Gas-phase ion-molecule, H-atom transfer, and electron recombination reactions lead to its production (Park et al., 2006; Sakai and Yamamoto, 2013). As this process starts with CH\({}_{4}\), desorbed from dust grains at 25 K, this molecule can be representative of WCCC. Aikawa et al. (2020) showed that, with a new multi-layered ice mantle model, abundances of c-C\({}_{3}\)H\({}_{2}\) correlate with CH\({}_{4}\).
### Complex organic molecules - formaldehyde and methanol
Formaldehyde, H\({}_{2}\)CO, is not technically a complex species; however, it is chemically associated with larger organic molecules and we include it as a COM precursor in this paper. H\({}_{2}\)CO was the first polyatomic organic molecule detected in space: Snyder et al. (1969) observed absorption spectra towards both galactic and extragalactic sources. Methanol, CH\({}_{3}\)OH, is the simplest alcohol molecule. It has many low energy levels due to the torsional motion it undergoes as an asymmetric top molecule (Ball et al., 1970). Methanol is an abundant complex molecule with numerous transitions in the mm and sub-mm range which are useful, and commonly used, as a probe of the physical conditions of an interstellar region.
H\({}_{2}\)CO and CH\({}_{3}\)OH are both formed by the successive hydrogenation of CO after it freezes out onto the surface of grains in the cold cloud phase (Charnley et al. 1997; Garrod & Widicus Weaver 2013). The hydrogenation of CO and H\({}_{2}\)CO requires activation energy, suggesting that not all molecules of these species will be converted into larger molecules. Still, this is a competitive process even at low temperatures; these species are detected in a variety of sources such as cold clouds, hot cores, outflows, shocks, the Galactic centre, and external galaxies (see Herbst & van Dishoeck 2009 and references therein). These species are observed in both gas and solid phases and are key precursor molecules to larger COMs.
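For reference, the grain-surface sequence referred to here is commonly summarised as
\[\mathrm{CO\xrightarrow{+H}HCO\xrightarrow{+H}H_{2}CO\xrightarrow{+H}CH_{3}O/CH_{2}OH\xrightarrow{+H}CH_{3}OH,}\]
where the CO and H\({}_{2}\)CO hydrogenation steps are the ones with activation barriers mentioned above.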
Certain CH\({}_{3}\)OH transitions are masers, pumped into an excited state either collisionally (Class I) or by far-infrared radiation (Class II). CH\({}_{3}\)OH masers are detected in IRAS 20126 (Class I and II) and AFGL 2591 (Class II) (Batrla & Menten 1988; Plambeck & Menten 1990; Harvey-Smith et al. 2008; Moscadelli et al. 2011; Rodriguez-Garza et al. 2017; Rygl et al. 2012).
### Sources
AFGL 2591 is the prototypical object in which to study physical and chemical processes during high-mass star formation. Figure 1 displays a Herschel PACS 160 \(\mu m\) continuum image of the source. At a distance of 3.33\(\pm\)0.11 kpc (Rygl et al. 2012), AFGL 2591 has a total mass of about 2\(\times\)10\({}^{4}\) M\({}_{\odot}\) and a measured total IR luminosity of 2\(\times\)10\({}^{5}\) L\({}_{\odot}\) (Sanna et al. 2012). Within this clump there are several radio continuum sources. VLA 1 and 2 are optically thin H II regions (Trinidad et al. 2003). VLA 3 is believed to be youngest (dynamical age of about 2\(\times\)10\({}^{4}\) yr; Doty et al. 2002; Stauber et al. 2005) and most massive source in the cluster, with an estimated mass of 38 M\({}_{\odot}\) (Sanna et al. 2012). VLA 4 and 5 are faint sources detected in cm continuum observations (Johnston et al. 2013). A large-scale east-west bipolar outflow powered by VLA 3 is seen in molecular observations which extends past the warm and dense envelope of the source (Lada et al. 1984; Hasegawa & Mitchell 1995; Sanna et al. 2012; Johnston et al. 2013). The simplest COMs and CCMs (CH\({}_{3}\)OH, H\({}_{2}\)CO, and CCH) are reported in AFGL 2591 via the James Clerk Maxwell Telescope (JCMT) Spectral Legacy and the CHESS surveys (van der Wiel et al. 2011; Kazmierczak-Barthel et al. 2014). However, these surveys are at high frequencies (\(>\) 300 GHz) where COMs and CCMs do not necessarily emit their strongest lines. Thus, little is known about the relative abundance of COMs and CCMs in this source.
IRAS 20126+4104 (hereafter IRAS 20126) is a well-studied massive protostar that allows us to observe the early stages of massive star formation in a relatively simple system. Figure 2 displays a Herschel PACS 160 \(\mu m\) continuum image of the source. In the Cygnus X region at a distance of 1.6 kpc, IRAS 20126 has a luminosity of 1.3\(\times\)10\({}^{4}\) L\({}_{\odot}\) (Moscadelli et al. 2011). The natal dense clump, which is a site of massive star formation, is isolated with a temperature of 40 K (Shepherd et al. 2000). Within this clump observations of C\({}^{34}\)S by Cesaroni et al. (2005) confirm the presence of a large Keplerian disk with a radius of 5000 AU. Near-infrared K, L', and M' band observations by Sridharan et al. (2005) with the United Kingdom Infrared Telescope, however, resolve the central source and suggest the presence of a more compact disk (R \(\sim\) 1000 AU). This was confirmed by Chen et al. (2016) whose modelling of SMA observations of CH\({}_{3}\)CN confirm a 1.5 M\({}_{\odot}\) disk with a radius of 850 AU rotating about a 12 M\({}_{\odot}\) protostar. Numerous COMs have been detected in JCMT and Plateau de Bure Interferometer (PdBI) observations (Isokoski et al. 2013; Palau et al. 2017) which have also included the detection of some simple CCMs (CCH, CH\({}_{3}\)CCH). However no surveys targeting CCMs have been reported. Thus, little is known about the relative abundance of COMs versus CCMs in this source as well.
## 3 Observations
The IRAM 30-m observations were completed in November 2020 and April 2021 (project codes 021-20 and 122-20, PIs S. Bottinelli
Figure 1: Herschel PACS 160 \(\mu m\) image of AFGL 2591 (proposal ID KPGT_ fmotite. 1, PI F. Motte). The colour scale is logarithmic from 0.8 to 80 Jy/pixel, while the contours are 1, 5, 10, 30, 40, and 60 Jy/pixel. The black ’x’ represents the dominant protostellar source VLA 3.
Figure 2: Herschel PACS 160 \(\mu m\) image of IRAS 20126+4104 (proposal ID OT1_reasron_1, PI R. Cesaroni). The colour scale is logarithmic from 0.4 to 40 Jy/pixel, while the contours are 1, 5, 10, 20, and 30 Jy/pixel. The black ’x’ represents the protostellar source.
and R. Plume). Using the on-the-fly observational mode with position switching, we mapped a \(1^{\prime}\times 1.5^{\prime}\) region in both AFGL 2591 and IRAS 20126 in the frequency range 287.22 - 295 GHz. The offset position was -120\({}^{\prime\prime}\) horizontally and -240\({}^{\prime\prime}\) vertically. The phase centre was \(\alpha\)(J2000)\(=20^{\rm h}29^{\rm m}24\fs 9\) and \(\delta\)(J2000)\(=+40^{\circ}11^{\prime}20\farcs 0\) for AFGL 2591 and \(\alpha\)(J2000)\(=20^{\rm h}14^{\rm m}26\fs 04\) and \(\delta\)(J2000)\(=+41^{\circ}13^{\prime}32\farcs 5\) for IRAS 20126. The EMIR receiver, in bands E1 and E2, was connected to the FTS200 backend (Fast Fourier Transform Spectrometer at 200 kHz resolution). The range 131.2 - 138.98 GHz was observed simultaneously in this set-up, but will not be discussed in this paper due to the much larger beam size. The pointing, checked every 90 minutes or less, was done using either K3-50A or NGC 7027 and resulted in typical corrections of 1-2\({}^{\prime\prime}\). The focus, completed every three hours or after a sunset or sunrise, was done with the same source as for pointing and resulted in typical corrections of \(<0.3^{\prime\prime}\). The atmospheric opacity, as measured by the 225 GHz taumeter, aside from part of the day on November 28, 2020, was \(<0.3\), and often \(<0.1\). The system temperatures were 350-500 K when observing IRAS 20126 and 300-350 K when observing AFGL 2591. The beam size is 9.3\({}^{\prime\prime}\) at 290 GHz, with a pixel size of 4.4\({}^{\prime\prime}\). Data reduction, and the production of maps, was completed with GILDAS/CLASS1.
Footnote 1: [https://www.iram.fr/IRAMFR/GILDAS](https://www.iram.fr/IRAMFR/GILDAS)
The 100m Green Bank Telescope observations were completed in March 2021, in the frequency ranges 84.5-85.75 GHz and 95.55-96.8 GHz using all 16 beams of the ARGUS focal plane array and the VErsatile GBT Astronomical Spectrometer (VEGAS) spectral line backend (project code 21A-039, PI P. Freeman). We obtained \(1^{\prime}\) DAISY on-the-fly maps for both frequency ranges in AFGL 2591 and IRAS 20126, centred as above, utilising position-switching to a location +3\({}^{\prime}\) in azimuth for the reference measurements. Each petal, with a map radius of 0.5\({}^{\prime}\), took 54.7 seconds with a scan rate of 2.3\({}^{\prime\prime}\) per second and an integration time of 1 second. The pointing and focus were completed every 30-40 minutes with the X-band receiver using the GBT's automatic scan AutoPeakFocus on source 2015+3710. At the ARGUS frequencies, the automatic scan AutoOOF was used to correct for errors in the reflecting surface using source 1229+0203. The system temperatures ranged from 100-200 K. With spectrometer mode 2 we obtained a spectral sampling of 92 kHz in both frequency ranges, or 0.32 km s\({}^{-1}\) at 85 GHz and 0.28 km s\({}^{-1}\) at 96 GHz. The beam size is 9.2\({}^{\prime\prime}\) or 10.0\({}^{\prime\prime}\) at 96 or 85 GHz respectively, with a pixel size of 2\({}^{\prime\prime}\). The data were reduced and calibrated using GBTIDL2.
Footnote 2: [https://gbtidl.nrao.edu/index.shtml](https://gbtidl.nrao.edu/index.shtml)
Footnote 3: [https://casa.nrao.edu/](https://casa.nrao.edu/)
The spectral axis of the IRAM data was smoothed from the native spectrometer resolution to reduce the high levels of noise. The final spectral sampling is 781 kHz, or 0.80 km s\({}^{-1}\). Given the widths of the spectral lines, this is still adequate to confidently detect and fit these lines. The GBT 85 GHz data was spatially smoothed by a factor of two in pixel size to improve the noise levels and to closer match the IRAM pixel size. With this, and the higher quality GBT observations, the GBT data was not spectrally smoothed as the noise levels are lower. The GBT 96 GHz and IRAM 290 GHz data were regridded, smoothed, and rebinned in CASA3 to match the 85 GHz beam size and spatial grid. The final restored beam is 10.0\({}^{\prime\prime}\) with a pixel size of 4\({}^{\prime\prime}\). Average rms noise levels are shown in Table 1. In the Results, Sect. 4, the data are in units of T\({}_{A}\)*, adjusted during calibration. In the LTE Model, the data are converted to T\({}_{\rm mb}\) using the telescope B\({}_{\rm eff}\)/F\({}_{\rm eff}\) values provided by the respective observatories: GBT at 85 GHz, 0.4545; GBT at 96 GHz, 0.3838; IRAM EMIR at 290 GHz, 0.547.
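As a minimal sketch of this conversion, and assuming the listed values are the ratios B\({}_{\rm eff}\)/F\({}_{\rm eff}\), the main-beam temperature follows from the antenna temperature as
\[T_{\rm mb}=\frac{F_{\rm eff}}{B_{\rm eff}}\,T_{A}^{*},\]
so that, for example, the IRAM 290 GHz data are scaled by \(1/0.547\approx 1.8\) when converted to T\({}_{\rm mb}\).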
Footnote 4: [http://cassis.irap.omp.eu/](http://cassis.irap.omp.eu/)
Footnote 5: [http://cassis.irap.omp.eu/help/?page=html/m_lte_radex](http://cassis.irap.omp.eu/help/?page=html/m_lte_radex)
## 4 Results
### Detected lines
Species were identified through the CASSIS4 LTE+RADEX5 module. First, the CDMS and JPL catalogues under the "Species" tool in CASSIS allowed us to determine possible detected species across the observed spectrum. These two catalogues are cross-referenced for confirmation. Next, we used the LTE Model in the module to model the spectrum. We input an estimated column density, excitation temperature, FWHM line width, source size and \(v_{LSR}\). The spectrum was computed across the frequency range using telescope parameters specific to IRAM and the GBT as documented within CASSIS. Line identification was then done visually, matching the model to the data. It is important to note that this model is only for line identification and not to accurately determine the physical conditions.
The combined GBT and IRAM data set contain several simple and complex molecules. The complete spectral surveys are to be described in Freeman et al. (in prep). Table 2 lists all possible CH\({}_{3}\)CCH (CDMS tag 40502 for the vibrational ground state), \(c\)-C\({}_{3}\)H\({}_{2}\) (CDMS tag 38508), CH\({}_{3}\)OH (CDMS tag 32504), and H\({}_{2}\)CO (CDMS tag 30501) lines with A\({}_{\rm ij}\)\(>\) 1\(\times\)10\({}^{-6}\) s\({}^{-1}\) and E\({}_{\rm up}\)\(<\) 150 K. For CH\({}_{3}\)OH the A\({}_{\rm ij}\) limit was set to 2\(\times\)10\({}^{-6}\) s\({}^{-1}\) to remove the 40 K maser line. For CH\({}_{3}\)CCH the E\({}_{\rm up}\) limit was increased to 200 K to account for the a-CH\({}_{3}\)CCH line that has E\({}_{\rm up}\)\(<\) 150 K, but has a higher tabulated value when the a- and e-types are combined in CDMS. This range was selected to restrict the catalogue transitions to the ones that could be detected with the sensitivity of our observations. The species types - a- and e-CH\({}_{3}\)CCH, o- and p-c-C\({}_{3}\)H\({}_{2}\), A- and E-CH\({}_{3}\)OH, o- and p-H\({}_{2}\)CO - were not differentiated as we do not detect enough lines of each. The VASTEL database information for each type is similar to that of CDMS, aside from including higher E\({}_{\rm up}\) a-CH\({}_{3}\)CCH lines, and we expect the results to not be affected drastically by using the combination of the types.
Figures 3 and 4 show examples of the spectra at the brightest pixel for the CH\({}_{3}\)CCH 5\({}_{0}\)-4\({}_{0}\) (E\({}_{\rm up}\) = 12 K) line and the CH\({}_{3}\)OH 2\({}_{0}\)-1\({}_{0}\), 2\({}_{-1}\)-1\({}_{-1}\), 2\({}_{0}\)-1\({}_{0}\) triplet (E\({}_{\rm up}\) = 7, 13, 20 K). Figures 1, 2, 4, and 5 display all transition lines of each molecule. The propyne lines with E\({}_{\rm up}\) from 10-30 K are easily detected and well-defined, rarely showing blended or complicated structure. The methanol lines from six to several tens of K are clearly detected. The higher energy lines are often blended - in part due to methanol's complicated line structure - and often show structure that is not represented by a single Gaussian. The IRAM data also suffer from higher noise due to unstable weather conditions and higher observed frequencies. These trends are consistent across both IRAS 20126 and AFGL 2591, with the latter displaying more complicated methanol line profiles.
Figures 5 and 6 show integrated intensity maps for select lines covering a range of upper energy levels. These are notably bright in the lower energy lines, and decrease in strength as the upper state energy of the lines increases. Upwards of a
few tens of K, there is little structure to be seen in the carbon chain molecules. In AFGL 2591 (Fig. 5), there are differences in spatial distribution among the different molecules as well as between the different energy level lines. Especially in CH\({}_{3}\)OH, the warm lines of E\({}_{\rm up}\) = 20-50 K trace the cloud near the known sources VLA 1-5, while the colder 6-20 K lines trace a region, known to be abundant in methanol (referred to as the 'methanol plume' by van der Wiel et al. 2011), to the north and north-east. The similarity between molecules is in the quite extended emission around the hot cores region.
In IRAS 20126 all maps are quite featureless. The COM distributions, across energy levels, have a slight south-east-north-west tilt, and are all similarly concentrated around a single region, coinciding with the hot core. The CCM lines show a similar spatial concentration, however, with a slight south-north extension in the lines detected, as opposed to the south-east-north-west extension seen in CH\({}_{3}\)OH. The integrated intensities of the carbon chain lines drop off in strength considerably from the lowest energy lines of 12 K and 6 K, for CH\({}_{3}\)CCH and \(c\)-C\({}_{3}\)H\({}_{2}\), respectively.
The beam size, 10.0\({}^{\prime\prime}\), as displayed in the integrated intensity maps, corresponds to 3.3\(\times\)10\({}^{4}\) AU (0.16 pc) in AFGL 2591 and 1.6\(\times\)10\({}^{4}\) AU (0.08 pc) in IRAS 20126 which is sufficient to resolve the extended molecular emission of tens of thousands of AU in each source (Cesaroni et al. 1999; van der Wiel et al. 2011).
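These physical scales follow from the small-angle relation
\[s\,[\mathrm{AU}]\simeq\theta\,[\mathrm{arcsec}]\times d\,[\mathrm{pc}],\]
so a 10.0\({}^{\prime\prime}\) beam subtends \(10.0\times 3330\approx 3.3\times 10^{4}\) AU at 3.33 kpc (AFGL 2591) and \(10.0\times 1600=1.6\times 10^{4}\) AU at 1.6 kpc (IRAS 20126).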
### LTE modelling
#### 4.2.1 Model description and implementation
We used a more comprehensive LTE model to determine the physical conditions of AFGL 2591 and IRAS 20126. The model, developed in Python, was based on the CASSIS LTE model scripts in Jython. This formalism is fully documented in Vastel's
| Telescope | Frequency [GHz] | Spectral sampling [km s\({}^{-1}\)] | Spatial sampling [\({}^{\prime\prime}\)] | IRAS 20126 RMS noise [T\({}_{A}\)*, mK] | AFGL 2591 RMS noise [T\({}_{A}\)*, mK] |
|---|---|---|---|---|---|
| GBT | 84.50-85.75 | 0.32 | 4 | 27.20 | 25.91 |
| GBT | 95.55-96.80 | 0.28 | 4 | 25.99 | 23.61 |
| IRAM | 287.00-295.00 | 0.80 | 4 | 37.49 | 75.28 |

Table 1: Details of the data set used in this paper.
Figure 3: Select transition lines of the carbon chain molecules, CH\({}_{3}\)CCH and \(c\)-C\({}_{3}\)H\({}_{2}\), the complex organic molecules CH\({}_{3}\)OH, and the precursor complex organic molecule H\({}_{2}\)CO, at the central pixel of the AFGL 2591 GBT and IRAM maps. The chosen lines represent a range of different upper energy levels, which are indicated in black on the plot. The dashed grey line represents the observed transition frequency at this location. The observed data is shown in black. The model spectra, to be discussed in Sect. 4.2, are shown in colour. The red solid line represents the total model, and the sole component if only one component is fit. For the multi-component fit in CH\({}_{3}\)OH the separate components are shown in blue, yellow, and brown dashed lines.
Formalism for the CASSIS software6. The LTE model simultaneously fit numerous transitions of one or more molecules across a spectrum for one or more physical components - defined by the physical parameters source size, excitation temperature, column density, spectral line width, and source Doppler velocity - of a source.
Footnote 6: [http://cassis.irap.omp.eu/docs/RadiativeTransfer.pdf](http://cassis.irap.omp.eu/docs/RadiativeTransfer.pdf)
The model identified the molecular transitions producing spectral lines in a given frequency range. Then, it generated spectral line profiles for any desired molecule iterating over the physical parameters of the gas until it produced a model spectrum that best matches the data. The resulting best fit, therefore, provided these five physical parameters for the gas emitting the observed spectra. As we produced maps with our observations, the code was modified to loop over multiple pixels modelling all the observed spectra, ultimately producing spatial maps of the physical conditions.
The best fit was determined via the Levenberg-Marquardt algorithm in LMFIT (Newville et al., 2014), a curve-fitting method for non-linear least-squares problems. The Levenberg-Marquardt algorithm is sufficiently quick for finding the minimum value, and we trust that with appropriately restricted parameter ranges (initial estimations are described below and adjusted in successive iterations of the model) we found the global minimum. The goodness-of-fit was reported with the reduced \(\chi^{2}\), which describes the least-squares minimisation and takes into account the number of data points and variables included in the model. For an LTE model, there may be degeneracy between parameters such as column density and temperature. In some cases, we have few spectral lines compared to the number of free parameters and cannot exclude the possibility of degeneracy. However, we present a best effort to produce reasonable results given this limitation. In other cases, we have several strong lines or have fixed certain parameters in order to reduce the discrepancy between data points and free parameters. This is discussed in the LTE model results below.
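A minimal sketch of this per-pixel fitting loop is shown below, assuming LMFIT's Levenberg-Marquardt minimiser. The function `lte_spectrum` is only a toy single-Gaussian stand-in for the full CASSIS LTE formalism (which handles multiple transitions, opacity, and the partition function), and the parameter names, bounds, and map bookkeeping are illustrative rather than the exact values used in this work.

```python
import numpy as np
from lmfit import Parameters, minimize

def lte_spectrum(velocity, ntot, tex, fwhm, vlsr):
    """Toy stand-in for the CASSIS LTE formalism: a single Gaussian line whose
    area scales with column density in the optically thin limit."""
    area = (ntot / 1e14) * (30.0 / tex)                 # illustrative scaling, K km/s
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area / (sig * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((velocity - vlsr) / sig) ** 2)

def residual(params, velocity, observed, rms):
    """Noise-weighted residuals between the observed and model spectra."""
    p = params.valuesdict()
    model = lte_spectrum(velocity, p["ntot"], p["tex"], p["fwhm"], p["vlsr"])
    return (observed - model) / rms

def fit_pixel(velocity, observed, rms):
    params = Parameters()
    params.add("ntot", value=1e14, min=1e12, max=1e17)   # column density (cm^-2)
    params.add("tex", value=30.0, min=5.0, max=150.0)    # excitation temperature (K)
    params.add("fwhm", value=3.0, min=0.5, max=10.0)     # line width (km/s)
    params.add("vlsr", value=-5.7, min=-12.0, max=0.0)   # centroid velocity (km/s)
    out = minimize(residual, params, args=(velocity, observed, rms), method="leastsq")
    return out                                           # out.params, out.redchi

# Looping over the masked pixels builds the parameter maps, e.g.:
# tex_map = np.full(mask.shape, np.nan)
# for iy, ix in zip(*np.where(mask)):
#     result = fit_pixel(velocity, cube[:, iy, ix], rms_map[iy, ix])
#     tex_map[iy, ix] = result.params["tex"].value
```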
In the analysis, we excluded the possible CH\({}_{3}\)OH maser line at 84.5 GHz (\(5_{1,52}-4_{0,4,1}\)), seen in Kalenskii et al. 2002, for example. While determining maser sources is outside the scope of this paper, our initial LTE models for identifying lines greatly under-represented the 84.5 GHz line strengths in several pixels with strong CH\({}_{3}\)OH emission. Thus, we removed it in case it was a maser. Within our frequency ranges, no other methanol masers are reported in these sources.
We restricted the number of pixels the model loops through with a signal-to-noise ratio (S/N) mask for each molecule of interest. We fit a Gaussian to the spectrum using Astropy (Astropy Collaboration et al., 2022), calculated the peak of the observed spectrum using the Gaussian1D fitter and calculated the noise from a nearby, line-free frequency range with the SigmaClip function. We fit specific, strong lines of each molecule to create this mask. Any pixel that had a S/N \(>\) 2.5 for the CH\({}_{3}\)CCH 5\({}_{0}\)-4\({}_{0}\) (E\({}_{\rm up}\) = 12 K) and 5\({}_{1}\)-4\({}_{1}\) (E\({}_{\rm up}\) = 19 K) lines at 85 GHz is included in the CH\({}_{3}\)CCH and \(c\)-C\({}_{3}\)H\({}_{2}\) LTE models, and any pixel that met the same threshold for the CH\({}_{3}\)OH 2\({}_{0,20}\)-1\({}_{0,10}\) (E\({}_{\rm up}\) = 7 K) and 2\({}_{1,2,2}\)-1\({}_{1,12}\) (E\({}_{\rm up}\) = 12 K) lines at 96 GHz or the 6\({}_{0,6,6}\)-5\({}_{0,5}\) (E\({}_{\rm up}\) = 48 K) and 6\({}_{1,6,2}\)-5\({}_{1,5,2}\) (E\({}_{\rm up}\) = 54 K) lines at 290 GHz (to capture lines in both GBT and IRAM data) is used for the CH\({}_{3}\)OH and H\({}_{2}\)CO models. The S/N threshold of 2.5 is lower than the standard 3 and was used to ensure we capture the cloud edges where, from a visual inspection of the spectral lines, we believed the emission is still real. As well, the corresponding \(c\)-C\({}_{3}\)H\({}_{2}\) and H\({}_{2}\)CO masks produced more extended
Figure 4: Same as Fig. 3 for the central pixel of the IRAS 20126 GBT and IRAM maps. For the multi-component fits in \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)OH the separate components are shown in blue and yellow dotted lines.
maps. To allow comparison between molecules we continued with the more restrictive CH\({}_{3}\)CCH and CH\({}_{3}\)OH maps with a loosened S/N threshold.
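A sketch of how such a mask can be built with Astropy is given below; the channel windows, threshold, and cube axis ordering (channel, y, x) are assumptions for illustration, not the exact values used here.

```python
import numpy as np
from astropy.modeling import models, fitting
from astropy.stats import SigmaClip

def snr_mask(cube, velocity, line_window, noise_window, threshold=2.5):
    """Return a boolean map that is True where a Gaussian fit to the target line
    peaks above `threshold` times the sigma-clipped noise of a line-free window."""
    fitter = fitting.LevMarLSQFitter()
    clip = SigmaClip(sigma=3.0)
    in_line = (velocity > line_window[0]) & (velocity < line_window[1])
    in_noise = (velocity > noise_window[0]) & (velocity < noise_window[1])
    ny, nx = cube.shape[1:]
    mask = np.zeros((ny, nx), dtype=bool)
    for iy in range(ny):
        for ix in range(nx):
            spec = cube[:, iy, ix]
            guess = models.Gaussian1D(amplitude=spec[in_line].max(),
                                      mean=velocity[in_line][spec[in_line].argmax()],
                                      stddev=1.0)
            fit = fitter(guess, velocity[in_line], spec[in_line])
            noise = clip(spec[in_noise]).std()
            mask[iy, ix] = fit.amplitude.value / noise > threshold
    return mask

# Pixels kept for the CH3CCH / c-C3H2 models would then be, e.g.,
# mask_85 = snr_mask(cube_85, v_axis, (-10, 0), (20, 40), threshold=2.5)
```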
To begin the analysis, we extracted the spectrum from the brightest pixel of certain molecular transition lines - CH\({}_{3}\)CCH 5\({}_{0}\)-4\({}_{0}\) (E\({}_{\rm up}\) = 12 K); _c_-C\({}_{3}\)H\({}_{2}\) 2\({}_{1,2}\)-1\({}_{0,1}\) (E\({}_{\rm up}\) = 6 K); the CH\({}_{3}\)OH 2\({}_{0,2,0}\)-1\({}_{0,1,0}\), 2\({}_{1,2,2}\)-1\({}_{1,1,2}\), 2\({}_{0,2,1}\)-1\({}_{0,1}\) triplet (E\({}_{\rm up}\) = 7, 12, 20 K) and CH\({}_{3}\)OH 6\({}_{0,6}\)-5\({}_{0,5}\) (E\({}_{\rm up}\) = 48 K); H\({}_{2}\)CO 4\({}_{0,4}\)-3\({}_{0,3}\) (E\({}_{\rm up}\) = 35 K) - and fit one or more Gaussians to these lines with the CASSIS Line Analysis module. This module, with a chosen molecule, frequency range, and transition thresholds (such as \(E_{\rm up,max}\) and \(A_{\rm ij,min}\)), selected and displayed all possible transition lines in the data. By fitting Gaussians by hand, we determined the number of possible components to use in the LTE model and used the line width and velocity as guides for the input parameters. The temperature and column density, on the other hand, are physical properties of the source and input ranges are estimated based on previous knowledge of these sources as well as our integrated intensity maps, Figs. 5 and 6. The size of the source was set at 20\({}^{\prime\prime}\), assumed to be two times the size of the beam. This was to account for the extended emission seen by single dish telescopes, and it was appropriate as we are not resolving the compact protostellar sources but the larger scale envelope. Assuming a source size larger than the beam also removed any beam dilution effects when modelling the molecular spectral lines. With a smaller (10\({}^{\prime\prime}\)) source size, the resulting column density changed but remained within a factor of 3. With a larger (30\({}^{\prime\prime}\)) source size, the column density changed by less than a factor of 2. Otherwise, the results did not change and we maintained this constant source size.
#### 4.2.2 AFGL 2591 LTE model results
Initially, each molecule was modelled individually with one component, unless the CASSIS Gaussian fitting described suggested there were multiple components - either through different velocity components, line wings, or otherwise non-Gaussian shape. If multiple components were needed, certain values were fixed to balance the number of transition lines available and the number of free parameters in the model. The input parameters for all models are shown in Table 1.
The CH\({}_{3}\)OH spectral line profiles contain more velocity structure than those of the other molecules. CH\({}_{3}\)OH could not be fit successfully with a one-component model, as determined by non-converging models, high reduced \(\chi^{2}\) values, or unphysical values for the excitation temperature or column density. The low energy GBT lines were often over-represented compared to the higher energy IRAM lines, and produced very low excitation temperatures unexpected for LTE.
Given the possible non-LTE nature as indicated by the low excitation temperatures found, we tried to model CH\({}_{3}\)OH in RADEX (van der Tak et al., 2007) with CASSIS. Unlike the LTE model, RADEX is too intensive to run over all pixels in the map. Thus, select points near the source VLA 3 and along the eastward extension of the molecules are modelled, along with one pixel off-source to the north, as seen in Fig. 7. Table 3 shows the results, where the model delineates the source into two components - one which traces the higher column density (10\({}^{14}\) cm\({}^{-2}\)), warmer (20 K), gas near the source v\({}_{lsr}\) of -5.7 km s\({}^{-1}\) represented by the higher energy IRAM lines, and a second which traces the colder (10 K), lower column density (10\({}^{13}\) cm\({}^{-2}\)) more negative velocity (-7 to -9 km s\({}^{-1}\)) gas. Figure 8 shows the resulting model spectra for pixel (38,31).
The kinetic temperature from the RADEX model is then used to guide the input excitation temperature, a key parameter for this molecule, for the LTE model. Since the kinetic and excitation temperatures matched, the assumption of LTE was valid and we used the LTE model to produce parameter maps for CH\({}_{3}\)OH. In subsequent LTE modelling, the latter cold component could be differentiated into two components - the CH\({}_{3}\)OH 'blob' towards the north-east in component two and the blue-shifted component three, both described below.
H\({}_{2}\)CO, lastly, is well represented by a one-component model where the parameters, aside from size, are free. Taking direction from the CCMs, we tried to fit H\({}_{2}\)CO and CH\({}_{3}\)OH in the same component of the model to represent these molecules originating from the same gas. However, none of the multiple colder components of CH\({}_{3}\)OH aligned with the hotter H\({}_{2}\)CO emission.
The parameter maps, for velocity, excitation temperature, column density, and line width (FWHM) of the four molecules are displayed in Fig. 9. For all parameter maps, the first component is shown for each molecule. This component may also be seen as the 'main' component, as it is most closely associated with the protostar and the envelope. The second and third components for CH\({}_{3}\)OH are displayed in Fig. 10.
All four molecules show a fairly constant velocity structure near the source v\({}_{lsr}\) of -5.7 (\(\pm\)0.03-0.5) km s\({}^{-1}\). Component three of CH\({}_{3}\)OH, however, is notably blue-shifted and fixed at -9.4 km s\({}^{-1}\). Component two of CH\({}_{3}\)OH, representing the offset methanol
Figure 5: AFGL 2591 integrated intensity \(\int T_{A}\mathrm{d}v\) (K km s\({}^{-1}\)) maps of rotational transition lines spanning a range of energy levels. The upper state energy level is indicated in the top right-hand corner and the beam size in the bottom left-hand corner. The source VLA 3 is noted by a white or black ‘x’. CH\({}_{3}\)CCH, in the first column, \(c\)-C\({}_{3}\)H\({}_{2}\), in the second, H\({}_{2}\)CO, in the third, have contour levels of 10, 20, 30, and 40 times the noise level of the respective frequency range. CH\({}_{3}\)OH, in the fourth column, has contour levels of 10, 20, 40, 80, and 120 times the noise level of the respective frequency range.
'blob,' has no contribution towards the centre leaving a hole in the map. Towards the edges, where the detected lines become weaker, this value is slightly blue-shifted.
The carbon chain molecules show a fairly uniform temperature of 30 (\(\pm\)2-4) K across the main emitting areas of the source. The excitation temperature drops to near 10 K where all lines are weaker and the \(c\)-C\({}_{3}\)H\({}_{2}\) lines dominate. The temperature of H\({}_{2}\)CO reaches towards 70 (\(\pm\)5) K slightly offset from the VLA 1-3 sources and shows a warm temperature across extended spatial scales. For CH\({}_{3}\)OH, not displayed, the excitation temperatures were fixed at 18 K for component one and 8 K for components two and three.
For all components, the column density maps trace features of the integrated intensity maps in Fig. 5. CH\({}_{3}\)CCH, at a peak of 4 (\(\pm\)0.4) \(\times 10^{14}\) cm\({}^{-2}\), extends along and towards the east of AFGL 2591, with two distinct peaks. \(c\)-C\({}_{3}\)H\({}_{2}\) has the lowest column density, 2 (\(\pm\)0.3) \(\times 10^{13}\) cm\({}^{-2}\), and shows an extended distribution with a slight peak near the VLA sources. H\({}_{2}\)CO peaks at 9 (\(\pm\)0.03) \(\times 10^{13}\) cm\({}^{-2}\), and is distinctly distributed near the VLA sources. CH\({}_{3}\)OH peaks at 5 (\(\pm\)0.2), 5 (\(\pm\)0.2), and 2 (\(\pm\)0.1) \(\times 10^{14}\) cm\({}^{-2}\) for components one, two, and three, respectively. Component one has the highest column density of all molecules and peaks slightly offset from the protostars, similar to the other molecules. Component two shows the extended emission to the north-east found in the lower energy transition lines, while component three shows a less dense, blue-shifted component just north of the source.
The line widths of the carbon chain molecules are around 3 km s\({}^{-1}\) wide across the emitting region, and H\({}_{2}\)CO is uniform at a slightly broader width near 4 km s\({}^{-1}\). Components two and three of CH\({}_{3}\)OH were fixed at 3.9 km s\({}^{-1}\) and 2.5 km s\({}^{-1}\) respectively, while component one results in line widths of 2-3 km s\({}^{-1}\).
While all our detected lines represent extended envelope emission, the differing temperatures in LTE suggest the molecules are in different gas. As well, the multiple components reveal different structures in the region, which is examined in Sect. 5.1.
#### 4.2.3 IRAS 20126 LTE model results
As in AFGL 2591, each molecule is initially modelled individually. For IRAS 20126, this produced the best results and the presented analysis keeps all molecular models separate. Contrary to AFGL 2591, the \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)CCH combination did not represent the lines well. The \(c\)-C\({}_{3}\)H\({}_{2}\) lines in IRAS 20126 are stronger, and drove the excitation temperatures too low for CH\({}_{3}\)CCH. CH\({}_{3}\)OH and H\({}_{2}\)CO were also kept separate, as the slight disparities in temperature worsened the fits if combined. Given the limitations of the number of components we could
Figure 6: Same as Fig. 5 for IRAS 20126, the source noted by a white or black ’x’. CH\({}_{3}\)CCH, in the first column, has contour levels of 10, 30, 50, and 70 times the noise of the respective frequency range. \(c\)-C\({}_{3}\)H\({}_{2}\), in the second column, H\({}_{2}\)CO, in the third, and CH\({}_{3}\)OH, in the fourth, have contour levels of 10, 40, 70, and 100 times the noise of the respective frequency range.
discern with this data and this modelling, there may be shared components which we do not see.
Each molecule is modelled with one component unless the CASSIS Gaussian fit suggested there were multiple components. Figure 4 displays the model results for select transition lines covering a range of upper energy levels. For CH\({}_{3}\)CCH and H\({}_{2}\)CO, one component was adequate. For \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)OH, a second narrow, cold component was added to better represent the detected lines. This component, in both molecules, is a foreground screen across the source in the lower energy transition lines - as seen in the \(c\)-C\({}_{3}\)H\({}_{2}\) 2\({}_{1,2}\)-1\({}_{0,1}\) (E\({}_{\rm up}\) = 6 K) and CH\({}_{3}\)OH 2\({}_{0,2,0}\)-1\({}_{0,1,0}\) (E\({}_{\rm up}\) = 7 K) lines in Fig. 4. In these cases of multiple components, certain values were fixed to balance the number of
| Pixel | T\({}_{\rm kin}\) (K) | V\({}_{\rm lsr}\) (km s\({}^{-1}\)) | N(CH\({}_{3}\)OH) (cm\({}^{-2}\)) | n(H\({}_{2}\)) (cm\({}^{-3}\)) | FWHM (km s\({}^{-1}\)) |
|---|---|---|---|---|---|
| 1 (32,34) | 12.1 | -6.15 | 4.1\(\times\)10\({}^{14}\) | 6.4\(\times\)10\({}^{7}\) | 2.8 |
|  | 11.0 | -7.30 | 12.9\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 2 (35,34) | 17.5 | -6.35 | 4.1\(\times\)10\({}^{14}\) | 1.5\(\times\)10\({}^{7}\) | 3.1 |
|  | 10.6 | -9.50 | 6.7\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 3 (38,34) | 19.5 | -5.95 | 3.5\(\times\)10\({}^{14}\) | 3.9\(\times\)10\({}^{8}\) | 2.8 |
|  | 11.2 | -9.50 | 10.0\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 4 (41,34) | 16.5 | -6.00 | 2.3\(\times\)10\({}^{14}\) | 1.0\(\times\)10\({}^{7}\) | 1.9 |
|  | 10.9 | -9.30 | 3.2\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 5 (32,31) | 14.5 | -5.30 | 2.4\(\times\)10\({}^{14}\) | 3.8\(\times\)10\({}^{6}\) | 3.1 |
|  | 10.2 | -7.10 | 3.1\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 6 (35,31) | 16.5 | -5.70 | 3.6\(\times\)10\({}^{14}\) | 7.8\(\times\)10\({}^{6}\) | 2.6 |
|  | 10.8 | -8.90 | 14.9\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 7 (38,31) | 21.4 | -5.75 | 2.6\(\times\)10\({}^{14}\) | 3.8\(\times\)10\({}^{7}\) | 2.6 |
|  | 10.3 | -9.60 | 2.6\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |
| 8 (41,31) | 19.1 | -6.15 | 2.06\(\times\)10\({}^{14}\) | 1.1\(\times\)10\({}^{7}\) | 2.4 |
|  | 10.3 | -9.00 | 3.4\(\times\)10\({}^{13}\) | 1.5\(\times\)10\({}^{5}\) | 3.2 |
| 9 (39,42) | 15.3 | -4.85 | 6.6\(\times\)10\({}^{13}\) | 1.9\(\times\)10\({}^{6}\) | 3.2 |
|  | 11.6 | -8.8 | 19.2\(\times\)10\({}^{13}\) | 1.0\(\times\)10\({}^{5}\) | 3.2 |

Table 3: RADEX results for CH\({}_{3}\)OH in AFGL 2591. Each pixel has two rows, one for each velocity component.
Figure 8: RADEX fit for CH\({}_{3}\)OH in the central pixel of AFGL 2591, corresponding to pixel 7 in Fig. 7 and Table 3. The solid red line is the overall two-component model, the solid black line is the data, the dashed blue line is component one and the dashed yellow line is component two. Component two is only seen in the lower energy lines within the GBT range, thus component one and the overall model are the same for the lines in the IRAM range. The dashed grey line represents the observed transition frequency, with the energy level of this transition labelled.
Figure 7: Pixels fit with a RADEX model in AFGL 2591 marked on a Herschel PACS continuum image. The black and white numbers correspond to said pixels, with results listed in Table 3. The black ‘x’ represents the source VLA 3.
transition lines available and the number of free parameters in the model. In giving the errors of the varying components, we present the statistical errors from the model in the range seen across the map, excluding edges.
The parameter maps, for velocity, excitation temperature, column density, and line width of the four molecules are displayed in Fig. 10. For all parameter maps, the main or first component is shown for each molecule. The second components of \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)OH are displayed in Fig. 11.
There is a consistent velocity range for all species between \(-\)2.0 to \(-\)4.5 (\(\pm\)0.02-0.1) km s\({}^{-1}\). This is a range around the source V\({}_{lsr}\) of \(-\)3.5 km s\({}^{-1}\). For the CCMs this is oriented with a red-shifted northern and blue-shifted southern feature. For the COMs, this is oriented with a red-shifted north-west and blue-shifted south-east feature. In \(c\)-C\({}_{3}\)H\({}_{2}\), the second component was fixed at the same velocity as the first component, and in CH\({}_{3}\)OH the second component was fixed at \(-\)0.2 km s\({}^{-1}\) from the first.
Similar to the velocity, the excitation temperature structure is differentiated between categories of molecules. The carbon chains have a roughly consistent temperature across the source, with CH\({}_{3}\)CCH at 25-30 (\(\pm\)0.9-4) K and \(c\)-C\({}_{3}\)H\({}_{2}\) at 8-12 (\(\pm\)0.8-3) K. The COMs show a warmer central region falling off in temperature towards the edges, with a range of 15-35 (\(\pm\)0.3-2) K for CH\({}_{3}\)OH located just off the protostar, and 55-85 (\(\pm\)3-6) K for H\({}_{2}\)CO located on the protostar. The second component of \(c\)-C\({}_{3}\)H\({}_{2}\) varies from 4-8 (\(\pm\)0.4-6) K with few pixels at 10-12 K, while the second component of CH\({}_{3}\)OH was fixed at 10 K.
Given the low excitation temperature of \(c\)-C\({}_{3}\)H\({}_{2}\), it is possible that it is not in LTE and these results may not be a true representation of the physical gas temperature. To explore this, we used RADEX to model a pixel that was bright in \(c\)-C\({}_{3}\)H\({}_{2}\), as shown in Fig. 11.
Figure 9: Parameter maps of AFGL 2591 showing the results of the LTE models. The black ‘x’ represents the source VLA 3. For CH\({}_{3}\)OH, T\({}_{ex}\) was fixed at 18 K.
We reproduced the spectrum using a temperature of 30 K, as expected from the CH\({}_{3}\)CCH results, an ortho-to-para ratio of 3, and a density of 10\({}^{5}\) cm\({}^{-3}\).
All column density maps peak near the site of the protostar, with the COMs showing a steeper falloff towards the edges. CH\({}_{3}\)CCH, CH\({}_{3}\)OH, and H\({}_{2}\)CO peak near 6, 9, and 3 (\(\pm\) 0.3, 0.2, and 0.1) \(\times 10^{14}\) cm\({}^{-2}\) respectively, while \(c\)-C\({}_{3}\)H\({}_{2}\) peaks near 5 (\(\pm\)0.3) \(\times 10^{13}\) cm\({}^{-2}\). The second component of \(c\)-C\({}_{3}\)H\({}_{2}\) was fixed at 1\(\times 10^{13}\) cm\({}^{-2}\) and for CH\({}_{3}\)OH it was fixed at 2.1\(\times 10^{13}\) cm\({}^{-2}\).
The CCM lines are consistently narrower in line width, ranging from about 1-2.5 km s\({}^{-1}\), while the COM lines are 4-7 km s\({}^{-1}\) wide. The former show a slightly wider line profile near the protostar, while the latter show a widening towards the south-east of the source. The second component of \(c\)-C\({}_{3}\)H\({}_{2}\) was fixed at a line width of 0.5 km s\({}^{-1}\) while for CH\({}_{3}\)OH it was fixed at 1.2 km s\({}^{-1}\).
The visual and quantitative differences in the results, namely velocity and temperature, between the types of molecules suggests they are coming from different physical environments. We elaborate on the known and observed structure in Sect. 5.2.
## 5 Discussion
Detections of carbon chains in high-mass star-forming regions have increased over recent years, with candidate, but not confirmed, WCCC sources (Saul et al. 2015; Taniguchi et al. 2018a). Mookerjea et al. (2012) present the first trace of WCCC in a high-mass star-forming region with detections of the simple chain C\({}_{3}\) in DR21(OH), another source with hot cores in Cygnus X. The C\({}_{3}\) abundance is consistent with the warm-up model of WCCC in Sakai et al. (2008). The emission and the resulting temperatures of 30-50 K - a 'lukewarm corino' - are consistent with the warm envelope around the hot core. The extended spatial distribution and the low, but warm, temperatures are also seen in our sources AFGL 2591 and IRAS 20126.
Figure 10: Parameter maps of IRAS 20126 showing the results of the LTE models. The black ‘x’ represents the protostellar source.
Comparing carbon chain molecules to CH\({}_{3}\)OH is a recently available and practised method of analysing star-forming regions through molecular observations. Table 4 summarises several studies that used either CH\({}_{3}\)CCH or \(c\)-C\({}_{3}\)H\({}_{2}\), as in this study. At first glance, there is a spread of N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH), possibly due to the variety of studies and sources that examine this ratio. All CH\({}_{3}\)CCH studies presented used single dish observations and resolved objects on scales of 1-7\(\times\)10\({}^{4}\) AU, comparable to this study, and consistent with this species being found in the cold and warm extended envelopes. In Fayolle et al. (2015), CH\({}_{3}\)CCH is only detected with the IRAM 30m and not with higher resolution Submillimeter Array observations. There is also variation within the protostellar age - Fayolle et al. (2015) note their sources, showing N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) \(>\) 1, are potentially younger than typical hot core sources, while Taniguchi et al. (2018b) find N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) \(>\) 1 for their source with a UCH II region. For \(c\)-C\({}_{3}\)H\({}_{2}\), Higuchi et al. (2018) look at scales of a few thousand AU, a smaller region, yet they find variation within the survey of protostars and comparable values to our sources.
In this work, at the continuum peak of IRAS 20126, N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) = 0.99 while N(\(c\)-C\({}_{3}\)H\({}_{2}\))/N(CH\({}_{3}\)OH) = 0.06. For AFGL 2591, N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) = 0.97 while N(\(c\)-C\({}_{3}\)H\({}_{2}\))/N(CH\({}_{3}\)OH) = 0.05. The column density ratios are shown across the source maps in Fig. 12. The ratio maps in IRAS 20126 are fairly uniform and structured compared to AFGL 2591, which could arise from the smaller spatial extension seen in IRAS 20126, at half the distance of AFGL 2591. However, there is a wide range of ratios within a single source and not just from source to source, making cloud to cloud comparisons somewhat unreliable.
Environmental differences cause the variation between CCMs and CH\({}_{3}\)OH. In our data, the column densities of CH\({}_{3}\)CCH and CH\({}_{3}\)OH are of comparable magnitude, and CH\({}_{3}\)CCH is typically seen on a uniform, extended scale in comparison to concentrated CH\({}_{3}\)OH closer to protostellar sources. Santos et al. (2022) most recently compare N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) in the massive hot molecular core G331.512-0.103 with single-point APEX observations and find a ratio of 0.42. The authors compare abundance ratios to that of other high-mass star-forming regions, with values across a range of 0.31-2.2 (with error, Taniguchi et al. 2018b; Giannetti et al. 2017; Fayolle et al. 2015). CH\({}_{3}\)OH is likely to be more spatially compact than CH\({}_{3}\)CCH, however. The latter is typically an envelope species, as in Fayolle et al. (2015) where the ratios are \(>\)1. This study focuses on organic poor massive young stellar objects, rather than CH\({}_{3}\)OH rich hot cores, though the presence and relative abundance of CH\({}_{3}\)CCH and other complex species makes them comparable to, or possible precursors to, high mass hot cores.
\(c\)-C\({}_{3}\)H\({}_{2}\) is often an order of magnitude lower in column density than CH\({}_{3}\)OH, with no correlation between the two molecules. This is not surprising given their different formation routes, but it has been discussed in previous observations. Spezzano et al. (2016) find different environmental conditions for \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)OH in the prestellar core L1544, with interstellar radiation keeping C in its atomic form and allowing carbon chains to form near the edge of the source - a new view of carbon chain chemistry. In the shielded region of the cloud, CO forms, leading to the production of complex organic molecules. Thus, there is a stark difference in spatial distribution between the two molecules (their Fig. 1). L1544, however, is a much different environment than our two protostellar regions. In general, as a collapsing core evolves, the gas and dust environment will change. The original view of WCCC, which will be discussed for our sources in Sect. 5.3, is dependent upon the timescale, where carbon chains are reproduced in the warm protostellar environment. More specifically, AFGL 2591 and IRAS 20126 are relatively isolated sources and have no external irradiation sources to affect chemical processes. For a more direct comparison, Higuchi et al. (2018) find no correlation between \(c\)-C\({}_{3}\)H\({}_{2}\) and CH\({}_{3}\)OH for 36 protostars in the Perseus region. The column density ratio they find is on the order of 10\({}^{-2}\)-10\({}^{-1}\), similar to what we find in both IRAS 20126 and AFGL 2591.
### AFGL 2591
The environment of AFGL 2591 has well-known features: sources VLA 1-5, a warm inner envelope, and an extended methanol plume (Hasegawa & Mitchell 1995; van der Wiel et al. 2011; Jimenez-Serra et al. 2012; Gieser et al. 2019). An extended distribution around this region is also seen in smaller molecules such as CO, HCN, HCO+, H\({}_{2}\)CO (van der Tak et al. 1999). Traces of these features can be seen in our LTE model (Fig. 9). The sources VLA 1-5 are not spatially resolved in our single dish observations, and the structures present in both our integrated intensity maps (Fig. 5) and column density maps (Fig. 9) - aside from the CH\({}_{3}\)OH 2\({}_{0,2,0}\)-1\({}_{0,1,0}\) (E\({}_{\rm up}\) = 7 K) line and component two CH\({}_{3}\)OH maps, which trace the methanol 'plume' - show the smoothed-out sources as well as the inner envelope. The dense inner envelope is shown to be a dominant component for transition lines with E\({}_{\rm up}\)\(\leq\) 200 K (van der Wiel et al. 2013).
VLA 3, an early B-type star and the central heating source (Trinidad et al. 2003), is also connected to a disk (van der Tak et al. 2006; Wang et al. 2012; Gieser et al. 2019) as well as large scale (\(>\) 1') outflow blue-shifted towards the south-west and red-shifted towards the north-east, traced in simple molecules (Hasegawa & Mitchell 1995; Jimenez-Serra et al. 2012; van der Wiel et al. 2013; Gieser et al. 2019). van der Wiel et al.
Figure 11: RADEX fit for \(c\)-C\({}_{3}\)H\({}_{2}\) in the central pixel of IRAS 20126. The model used a temperature of 30 K, as expected from the CH\({}_{3}\)CCH results, an ortho- to para- ratio of 3, and a density of 10\({}^{5}\) cm\({}^{-3}\). The model is in red while the data is in black. The upper state energy level of each transition is shown on each plot.
(2011) detect the methanol plume extending past this inner envelope towards the north-east. The CH\({}_{3}\)OH 2\({}_{0,2,0}\)-1\({}_{0,1,0}\) (E\({}_{\rm up}\) = 7 K) (Fig. 5) line, which is blended with the 2\({}_{1,2,2}\)-1\({}_{1,1,2}\) and 2\({}_{0,2,1}\)-1\({}_{0,1,1}\) (E\({}_{\rm up}\) = 13, 20 K) lines, clearly traces this plume in multiple energy levels and at greater intensity than around VLA 1-5. The 6\({}_{0,6,0}\)-5\({}_{0,5,0}\) (E\({}_{\rm up}\) = 48 K) line shows greatest intensity around VLA 1-5 but still extends into the NE, further supporting the findings from van der Wiel et al. (2011).
The colder temperatures, \(<\)20 K, for gas-phase CH\({}_{3}\)OH indicate there are likely non-thermal desorption methods liberating this molecule after formation on dust grain surfaces. Cold methanol, at different scales and differing spatial features, is seen in numerous sources with differing mechanisms proposed. Bouvier et al. (2020), with single dish observations of Orion Molecular Cloud 2/3, conclude the detected CH\({}_{3}\)OH is originating from the photo-dissociation regions rather than the hot corino.
\begin{table}
\begin{tabular}{l l c c} \hline \hline
Source & Source Type & N(CH\({}_{3}\)CCH)/N(CH\({}_{3}\)OH) & N(\(c\)-C\({}_{3}\)H\({}_{2}\))/N(CH\({}_{3}\)OH) \\ \hline
G331.51-0.103\({}^{a}\) & hot molecular core & 0.42\(\pm\)0.05 & - \\
G12.89+0.49\({}^{b}\) & massive young stellar object & 0.34\({}^{+0.28}_{-0.18}\) & - \\
G16.86-2.16\({}^{b}\) & massive young stellar object & 0.36\({}^{+0.28}_{-0.18}\) & - \\
G28.28-0.36\({}^{b}\) & massive young stellar object & 1.61\({}^{+0.86}_{-0.18}\) & - \\
Inner galaxy \({}^{c}\) & ATLASGAL high mass clumps, averaged & 0.31 & - \\
NGC 7538 IRS 9\({}^{d}\) & organic-poor massive young stellar object & 1.3\(\pm\)0.4 & - \\
W3 IRS5\({}^{d}\) & organic-poor massive young stellar object & 2.2\(\pm\)0.7 & - \\
AFGL 490\({}^{d}\) & organic-poor massive young stellar object & 1.8\(\pm\)0.8 & - \\
Perseus molecular cloud \({}^{e}\) & Class 0/1 protostars & - & 0.009\(\pm\)0.003-0.80\(\pm\)0.34 \\
L1527\({}^{e}\) & low-mass star-forming region & - & 0.60\(\pm\)0.07 \\
IRAS 20126\({}^{f}\) & high-mass star-forming region & 0.99 & 0.06 \\
AFGL 2591\({}^{f}\) & high-mass star-forming region & 0.97 & 0.05 \\ \hline
\end{tabular}
\end{table}
Table 4: Column density ratios from the literature of carbon chain molecules with respect to methanol.
Figure 12: Column density ratio maps for CH\({}_{3}\)CCH, \(c\)-C\({}_{3}\)H\({}_{2}\), and H\({}_{2}\)CO with respect to CH\({}_{3}\)OH. The upper panel shows AFGL 2591 and the lower panel IRAS 20126. The white ‘x’ represents the protostellar sources within each region.
Vastel et al. (2014), with single dish observations of the prestellar core L1544, conclude that non-thermal desorption methods, specifically photo-desorption, is responsible for the cold CH\({}_{3}\)OH detected. Soma et al. (2015), similarly in Taurus Molecular Cloud 1, offer several possibilities for non-thermal CH\({}_{3}\)OH desorption: weak shocks, photoevaporation, and chemical desorption. Favre et al. (2020) find a methanol 'blob' at smaller scales (1000 AU) using interferometric observations of L1521F - a very low luminosity object transitioning from prestellar to protostellar stages - with the likeliest mechanism for desorption being a slow, gentle shock. In AFGL 2591, van der Wiel et al. (2011) propose this large-scale 'plume' could be CH\({}_{3}\)OH liberated by a shock front. The outflow (Fig. 14 of van der Wiel et al. 2011) coincides spatially with the production of this 'plume.' In addition, it is seen in a range of E\({}_{\rm up}\) transition lines and thus is not discriminatory by temperature. Our component two aligns spatially with this feature. Our component three, also at T\({}_{ex}\) = 8 K, represents a blue-shifted structure to the northeast of VLA 3. Gieser et al. (2019), at small scales, suggest there is a young secondary outflow traced in SiO, and reason it is possible if there are numerous young stellar objects within VLA 3. Suri et al. (2021), subsequently, present NOEMA observations showing fragmentation of VLA 3 into three low-mass cores. In our observations, however, we do not have enough information to determine if CH\({}_{3}\)OH component three is due to an associated outflow.
CH\({}_{3}\)CCH has a range of E\({}_{\rm up}\) transition lines giving confidence in the LTE excitation temperature results. Similar to other star-forming regions, and as seen in WCCC sources, the carbon chains are on an extended scale with gas temperatures of around 30 K. In comparison, the COMs are not tracing the hot core at extended scales as such, and represent other mechanisms leading to their presence in the gas phase. Chemical modelling is needed to determine the formation routes, and is presented in Sect. 5.3.
The comprehensive large scale structure of AFGL 2591, as suggested by our results and the literature, is shown graphically in Fig. 13. The brown circles represent the sources VLA 1-5, with the dominant and most massive hot core VLA 3 highlighted. The warm, dense inner envelope is in light grey, while the methanol plume is in dark grey. The general directions of the outflows are represented in red and blue, marking the shift in velocity as well.
### IRAS 20126
IRAS 20126 is a relatively simple high-mass star-forming region. The peak of molecular emission, in both our integrated intensity (Fig. 6) and column density maps (Fig. 10), is located near the protostellar source. The central molecular gas peaks at the site of the protostar and is oriented perpendicular to a small-scale outflow, involving a known disk (Cesaroni et al. 1997; Zhang et al. 1998; Cesaroni et al. 1999, 2005). The increased line widths (Fig. 10) towards the central protostellar region can be hypothesised to be due to increased turbulence, as in Cesaroni et al. (1997). The temperatures found in this source are not high enough to produce significant thermal broadening - for a methanol molecule at 30 K it would be only 0.4 km s\({}^{-1}\).
The protostar is young and embedded in a dense hot core (Cesaroni et al. 1997), reaching up to 200 K in the core where COMs are known to exist (Cesaroni et al. 2005; Xu et al. 2011). Our single dish observations are insufficient to spatially resolve the protostar, and we are seeing the protostar smoothed into the surrounding dense gas. Cesaroni et al. (1997), also with IRAM 30m observations, found a rotational temperature of 50 K for CH\({}_{3}\)OH, similar to the peak in our maps of both CH\({}_{3}\)OH and CH\({}_{3}\)CCH.
In our observations, CH\({}_{3}\)CCH and \(c\)-C\({}_{3}\)H\({}_{2}\) have visibly different velocity orientations than CH\({}_{3}\)OH and H\({}_{2}\)CO; however, neither group is oriented with the dense molecular gas surrounding the disk but with each of the two known outflows. The disk orientation, at scales of about 1000 AU (Sridharan et al. 2005), is also not seen in our beam of 16 000 AU. A large-scale (up to 2\({}^{\prime}\)) outflow traced in \({}^{12}\)CO has a red-shifted northern flow and a blue-shifted southern flow (Wilking et al. 1990; Shepherd et al. 2000), corresponding to the same gradient and orientation seen in the \(v_{lsr}\) of CH\({}_{3}\)CCH with velocities increasing southward. The small-scale (\(<30^{\prime\prime}\)) bipolar outflow oriented north-west to south-east is traced in simple molecules (Cesaroni et al. 1997, 1999), suggested to be a jet-driven bow shock (Su et al. 2007).
In Cesaroni et al. (1997) there are curious dynamics seen in the CS(3-2) and HCO+(1-0) transition lines with blue-shifted outer wings and red-shifted inner wings in the north-west, and red-shifted outer wings and blue-shifted inner wings in the south-east. They interpret the structure as due to the orientation of the outflow axis relative to the plane of the sky. This outflow is traced by CH\({}_{3}\)OH and H\({}_{2}\)CO in our observations with a blue-shifted south-east flow, indicating it arises similarly to the inner wings. Palau et al. (2017) also find that COMs are seen extended towards the south-east in IRAS 20126, suggesting they arise from the warm cavity walls or post-shock region. Palau et al. (2017), in their modelling, find that H\({}_{2}\)CO, while possible to form in the gas phase, is co-spatial with larger COMs and thus likely arises from the same mechanism. Chemical modelling for our results is presented in Sect. 5.3.
Figure 13: Cartoon depiction of AFGL 2591. The brown circles represent the sources VLA 1-5, with the dominant protostellar source VLA 3 highlighted. The warm, dense inner envelope is in light grey, while the methanol plume is in dark grey. The general directions of the outflows are represented in red and blue, marking the shift in velocity as well.
As in AFGL 2591, at kpc distances our single dish observations are showing large-scale emission and not the hot core. While we cannot discern molecular presence at the hot core scale, and thus the differentiation between COMs and WCCC species, there is a notable difference in the physical conditions that the tracers CH\({}_{3}\)OH and H\({}_{2}\)CO arise in versus CH\({}_{3}\)CCH and c-C\({}_{3}\)H\({}_{2}\). These components, as seen in our observations and past studies, are summarised graphically in Fig. 14.
In the above discussion for both IRAS 20126 and AFGL 2591, our results are limited by the number of lines detected compared to the number of physical components present in the sources. A one- or two-component fit is representative in certain cases but may not reveal composite structure. Bouvier et al. (2020) are careful to note that a lot of complex molecule studies, which would include our own, are single dish observations and cannot spatially differentiate sub-structures of the regions.
These results would also have increased confidence with better S/N data, especially at high frequencies. There is a thorough E\({}_{\rm up}\) range, however, covered in the transition lines within our frequency ranges which lends confidence to the excitation temperatures found, especially when molecules are in LTE.
### Chemical model
In order to provide concrete support for the chemical formation of these molecules in the observed environments, we modelled the species abundances of both high-mass star-forming regions using the NAUTILUS gas-grain code (Ruaud et al., 2016). NAUTILUS simulates the chemical evolution of a region in three phases - gas, grain surface, and grain mantle - using coupled differential equations to describe a network of over 10,000 reactions connecting 489 species. With NAUTILUS, we model how the abundance of any given species changes over time as a function of different astrophysical conditions. These conditions can be static or dynamic throughout the simulation, and include properties such as gas density, dust and gas temperature, visual extinction, ultraviolet flux and cosmic ray ionisation.
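NAUTILUS integrates the full coupled reaction network; as a much-reduced illustration of the kind of gas-grain exchange terms it solves, the toy model below evolves a single species between the gas phase and the ice using only a freeze-out rate and a thermal-desorption rate. The binding energy, pre-exponential frequency, and freeze-out timescale are generic, assumed values and are not NAUTILUS inputs or results from this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy gas <-> ice exchange for a single species (schematic; not NAUTILUS).
# Assumed, generic parameters -- not taken from the paper or its chemical network.
n_H    = 1e4      # gas density [cm^-3]
T_dust = 10.0     # dust temperature [K]
E_bind = 5000.0   # binding energy [K], a CH3OH-like value (assumption)
nu0    = 1e12     # characteristic vibrational frequency [s^-1] (assumption)
yr     = 3.156e7  # seconds per year

k_freeze = 1.0/((1e9/n_H)*yr)            # rule-of-thumb freeze-out timescale ~1e9/n_H yr
k_des    = nu0*np.exp(-E_bind/T_dust)    # thermal desorption rate [s^-1]

def rhs(t, y):
    x_gas, x_ice = y
    return [-k_freeze*x_gas + k_des*x_ice,
            +k_freeze*x_gas - k_des*x_ice]

sol = solve_ivp(rhs, [0.0, 1e6*yr], [1.0, 0.0], method="LSODA")
print("gas-phase fraction after 1e6 yr at 10 K:", sol.y[0, -1])
# At 10 K desorption is negligible, so essentially all of the species ends up as ice;
# this freeze-out behaviour is what the shock (sputtering) stage discussed below
# is invoked to reverse.
```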
In this discussion, we lead with and display the models for CH\({}_{3}\)CCH and CH\({}_{3}\)OH. These two species have more numerous strong detections relative to \(c\)-C\({}_{3}\)H\({}_{2}\) and H\({}_{2}\)CO and are representative of CCM and COM evolution, respectively.
#### 5.3.1 AFGL 2591 chemical model results
#### 5.3.2 CH\({}_{3}\)CCH and CH\({}_{3}\)OH
We first determined the fractional abundances of CH\({}_{3}\)CCH and CH\({}_{3}\)OH in the cloud - the ratio between the column density of the analysed chemical to the column density of molecular hydrogen, H\({}_{2}\). We calculated the H\({}_{2}\) column density map for AFGL 2591 using archival JCMT SCUBA2 850 \(\mu m\) emission (JCMT-CAL observations on 2022/09/05). These maps were re-gridded in CASA7 to match the spatial resolution of the GBT and IRAM data.
Footnote 7: [https://casa.nrao.edu/](https://casa.nrao.edu/)
The 850 \(\mu m\) continuum is caused by dust emission, which correlates with the H\({}_{2}\) column densities as molecular hydrogen generally forms on dust grains (Wakelam et al., 2017). We used a gas-to-dust mass ratio of 100, a constant dust mass opacity coefficient of \(\kappa=1\) cm\({}^{2}\)/g appropriate for MRN dust with a thin ice mantle (MRN dust describes the dust size distribution in the Galaxy; Mathis et al., 1977; Ossenkopf & Henning, 1994), a mean mass per particle of \(\mu=2.32\), and a temperature derived from the LTE modelling of CH\({}_{3}\)CCH and CH\({}_{3}\)OH (typically \(\approx\) 30 K). We divided the CH\({}_{3}\)CCH LTE model column density map by the H\({}_{2}\) column density map and found CH\({}_{3}\)CCH abundances varying from \(5\times 10^{-10}\) to \(5\times 10^{-9}\) with the higher abundances in the northernmost 'blob' in Fig. 9.
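As a sketch of this conversion, using the values quoted above (gas-to-dust ratio of 100, \(\kappa=1\) cm\({}^{2}\)/g, \(\mu=2.32\), and an LTE-derived dust temperature), the snippet below turns an 850 \(\mu m\) flux density per beam into an H\({}_{2}\) column density via N(H\({}_{2}\)) = I\({}_{\nu}\) R\({}_{gd}\)/(\(\mu\) m\({}_{H}\)\(\kappa_{\nu}\) B\({}_{\nu}\)(T)). The flux value and beam size are placeholders, and the per-beam bookkeeping in the actual map-based analysis may differ in detail.

```python
import numpy as np

# Sketch: 850 um dust continuum -> N(H2), with the constants quoted in the text.
# The example flux and beam size are placeholders; the real inputs are the
# re-gridded SCUBA-2 maps.
h, c, k_B, m_H = 6.62607015e-27, 2.99792458e10, 1.380649e-16, 1.6726e-24  # cgs
nu = c/850e-4            # 850 um in Hz
kappa = 1.0              # dust opacity [cm^2 per g of dust]
gas_to_dust = 100.0
mu = 2.32                # mean mass per particle, as adopted in the text

def planck(nu, T):
    return 2*h*nu**3/c**2/np.expm1(h*nu/(k_B*T))   # [erg s^-1 cm^-2 Hz^-1 sr^-1]

def n_h2(flux_jy_per_beam, omega_beam_sr, T_dust):
    """N(H2) [cm^-2] from flux density per beam [Jy] and beam solid angle [sr]."""
    intensity = flux_jy_per_beam*1e-23/omega_beam_sr   # surface brightness, cgs
    return intensity*gas_to_dust/(mu*m_H*kappa*planck(nu, T_dust))

theta = 15.0/206265.0                      # ~15" beam (placeholder)
omega = np.pi*theta**2/(4*np.log(2))
print(f"N(H2) ~ {n_h2(0.5, omega, 30.0):.2e} cm^-2")   # 0.5 Jy/beam at 30 K
```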
CH\({}_{3}\)OH is more complicated with three distinct kinematic components. As we could not, a priori, determine how much of the total H\({}_{2}\) column density is associated with each component, we explored the CH\({}_{3}\)OH abundances in three small regions where we could reasonably isolate each component - a position in one component where there is no emission from the other two
Figure 14: Cartoon depiction of IRAS 20126. The orange circle represents the protostellar source, and the brown ellipse represents the central molecular gas and disk. The small-scale south-east–north-west outflow, traced by simple molecules, is in light red where red-shifted, and light blue where blue-shifted. The large-scale north-south outflow is similarly represented in dark red and dark blue.
\begin{table}
\begin{tabular}{l c} \hline \hline
Species & Abundance \\ \hline
He & \(9.0\times 10^{-2}\) \\
N & \(6.2\times 10^{-5}\) \\
O & \(2.4\times 10^{-4}\) \\
H & 0.0 \\
H2 & 0.5 \\
C+ & \(1.7\times 10^{-4}\) \\
S+ & \(1.5\times 10^{-5}\) \\
Si+ & \(8.0\times 10^{-9}\) \\
Fe+ & \(3.0\times 10^{-9}\) \\
Na+ & \(2.0\times 10^{-9}\) \\
Mg+ & \(7.0\times 10^{-9}\) \\
P+ & \(2.0\times 10^{-10}\) \\
Cl+ & \(1.0\times 10^{-9}\) \\
F & \(6.7\times 10^{-9}\) \\ \hline
\end{tabular}
\end{table}
Table 5: Initial ISM abundances in the NAUTILUS model.
components. Since the LTE modelling of these three components utilised fixed temperatures of 18 K for component one and 8 K for components two and three, we used these temperatures to calculate the H\({}_{2}\) column densities in each of these components from the 850 \(\mu m\) map. We divided the CH\({}_{3}\)OH column density maps for each component by the appropriate H\({}_{2}\) column density map and found CH\({}_{3}\)OH abundances of 6.3\(\times 10^{-10}\) for component one (18 K), 3.7 \(\times 10^{-10}\) for component two (8 K), and 2.3 \(\times 10^{-10}\) for component three (8 K).
In NAUTILUS, we started with a two-phase model. First, a cold quiescent cloud was allowed to evolve for 10\({}^{5}\) years. This stage was run with initial abundances appropriate for the diffuse ISM (see Table 5), a temperature of 10 K, a density of 10\({}^{4}\) cm\({}^{-3}\), and a visual extinction of 50 magnitudes. Then, we assume that embedded protostars warm up the cloud, a stage evolved for another 10\({}^{6}\) years. In the second part of this simulation, the observed temperature of each pixel found from the LTE models is used as the dust and gas temperatures (typically 20-30 K), while the visual extinction is calculated from the average H\({}_{2}\) column density. Studies of the H\({}_{2}\) to A\({}_{V}\) conversion range from A\({}_{V}=\frac{N_{H_{2}}}{2.1\times 10^{21}}\) (Rachford et al., 2009; Zhu et al., 2017) to A\({}_{V}=\frac{N_{H_{2}}}{1.9\times 10^{21}}\) (Bohlin et al., 1978; Whittet, 1981). In this paper, we assumed an 'average' value of A\({}_{V}=\frac{N_{H_{2}}}{2\times 10^{21}}\). We used a standard cosmic ray ionisation rate of 1.3\(\times 10^{-17}\) s\({}^{-1}\). For the UV field, the NAUTILUS code provided a function of the form \(S\times 10^{8}\) photons cm\({}^{-2}\) s\({}^{-1}\) (Ruaud et al., 2016), where S is a scaling factor. We used a scaling factor of 1 to model a general interstellar radiation field. In the following models, we modified both the CR ionisation rate and the UV scaling factor.
The volume densities of massive star-forming regions, on the scales that we present here, have been shown to have a centre-to-edge gradient of \(n\propto r^{-\alpha}\) where \(\alpha\) ranges from 1.0 to 1.6 (van der Tak et al., 2000; Beuther et al., 2002). To account for such density gradients, we calculated models for three different densities: 10\({}^{4}\) cm\({}^{-3}\), which is appropriate for the cloud edges, 5\(\times\)10\({}^{4}\) cm\({}^{-3}\) as an intermediate value, and 10\({}^{5}\) cm\({}^{-3}\), which simulates conditions in the cloud cores.
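A sketch of how the per-pixel inputs for these runs can be assembled from the maps and constants described above (LTE temperature, H\({}_{2}\) column density converted to A\({}_{V}\) with the adopted 2\(\times\)10\({}^{21}\) cm\({}^{-2}\) mag\({}^{-1}\) factor, the standard cosmic-ray rate, UV scaling S = 1, and the three test densities) is given below. It only collects the numbers; it does not reproduce the NAUTILUS input-file format, and the map arrays are placeholders.

```python
import numpy as np

# Collect per-pixel physical conditions for the two-stage models (sketch only;
# these values would then be written into NAUTILUS's own input files).
N_H2_map  = np.full((32, 32), 1e23)   # placeholder H2 column density map [cm^-2]
T_lte_map = np.full((32, 32), 30.0)   # placeholder LTE temperature map [K]

AV_PER_NH2 = 2.0e21                   # adopted N(H2)/A_V [cm^-2 mag^-1]
ZETA_CR    = 1.3e-17                  # cosmic-ray ionisation rate [s^-1]
UV_SCALE   = 1.0                      # general interstellar radiation field
DENSITIES  = [1e4, 5e4, 1e5]          # edge, intermediate, core [cm^-3]

COLD_STAGE = dict(T=10.0, n=1e4, Av=50.0, duration_yr=1e5)

def warm_stage(ix, iy, n_gas):
    return dict(T=float(T_lte_map[iy, ix]),
                n=n_gas,
                Av=float(N_H2_map[iy, ix]/AV_PER_NH2),
                zeta=ZETA_CR,
                uv_scale=UV_SCALE,
                duration_yr=1e6)

runs = [warm_stage(0, 0, n) for n in DENSITIES]
print(COLD_STAGE)
print(runs[0])   # e.g. T=30 K, n=1e4 cm^-3, Av=50 for this placeholder pixel
```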
Figure 15 shows the results of these connected simulations. The horizontal dashed blue and red lines show the minimum and maximum CH\({}_{3}\)OH and CH\({}_{3}\)CCH abundances, respectively. In general, this simple simulation can produce the observed CH\({}_{3}\)CCH abundances for the majority of the cloud at about 10\({}^{6}\) years. This is true regardless of position - at the cloud edges where n = 10\({}^{4}\) or the cores where n = 10\({}^{5}\). The one caveat to this statement, however, occurs in the northern part of the map, where the CH\({}_{3}\)CCH abundances are the highest. If the density in this region is as low as 10\({}^{4}\) cm\({}^{-3}\), then our models cannot account for these high observed abundances. Figure 15 also shows that we cannot reproduce the observed CH\({}_{3}\)OH abundances at any time for any density.
Given the poor match to our CH\({}_{3}\)OH observations, we modified the chemical model to attempt to restrict the conversion of CH\({}_{3}\)OH from the gas phase to the dust phase, or to help thermally desorb CH\({}_{3}\)OH off the dust grains. A variety of methods were attempted, including increasing the ambient ultraviolet flux and cosmic ray ionisation rates. Increasing the scaling factor of the UV flux to S = 1000 had no effect upon the chemistry of either species. This is expected since the UV reaction rates scale with \(S\) but are attenuated exponentially with A\({}_{V}\), which is generally so high that the UV field is completely attenuated. To simulate the effects of an external radiation field on the lower volume and column density cloud edges, we also ran a model with n = 10\({}^{4}\) and A\({}_{V}\) = 5. This simulation results in CH\({}_{3}\)OH and CH\({}_{3}\)CCH abundances that are respectively two and four orders of magnitude lower than those shown in Fig. 15 due to the increased rate of photodestruction of both molecules.
Increasing the cosmic ray ionisation rate by a factor of three increases the abundances of both species by a very small amount, but not enough to increase the CH\({}_{3}\)OH abundances to observable values. Continued increases in the cosmic ray ionisation rate, however, have the opposite effect and result in lower abundances than those seen in Fig. 15, again due to an increased destruction rate.
As thermal desorption mechanisms cannot produce the observed CH\({}_{3}\)OH abundances, we turn our attention to non-thermal ones. A shockwave passing through a cloud filled with dust may induce sputtering, which causes the dust grains to shatter (Jones et al., 1994), liberating the grain species back into the gas phase and causing a sharp increase in the gas phase abundances. An outflow, such as the ones seen in our sources, can produce such a shock (Zhang et al., 2005; Herbst & van Dishoeck, 2009; Palau et al., 2017). Since it only includes thermal desorption processes, NAUTILUS cannot simulate sputtering directly. We simulated the effects of shock induced, non-thermal desorption (i.e. sputtering) by implementing a three stage model.
The first stage, cold cloud evolution, is the same as our prior model. The second stage, a shock, simulated the passage of a shock wave. We took the abundances at the end of stage one, removed all the grain surface species and added them back into the gas phase. The shock stage began with enhanced gas phase abundances of all species and no grain surface species. During the shock we utilised the density and temperature evolution of the gas calculated by Palau et al. (2017) for IRAS 20126 (see their Fig. 5, top). This shock model follows the parametric approximation of Jimenez-Serra et al. (2008) for C-type shocks, assuming a shock velocity of 40 km s\({}^{-1}\) and a pre-shock density of 10\({}^{4}\) cm\({}^{-3}\). In our chemical models of this stage, we fixed the dust temperature to 80 K. The duration of the shock is 10\({}^{4}\) years, during which the temperature sharply increased to 2500 K before decreasing to 50 K, while the density slowly increased from 10\({}^{4}\) to 8.2 \(\times\) 10\({}^{4}\) cm\({}^{-3}\), where it stabilised for the rest of the simulation. In the post shock stage, we returned the gas to the temperature used in the warm-up stage above, and allowed the cloud to evolve for a final 10\({}^{6}\) years at a constant gas density of 8.2 \(\times\) 10\({}^{4}\) cm\({}^{-3}\).
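For the purposes of a sketch, the shock stage can be represented by simple time profiles that reproduce only the qualitative behaviour quoted above (temperature spiking to 2500 K before relaxing to 50 K and density rising from 10\({}^{4}\) to 8.2\(\times\)10\({}^{4}\) cm\({}^{-3}\) over 10\({}^{4}\) years, with the dust held at 80 K and all ice returned to the gas at the start). This is a schematic stand-in, not the Jimenez-Serra et al. (2008) parametrisation actually used; the interpolation knots are arbitrary.

```python
import numpy as np

# Schematic shock-stage profiles (a stand-in for the parametric C-type shock model).
T_SHOCK_YR = 1e4                                   # duration of the shock stage [yr]
_t = np.array([0.0, 0.05, 1.0])*T_SHOCK_YR         # arbitrary knot times [yr]
_T = np.array([10.0, 2500.0, 50.0])                # gas temperature [K]
_n = np.array([1e4, 2e4, 8.2e4])                   # gas density [cm^-3]

def shock_conditions(t_yr):
    """Gas temperature, gas density, and (fixed) dust temperature at time t."""
    return np.interp(t_yr, _t, _T), np.interp(t_yr, _t, _n), 80.0

def sputter(abundances):
    """Move all ice abundances into the gas phase at the start of the shock."""
    out = dict(abundances)
    for sp in list(out):
        if sp.startswith("ice_"):
            out["gas_" + sp[4:]] = out.get("gas_" + sp[4:], 0.0) + out.pop(sp)
    return out

state = sputter({"gas_CH3OH": 1e-10, "ice_CH3OH": 6e-10})   # placeholder abundances
print(state, shock_conditions(5e3))
```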
Figure 16 shows the results of the three stage chemical evolution model. The panel on the left shows the shock phase, and the panels on the right show the post shock evolution for temperatures of 8 K, appropriate for component two and component three of CH\({}_{3}\)OH, 18 K, appropriate for component one of CH\({}_{3}\)OH, and 30 K, appropriate for CH\({}_{3}\)CCH. As can be seen in the right panels, all of the observed abundances can be reached after about 10\({}^{5}\) years of post shock evolution. For CH\({}_{3}\)CCH, the higher abundance limit is reached much earlier in the simulation during the shock phase. In AFGL 2591, while the majority of the cloud can be modelled as a simple cold cloud followed by a warm up phase, as is appropriate for WCCC, the northern region may require a non-thermal desorption mechanism for CH\({}_{3}\)CCH if the density is around 10\({}^{4}\) cm\({}^{-3}\).
#### H\({}_{2}\)CO and \(c\)-C\({}_{3}\)H\({}_{2}\)
To model H\({}_{2}\)CO and \(c\)-C\({}_{3}\)H\({}_{2}\) we first assumed that the physical conditions were identical to those for CH\({}_{3}\)CCH and CH\({}_{3}\)OH; namely that the cloud experienced a 10 K cold stage followed by a 30 K warm-up stage. We also used the H\({}_{2}\) column density map
calculated at 30 K to determine the H\({}_{2}\)CO and \(c\)-C\({}_{3}\)H\({}_{2}\) abundances. Under these conditions, the H\({}_{2}\)CO abundance ranges from \(3\times 10^{-9}\) in the cloud core to \(10^{-8}\) in the north western cloud edges. The \(c\)-C\({}_{3}\)H\({}_{2}\) abundances range from \(4\times 10^{-11}\) in the cloud core to \(2\times 10^{-10}\) along the western and northern edges. Under these conditions, the observed \(c\)-C\({}_{3}\)H\({}_{2}\) abundances are reached in the cold cloud stage and remain high well into the warm-up stage. Note that the same is true if we model the \(c\)-C\({}_{3}\)H\({}_{2}\) abundances using a temperature of 20 K to calculate the H\({}_{2}\) column density. H\({}_{2}\)CO, however, requires the 30 K warm-up stage in order to reach the observed abundances.
While an 'average' temperature of 30 K may be appropriate for \(c\)-C\({}_{3}\)H\({}_{2}\) (see Fig. 9), it is not for H\({}_{2}\)CO where the average temperature is about 40-50 K. For H\({}_{2}\)CO we used temperatures of 50 K to calculate the H\({}_{2}\) column density and then the abundance map. Given the higher temperature, the resultant H\({}_{2}\) column densities are lower, resulting in slightly increased H\({}_{2}\)CO abundances which now vary from \(5\times 10^{-9}\) to \(2\times 10^{-8}\). Under these conditions, the model cannot produce the observed abundances in the cold cloud stage, but can easily do so early in the 50 K warm-up stage. Therefore, no shock liberation of either molecule off the grain surfaces is required to produce the observed abundances.
#### 5.3.2 IRAS 20126 chemical model results
#### 5.3.3 CH\({}_{3}\)CCH and CH\({}_{3}\)OH
In IRAS 20126, we only considered the main component of the cloud since, other than the cold 'foreground screen' described in Sect. 4.2.3, the CH\({}_{3}\)CCH and CH\({}_{3}\)OH LTE model is dominated by one main component. To produce the corresponding H\({}_{2}\) column density map, we used archival JCMT SCUBA-1 850 \(\mu m\) continuum observations (Proposal ID M98BU24). These maps were re-gridded to match the spatial resolution of the GBT and IRAM data. We divided the CH\({}_{3}\)CCH and CH\({}_{3}\)OH LTE model column density maps by the H\({}_{2}\) column density map and found abundances varying between \(10^{-10}\) and \(10^{-9}\) for both species.
We followed the same chemical modelling approach in NAUTILUS as was used for AFGL 2591. We allowed the cloud to evolve for \(10^{5}\) years under cold conditions (n= \(10^{4}\) cm\({}^{-3}\), T = 10 K) starting with general ISM abundances, and then warmed the gas to 30 K to evolve for another \(10^{6}\) years.
Figure 15 shows the results of these simulations, along with the results for AFGL 2591. The left panel displays the cold cloud stage with the parameters listed above and the right panel shows the warm-up stage for a pixel with T = 30 K, A\({}_{V}\) = 50 (i.e. N(H\({}_{2}\)) = \(10^{23}\) cm\({}^{-2}\)), and, from the top, \(n=10^{4}\) cm\({}^{-3}\), \(n=5\times 10^{4}\) cm\({}^{-3}\), and \(n=10^{5}\) cm\({}^{-3}\). The dotted black horizontal lines indicate the
Figure 15: Results from the NAUTILUS two stage chemical evolution model. The solid blue and red lines represent the time evolution of the CH\({}_{3}\)OH and CH\({}_{3}\)CCH abundances, respectively. The blue and red dashed horizontal lines indicate the minimum and maximum observed abundances for CH\({}_{3}\)OH and CH\({}_{3}\)CCH, respectively, in AFGL 2591. The dotted black line indicates the minimum and maximum observed abundances in IRAS 20126. The left panel shows the cold cloud evolution stage and the right panels show the warm-up stage for three different densities.
observed abundance limits of \(10^{-10}\) and \(10^{-9}\). Figure 15 shows that although the observed CH\({}_{3}\)CCH abundances can be reproduced after a few \(\times 10^{5}\) years in the warm up phase, the model CH\({}_{3}\)OH abundances never reach the observed values. While we show the results for one specific pixel, these results are generally true in each pixel across the entire source. The main reason the modelled CH\({}_{3}\)OH abundances are so low is the freeze-out of gas-phase methanol onto the dust grains.
As discussed previously, a number of outflows and associated shocks have been identified and observed within IRAS 20126 (e.g. Shepherd et al. 2000; Zhang et al. 2001). Similarly to AFGL 2591, we simulated the effects of shock induced desorption with a three stage model. We included a shock stage in which all of the molecules frozen onto the dust (after the cold cloud evolution stage) are sputtered back into the gas phase, using the same gas density and temperature evolution. Then, after the shock stage, the cloud is allowed to continue to evolve at a density of \(8.2\times 10^{4}\) cm\({}^{-3}\) and at the currently observed temperatures derived from the LTE model.
Figure 16 presents the results from the shock on the left and post shock stages on the right, for our test pixel with a temperature of 30 K. The corresponding cold cloud stage is identical to Fig. 15. The dashed horizontal lines indicate the observed abundance limits of \(10^{-10}\) and \(10^{-9}\). Throughout the duration of the shock, and for the first \(10^{5}\) years of the post shock evolution, the CH\({}_{3}\)OH abundance remains above the observed values. After this, CH\({}_{3}\)OH begins to freeze onto the dust grain surfaces and, in doing so, passes through the range of observed abundances. While CH\({}_{3}\)CCH does not require a shock to produce the observed abundances (see Fig. 15), the passage of a shock can also produce the observed abundances after \(10^{6}\) years.
Again, while the results presented are for the physical parameters in a single pixel, these results are generally true across the entire region. Shocks, which we suggest are produced by the observed outflows, are required to explain the observed CH\({}_{3}\)OH abundances over the entirety of IRAS 20126. They are not, however, needed to produce the observed abundances of CH\({}_{3}\)CCH. This is as seen in AFGL 2591, with the exception of the northern region. Our results for CH\({}_{3}\)CCH in IRAS 20126 are uniformly consistent with Warm Carbon Chain Chemistry.
#### H\({}_{2}\)CO and \(c\)-C\({}_{3}\)H\({}_{2}\)
The modelling of H\({}_{2}\)CO and \(c\)-C\({}_{3}\)H\({}_{2}\) proceeded in a fashion similar to that done for AFGL 2591. We first assumed the same conditions as for CH\({}_{3}\)OH and CH\({}_{3}\)CCH, and then used the temperatures determined from our LTE model (see Fig. 10). With the H\({}_{2}\) column density map calculated at 30 K we derived H\({}_{2}\)CO abundances
Figure 16: Results from the NAUTILUS three stage chemical evolution model. The solid blue and red lines represent the time evolution of the CH\({}_{3}\)OH and CH\({}_{3}\)CCH abundances, respectively. The blue and red dashed horizontal lines indicate the minimum and maximum observed abundances for CH\({}_{3}\)OH and CH\({}_{3}\)CCH, respectively, in AFGL 2591. The dotted black line indicates the minimum and maximum observed abundances in IRAS 20126. The left panel shows the shock stage and the right panels show the post shock stage for three different temperatures.
ranging from \(6\times 10^{-11}\) around the cloud edges to \(3\times 10^{-10}\) in the cloud core. The \(c\)-C\({}_{3}\)H\({}_{2}\) abundance ranges from \(3\times 10^{-11}\) in the cloud core and eastern edges to \(10^{-10}\) along the western edges. Under these conditions, the abundances of both molecules are reached in the cold cloud stage and remain high well into the warm-up stage.
From Fig. 10 we see that the temperature derived for \(c\)-C\({}_{3}\)H\({}_{2}\) is about 8 K and fairly uniform across the map. For H\({}_{2}\)CO, the temperature is about 80 K and also fairly uniform. We used these two temperatures to calculate the respective H\({}_{2}\) column density maps and found that, for T = 8 K, the \(c\)-C\({}_{3}\)H\({}_{2}\) abundance varies from \(4\times 10^{-12}\) in the cloud core and eastern edges to \(10^{-11}\) along the western edges. For T = 80 K, the H\({}_{2}\)CO abundance ranges from \(2\times 10^{-10}\) around the cloud edges to \(10^{-9}\) in the cloud core. The chemical models are able to produce these observed abundances as well. For \(c\)-C\({}_{3}\)H\({}_{2}\) at 8 K, the models match the observations early in the cold cloud stage. For H\({}_{2}\)CO at 80 K, the model matches the observations early in the 80 K warm-up stage - although, the lowest abundances can be reached in the cold cloud stage as well. Therefore, as with AFGL 2591, no shock liberation of either molecule off the grain surfaces is required to produce the observed abundances.
## 6 Summary
This paper covers a targeted survey of carbon chain molecules in two high-mass star-forming regions - AFGL 2591 and IRAS 20126 - towards the Cygnus X star-forming complex. With observations from the IRAM 30 m telescope and the GBT 100 m dish, we present:
1. detections of numerous rotational transition lines of two carbon chain molecules, CH\({}_{3}\)CCH and \(c\)-C\({}_{3}\)H\({}_{2}\), and two molecules related to the complex organic family, CH\({}_{3}\)OH and H\({}_{2}\)CO, in both sources.
2. an LTE model, developed in Python to loop over several spectra in a map, that determines the physical environment from which the detected molecular species arise.
3. findings of carbon chain and complex organic molecules originating from different gas components. The excitation temperature of carbon chains is typically 20-30 K, while that of complex organics can range from 8-85 K at these large scales (smoothing out the hot core). The velocity structures of each molecular type vary, and trace known structures of the regions.
4. chemical evolution modelling with NAUTILUS that demonstrates that the observed abundances of CH\({}_{3}\)CCH can be produced in the warm-up stage, where the embedded protostar has warmed the cloud to temperatures of 20-30 K. This is consistent with Warm Carbon Chain Chemistry. CH\({}_{3}\)OH, on the other hand, requires a non-thermal desorption method to produce the observed abundances. A shock, sputtering the molecules off the dust grain surface, adequately does this. These mechanisms are found for both high-mass star-forming regions.
These results expand on current knowledge of carbon chain chemistry in high-mass star-forming regions. Warm carbon chain chemistry, and other envelope scale chemical mechanisms, can readily be studied with single dish telescopes. For an investigation of hot core chemistry, higher spatial resolution observations are required. These results also demonstrate the use of LTE models for mapping the physical environment using wide-band data cubes - the type of data many modern telescopes are capable of providing.
###### Acknowledgements.
This work is based on observations carried out under project numbers 021-20 and 122-20 with the 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. PF would like to thank Larry Morgan and Dave Frayer at the GBT/NRAO for their help developing observing scripts as well as GBTIDL scripts for ARGUS calibration and reduction. PF would also like to thank Pablo Torne and Monica Rodriguez at IRAM for their help observing with the IRAM telescope. PF and RP acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), through the Canada Graduate Scholarships - Doctoral, Michael Smith Foreign Study Supplement, and Discovery Grant programs. As researchers at the University of Calgary, PF and RP acknowledge and pay tribute to the traditional territories of the peoples of Treaty 7, which include the Blackfoot Confederacy (comprised of the Siksika, the Pikani, and the Kainai First Nations), the Tsuut'ina First Nation, and the Stoney Nakoda (including Chiniki, Bearspaw, and Goodstoney First Nations). The City of Calgary is also home to the Metis Nation of Alberta Region 3.
|
2302.12699 | Wall-and-chamber structures for finite-dimensional algebras and
$τ$-tilting theory | The wall-and-chamber structure is a geometric invariant that can be
associated to any algebra. In this notes we give the definition of this object
and we explain its relationship with torsion classes and $\tau$-tilting theory. | Maximilian Kaipel, Hipolito Treffinger | 2023-02-24T15:55:06Z | http://arxiv.org/abs/2302.12699v1 | [
###### Abstract
The _wall-and-chamber_ structure is a geometric invariant that can be associated to any algebra. In these notes we give the definition of this object and we explain its relationship with torsion classes and \(\tau\)-tilting theory.
Wall-and-chamber structures for finite-dimensional algebras and \(\tau\)-tilting theory

Maximilian Kaipel and Hipolito Treffinger
Primary 16-06; Secondary 16G10, 16G20, 16G99, 16E20.
## 1 Introduction
The main aim of representation theory of finite-dimensional algebras is to understand the category of (finitely presented) modules over a given algebra. One of the founding results in this area is a result by Gabriel [21], which states that the module category of any finite-dimensional algebra over an algebraically closed field is equivalent, in the sense of Morita [28], to the category of representations of a quiver with relations (which depend on the original algebra). This was a breakthrough in the field, not only because it greatly reduces the universe of algebras to be studied, but also because, with the incorporation of quivers, Gabriel allowed the introduction of important combinatorial tools that have played a central role in the theory ever since.
Several years later, at the turn of the century, Fomin and Zelevinsky introduced _cluster algebras_[18] as a new approach to understand Lusztig's dual canonical bases. These are algebras that are defined from starting data, known as _the initial seed_, and then the remaining data is constructed using a combinatorial process known as _mutation_. For a family of cluster algebras of particular importance, the _antisymmetric_ cluster algebras, both the initial data and the mutation process can be encoded in terms of quivers. As a consequence, many mathematicians started using the tools developed throughout the years in representation theory to understand cluster algebras and solve some of the standing conjectures in this new subject [2, 12, 13, 22].
Also, the study of cluster algebras made explicit certain patterns in the module category of every finite-dimensional algebra. It was in this context that notions which are |
2305.16458 | Effective Vaccination Strategies in Network-based SIR Model | Controlling and understanding epidemic outbreaks has recently drawn great
interest in a large spectrum of research communities. Vaccination is one of the
most well-established and effective strategies in order to contain an epidemic.
In the present study, we investigate a network-based virus-spreading model
building on the popular SIR model. Furthermore, we examine the efficacy of
various vaccination strategies in preventing the spread of infectious diseases
and maximizing the survival ratio. The experimented strategies exploit a wide
range of approaches such as relying on network structure centrality measures,
focusing on disease-spreading parameters, and a combination of both. Our
proposed hybrid algorithm, which combines network centrality and illness
factors, is found to perform better than previous strategies in terms of
lowering the final death ratio in the community on various real-world networks
and synthetic graph models. Our findings particularly emphasize the
significance of taking both network structure properties and disease
characteristics into account when devising effective vaccination strategies. | Sourin Chatterjee, Ahad N. Zehmakan | 2023-05-25T20:27:18Z | http://arxiv.org/abs/2305.16458v1 | # Effective Vaccination Strategies in Network-based SIR Model
###### Abstract
Controlling and understanding epidemic outbreaks has recently drawn great interest in a large spectrum of research communities. Vaccination is one of the most well-established and effective strategies in order to contain an epidemic. In the present study, we investigate a network-based virus-spreading model building on the popular SIR model. Furthermore, we examine the efficacy of various vaccination strategies in preventing the spread of infectious diseases and maximizing the survival ratio. The experimented strategies exploit a wide range of approaches such as relying on network structure centrality measures, focusing on disease-spreading parameters, and a combination of both. Our proposed hybrid algorithm, which combines network centrality and illness factors, is found to perform better than previous strategies in terms of lowering the final death ratio in the community on various real-world networks and synthetic graph models. Our findings particularly emphasize the significance of taking both network structure properties and disease characteristics into account when devising effective vaccination strategies.
## I Introduction
The spread of infectious diseases has long been a major public health concern. It has caused significant deaths, comorbidity, and economic losses throughout human history, as noted in [1]. It is a very complex phenomenon, and understanding the effects needs contribution from several fields of study such as epidemiology, public health, medicine, mathematics, sociology, and computer science. Mathematical modeling, which draws on disciplines such as non-linear dynamics [2], graph theory [3], and statistics [4], is key to understanding the negative impact of infectious diseases on the population. It can provide insights into the spread of the disease, help predict the course of epidemics, estimate the number of hospitalizations and deaths, and evaluate the effectiveness of different control measures. The emergence of global pandemics such as the COVID-19 [5], and the H1N1 Influenza [6] in recent years has highlighted the importance of understanding and designing measures to control the dynamics of infectious disease transmission.
Various disease modeling approaches have been attempted with a view to understanding the dynamics of the pandemic. Compartmental models are the simplest way to study epidemics, where the population is compartmentalized on the basis of whether the disease generates immunity or not. The SIR (Susceptible-Infected-Recovered) model, developed by Kermack and McKendrick in 1927 [7], is one of the classic examples of this. However, when the heterogeneity of the data is large and several kinds of meta-data, such as contact information and age, have a role to play in the disease dynamics, such models fail to approximate the general principles of the spreading process to a reasonable extent.
A network disease model is a type of mathematical model that is used to investigate the transmission of infectious diseases through networks of social contacts. In a network, individuals are represented as nodes in a graph, and the edges between them represent their interactions, such as encounters at work or in public places. The weights assigned to the edges between nodes represent the probability that an infected person will transmit the disease to their connected neighbors. The weights depend on various parameters such as the strength of the relationship (i.e., the likelihood of encounter), the speed of transmission, and the length of the infectious period. In the building of a network model, parameters relevant to disease transmission should be established. This would permit studying different intervention strategies like vaccination or quarantine, in order to curb the disease, in a much more accurate and realistic fashion.
Though testing [8; 9], contact tracing [10] and quarantining are effective measures at the early stages of a pandemic [11], one of the most effective ways to prevent the spread of infectious diseases is vaccination. However, the optimal allocation of vaccines, especially with limited resources, is a complex and difficult matter. In light of COVID-19 and potential future pandemics, the availability of a vaccine prioritization method is, therefore, a critical concern, cf. [12]. Keeping in mind that future epidemics and other viruses may affect people differently, designing and understanding various effective vaccination strategies are very essential for the successful containment of potential pandemics in the future.
In the case of COVID-19 vaccination, most governments prioritized elderly individuals, due to the proven higher fatality rate, and healthcare workers, due to the higher likelihood of being exposed to the virus, cf. [13]. A good prioritization plan should balance the need to vaccinate vulnerable populations with the need to vaccinate those who are most likely to spread the disease. A successful vaccine allocation strategy needs to take a model of disease transmission and its epidemiological aspects, like transmission rate, mortality rate,
and recovery rate into account, as well as the underlying network structure.
Most prior works have disregarded the network structure and assumed that individuals are equally likely to interact with each other, for the sake of simplicity in the analysis [14]. In our approach, we take the structure of the network, capturing the connections among people, into account and study the virus spreading processes on different real-world and synthetic graph data. Furthermore, we model the spread of infectious diseases using the SIR model, taking various factors such as transmission probability and cure rates into account. This provides a valuable test bed for evaluating and comparing different vaccination strategies.
Our main goal is to devise effective and efficient vaccination strategies in this enriched set-up. We first show that finding an optimal strategy is computationally challenging (more precisely, it is NP-hard). Thus, as in prior work [15], we resort to approximation and heuristic approaches. We develop a set of vaccination strategies that consider different properties and characteristics of the network as well as the disease to efficiently allocate scarce vaccine resources. This is unlike most prior work which focuses solely on network structure [12] or disease characteristics [13]. Our strategies aim to minimize the number of deaths while maximizing the number of lives saved.
We compare our proposed strategies along with several centrality-based vaccination strategies that have received significant attention in recent years, cf. [15; 12]. Centrality-based vaccination strategies prioritize people based on their location on the network, with the goal of identifying those who have the greatest potential to infect others. Targeting these individuals can effectively reduce overall disease transmission. However, unlike our proposed hybrid approaches, these strategies rely solely on the structure of the graph and do not exploit any information regarding the spread of the virus. Through an extensive set of experiments, we evaluate the performance of various strategies for different levels of vaccination coverage. Our results highlight the strengths and weaknesses of each approach, providing valuable insights for public health officials and policymakers in their efforts to stem the spread of infectious diseases.
In short, the present work contributes to the growing literature on epidemic models and vaccination strategies by providing a robust simulation framework and introducing new effective approaches to vaccine allocation. Furthermore, our comparative analysis provides valuable insights into the effectiveness of different strategies, paving the way for more targeted and effective vaccination campaigns in the face of future epidemics and pandemics.
**Roadmap.** First, we overview some further related work in more detail in Section II. Then, we provide the exact formulation of our epidemic model and the problem of maximizing the survival ratio using vaccination in Section III. In section IV, we prove that finding an optimal vaccination strategy is NP-hard. To facilitate the necessary grounds for introducing and experimenting with different strategies, we describe the dataset and also the parameters on which the experiments were performed in Section V. Then in Section VI, we develop several vaccination algorithms. Finally, the outcome of our experiments and their analysis are provided in Sections VII and VIII, respectively.
## II Prior work
In this section, we overview some related works on various mathematical epidemic models and different control strategies, particularly vaccination.
### Epidemic Models
Among the oldest epidemic models are the compartmental models, such as SIR [7; 16]. In this approach, one assumes a homogeneous mixing of the population, which permits utilizing differential equations to describe the flow of individuals between compartments over time. Though these models are usually completely deterministic and simple to analyze, they have some fundamental shortcomings and thus various extensions of them have been introduced. By considering stochastic techniques, the authors of [17] showed that such stochastic models capture the uncertainty of the system and perform better in order to predict the spread of viral diseases. Additionally, several other techniques like agent-based modeling [18; 19], age-based models [2; 8], network models [1; 12], and machine learning or deep learning models [20; 21] have proven useful to model epidemics more accurately and gain deep insights into virus spreading dynamics.
Moreno et al. [22] proposed an epidemic model on complex networks that incorporates the SIR model. By analyzing the effect of network topology, they argued that there is a threshold of spreading parameters that determines whether an epidemic will occur or not. In [23], the authors studied a simple variant of SIR on Random Geometric Graphs and showed a result of a similar flavor. They demonstrated that if \(\lambda_{\max}\leq\delta/\beta\), then the virus does not spread, where \(\delta\) is the recovery probability and \(\beta\) is the probability of an infectious node making a susceptible neighbor infectious and \(\lambda_{\max}\) is the highest eigenvalue for the adjacency matrix of the underlying network.
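This spectral condition is easy to evaluate numerically for any given graph; the sketch below computes the largest adjacency eigenvalue with NetworkX and NumPy and compares it with \(\delta/\beta\). The graph and the values of \(\beta\) and \(\delta\) are illustrative placeholders, not parameters from [23].

```python
import networkx as nx
import numpy as np

# Check the spectral threshold lambda_max <= delta/beta for a given graph.
# beta (infection probability) and delta (recovery probability) are assumed values.
G = nx.erdos_renyi_graph(n=500, p=0.01, seed=1)
A = nx.to_numpy_array(G)
lam_max = np.max(np.linalg.eigvalsh(A))   # adjacency matrix is symmetric

beta, delta = 0.05, 0.4
print(f"lambda_max = {lam_max:.2f}, delta/beta = {delta/beta:.2f}")
print("virus dies out" if lam_max <= delta/beta else "virus can spread")
```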
Researchers in the area of network science have extensively studied the ideal conditions for creating and selecting super-spreaders in various applications, such as information, virus, and fire propagation, cf. [24; 25]. Some prior work [26; 27] has attempted to address the difficulty of identifying and preventing the transmission of infections, viruses, and false information. They came to the conclusion that network density and spreading within communities are crucially associated.
They observed that regardless of the size and configuration of the community, the inter-community edges play a fundamental role in the propagation of an epidemic.
In [2], the authors used an age-structured social contact matrix which has components such as school, household, work, and others. The authors in [1] have reconstructed the network from several databases and simulated the disease in the network. In [28], the authors experimentally studied some dynamic processes such as bond percolation and the SIR model on different network structures. They observed that randomly shuffling the inter-community edges changes the process significantly. However, randomly distributing the edges inside each community does not change the process substantially.
### Control Strategies
Social distancing, testing, quarantining, and vaccinations are some of the key strategies to control an epidemic. The network-based epidemic models allow the researchers to study the impact of network structure and evolution on disease spread and the effectiveness of interventions such as vaccination and social distancing [29]. In [8], the authors designed age-group targeted testing strategies to identify the infected ones and quarantine them which was demonstrated to be very useful to reduce the total number of infections. Contact tracing can be highly effective in a heterogeneous network, isolating fewer nodes in total but preventing more cases as shown in [10].
Numerous studies have been conducted to show the effectiveness of vaccination strategies, as they generate herd immunity, which essentially prevents the disease from spreading. In [30], the authors have shown how epidemics can be controlled and eliminated from the system by using impulsive vaccination, where the vaccine is applied over a period that is short compared to the time scale of the disease dynamics. In [14], the authors have shown that vaccine coverage of 60% (assuming life-long immunity), along with strict measures, is enough to prevent the re-emergence of the Covid-19 pandemic.
Khansari et al. [15] compared the performance of several centrality-based and hybrid centrality-based algorithms, such as Betweenness-degree and Closeness-degree, and newly proposed algorithms, such as Katz-degree and Radiality-degree, on several random graph models, namely the Erdős–Rényi model [31], the Barabási–Albert model [32], and the Watts–Strogatz model [33]. They evaluated the effectiveness of these methods by measuring the largest connected component and how much they reduce the largest eigenvalue of the adjacency matrix of the graph. Similarly, the authors of [12] showed that, across different graph structures, a vaccination strategy based on centrality measures is better than no vaccination or random vaccination.
## III Model Description
Let us first provide some basic graph definitions here.
**Definition 1**.: \(G=(V,E)\) _is said to be an undirected graph, where \(V\) is a finite non-empty set of objects named nodes and \(E\) is a collection of two-element subsets of \(V\) called edges._
Let us define \(n:=|V|\) and \(m:=|E|\).
**Definition 2**.: _For a node \(v\in V\), \(N\left(v\right):=\left\{v^{\prime}\in V:\left\{v^{\prime},v\right\}\in E \right\}\) is the neighborhood of \(v\). Furthermore, \(\hat{N}(v):=N(v)\cup\left\{v\right\}\) is the closed neighborhood of \(v\)._
**Definition 3**.: _Let \(d\left(v\right):=\left|N\left(v\right)\right|\) be the degree of \(v\) in \(G\). We also define \(d_{B}(v):=\left|N(v)\cap B\right|\) for a set \(B\subseteq V\)._
**Definition 4**.: _The Adjacency matrix \(A\) of an undirected graph \(G\) is an \(n\times n\)\(0\)-\(1\) matrix whose columns and rows represent the nodes of the graph and edges between them are represented by the entries. A value of 1 in the matrix at position \((u,v)\) indicates that there is an edge between nodes \(u\) and \(v\), while a value of 0 indicates that there is no edge. The matrix \(A\) is obviously symmetric._
To model the temporal dynamics of the epidemic outbreak, we used a discrete-time Markovian compartmental model [34] to simulate the spread of a disease on real-world networks. Our model is a generalization of the original SIR model [7], which is arguably the most well-established epidemic model. The SIR model, as its name suggests, covers three compartments: Susceptible (S), Infectious (I), and Recovered (R). We also consider two additional compartments, Dead and Vaccinated. In our model, we represent each individual through a node, so each node can be in one of the following five states:
* _Susceptible_: A node that is not infectious, but may become infectious once in contact with an Infectious node.
* _Infectious_: A node that is Infectious and is capable of transmitting the disease to Susceptible nodes.
* _Recovered_: The nodes which have been Infectious and have recovered from the disease and are no longer susceptible to re-infection.
* _Dead_: The nodes which have been Infectious and have died from the disease.
* _Vaccinated_: A node that is not susceptible to disease due to prior immunity against the disease by vaccination. (We assume an Infectious node cannot be vaccinated.)
**Remark.** Note that the above definitions imply that once an individual is recovered/vaccinated, they do not become infected any longer. Most of our results would hold if we relax this assumption slightly, for example by
allowing a recovered/vaccinated individual to become infected with \(10\%\) of the original probability of infection. Studying the setup where the vaccines are not highly effective, or the recovered individuals can become infectious with a large probability are out of the scope of the present study.
In our graph-based SIR model, we consider a graph \(G\), where each of the \(n\) nodes represents an individual, and there is an edge if the corresponding two individuals are connected. A node (i.e., individual) at any given time can be in one of the five states. Then, in each discrete time round \(t\), all nodes update their state following the updating rule imposed by the virus spread dynamics.
Let \(S(t)\), \(I(t)\), \(R(t)\), \(D(t)\), and \(VC(t)\) respectively denote the set of Susceptible, Infectious, Recovered, Dead, and Vaccinated nodes in the \(t\)-th round of the process. Furthermore, let \(N_{S(t)}(v):=N(v)\cap S(t)\), \(N_{I(t)}(v):=N(v)\cap I(t)\), \(N_{R(t)}(v):=N(v)\cap R(t)\), \(N_{D(t)}(v):=N(v)\cap D(t)\), and \(N_{VC(t)}(v):=N(v)\cap VC(t)\) for a node \(v\in V\).
Starting from an initial configuration, where each node is in one of the aforementioned five states, in each discrete-time round \(t\in\mathbb{N}\), all nodes simultaneously update their state in the following manner, where the _infection rate_\(\beta\), _recovery rate_\(\gamma\), and the weight functions \(\omega(\cdot)\), \(\omega_{i}(\cdot)\), \(\omega_{r}(\cdot)\), and \(\omega_{d}(\cdot)\) are model's parameters and are explained below:
* A susceptible node \(v\) becomes infectious with probability \[\beta\times\omega_{i}(v)\times\frac{\sum_{u\in N_{I(t)}(v)}\omega(\{v,u\})} {\sum_{u\in N(v)}\omega(\{v,u\})}.\] (1)
* An infectious node \(v\) switches to Dead with probability \(\omega_{d}(v)\). If this does not happen, then it switches to recovered independently with probability \(\gamma\;\omega_{r}(v)\). Otherwise, it remains Infectious.
* A recovered, dead, or vaccinated node's status remains unchanged.
**Definition 5**.: _The weight function \(\omega:B\to[a,b]\) assigns a value between \(a\) and \(b\) to each element in \(B\). We are particularly interested in the case where \(B=E\) or \(B=V\)._
In the above description of the model we relied on the following weight functions:
* Infectious function: \(\omega_{i}:V\to[0,1]\)
* Recovery function: \(\omega_{r}:V\to[0,1]\)
* Death function: \(\omega_{d}:V\to[0,0.1]\).
* Edge weight function: \(\omega:E\to[0,1]\)
The weight \(\omega_{i}(v)\) for a node \(v\) indicates how susceptible the node is to the disease. Similarly, the weight \(\omega_{r}(v)\) for a node \(v\) indicates how well the node can recover once it is infected, and \(\omega_{d}(v)\) indicates how likely the node is to die from the disease while being infected. So, each node \(v\) has a _death probability_ \(\omega_{d}(v)\): once it becomes infectious, it dies with probability \(\omega_{d}(v)\) in each time step and survives (i.e., as usual, remains infectious and eventually recovers) with probability \(1-\omega_{d}(v)\). These weights could be a function of different parameters, such as age or sex. The weight \(\omega(e)\) for an edge \(e=\{v,u\}\) models the probability of transmission between the two nodes \(v\) and \(u\).
Our model is a generalization of the original SIR model [35]. If we set \(G\) to be the complete graph \(K_{n}\), let \(\omega(e)=1\) for every edge \(e\), and define \(\omega_{r}(v)=1\), \(\omega_{i}(v)=1\), and \(\omega_{d}(v)=0\) for every node \(v\), then the model is equivalent to the SIR model.
We observe that in our model, the process eventually reaches a configuration where there is no Infectious node, and thus no node will change its state anymore.
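To make the update rule concrete, below is a minimal sketch of one synchronous round of the process, assuming a NetworkX graph whose nodes carry a `state` attribute and the per-node weights `w_i`, `w_r`, `w_d`, and whose edges carry a weight `w`. All attribute and function names, as well as the clipping of the infection probability at 1, are our own illustrative choices rather than part of the model specification.

```python
import random

def simulation_round(G, beta=2.0, gamma=0.6):
    """One synchronous update of the graph-based S/I/R/D/V process."""
    new_state = {}
    for v in G.nodes:
        state = G.nodes[v]["state"]
        if state == "S":
            # Weighted fraction of infectious neighbors, cf. Eq. (1).
            w_inf = sum(G[v][u]["w"] for u in G[v] if G.nodes[u]["state"] == "I")
            w_all = sum(G[v][u]["w"] for u in G[v]) or 1.0  # guard for isolated nodes
            p_inf = min(1.0, beta * G.nodes[v]["w_i"] * w_inf / w_all)
            new_state[v] = "I" if random.random() < p_inf else "S"
        elif state == "I":
            if random.random() < G.nodes[v]["w_d"]:            # death is checked first
                new_state[v] = "D"
            elif random.random() < gamma * G.nodes[v]["w_r"]:  # then possible recovery
                new_state[v] = "R"
            else:
                new_state[v] = "I"
        else:  # Recovered, Dead, and Vaccinated nodes never change state
            new_state[v] = state
    for v, s in new_state.items():   # apply all updates simultaneously
        G.nodes[v]["state"] = s
```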
## IV Problem formulation and Inapproximability result
In this section, we first introduce the Vaccination Problem. Then, building on a reduction from the Densest Subgraph Problem, we prove in Theorem IV.3 that our vaccination problem is NP-hard.
Vaccination Problem
**Input**: A graph \(G=(V_{G},E_{G})\), weight functions \(\omega_{i}(v)\), \(\omega_{d}(v)\), and \(\omega_{r}(v)\) for every node \(v\in V_{G}\) and \(\omega(e)\) for every edge \(e\in E_{G}\), the state of each node (e.g., Susceptible, Infectious, or Recovered), and integers \(k,l^{\prime}\).
**Output**: Is the maximum expected number of surviving nodes at the end of the process equal to \(l^{\prime}\) if only \(k\) nodes can be vaccinated?
Densest Subgraph Problem
**Input**: A connected graph \(H=(V_{H},E_{H})\) and two integers \(k,l\).
**Output**: Is the maximum number of edges in a subgraph induced by \(k\) nodes in \(H\) equal to \(l\)?
**Theorem IV.1** ([36]).: _The Densest Subgraph Problem is NP-hard._
**Definition 6** (convertor).: _We convert a given graph \(H=(V_{H},E_{H})\), with \(V_{H}=\{v_{1},\cdots,v_{n_{H}}\}\) and \(E_{H}:=\{e_{1},\cdots,e_{m_{H}}\}\), to a graph \(G=(V_{G},E_{G})\) and the weight functions \(\omega(\cdot)\), \(\omega_{i}(\cdot)\), \(\omega_{d}(\cdot)\), and \(\omega_{r}(\cdot)\) in the following way. We set_
* \(V_{G}:=X\cup Y\cup Z\) _for_ \(X:=\{x\}\)_,_ \(Y:=\{y_{1},\cdots,y_{n_{H}}\}\)_, and_ \(Z:=\{z_{1},\cdots,z_{m_{H}}\}\)_._
* \(E_{G}:=\{\{x,y_{i}\}:1\leq i\leq n_{H}\}\cup\{\{y_{i},z_{j}\}:v_{i}\in e_{j},1 \leq i\leq n_{H},1\leq j\leq m_{H}\}\)_._
* \(\omega(e)=1\) _for_ \(e\in E_{G}\)
* \(\omega_{i}(v)=1\) _for_ \(v\in V_{G}\)_._
* \(\omega_{r}(v)=0\) _for_ \(v\in V_{G}\)_._
* \(\omega_{d}(v)=0\) _for_ \(v\in X\cup Y\) _and_ \(\omega_{d}(v)=1\) _for_ \(v\in Z\)_._
_We observe that \(n_{G}:=|V_{G}|=n_{H}+m_{H}+1\) and \(m_{G}:=|E_{G}|=n_{H}+2m_{H}\). See Figure 1 for an example._
The convertor basically adds a node \(y_{i}\) for each node \(v_{i}\) in \(H\) and a node \(z_{j}\) for each edge \(e_{j}\) in \(H\). Then, if \(z_{j}\) corresponds to \(e_{j}=\{v_{i},v_{i^{\prime}}\}\), it adds an edge between \(y_{i}\) (and similarly \(y_{i^{\prime}}\)) and \(z_{j}\). Finally, it adds a node \(x\) and connects it to every node \(y_{i}\).
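A minimal sketch of the convertor in NetworkX follows; the node labels and attribute names (`w_i`, `w_r`, `w_d`, `w`) are our own illustrative choices.

```python
import networkx as nx

def convert(H):
    """Build the graph G and weight functions of Definition 6 from H."""
    G = nx.Graph()
    G.add_node("x", w_i=1.0, w_r=0.0, w_d=0.0)
    for v in H.nodes:                                   # one y-node per node of H
        G.add_node(("y", v), w_i=1.0, w_r=0.0, w_d=0.0)
        G.add_edge("x", ("y", v), w=1.0)                # x is joined to every y_i
    for j, (u, v) in enumerate(H.edges):                # one z-node per edge of H
        G.add_node(("z", j), w_i=1.0, w_r=0.0, w_d=1.0) # only z-nodes can die
        G.add_edge(("y", u), ("z", j), w=1.0)
        G.add_edge(("y", v), ("z", j), w=1.0)
    return G
```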
**Definition 7**.: _The length of the shortest cycle (if any) in a graph \(H\) is called the girth of \(H\) and is denoted by \(g(H)\)._
**Lemma IV.2**.: _The Densest Subgraph Problem is polynomial time solvable in the following three cases:_
* \(k\geq n_{H}\)_._
* \(H\) _is a tree (i.e., has no cycle)._
* \(k<g(H)\)_._
Proof.: If \(k\geq n_{H}\), then all nodes can be selected, which will induce a subgraph with \(m_{H}\) edges. Thus, the answer is YES when \(l=m_{H}\) and NO otherwise.
If \(H\) is a tree, then no subgraph induced with \(k\) nodes can have more than \(k-1\) edges because otherwise there exists a cycle that is in contradiction with the definition of a tree. Furthermore, any connected subgraph with \(k\) nodes contains \(k-1\) edges. Thus, if \(l=k-1\), the answer is YES and NO otherwise.
In the third case, likewise, no induced subgraph on \(k<g(H)\) nodes can have more than \(k-1\) edges because otherwise it would contain a cycle of length \(k\) or smaller, which contradicts \(k<g(H)\). Furthermore, any connected subgraph with \(k\) nodes has \(k-1\) edges. Thus, if \(l=k-1\), the answer is YES and NO otherwise.
**Theorem IV.3**.: _The Vaccination Problem is NP-hard._
Proof.: The idea is to apply a polynomial time reduction from the Densest Subgraph Problem to the Vaccination Problem and then use the hardness results from Theorem IV.1.
Let \(H=(V_{H},E_{H})\) and integers \(k,l\) be the input of the Densest Subgraph Problem, which do not satisfy any of the cases in Lemma IV.2. Define \(OPT_{H,k}\) to be the maximum number of edges in a subgraph induced by \(k\) nodes in \(H\). Then, we construct an instance of the Vaccination Problem using the convertor in Definition 6 and let node \(x\) be Infectious and all other nodes be Susceptible. (See Figure 1.) Define \(OPT_{G,k}\) to be the maximum expected number of nodes alive at the end of the process if we can vaccinate only \(k\) nodes.
**Claim 1.** In the above set-up, \(OPT_{G,k}=OPT_{H,k}+n_{H}+1\).
Let \(\mathcal{A}\) be a polynomial time algorithm for the Vaccination Problem. Then, we claim that there is a polynomial time algorithm for the Densest Subgraph Problem. Note that if the input of the Densest Subgraph Problem satisfies one of the cases in Lemma IV.2, then we can efficiently solve the problem. Otherwise, we construct an instance of the Vaccination Problem using the convertor in Definition 6 as described above. Then, according to Claim 1, the answer to the Densest Subgraph Problem is YES if and only if the answer to the Vaccination Problem is YES for the constructed instance with \(k\) and \(l^{\prime}=l+n_{H}+1\). Note that this would give a polynomial time solution to the Densest Subgraph Problem since the convertor clearly runs in time polynomial in the size of the input of the Densest Subgraph Problem and also generates an instance of polynomial size.
It only remains to prove Claim 1. We observe that all nodes in \(X\cup Y\) will never die since \(\omega_{d}\) is equal to \(0\) for all these nodes. Furthermore, all these nodes, if not vaccinated, eventually become Infectious (\(x\) is already Infectious from the beginning) and remain Infectious because \(\omega_{r}(x)=0\). A node in \(Z\) never becomes Infectious if it is vaccinated or both of its neighbors in \(Y\) are vaccinated (recall that each node in \(Z\) has exactly two neighbors in \(Y\)).
We first prove that \(OPT_{G,k}\geq OPT_{H,k}+n_{H}+1\). Consider a set \(S\) of \(k\) nodes in \(H\), which induces a subgraph with \(OPT_{H,k}\) edges. Let us vaccinate the set \(S^{\prime}:=\{y_{i}:v_{i}\in S,1\leq i\leq n_{H}\}\). By construction, for each edge \(e_{j}\) whose endpoints are in \(S\), there is a node \(z_{j}\) in \(G\) whose both neighbors are in \(S^{\prime}\). Thus, all such nodes \(z_{j}\) would never become Infectious. Furthermore, all \(n_{H}+1\) nodes in \(X\cup Y\) never die. Thus, in the end, there will be at least \(OPT_{H,k}+n_{H}+1\) nodes alive.
Figure 1: An example graph \(H\) and the obtained graph \(G\) after applying the convertor in Definition 6. In graph \(G\), red and white nodes correspond to Infectious and Susceptible nodes, respectively.

Now, we prove that \(OPT_{G,k}\leq OPT_{H,k}+n_{H}+1\). Let \(S\) be a node set of size \(k\) in \(G\) such that if \(S\) is vaccinated \(OPT_{G,k}\) nodes will survive. We claim that we can transform \(S\) to a set \(S^{\prime}\) of the same size such that \(S^{\prime}\cap Z=S^{\prime}\cap X=\emptyset\). Since the only node in \(X\) (i.e., node \(x\)) is Infectious, it cannot be vaccinated. Define \(S_{Y}:=S\cap Y\) and \(S_{Z}:=S\cap Z\). There must be a set \(D\subset S_{Y}\) such that the nodes in \(\{v_{i}:y_{i}\in S_{Y}\}\) induce a subgraph with at least \(|S_{Y}|\) edges in \(H\) because otherwise, the number of nodes that survive is at most \(|X|+|Y|+|S_{Y}|-1+|S_{Z}|=1+n_{H}+k-1=n_{H}+k\). Recall that we proved \(OPT_{G,k}\geq OPT_{H,k}+n_{H}+1\), and since \(H\) has a cycle of length \(k\) or smaller (note that we excluded the cases of \(k<g(H)\) and \(H\) being a tree), we have \(OPT_{H,k}\geq k\) (any component including a cycle of length \(g(H)\leq k\) has at least \(k\) edges). This implies that \(OPT_{G,k}\geq k+n_{H}+1\), which results in a contradiction. Therefore, such a set \(D\) must exist. Let \(y_{s}\) be a node such that \(y_{s}\) has a neighbor in \(\{v_{i}:y_{i}\in D\}\) but is not in \(D\) (such a node must exist since \(H\) is connected and \(|D|<n_{H}\); the latter is true because \(|D|\leq|S|=k\) and we excluded the case of \(k\geq n_{H}\)). Let \(z\) be a node in \(S_{Z}\). If we vaccinate \(y_{s}\) instead of \(z\), still at least as many nodes will survive. This is because vaccinating \(z\) will only save node \(z\), and vaccinating \(y_{s}\) will save at least one node \(z^{\prime}\) which is adjacent to \(y_{s}\) (and maybe even more nodes). (Note that if \(z^{\prime}\) is vaccinated, we would have chosen \(z\) to be \(z^{\prime}\).) So far we have proved that there is a set \(S^{\prime}\) of vaccinated nodes such that \(S^{\prime}\cap(X\cup Z)=\emptyset\) and vaccinating it results in the survival of \(OPT_{G,k}\) nodes. Since the \(n_{H}+1\) nodes in \(X\cup Y\) will survive anyway, there are \(OPT_{G,k}-n_{H}-1\) nodes in \(Z\) whose both neighbors in \(Y\) are vaccinated. By construction, the node set \(\{v_{i}:y_{i}\in S^{\prime},1\leq i\leq n_{H}\}\) induces a subgraph with \(OPT_{G,k}-n_{H}-1\) edges in \(H\). This implies that \(OPT_{H,k}\geq OPT_{G,k}-n_{H}-1\), which is equivalent to \(OPT_{H,k}+n_{H}+1\geq OPT_{G,k}\). This finishes the proof.
## V Experimental setup
We have taken \(\omega_{i}(v)\), \(\omega_{d}(v)\) and \(\omega_{r}(v)\) from a uniform random distribution, as we have not made any assumption about the recovery, infection, and death rate distributions which are a function of the studied diseases. As will be discussed, our experiments consistently support certain patterns and observations, regardless of the random choices, which makes them relevant to most setups. However, it would be interesting to study settings tailored for a particular type of disease on our model in future work. Moreover, the value of \(\beta\) is set to \(2\), and \(\gamma\) is set to \(0.6\) in our simulations. These factors are also dependent on the disease's type [2; 6] and different values have been utilized in various setups [16; 18]. It is worth emphasizing that the transmission and recovery rates in our model are conceptually slightly different from previous models. More precisely, the effect of transmission and recovery rate is reduced as both parameters (\(\beta\) and \(\gamma\)) are multiplied by factors that lie between \(0\) and \(1\). Thus, we have chosen the values of these parameters to be aligned with the prior work, but taking the above observation into account.
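As a small illustrative sketch (the attribute names and seeding scheme are our own choices, not part of the paper's setup), the per-node parameters can be initialized as follows:

```python
import random

def init_weights(G, seed=0):
    """Draw the per-node disease parameters from uniform distributions."""
    rng = random.Random(seed)
    for v in G.nodes:
        G.nodes[v]["w_i"] = rng.uniform(0.0, 1.0)   # infectiousness weight
        G.nodes[v]["w_r"] = rng.uniform(0.0, 1.0)   # recovery weight
        G.nodes[v]["w_d"] = rng.uniform(0.0, 0.1)   # death probability
        G.nodes[v]["state"] = "S"                   # everyone starts Susceptible
```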
For our experiments, we utilize both real-world graph data and synthetic graph models. For real-world networks, we rely on publicly available data from SNAP [37]. In particular, we ran our simulations on the Facebook dataset and the Twitter dataset, which are briefly described below.
* **Facebook** is a platform for social networking. All user-to-user connections from the Facebook network are represented in an undirected graph. An edge between two nodes \(v\) and \(u\) indicates that the corresponding individuals are friends. The network has 4039 nodes and 88234 edges.
* **Twitter** is a social networking and microblogging service. Users upload and engage with messages known as "tweets" on this platform. In a graph, a directed edge denotes a following relationship; for example, an edge from node \(v\) to node \(u\) suggests that user \(v\) follows user \(u\). We have converted this graph to an undirected one using the rule: an edge \(e\) exists between \(u\) and \(v\) if there is an edge from node \(v\) to node \(u\) or node \(u\) to node \(v\). It has 81306 nodes and 1299314 edges.
Most real-world networks are unweighted and one needs to introduce a meaningful procedure for weight assignment. Using the communication information of individuals on various real-world networks, the authors in [38] and [39] observed that there is a strong correlation between the number of shared friends of two individuals and their level of communication. Consequently, they proposed the usage of similarity measures, such as Jaccard-like parameters, to approximate the weights of connections between nodes. This is also aligned with the well-studied strength of weak ties hypothesis [40]. This line of research has inspired the choice of the Jaccard index in our model. Therefore, we assign the edge weights according to the Jaccard index [41] in our set-up. More precisely, we set
\[\omega\left(\{v,u\}\right)=\frac{|\hat{N}(v)\cap\hat{N}(u)|}{|N(v)\cup N(u)|}. \tag{2}\]
We use \(|\hat{N}(v)\cap\hat{N}(u)|\) instead of \(|N(v)\cap N(u)|\) in the numerator to ensure that the weight of an edge is never equal to zero.
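A minimal sketch of this weight assignment with NetworkX (the edge attribute name `w` is our own choice):

```python
import networkx as nx

def assign_jaccard_weights(G):
    """Set w({v,u}) per Eq. (2): closed neighborhoods in the numerator."""
    for u, v in G.edges:
        closed_u = set(G[u]) | {u}
        closed_v = set(G[v]) | {v}
        union = set(G[u]) | set(G[v])          # open neighborhoods in the denominator
        G[u][v]["w"] = len(closed_u & closed_v) / len(union)
```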
We should emphasize that the graph data used from online social platforms do not perfectly match our use case since it is possible that two individuals are connected over an online social platform such as Facebook, but they never interact physically (and thus cannot infect each other). However, this choice can be justified by the following three reasons:
* Firstly, the graph data from online social platforms are available in abundance while the graph data from physical connections between people are much more scarce. Using online social network data permits us to conduct a much more extensive and comprehensive set of experiments.
* It is known, cf. [42], that real-world social networks, regardless of their context (for example, the online social networks between people in a particular city, the physical interaction network between individuals in a certain profession, or the interest networks between fans of a particular movie genre), all share certain graph characteristics such as small diameter, scale-free degree distribution, and large clustering coefficient. Thus, while the graph data used do not match the real-world physical interactions perfectly, they should still be a very good approximation since they possess all such desired properties. This is also supported by our experiments on synthetic graph data that we explain later in this section.
* We assign the weights of the edges in the graph according to the Jaccard index. Consider two individuals who are connected online but would never meet physically. In such scenarios, the two individuals are probably not in each other's close circle of friends and do not share many friends. Thus, the edge between them receives a small weight according to the Jaccard index, and consequently they are unlikely to interact in our virus-spreading process.
Another point worth stressing is that, of course, the Facebook and Twitter graphs used are subgraphs of the whole network. The graph data for the whole network (or even a large part of it) are not made available due to privacy reasons. Furthermore, it would not be computationally feasible to run experiments on the full network even if it were available.
Different synthetic random graph models have been proposed to mimic real-world social networks, cf. [42]. Such models are usually tailored to possess fundamental properties consistently observed in real-world networks, such as small diameter and power-law degree distribution. We rely on the very well-established Hyperbolic Random Graph (HRG) model, which generates complex networks with hyperbolic geometry. In the HRG model [43], nodes are embedded in hyperbolic space, and nodes are connected to one another depending on their geometric proximity in that space.
We have generated HRGs such that the number of nodes and edges match with the experimented real-world networks (namely Facebook and Twitter graphs from above) using the Networkit Python Package [44].
To generate HRG, in addition to the number of nodes and edges, one needs to provide the exponent of the power-law degree distribution \(b\) and the temperature \(T\) as the input parameters. It is known that for HRG clustering is maximized at \(T=0\), minimized at \(T=\infty\), and goes through a phase transition at \(T=1\), such that for \(T<1\) the graph exhibits clustering behavior whereas for \(T>1\) the clustering goes to \(0\)[45]. Krioukov et al. [45] demonstrated that if we embed the internet graph into hyperbolic geometry it has temperature \(T=0.6\). Therefore, we also set \(T=0.6\) in our setup. Moreover, we let \(b=2.5\), as it has been empirically observed that in social networks \(2\leq b\leq 3\)[32].
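As a sketch, the HRGs can be generated as follows; we assume NetworKit's `HyperbolicGenerator`, whose parameters we take (in order) to be the number of nodes, the target average degree, the power-law exponent, and the temperature. The exact signature should be checked against the installed NetworKit version.

```python
import networkit as nk

# Match the Facebook graph: 4039 nodes and 88234 edges,
# i.e. an average degree of 2*88234/4039 ~ 43.7.
n, m = 4039, 88234
avg_deg = 2 * m / n

gen = nk.generators.HyperbolicGenerator(n, avg_deg, 2.5, 0.6)  # b = 2.5, T = 0.6
hrg = gen.generate()
print(hrg.numberOfNodes(), hrg.numberOfEdges())
```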
For both Facebook (and Facebook HRG) and Twitter (and Twitter HRG), we assume that initially roughly \(0.5\%\) of nodes are infectious; more precisely, \(20\) randomly selected nodes in the case of Facebook and \(400\) randomly selected nodes in the case of Twitter. These numbers are chosen to ensure that the virus almost surely spreads to a large part of the network; otherwise, it is not very meaningful to inject a vaccination strategy. Furthermore, for the Facebook graph (and the HRG with Facebook parameters) we ran our experiments \(100\) times and used the average outcome, while for the Twitter graph (and the HRG with Twitter parameters), \(10\) repetitions were used due to its larger size. All the experiments were implemented using Python \(3\) and the NetworkX library [46].
## VI Vaccination
The problem of finding efficient and effective vaccination strategies is very challenging. As we proved in Section IV, we cannot hope to obtain a polynomial time optimal algorithm for the Vaccination Problem. Thus, we resort to approximation and heuristic approaches, as most prior work does [15; 31]. In this section, we describe a large set of algorithms, some inspired by classical centrality-based algorithms and some designed by us according to the virus-spreading model.
We assume that we are given the budget to vaccinate up to \(\alpha\) percentage of the population (or equivalently \(k=\lfloor\alpha n\rfloor\) individuals) and the ultimate goal is to maximize the expected number of people alive at the end of the spread.
A natural approach is to use standard algorithms for selecting the most "influential" nodes in a graph, such as those based on the highest degree, highest closeness, or highest betweenness, cf. [12]. We can also consider the weighted versions of these algorithms since our graph is weighted. Along with that, in order to minimize the number of deaths, we have tried algorithms that vaccinate nodes with higher death rates or whose neighbors have higher death rates. Furthermore, we propose three algorithms that rely on different formulations of the recovery, infection, and death rates. Finally, we suggest a hybrid algorithm that combines centrality measures with disease spread parameters.
In our experiments, the \(\alpha\) percentage of nodes with the highest score, according to some scoring mechanism, are vaccinated (see Algorithm VI). Thus, below, we simply need to define what score function is used in each strategy. For example, in Degree algorithm, the score of a node is its degree. We should emphasize that none of our algorithms assumes any knowledge of the state of the network (i.e., which nodes are Infectious/Recovered/Dead). In other words, all algorithms are "source-agnostic".
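A minimal sketch of this shared selection step, assuming each strategy is implemented as a function returning a dictionary of node scores (function and attribute names are our own):

```python
import math

def vaccinate(G, score_fn, alpha):
    """Vaccinate the floor(alpha * n) highest-scoring non-Infectious nodes."""
    k = math.floor(alpha * G.number_of_nodes())
    scores = score_fn(G)
    candidates = [v for v in G.nodes if G.nodes[v]["state"] == "S"]
    for v in sorted(candidates, key=scores.get, reverse=True)[:k]:
        G.nodes[v]["state"] = "V"
```

For instance, the Degree strategy corresponds to `score_fn = lambda G: dict(G.degree())`, while the Random strategy simply assigns each node an independent uniform score.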
All 16 vaccination strategies that have been put to the test to maximize the final number of alive nodes are listed below, along with their descriptions:
1. **Random**: This algorithm randomly chooses nodes for vaccination (i.e., assigns a random score to each node).
2. **Degree**: It measures the number of nodes to which that node is connected (as defined in Definition 3): \[d\left(v\right):=\left|N\left(v\right)\right|.\] (For example, in this algorithm, Score\(\left(v\right)=d(v)\) and the \(\lfloor\alpha n\rfloor\) nodes with the highest degree are vaccinated.)
3. **Weighted Degree**: For a node \(v\), we measure the sum of the weights of all its adjacent edges. \[wd(v):=\sum_{u\in N(v)}\omega(\{v,u\}).\]
4. **Eigenvector**: A node's significance in a network can be determined by looking at how it is connected to the other significant nodes in the network. In other words, the sum of the centralities of a node's neighbors defines its centrality. More precisely, given the adjacency matrix \(A\) of a graph \(G=(V,E)\), we define \[x(v):=\frac{1}{\lambda}\sum_{u\in V}A_{u,v}x(u)\] Then, \(X=[x(v_{1}),x(v_{2}),...,x(v_{n})]^{T}\) is the solution of the equation \(AX=\lambda X\) and the \(i\)-th component of \(X\) will give eigenvector centrality score of node \(v_{i}\).
5. **Weighted Eigenvector**: Here, the significance is weighted by edge-weight, given by the function \(\omega\). To that end, we use the weighted adjacency matrix which is the same as the original one except that we use the weight \(\omega(\{v,u\})\) when there is an edge between \(v\) and \(u\) instead of 1.
6. **Closeness**: Based on the notion that a node is important if it is close to other nodes in the network, closeness centrality is a measure of a node's relevance in a network. The inverse of the sum of the shortest distances between a node and every other node in the network is then used to establish a node's centrality. Formally, the closeness centrality of a node is given by: \[c(v):=\frac{n-1}{\sum_{u\neq v}d(u,v)}\] where \(d(u,v)\) is the length of the shortest path between \(u\) and \(v\), disregarding the weights.
7. **Weighted Closeness**: In the weighted closeness, the distance between two nodes is adjusted with proper weights. As higher edge weights imply that two nodes are closer, we subtract the edge weights from 1 to compute the weighted closeness centrality score. So, when measuring \(d(u,v)\), the weights \(1-\omega(\{v,u\})\) are used in place of unit edge lengths.
8. **Betweenness**: Betweenness centrality is a measure of a node's importance in a network based on the premise that a node is important if it lies on many shortest paths between other nodes in the network. The number of overlaps with the shortest paths between pairs of nodes is then used to determine how central a node is: \[b(v):=\sum_{s\neq v\neq u}\frac{\sigma_{su}(v)}{\sigma_{su}}\] where \(\sigma_{su}\) the total number of shortest paths from node \(s\) to node \(u\) and \(\sigma_{su}(v)\) is the number of those paths that pass through \(v\).
9. **Weighted Betweenness**: Similarly, in the case of weighted betweenness centrality we have considered the weighted shortest paths where the weight of an edge \(\{v,u\}\) is set to \(1-\omega(\{v,u\})\).
10. **Death**: The score of a node \(v\) is equal to \(\omega_{d}(v)\). Thus, nodes with the highest death rate are vaccinated.
11. **Neighbors' Death**: If a particular node \(v\) is infected then Susceptible nodes in its surroundings (i.e, \(u\in N_{S(t)}(v)\)) have a chance of becoming Infectious and subsequently would die according to their death probability. Hence, we try to identify the nodes whose neighbors have a higher death rate. So, we pick nodes with the highest value of \[nd(v):=\sum_{u\in N(v)}\omega_{d}(u).\]
12. **Weighted Neighbors' Death**: In this algorithm, we have modified the aforementioned algorithm with the weights on the edges between two nodes as they determine the probability of disease transmission. So, we use \[wnd(v):=\sum_{u\in N(v)}(\omega(\{v,u\})\cdot\omega_{d}(u)).\]
13. **Expected Fatality 1**: For a node \(v\) let us define the _expected fatality 1_ of node \(v\) to be \[ef_{1}(v):=\sum_{u\in N(v)}\frac{\omega(\{v,u\})\cdot\omega_{d}(u)}{\sum_{w\in N (u)}\omega(\{w,u\})}+\omega_{d}(v)\] Recall from Equation (1) that the probability of a Susceptible node \(u\) becoming Infectious is proportional to \[\frac{\sum_{v\in N_{I(t)}(u)}\omega(\{u,v\})}{\sum_{v\in N(u)}\omega(\{u,v\})}.\] Thus, the contribution of a node \(v\) to each neighbor \(u\)'s infection probability is in the form \(\frac{\omega(\{v,u\})}{\sum_{w\in N(u)}\omega(\{w,u\})}\). We multiply that with the death probability of \(\omega_{d}(u)\). Furthermore, we add the death probability of node \(v\) itself to the sum as well. Overall, \(ef_{1}(v)\) is meant to account for the expected death node \(v\) that could potentially cause in its closed neighborhood once it is Infectious.
14. **Expected Fatality 2**: For a node \(v\)_Expected Fatality 2_ is defined to be: \[ef_{2}(v):=\sum_{u\in N(v)}\frac{\omega(\{v,u\})\cdot\omega_{d}( u)}{\sum_{w\in N(u)}\omega(\{w,u\})}\] \[+1-\omega_{d}(v)-\gamma\cdot\omega_{r}(v).\] This is the same as \(ef_{1}(v)\), but we replace \(\omega_{d}(v)\) by \(1-\omega_{d}(v)-\gamma\cdot\omega_{r}(v)\), which is the probability that node \(v\) remains Infectious. This might be relevant since the longer it remains Infectious (without becoming recovered/dead), it could potentially infect/kill more nodes.
15. **Expected Fatality 3**: We define _Expected Fatality 3_ of node \(v\) to be : \[ef_{3}(v):=\] \[\sum_{u\in N(v)}\frac{\omega(\{v,u\})\cdot\omega_{d}(u)\cdot \omega_{i}(u)\cdot(1-\omega_{d}(v))}{\sum_{w\in N(u)}\omega(\{w,u\})}.\] This is again conceptually similar to \(ef_{1}(v)\); however, we multiply by \(\omega_{i}(u)\cdot(1-\omega_{d}(v))\). The probability \(\omega_{i}(u)\) accounts for neighbor \(u\) actually becoming Infectious. The probability \(1-\omega_{d}(v)\) emphasizes the importance of the spreader node \(v\) not dying and continuing to spread.
16. **Hybrid Algorithm**: In this algorithm, we give importance to network structure as well as the virus spreading model parameters. We rank nodes according to their rankings in the Betweenness score, \(b(v)\), and Expected Fatality 3, \(ef_{3}(v)\), and give both scores equal weights.
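As an illustration of the last two scoring mechanisms, the sketch below computes Expected Fatality 3 and one possible equal-weight rank combination for the Hybrid Algorithm; the exact rank-aggregation rule and all names are our own assumptions.

```python
import networkx as nx

def ef3_scores(G):
    """Expected Fatality 3 for every node, following the formula above."""
    wdeg = {u: sum(G[u][w]["w"] for w in G[u]) for u in G.nodes}
    return {
        v: sum(
            G[v][u]["w"] * G.nodes[u]["w_d"] * G.nodes[u]["w_i"]
            * (1.0 - G.nodes[v]["w_d"]) / wdeg[u]
            for u in G[v]
        )
        for v in G.nodes
    }

def hybrid_scores(G):
    """Equal-weight combination of Betweenness and Expected Fatality 3 ranks."""
    def ranks(scores):
        order = sorted(scores, key=scores.get, reverse=True)
        return {v: r for r, v in enumerate(order)}
    rb = ranks(nx.betweenness_centrality(G))
    rf = ranks(ef3_scores(G))
    # Smaller average rank means more important; negate so the highest score wins.
    return {v: -(rb[v] + rf[v]) / 2.0 for v in G.nodes}
```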
## VII Simulation Results
For each algorithm, after vaccinating the nodes, we have run the epidemic simulation and recorded the final number of deaths. We have found that some vaccination strategies like Random, Eigenvector, and Death do not perform well in any of the setups. On the other hand, Betweenness, Expected Fatality 3, and Hybrid Algorithm perform well in most of the scenarios. These results are summarized in Figures 2 and 3.
**Definition 8**.: _For a graph \(G\), the survival ratio is defined as the proportion of the nodes that have not died during the epidemic out of the number of initial nodes in the graph._
For each of the studied networks, the performance of different vaccination policies is discussed below in more detail:
* **Facebook**: When no vaccination is applied and we let the disease spread we found the survival ratio to be \(0.817\). From Figure 2, we can see that, in \(5\%-10\%\) vaccination, Closeness, Betweenness, and Weighted Betweenness performed very well. From \(15\%-60\%\) vaccination, the Hybrid Algorithm is the best performer. Also after \(20\%\) vaccination Expected Fatality 2 performed very well. After \(50\%\) vaccination we found Degree, Betweenness, Weighted Betweenness, and Expected Fatality 3 to perform well. Throughout, the Eigenvector, Weighted Eigenvector, and Random performed poorly. The performance of the Death algorithm increased significantly towards higher vaccination percentages.
* **Facebook HRG**: In case of no vaccination, we found the survival ratio to be \(0.877\). In the \(5\%-10\%\) vaccination range, Betweenness and Weighted Betweenness performed very well. The hybrid algorithm performed extremely well from \(15\%\) vaccination. The performance of Closeness improves significantly after \(35\%\) vaccination, while Expected Fatality 2 also starts to perform better after \(40\%\) vaccination. Towards \(55\%-60\%\) vaccination, almost every algorithm performs very well except Random, Weighted Eigenvector, and Death.
* **Twitter**: Without any vaccination, \(76.8\%\) nodes survived in the epidemic. The hybrid algorithm outperformed other algorithms in lower vaccine percentages as well as in higher vaccination percentages. From \(40\%\) we find Degree, Weighted Degree, Betweenness, Weighted Betweenness, Neighbors' Death, Weighted Neighbors' Death, Expected Fatality 1, Expected Fatality 2, Expected Fatality 3, and Hybrid Algorithm to perform almost equal to each other with a very high survival ratio. But, the Random and Death algorithms did not perform well.
* **Twitter HRG**: 66.4% of nodes survived the epidemic when there was no vaccination. Initially, in the 5%-10% range, the Hybrid Algorithm, Neighbors' Death, and Betweenness performed very well. Although the Hybrid Algorithm is not consistently the best performer across all vaccination percentages, it performed very well and ranked among the top 3 algorithms in terms of survival ratio. We observe that Weighted Degree performs best in the 20%-30% vaccination range. From the 40% vaccination range onwards, we see Degree, Weighted Degree, Weighted Eigenvector, Betweenness, Neighbors' Death, Weighted Neighbors' Death, Expected Fatality 2, Expected Fatality 3, and the Hybrid Algorithm perform almost similarly. On the other hand, algorithms like Random, Eigenvector, and Death performed very badly.
From the results, we can fairly say that the algorithms behave similarly in the real networks and their HRG counterparts. Though the performance of the Weighted Eigenvector was not very good in the other networks, it performed well on Twitter HRG. We found Expected Fatality 2 to perform very well and consistently on Facebook and Twitter HRG, and in the other two scenarios we found a sharp increase in its performance around the 35%-40% vaccination range. Moreover, we found that the weighted algorithms mostly behaved similarly to the non-weighted ones. It is also expected that vaccinating important nodes is far better than Random vaccination, which is why the latter performed very poorly. Also, vaccinating only the people with the highest death rate allows the disease to spread to a larger population, indicating that other network and model parameters need to be considered for an effective strategy. The standard deviation of each result is reported in Appendix A.
## VIII Discussion
In this work, we have provided a thorough evaluation of several vaccination tactics for limiting simulated outbreaks on real-world and synthetic graph data. In order to determine the most effective method for stopping the spread of diseases, our research set out to build successful strategies and compare them with classical centrality-based and death-rate-based algorithms. Our proposed algorithms turned out to be more successful at greater vaccination rates. However, the closeness and betweenness centrality measures generally fared quite well. These findings show that including such centrality metrics in vaccination plans is essential for effectiveness. It is important to note that these results are robust, as the experiments were repeated multiple times, so the effect of randomness is minimal. Moreover, our vaccination strategies do not depend upon information about the initially infected nodes.
Figure 2: Survival Ratio (along the y-axis) for different percentages of vaccination (along the x-axis) according to sixteen different algorithms in (a) Facebook and (b) Facebook HRG network.

One significant contribution is the hybrid algorithm's higher performance when our best model-based approach and the betweenness centrality measure were combined. In most cases, this hybrid algorithm performed better than other approaches, demonstrating the potential advantages of combining various methods into a single, coherent approach. Our findings are in line with other research that discovered centrality indicators to be useful in immunization tactics [47]. The hybrid algorithm's effectiveness lends credibility to the idea that integrating several approaches can result in better epidemic containment [48; 49], especially if such approaches take into account both graph structure properties and fundamental characteristics of the virus-spreading process.
An interesting by-product of our experiments is the observation that, based on the epidemic spreading dynamics and the performance of the various vaccination strategies, the results on the synthetic graphs are quite similar to those on the real networks, which supports the idea that the HRG model captures real-world networks to a very good extent [43].
Although our study has shown how successful the hybrid algorithm is, there are certain drawbacks to be aware of. Firstly, due to a lack of information regarding the disease parameters, \(\omega_{i}(v)\), \(\omega_{d}(v)\), and \(\omega_{r}(v)\) were taken from a uniform distribution. However, in reality, these weights depend on various parameters such as age and sex and may vary substantially across different diseases. Another issue is that these physiological factors may be impossible to know beforehand, although this can be mitigated by estimating them from comorbidities, age, and other observable parameters. Secondly, while real-world contact networks are frequently dynamic and ever-evolving, our study concentrated on static networks. Investigating how our suggested tactics perform in dynamic networks may offer valuable insights for developing more flexible and reliable immunization systems. Moreover, there might be differences between the real-world physical connections of people, through which the epidemic spreads, and online social networks, cf. [50]. Thirdly, we have not considered re-infection or infection after vaccination, which may occur in real-world scenarios [51]. To confirm the applicability of our findings in various setups, additional study is required.
The viability of applying these algorithms in realistic contexts must also be taken into account. The total effect of these tactics can be considerably impacted by elements including vaccine availability, logistical difficulties, and public acceptability of immunization programs. Future research should therefore focus on incorporating these factors into the formulation and assessment of vaccination regimens.
In conclusion, our research has shown the possibility of integrating different approaches in creating more potent vaccination schemes. In particular, the Hybrid vaccination algorithm showed encouraging outcomes in containing simulated epidemics. It is essential that we use the power of computational tools and network analysis to develop creative public health protection policies as we continue to face the threat of infectious illnesses. Taken together with testing, contact tracing, and quarantining, this method can prove powerful in containing future pandemics.

Figure 3: Survival Ratio (along the y-axis) for different percentages of vaccination (along the x-axis) according to sixteen different algorithms in (a) Twitter and (b) Twitter HRG network.
## Appendix A Standard deviation of result
For different percentages of vaccination, we have reported the standard deviation of the Hybrid algorithm in Table 1. These results are in terms of the number of nodes where Facebook and Facebook HRG have 4039 nodes and Twitter and Twitter HRG have 81306 nodes. As one might expect the standard deviation decreases as the vaccination percentage increases. Similar behavior was observed for the other 15 algorithms; thus, to avoid redundancy, they are not included here.
## Data availability
The data supporting this study are available on request.
|
2304.06562 | On the use of dielectric elements in axion searches with microwave
resonant cavities | This study explores the primary effects of dielectric materials in a resonant
cavity-based search for axion dark matter. While dielectrics prove beneficial
in numerous cases, their incorporation may lead to less-than-optimal
performance, especially for the lowest TM mode. Additionally, the stronger
confinement of the electric field inside the dielectrics can exacerbate mode
mixings, in particular for higher-order modes. Case studies have been carried
out using a combination of analytical solutions and numerical simulations. The
findings indicate dielectric cavities employing the $\text{TM}_{010}$ mode
experience a significant reduction in sensitivity when compared to a similar
search conducted in a cavity at equivalent frequency using no dielectrics. | Xiran Bai, Michael J. Jewell, Steve K. Lamoreaux, Reina H. Maruyama, Karl van Bibber | 2023-04-13T14:16:32Z | http://arxiv.org/abs/2304.06562v1 | # On the use of dielectric elements in axion searches with microwave resonant cavities
###### Abstract
This study explores the primary effects of dielectric materials in a resonant cavity-based search for axion dark matter. While dielectrics prove beneficial in numerous cases, their incorporation may lead to less-than-optimal performance, especially for the lowest TM mode. Additionally, the stronger confinement of the electric field inside the dielectrics can exacerbate mode mixings, in particular for higher-order modes. Case studies have been carried out using a combination of analytical solutions and numerical simulations. The findings indicate dielectric cavities employing the TM\({}_{010}\) mode experience a significant reduction in sensitivity when compared to a similar search conducted in a cavity at equivalent frequency using no dielectrics.
## I Introduction
By introducing a dynamic phase term in the QCD Lagrangian, Peccei and Quinn provided an explanation of Charge-Parity conservation in strong interactions [1; 2], which was further interpreted by Wilczek as implying a new pseudoscalar particle, the axion. The possibility of an axion mass at the weak scale was quickly ruled out experimentally; however, there is no other direct standard model constraint on the lower mass limit. Sufficiently light axions are a natural candidate for dark matter, as originally proposed by Sikivie [3]. To detect dark matter axions, Sikivie proposed the haloscope technique, in which axions are coupled to the electromagnetic fields in a resonant cavity through the inverse Primakoff process, due to the application of a strong static magnetic field [4]. The frequency of the oscillating field is given by the axion mass \(m_{a}\) as \(\nu=m_{a}c^{2}/h\). The on-resonance axion conversion power in such a cavity is proportional to
\[P_{\rm ax}\propto g_{a\gamma\gamma}^{2}\frac{\rho_{a}}{m_{a}}B_{0}^{2}VC_{mnl }Q_{0} \tag{1}\]
where \(g_{a\gamma\gamma}\) is the axion-photon coupling constant, \(\rho_{a}\) is the local axion mass density, \(B_{0}\) is the strength of the static magnetic field, \(V\) is the effective volume of the cavity, and \(Q_{0}\) is the unloaded cavity quality factor. \(C_{mnl}\) is the normalized form factor describing the coupling of the axion to a specific mode labeled by \(mnl\), and is given by
\[C_{mnl}=\frac{|\int_{V}\mathbf{B_{0}}(\mathbf{x})\cdot\mathbf{E}(\mathbf{x})\,dx^{3}|^{2}}{B_ {0}^{2}V\int_{V}\epsilon_{0}\epsilon_{d}(\mathbf{x})|\mathbf{E}(\mathbf{x})|^{2}\,dx^{3}}. \tag{2}\]
where \(\mathbf{E}(\mathbf{x})\) is the oscillating electric field vector amplitude of the particular mode, \(\epsilon_{0}\) is the permittivity of free space, and \(\epsilon_{d}(\mathbf{x})\) represents the spatial dependence of the relative permittivity, which we will take as lumped elements with constant properties.
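As a quick numerical illustration of Eq. (2) (ours, not the paper's), the form factor of the TM\({}_{010}\) mode of an empty cylindrical cavity in a uniform axial magnetic field can be evaluated directly from the Bessel-function field profile; it reproduces the familiar value \(C_{010}=4/X_{01}^{2}\approx 0.69\). Here we set \(B_{0}=\epsilon_{0}=\epsilon_{d}=1\) and use a unit-radius, unit-length cavity.

```python
import numpy as np
from scipy import integrate, special

X01 = special.jn_zeros(0, 1)[0]            # first zero of J0, ~2.405

# E_z(rho) = J0(X01 * rho) on a cavity with R = L = 1.
num, _ = integrate.quad(lambda r: special.j0(X01 * r) * r, 0.0, 1.0)
den, _ = integrate.quad(lambda r: special.j0(X01 * r) ** 2 * r, 0.0, 1.0)

V = np.pi                                   # pi * R^2 * L
C010 = (2 * np.pi * num) ** 2 / (V * 2 * np.pi * den)
print(C010, 4 / X01 ** 2)                   # both ~0.69
```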
Because the axion's mass is _a priori_ unknown, a search for galactic halo axion dark matter requires tuning the cavity to different resonant frequencies. To achieve some specified sensitivity to \(g_{a\gamma\gamma}\) over some range of frequencies, the scan rate is proportional to
\[\frac{d\nu}{dt}\propto V^{2}C_{mnl}^{2}Q_{0}. \tag{3}\]
Because the goal is to scan as quickly as possible while attaining a useful sensitivity to \(g_{a\gamma\gamma}\), we define the frequency-dependent figure of merit for a cavity design,
\[\mathcal{F}(\nu)=V^{2}C_{mnl}^{2}Q_{0}\equiv\mathcal{F} \tag{4}\]
which needs to be maximized for an optimal search.
Recently, there has been a growing interest in the axion as a dark matter candidate, and a number of axion experiments represent the exploration of the use of dielectric materials [5; 6; 7; 8; 9; 10]. Compared to metallic tuning elements, which in a practical sense change the internal cavity dimension and hence its resonant frequency, dielectric tuning elements allow the experimental search to lower frequency axions with the same cavity. Furthermore, if placed strategically, dielectric materials can increase the quality factor by reducing the power loss along the cavity's metallic wall [7; 11]. Additionally, because TEM modes only exist in structures with a central conductor, replacing the metal tuning element with dielectrics can eliminate the TEM mode mixing [12]. Furthermore, in searches utilizing higher-order modes, dielectric materials are used to improve the form factor by suppressing the opposite-phase electric field components [8; 10; 13]. While dielectrics offer benefits in many cases, there are potential drawbacks to be considered. As we will show, introducing dielectrics causes energy concentration into the dielectrics, which may reduce the electric field that couples to the axion and exacerbate TE mode crossings. These factors need to be carefully taken into account when designing haloscope cavities.
Although most of this knowledge had been established in the 1980s, the complete and full effects of dielectrics have not been coherently and simultaneously addressed together in
the existing literature. In this study, we present an answer to a possible question, how much dielectric is too much? Pre-existing cavities can utilize dielectrics to move to a lower frequency range, but the impact of this has never been fully analyzed. Additionally, the role of the cavity is generally recognized as an impedance-matching system that, depending on the specific model, converts an "energy current" into electromagnetic field energy within the cavity; hence we want the highest impedance possible. This point has been recently studied [14] with a model slightly different from ours, however the full range of effect for cavity resonators was not addressed. With this in mind, the inclusion of extra capacitance in a resonator tends to lower its impedance (see Appendix B for a lumped circuit model). We will explore the following effects due to the inclusion of dielectrics in a cavity:
1. The energy density in a dielectric-loaded cavity tends to reside in the \(\mathbf{D}=\varepsilon\mathbf{E}\) field, for a given cavity energy density, which is given by the volume integral of \(\mathbf{D}\cdot\mathbf{E}\). The effect is contained in Eq. (2), which tends to reduce the cavity form factor using the lowest TM mode.
2. The presence of dielectrics tends to reduce the electric field for a given cavity energy, and because the axion couples to the electric field, this directly reduces the rate of axion conversion to electromagnetic photons. This effect is implicit in the definition of the form factor.
3. The electromagnetic field dispersion relationship is modified (lower velocity of light) with dielectrics, which leads to a decrease in the resonant frequency for a fixed cavity volume or, conversely, a reduction in the cavity volume for a specific resonant frequency.
4. The inclusion of dielectrics can increase \(Q_{0}\) by lowering the resistive loss at the cavity wall if placed deliberately.
5. Spatially distributed dielectrics increase susceptibility to spurious TE modes. The stronger confinement of the electric fields inside the dielectric can exacerbate TE mode mixing, resulting in degraded axion sensitivity in those regions of the frequency space.
In the following, we will analyze the above effects, first separately then all together. The organization of this work is as follows. The discussion of a limiting case to illustrate the effect of the dielectric in a resonant cavity is presented in Sec.II. Sec.III explores more realistic use cases of dielectrics: the field solutions for a dielectric tuning rod cavity and a dielectric shell cavity are formulated and compared with a metal tuning rod cavity. The resulting comparison on the figure of merit is provided in Sec.IV. Finally, design considerations for using dielectrics in haloscopes are summarized in Sec.V.
## II The effects of dielectrics: a limiting case
To understand how dielectrics in a haloscope impact the axion scan rate, one needs to consider the effects of dielectrics on \(V\), \(C_{mnl}\), and \(Q_{0}\) in Eq.4. These effects are best illustrated by considering a simple limiting case where the entire cylindrical metal cavity is uniformly filled with dielectrics with a relative permittivity of \(\varepsilon_{d}>1\).
We will first consider the effect on cavity size by determining the cavity length and radius scaling required to keep the resonant frequency constant as the dielectric is added to the cavity. The cavity that contains dielectric will necessarily have a lower volume. The E-field of the TM\({}_{0n0}\) modes in a cylindrical cavity can be described by the Bessel function of the first kind \(J_{0}(\rho)\). Assuming the gap between the dielectric and the cavity wall is small and almost negligible, the field is determined by the boundary condition that it vanishes outside the dielectric at the metal surface, namely \(\mathbf{E}(\rho=\rho_{c})=0\), where \(\rho_{c}\) is the radius of the cavity, which is given by
\[\rho_{c}=\frac{X_{0n}}{\sqrt{\varepsilon_{d}}k_{0}} \tag{5}\]
where
\[k_{0}=\sqrt{\mu_{0}\varepsilon_{0}}\nu=\nu/c\]
is the free-space wave number, with \(\mu_{0}\) and \(\varepsilon_{0}\) the magnetic permeability and electric permittivity of the vacuum and \(X_{0n}\) is the \(n\)th root of the zeroth order Bessel function.
To calculate how much volume is lost for the lowest TM mode in this limiting case, it can be seen from Eq. 5 that the dielectric material reduces the radius by a factor of \(\varepsilon_{d}^{-1/2}\). While the length does not directly affect the frequency of the TM\({}_{010}\) mode, let us assume a finite cavity length \(L\) and scale it and the radius together. As shown in Appendix A, this restriction limits the number of mode crossings, which increase with the aspect ratio \(L/\rho_{c}\) and cause regions of insensitivity. Although there is no tuning in this limiting case, we adopt a fixed aspect ratio \(L/\rho_{c}\) as a general rule, as it will be critical to later studies. The resulting reduction in volume is
\[\frac{\rho_{c}^{2}L}{\rho_{0}^{2}L_{0}}\sim\varepsilon_{d}^{-3/2} \tag{6}\]
with \(\rho_{0}\) and \(L_{0}\) denoting the radius and length of an empty cavity with the same resonant frequency. Dielectric materials such as sapphire exhibit an extremely low \(\tan\delta\) (\(<10^{-6}\)) at cryogenic temperatures and are often used in dielectric resonators [15]. For a sapphire-filled cavity with \(\varepsilon_{d}\) of 10, the volume has a factor of \(\sim 32\) reduction compared to the larger empty cavity operating at the same frequency.
Furthermore, a reduction in the form factor of the TM\({}_{010}\) mode occurs due to the electric field being reduced for a given energy density in the cavity. It is evident from Eq. (2) that the form factor of the dielectric-filled cavity is reduced by a factor \(\varepsilon_{d}\). In the case of sapphire, this results in a reduction in form factor by 10.
Finally, to calculate the unloaded quality factor, we use
\[Q_{0}=\omega\frac{U}{P_{\text{loss}}} \tag{7}\]
which relates the energy stored in the cavity to the power loss (\(P_{\text{loss}}\)). For the purpose of this limiting case, we will ignore the
dielectric loss from high-quality materials for now and only consider the metallic surface loss, which is given by
\[P_{\text{loss}}\approx P_{\text{surface}}=\frac{1}{2\sigma\delta}\int_{S}\left|H _{\parallel}\right|^{2}d^{2}x \tag{8}\]
where \(\sigma\) is the surface conductivity, \(\delta\) is the skin depth, and \(H_{\parallel}\) is the magnetic field parallel to the metal surfaces. Assuming the background electric field from axion conversion is represented by \(\mathbf{E_{0}}\), we now calculate both the stored energy and the energy loss rate for that field. As we have assumed the cavity length is very long, the energy density per unit length is approximately constant. To keep the frequency constant, the cavity length \(L_{d}\) and radius \(\rho_{c}\) each scale as \(\varepsilon_{d}^{-1/2}\) as already discussed.
From Maxwell's equations in the presence of dielectrics, we have, for the dielectric-filled cavity,
\[\mathbf{\nabla}\times\mathbf{E}=-i\omega\mathbf{B}. \tag{9}\]
Due to the large extent of the cavity, we assume all components of the electric field are zero except for the z component given by \(E_{z}(\rho)=E_{z0}J_{0}\left(X_{01}\rho/\rho_{c}\right)\). Because \(E_{\parallel}\) is continuous across the dielectric boundary this gives
\[\frac{i}{\omega}\mathbf{\nabla}\times\mathbf{E}=\frac{i}{\omega}E_{0z}\frac{X_{01}J_{1 }\left(X_{01}\right)}{\rho_{c}}=\mathbf{B}=\frac{1}{\mu_{0}}H_{\parallel}. \tag{10}\]
Thus, when \(\rho=\rho_{c}\), \(H_{\parallel}\) is constant, the surface integral from Eq. 8 gives
\[\int_{S}\left|H_{\parallel}\right|^{2}d^{2}x\propto E_{0z}^{2}2\pi\rho_{c}^{- 1}L_{d}\left(X_{01}J_{1}\left(X_{01}\right)\right)^{2}. \tag{11}\]
\(U\) is determined by the volume integral of \(\mathbf{E}\cdot\mathbf{D}=\varepsilon_{d}E_{z}^{2}\), (taking \(\varepsilon_{d}=1\) for the empty cavity) as
\[U=\frac{\varepsilon_{0}\varepsilon_{d}}{2}L_{d}\int E_{0z}^{2}J_{0}\left(X_{01}\rho/\rho_{c}\right)^{2}2\pi\rho\,d\rho=\pi\varepsilon_{0}\varepsilon_{d}E_{0z}^{2}L_{d}\rho_{c}^{2}\int_{0}^{1}J_{0}^{2}\left(X_{01}x\right)x\,dx. \tag{12}\]
We therefore find that
\[Q_{0}=\omega\frac{U}{P_{surface}}\propto\varepsilon_{d}\rho_{c}^{3}\propto \varepsilon_{d}^{-1/2}. \tag{13}\]
This shows that \(Q_{0}\) is also reduced, scaling as \(\varepsilon_{d}^{-1/2}\). While dielectrics have the potential to increase the Q, this potential is not realized in this limiting case because the field confined in the dielectric remains too close to the cavity's metal wall.
Altogether, we find that in this limiting case, the addition of a dielectric dramatically suppresses the figure of merit \(\mathcal{F}\) (hence also the scan rate) from Eq. (4) by a factor of
\[\mathcal{F}\left(C_{010}^{2}V^{2}Q_{0}\right)\rightarrow\varepsilon_{d}^{-2} \varepsilon_{d}^{-3}\varepsilon_{d}^{-1/2}\mathcal{F}=\varepsilon_{d}^{-11/2} \mathcal{F} \tag{14}\]
indicating a significant reduction that grows with \(\varepsilon_{d}\); if sapphire (\(\varepsilon_{d}\sim 10\)) is used as the filling material, this corresponds to a factor of \(\sim 316,000\) reduction in \(\mathcal{F}\). While this simplified example is not meant as a realistic haloscope design, it demonstrates the main challenges which can arise when dielectric materials are used in cavity experiments. Alternatively, the reduction from dielectrics can be understood through impedance matching to the axion field. In this case, the addition of dielectrics makes the impedance matching between the cavity and the axion source less efficient, as shown in Appendix B.
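These suppression factors are simple to evaluate numerically. The following sketch (illustrative only; \(\varepsilon_{d}=10\) is the sapphire value assumed in the text) reproduces the individual factors and their product.

```python
# Quick check of the scaling factors in Eqs. (6), (13), and (14) for a
# uniformly filled cavity; eps_d = 10 is the assumed sapphire permittivity.
eps_d = 10.0

volume_factor = eps_d ** -1.5      # V  -> eps_d^(-3/2) V   (Eq. 6)
form_factor = eps_d ** -1.0        # C  -> eps_d^(-1)  C
quality_factor = eps_d ** -0.5     # Q0 -> eps_d^(-1/2) Q0  (Eq. 13)

# F ~ C^2 V^2 Q0, so the combined suppression is eps_d^(-11/2)  (Eq. 14).
total = form_factor**2 * volume_factor**2 * quality_factor
print(f"volume reduction  : {1 / volume_factor:.1f}x")    # ~32x
print(f"form-factor loss  : {1 / form_factor:.1f}x")      # 10x
print(f"Q0 reduction      : {1 / quality_factor:.2f}x")   # ~3.2x
print(f"scan-rate penalty : {1 / total:.0f}x")            # ~316,000x
```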
Having examined the lowest-order mode in a dielectric-filled cavity, let us briefly shift our focus to higher-order modes. In these cases, dielectrics can offer unique benefits: as mentioned in the previous section, the field concentration within dielectrics can augment the form factor by suppressing the opposite-phase electric field components typically associated with higher-order modes. However, higher-order modes are more susceptible to mode crossings with spurious TE modes, and the presence of dielectrics may intensify the issue through mode confinement and radius reduction. The problem of mode mixing further worsens for cavities with larger longitudinal aspect ratios (See Appendix A for more details). In this limiting case of a dielectric-filled cavity, this necessitates a longitudinal aspect ratio of:
\[\frac{L}{\rho_{0}}\lesssim\frac{\pi^{2}Q_{L}}{600X_{0n}^{3}}\varepsilon_{d}^{-1/2}. \tag{15}\]
where \(Q_{L}\) is the loaded quality factor. A wave whose wavelength is set by the cavity length \(L\) corresponds to the frequency of the first spurious mode, and as the cavity is made longer, more such modes enter the region of interest. This restriction is more stringent for higher-order modes, as the allowed \(L\) scales as \(X_{0n}^{-3}\) and \(X_{0n}\) grows with increasing \(n\). Additionally, for this limiting case, a radius reduction scaling with \(\varepsilon_{d}^{-1/2}\) further exacerbates the ratio, potentially resulting in a diminished detection volume.
In summary, in this limiting case, the performance of both the lowest-order mode and the higher-order modes are negatively affected by the presence of dielectric fillings. The former results in direct losses in signal power and scan rate, while the latter suffers from increased mode densities.
## III Partially filled cavities
While the uniformly filled cavity highlights the main effects of dielectric on the scan rate, a more careful study is needed to understand the impact on more realistic cavity designs. For more practical use of dielectrics in resonant cavities, we consider cavities that are partially filled with dielectric materials. This study will focus on the lowest-order TM mode of two cavity geometries that make use of dielectrics and we will compare their performance to that of a comparable search using a metal tuning rod. This section lays out the model for the three cases as well as the analytical field solutions used to compute \(Q_{0}\), \(C\), and \(V\) for each.
#### Metal Tuning Rod Cavity
The benchmark design for comparison is a cylindrical metal cavity with a single metal tuning rod, which has been widely adopted in traditional haloscope experiments such as HAYSTAC. As shown in Fig.1(a), a metal tuning rod with radius \(\rho_{m}\) is placed in the center of a cylindrical metal cavity with radius \(\rho_{c}\). The geometry is simplified by making the rod concentric with the cavity, which allows the field solution to be found analytically; the gaps between the end caps and the rod are ignored. The metal used for both the cavity's main body and the rod is oxygen-free high thermal conductivity (OFHC) copper, and the thickness is assumed to be larger than its skin depth.
For the TM\({}_{010}\) mode in a cylindrical cavity, the field solutions in the vacuum are described as a combination of the Bessel function of the first and second kind, \(J_{0}\) and \(N_{0}\) respectively, whereas the field inside the metal rod is 0. The electric fields in the longitudinal direction are thus given by
\[E(\rho)=\begin{cases}0&0<\rho\leq\rho_{m}\\ B_{m}J_{0}(k_{0}\rho)+C_{m}N_{0}(k_{0}\rho)&\rho_{m}<\rho\leq\rho_{c}\end{cases} \tag{16}\]
Coefficients \(B_{m}\) and \(C_{m}\) are solved by enforcing boundary conditions that the field vanishes at the metal boundary, namely \(E(\rho_{m})=E(\rho_{c})=0\).
The primary source of power loss in a metal rod cavity comes from the current induced by the magnetic field within the skin depth of the resistive metallic surfaces, including the cavity wall, the end caps, and the rod. Using Eqs.16 and 8 and the boundary condition, we obtain the \(Q_{0}\), \(C_{010}\), and \(V\) of the metal tuning rod cavity.
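As an illustration of this step, the sketch below locates the lowest resonance of the concentric metal-rod geometry as the first zero of the \(2\times 2\) boundary-condition determinant built from Eq. (16). The dimensions are hypothetical placeholders (not taken from any specific experiment), chosen only to give a resonance of a few GHz.

```python
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

c = 299792458.0          # speed of light [m/s]
rho_c = 5.08e-2          # cavity radius [m] (assumed, illustrative)
rho_m = 0.5 * rho_c      # rod radius, rho_m/rho_c = 1/2 as in Sec. IV.1

def char_eq(k):
    # Nontrivial (B_m, C_m) in Eq. (16) with E(rho_m) = E(rho_c) = 0 require
    # the 2x2 determinant of the boundary conditions to vanish.
    return j0(k * rho_m) * y0(k * rho_c) - j0(k * rho_c) * y0(k * rho_m)

# Bracket and refine the first root in k (rad/m).
ks = np.linspace(1.0, 400.0, 4000)
vals = char_eq(ks)
i = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
k_res = brentq(char_eq, ks[i], ks[i + 1])
print(f"TM010 resonance ~ {c * k_res / (2 * np.pi) / 1e9:.2f} GHz")
```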
#### Dielectric Tuning Rod Cavity
One common use for dielectrics in haloscopes is to tune the resonant frequency. In this geometry, a solid dielectric tuning rod with radius \(\rho_{d}\) is placed in the center of a cylindrical metal cavity with radius \(\rho_{c}\) as shown in Fig.1(a).
For the TM\({}_{010}\) mode, the electric fields in the longitudinal direction are again described by \(J_{0}\) and \(N_{0}\). Because \(N_{0}\) diverges at 0, the field equations are therefore given by
\[E(\rho)=\begin{cases}A_{d}J_{0}(\sqrt{\epsilon_{d}}k_{0}\rho)&0<\rho\leq\rho_ {d}\\ B_{d}J_{0}(k_{0}\rho)+C_{d}N_{0}(k_{0}\rho)&\rho_{d}<\rho\leq\rho_{c}\end{cases} \tag{17}\]
where \(A_{d}\), \(B_{d}\), and \(C_{d}\) are coefficients to be solved using the
Figure 1: Geometries of partially filled cavities. (a): A tuning rod with radius \(\rho_{d}\) (a dielectric rod) or \(\rho_{m}\) (a metal rod) is placed in the center of the cavity with radius \(\rho_{c}\). (b): A dielectric shell cavity, where a concentric shell of thickness \(t\) is located inside the cavity at radius \(\rho_{s}\). Both cavity walls are made of OFHC copper and their length over radius ratios are kept at \(L/\rho_{c}=5\) as detailed in the Appendix A. The end caps for both cavities are not shown for simplicity.
Figure 2: Example calculated E-fields as a function of the radial position within the cavity at different frequencies for (a) the dielectric tuning rod cavity, (b) the dielectric shell cavity. The red line indicates the dielectric region and the black dashed line indicates the rest of the cavity (vacuum). The ratio of the rod and the shell thickness to the cavity radius (\(\rho_{d}/\rho_{c}\) and \(t/\rho_{c}\)) are both 1/10, and the ratio of the shell position to the cavity radius (\(\rho_{s}/\rho_{c}\)) is 7/8, which are the optimal ratios selected in Sec.IV.1.
matching conditions at the boundary, which are given by
\[E_{d1}(\rho_{d}) =E_{d2}(\rho_{d})\] \[\frac{\partial E_{d1}}{\partial\rho}|_{\rho_{d}} =\frac{\partial E_{d2}}{\partial\rho}|_{\rho_{d}}\] \[E_{d2}(\rho_{c}) =0\]
Although having a dielectric rod reduces the total metallic surface area, some additional power loss will come from the dielectric material itself. The dielectric loss is given by
\[P_{\mathrm{volume}}=\frac{1}{2}\omega\varepsilon_{0}\varepsilon_{d}\tan\delta \int_{V}|E|^{2}\,d^{3}x \tag{18}\]
where \(\tan\delta\) is the loss tangent of the dielectric material. For the rod to be beneficial, the dielectric therefore needs a sufficiently low loss tangent, such that the added volume loss inside the dielectric is smaller than the surface loss it removes. Example field solutions are plotted in Fig.2(a).
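A minimal numerical sketch of this procedure is given below: for a fixed rod-to-cavity ratio, the two interface conditions determine \(B_{d}\) and \(C_{d}\) (taking \(A_{d}=1\)), and the resonant wavenumber is then the root of the remaining wall condition. The permittivity corresponds to sapphire, and the absolute dimensions are assumptions for illustration.

```python
import numpy as np
from scipy.special import j0, j1, y0, y1
from scipy.optimize import brentq

eps_d = 10.0                 # sapphire (assumed)
rho_c = 2.0e-2               # cavity radius [m] (assumed, illustrative)
rho_d = rho_c / 10.0         # rod radius, rho_d/rho_c = 1/10 as in Sec. IV.1

def outer_coeffs(k):
    """Solve the two interface conditions at rho_d for (B_d, C_d), with A_d = 1."""
    a = np.sqrt(eps_d) * k
    M = np.array([[j0(k * rho_d), y0(k * rho_d)],
                  [-k * j1(k * rho_d), -k * y1(k * rho_d)]])
    rhs = np.array([j0(a * rho_d), -a * j1(a * rho_d)])
    return np.linalg.solve(M, rhs)

def char_eq(k):
    # Remaining condition: the field must vanish at the cavity wall.
    B, C = outer_coeffs(k)
    return B * j0(k * rho_c) + C * y0(k * rho_c)

ks = np.linspace(5.0, 500.0, 5000)
vals = np.array([char_eq(k) for k in ks])
i = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
k_res = brentq(char_eq, ks[i], ks[i + 1])
print(f"dielectric-rod TM010 resonance ~ {299792458.0 * k_res / (2 * np.pi) / 1e9:.2f} GHz")
```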
#### Dielectric Shell Cavity
Another application of dielectrics is to reduce the power loss at the cavity wall; one example is surrounding the wall with a dielectric shell, which leads to an increase in the Q factor. This model is shown in Fig.1(b), where a dielectric shell with thickness \(t\) and inner radius \(\rho_{s}\) is located concentrically with the cavity of radius \(\rho_{c}\). It should be noted that the tuning aspect of this design will not be addressed, as it entails developing specialized tuning mechanisms that fall beyond the scope of this study. For the TM\({}_{010}\) mode, the electric field in the longitudinal direction is given by
\[E(\rho)=\begin{cases}A_{s}J_{0}(k_{0}\rho)&0<\rho\leq\rho_{s}\\ B_{s}J_{0}(\sqrt{\varepsilon_{d}}k_{0}\rho)+C_{s}N_{0}(\sqrt{\varepsilon_{d}}k_ {0}\rho)&\rho_{s}<\rho\leq\rho_{s}+t\\ D_{s}J_{0}(k_{0}\rho)+E_{s}N_{0}(k_{0}\rho)&\rho_{s}+t<\rho\leq\rho_{c}\end{cases} \tag{19}\]
where \(A_{s}\) through \(E_{s}\) are coefficients to be solved using the matching conditions at the boundary, which are given by
\[E_{s1}(\rho_{s})=E_{s2}(\rho_{s}),\;E_{s2}(\rho_{s}+t)=E_{s3}(\rho_{s}+t)\] \[\frac{\partial E_{s1}}{\partial\rho}|_{\rho=\rho_{s}}=\frac{\partial E_{s2}}{\partial\rho}|_{\rho=\rho_{s}},\;\frac{\partial E_{s2}}{\partial\rho}|_{\rho=\rho_{s}+t}=\frac{\partial E_{s3}}{\partial\rho}|_{\rho=\rho_{s}+t}\] \[E_{s3}(\rho_{c})=0\]
Similar to the dielectric rod cavity, the power loss through the dielectric is calculated using Eq.18 and the shell is again made of sapphire. Example field solutions are plotted in Fig.2(b).
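The same root-finding strategy applies to the shell geometry, now with a \(5\times 5\) homogeneous system whose determinant must vanish at resonance. The sketch below is illustrative: the absolute dimensions are assumed, while the shell ratios follow those quoted in Sec. IV.1.

```python
import numpy as np
from scipy.special import j0, j1, y0, y1
from scipy.optimize import brentq

eps_d, rho_c = 10.0, 2.0e-2                  # sapphire shell, 2 cm cavity (assumed)
t, rho_s = rho_c / 10.0, 7.0 * rho_c / 8.0   # shell thickness and inner radius

def det_match(k):
    a = np.sqrt(eps_d) * k
    r1, r2 = rho_s, rho_s + t
    # Columns: A_s, B_s, C_s, D_s, E_s; rows: the five matching conditions of Eq. (19).
    M = np.array([
        [j0(k*r1), -j0(a*r1), -y0(a*r1), 0.0, 0.0],                # E continuous at r1
        [-k*j1(k*r1), a*j1(a*r1), a*y1(a*r1), 0.0, 0.0],           # dE/drho continuous at r1
        [0.0, j0(a*r2), y0(a*r2), -j0(k*r2), -y0(k*r2)],           # E continuous at r2
        [0.0, -a*j1(a*r2), -a*y1(a*r2), k*j1(k*r2), k*y1(k*r2)],   # dE/drho continuous at r2
        [0.0, 0.0, 0.0, j0(k*rho_c), y0(k*rho_c)],                 # E = 0 at the wall
    ])
    return np.linalg.det(M)

ks = np.linspace(5.0, 500.0, 5000)
vals = np.array([det_match(k) for k in ks])
i = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
k_res = brentq(det_match, ks[i], ks[i + 1])
print(f"dielectric-shell TM010 resonance ~ {299792458.0 * k_res / (2 * np.pi) / 1e9:.2f} GHz")
```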
## IV Results
### Optimal Radial Aspect Ratio
Because each of the cases described in Sec.III has two geometric degrees of freedom, the size of the dielectric or metal element and the cavity radius, a given resonant frequency can be achieved with multiple configurations. To ensure a fair comparison between cases, only the optimal geometry, defined by the aspect ratio between the radii (\(\rho_{m}/\rho_{c}\), \(\rho_{d}/\rho_{c}\), and \(\rho_{s}/\rho_{c}\)), is used for each case.
To find the optimal aspect ratio of each configuration, we start by maximizing each component of the figure of merit \(\mathcal{F}\) defined in Eq.4. For the metal tuning rod cavity, this is straightforward as only the volume depends on the aspect ratio, and \(\mathcal{F}\) increases with \(\rho_{m}/\rho_{c}\). For the same frequency, one easily finds that \(\rho_{c}-\rho_{m}\) is a constant, so the effective volume for a metal rod cavity is proportional to
\[V_{\mathrm{eff}}\propto\frac{1+\rho_{m}/\rho_{c}}{1-\rho_{m}/\rho_{c}} \tag{20}\]
where \(\frac{1+\rho_{m}/\rho_{c}}{1-\rho_{m}/\rho_{c}}\) monotonically increases in the range of \(0\leq\rho_{m}/\rho_{c}<1\). However, the ratio is limited by the need to maintain fine-scale tuning resolution to search for axions of various masses. A large metal rod that is too close to the cavity wall decreases the tuning resolution, leaving insufficient overlap between adjacent spectra. In addition, the ratio is also limited by the physical scale of the experiment, such as the bore size of the magnet, which constrains the overall size of \(\rho_{c}\). We conservatively set \(\rho_{m}/\rho_{c}=1/2\) to match the relative size of the HAYSTAC experiment [16].
The optimal aspect ratio for the dielectric tuning rod cavity can also be found by minimizing the amount of energy inside dielectrics as suggested in Sec.II. Therefore, for the same detection frequency, a smaller rod-cavity ratio is preferred. To verify, we compute the field solution of Eq.17 by fixing the resonant frequency and varying \(\rho_{d}/\rho_{c}\). For the same resonant frequency, \(Q_{0}\), \(C_{010}\), \(V\), and the dielectric energy ratio are computed as a function of \(\rho_{d}/\rho_{c}\). The dielectric energy ratio is defined as the ratio of the field energy in the dielectric to the total energy.
\[U_{\mathrm{ratio}} =\frac{U_{d}}{U_{\mathrm{total}}}\] \[=\frac{\int_{V_{d}}\varepsilon_{0}\varepsilon_{d}|\mathbf{E}_{d} |^{2}\,d^{3}x}{\int_{V_{d}}\varepsilon_{0}\varepsilon_{d}|\mathbf{E}_{d}|^{2} \,d^{3}x+\int_{V_{\nu}}\varepsilon_{0}|\mathbf{E}_{\nu}|^{2}\,d^{3}x} \tag{21}\]
where \(V_{d}\), \(\mathbf{E}_{d}\) are the volume and the E-field inside the dielectric, and \(V_{\nu}\), \(\mathbf{E}_{\nu}\) are the volume and the E-field of the rest of the cavity, assumed to be vacuum. The results are shown in Figs.3 and 4. As the magnetic field amplitude peaks at the dielectric boundary and diminishes towards the cavity wall, employing dielectrics to divert the field away from the wall can help minimize surface loss. An optimal \(Q_{0}\) therefore exists where the rod radius is large enough to draw the field inside and suppress the field outside, but small enough to separate the large magnetic field inside the dielectric from the cavity wall. It can be seen from Fig.3(a) that the aspect ratio that gives the optimal \(Q_{0}\) is \(\rho_{d}/\rho_{c}\sim 1/6\). On the other hand, the volume and form factor \(C_{010}\) do not have a natural optimum, as they increase monotonically as the amount of dielectric material decreases, as shown in Fig.3(b). However, because the introduction of the dielectric
is meant to tune the cavity's mode frequency over a range of possible axion masses, a practical limit on the rod size is imposed from the desired tuning range. To achieve a balance between the dynamic tuning range and \(\mathcal{F}\), \(\rho_{d}/\rho_{c}\) should be at least 1/10, which yields a tuning range on the order of 100 MHz for a GHz frequency search and a higher \(\mathcal{F}\) according to Fig.3(b).
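The dielectric energy ratio of Eq. (21) can be computed directly from the analytic field solution. The sketch below repeats the rod-cavity resonance solve from Sec. III and integrates the energy densities (common prefactors cancel in the ratio); the absolute dimensions are again assumed for illustration.

```python
import numpy as np
from scipy.special import j0, j1, y0, y1
from scipy.optimize import brentq
from scipy.integrate import quad

eps_d, rho_c = 10.0, 2.0e-2
rho_d = rho_c / 10.0                         # selected rod-to-cavity ratio

def coeffs(k):
    a = np.sqrt(eps_d) * k
    M = np.array([[j0(k*rho_d), y0(k*rho_d)],
                  [-k*j1(k*rho_d), -k*y1(k*rho_d)]])
    return np.linalg.solve(M, np.array([j0(a*rho_d), -a*j1(a*rho_d)]))

def char_eq(k):
    B, C = coeffs(k)
    return B * j0(k*rho_c) + C * y0(k*rho_c)

ks = np.linspace(5.0, 500.0, 5000)
vals = np.array([char_eq(k) for k in ks])
i = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
k = brentq(char_eq, ks[i], ks[i + 1])
B, C = coeffs(k)
a = np.sqrt(eps_d) * k

# Energy integrals per unit length; the factors 2*pi*L*eps_0/2 cancel in the ratio.
U_d, _ = quad(lambda r: eps_d * j0(a*r)**2 * r, 0.0, rho_d)
U_v, _ = quad(lambda r: (B*j0(k*r) + C*y0(k*r))**2 * r, rho_d, rho_c)
print(f"dielectric energy ratio U_d/U_total ~ {U_d / (U_d + U_v):.3f}")
```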
Similarly, the optimal aspect ratio of a dielectric shell cavity is directly related to its dielectric energy ratio. As shown in Figs.3(d) and 4, the closer the shell is to the cavity wall, the less energy is contained in the dielectric, resulting in better cavity performance. It is also important to keep the shell's thickness small to minimize the energy inside. Because this design is meant to optimize the \(Q_{0}\), we will not worry about the tunability for this study. Instead, we choose the thickness ratio to be \(t/\rho_{c}=1/10\), and the position of the shell-to-cavity ratio to be \(\rho_{s}/\rho_{c}=7/8\) according to Fig.3(d), which is close enough to the wall but far enough to accommodate the shell's thickness.
### Performance Comparison
Using the optimal aspect ratio selected in Sec.IV.1, we compute \(\mathcal{F}\) for all three cases. While not explicitly shown in Eq.4, \(\mathcal{F}(\nu)\) scales with frequency roughly as \(\nu^{-20/3}\) due to two effects. First, since the resonant TM mode is related to the cavity's radius, the volume of the cavity scales as \(\nu^{-3}\) for a fixed aspect ratio. Second, at the temperatures needed to minimize thermal noise, axion detectors are operated in the
Figure 3: (a),(b): To select the optimal radial aspect ratio, \(Q_{0}\), \(C\), \(V\) are plotted against the radial aspect ratio, \(\rho_{d}/\rho_{c}\) (\(\rho_{s}/\rho_{c}\)), for the dielectric tuning rod cavity (the dielectric shell cavity) while maintaining a constant resonant frequency at 5.8 GHz (the specific frequency choice does not affect the result). The data is cut off before a ratio of 0.1 because it is the minimum ratio for tuning as discussed in Sec.IV.1. (c),(d): The figure of merit \(\mathcal{F}\) as a function of \(\rho_{d}/\rho_{c}\) (\(\rho_{s}/\rho_{c}\)) for the dielectric tuning rod cavity (the dielectric shell cavity) while maintaining a constant resonant frequency at 5.8 GHz. The data is cut off after a ratio of around 0.9 to accommodate for the shell thickness. The energy ratio is over-plotted to show that the less energy resides in the dielectric, the better the cavity performs. To reflect the relative changes, all of the quantities are normalized such that the maximum is 1.
anomalous skin effects regime [17], where \(Q_{0}\propto\nu^{-2/3}\). Thus, the resonant frequency should be kept the same when evaluating cavity configurations. We choose to evaluate in a frequency range between \(\sim 0.5\) and 9.5 GHz because other non-resonant-cavity detection schemes are generally preferred beyond this range [18; 19; 20]. The resulting figure of merit is plotted in Fig.5. It shows that cavities with dielectrics underperform the metal rod cavity by as much as a factor of \(\sim\)1600 in scan rate. The main effect is the drop of \(\sim\)530 from the volume, with the form factor and quality factor accounting for the remaining factor. As discussed in Sec.II, the energy per unit length inside a cavity is roughly constant for the same axion frequency. The high permittivity causes the energy to concentrate in one place, thus degrading the performance of the cavity with volume reduction as the leading effect, followed by a decrease in form factor. In Fig.6, the \(\mathcal{F}\) ratio of the dielectric rod to the metal rod cavity is plotted against the volume ratio of dielectrics to the total volume. This further reinforces the notion that an increased proportion of dielectrics leads to a decline in
Figure 4: (top): The form factor \(C_{010}\) of the dielectric rod cavity as a function of the radial aspect ratio. As the rod radius \(\rho_{d}\) gets smaller, the \(C_{010}\) approaches 0.69 for an empty cylindrical cavity. As the rod gets larger, the \(C_{010}\) approaches 0.069 for a uniformly-filled dielectric cavity. (bottom): The form factor \(C_{010}\) of the dielectric shell cavity as a function of the radial aspect ratio. As the shell moves closer to the wall, \(C_{010}\) approaches 0.69 for an empty cavity. For both cases, the vertical dashed line corresponds to the selected aspect ratio for the study, resulting in \(C_{010}\sim 0.4\) and \(C_{010}\sim 0.66\) for the dielectric rod and dielectric shell respectively. The data for the dielectric shell cavity is cut off after a ratio of around 0.9 to accommodate for the shell thickness.
Figure 5: The ratio of figures of merit as a function of resonant frequency, shown for both the dielectric rod (solid black) and dielectric shell (dashed blue) relative to the equivalent metal rod at the same frequency. The optimal ratios selected in Sec.IV.1 are used when comparing performances (\(\rho_{m}/\rho_{c}=1/2\), \(\rho_{d}/\rho_{c}=1/10\), \(\rho_{s}/\rho_{c}=7/8\)). The metal tuning rod cavity consistently outperforms both dielectric configurations between \(\sim 0.5\) and 9.5 GHz. The small fluctuations in the plots are the result of rounding errors when calculating the matching conditions.
Figure 6: At the same resonance frequency of 5.8 GHz, the ratio of figure of merit (dielectric rod cavity over metal rod cavity) as a function of dielectric volume. In the absence of dielectrics, an empty cavity with a vacuum has a smaller size than a metal rod cavity at the same resonant frequency, which explains the initial difference. As the proportion of dielectrics increases, the cavity performance deteriorates.
cavity performance.
## V Discussions and conclusions
This work provides an in-depth analysis of the main effects of dielectric materials in a resonant cavity search for axion dark matter. While dielectrics are useful in many cases, such as increasing Q and shifting the frequency of the TM\({}_{010}\) to lower frequencies, their use can result in a sub-optimal performance relative to a similar search performed with a cavity devoid of dielectrics. In the simple cases studied in Sec.III, this can result in an average reduction of a factor of \(\sim\)1500 in the scan rate over the frequency range of interest for cavity haloscopes. This effect is largely due to the concentration of the field energy in the dielectric, requiring a substantial reduction in volume in order to search at the same frequency achievable with a cavity devoid of dielectrics. This also results in a reduction of the form factor, which can mostly cancel the potential gain in the quality factor from dielectric placement. As such, generally, a cavity devoid of dielectric material is a more favorable configuration for a search using the lowest-order TM mode. However, this ignores the practical limits of lowering the mode frequency of a metal-only cavity, which can result in cavity dimensions exceeding the infrastructure used to house the experiment such as the magnet bore size or fridge cooling power. In the case that a metal cavity at the chosen frequency is not possible, the use of dielectrics, while sub-optimal, may offer the exploration of frequencies not accessible otherwise. In the case where there is no preferred frequency target, the best strategy for extending the frequency range of a pre-existing cavity, however, is to move up in frequency with metal elements rather than down in frequency with dielectrics.
Note that although the models introduced in Sec.III have been enhanced with more details, they remain relatively simple. Certain aspects, such as the gap between the dielectric and the cavity caps, are not considered. Nevertheless, this level of simplicity is adequate for providing an order-of-magnitude comparison to demonstrate the primary effect. To further simplify the problem, the tuning of the rod has not been fully accounted for. However, given the limited tuning range of the dielectric rod and the fact that the metal rod is positioned in the middle, representing the worst-case scenario, this comparison remains valid and unbiased.
As discussed before, in searches utilizing higher-order modes, dielectric materials are used to improve the form factor by suppressing the opposite-phase electric field components. The utilization of dielectrics in higher-order modes is particularly beneficial for high-frequency searches, offering increased volume relative to the lowest mode without a substantial reduction in form factor. While more emphasis is given to the lowest mode in this work, we show that high-order modes have a more stringent longitudinal aspect ratio requirement for limiting the mode crossings, which warrants careful attention in cavity design.
## Acknowledgements
The authors thank Samantha Lewis, Sumita Ghosh and Eleanor Graham for helpful discussions and comments on the manuscript. This work is supported by the National Science Foundation under Grant No. PHY-2011357. Michael J. Jewell and Reina H. Maruyama are also supported in part by the Department of Energy under Grant No. DE-AC02-07CH11359.
## Appendix A Longitudinal Aspect Ratio
In this appendix, we derive the optimal longitudinal aspect ratio of the cavity which is applied throughout our study. The longitudinal aspect ratio is defined as the ratio of cavity height L over the radius of the confinement area, which is usually the cavity radius \(\rho\) for a cylindrical cavity. A certain longitudinal aspect ratio needs to be respected because a higher aspect ratio increases the risk of mode crossings. This suggests that extending the height of the cavity arbitrarily is not an efficient strategy for maximizing the detection volume.
Mode crossings occur when the TM mode of interest is at a frequency where it becomes degenerate with a TE or TEM mode, resulting in a loss of sensitivity to the axion signal. The mixing of TM modes with other modes degrades the \(Q\) of the mode of interest and leaves gaps in the cavity's scan range. For example, in the HAYSTAC experiment, as much as 15% of the available frequency range contains significant mode mixing [21]. The longitudinal symmetry breaking within the cavities, such as gaps at the rod ends and tilt of the rods, is responsible for the mode crossings [22]. In practice, perfect longitudinal symmetry cannot be achieved due to machining and assembly tolerances, which makes mode crossings inevitable. However, we can minimize them by limiting the density of the intruder modes [23].
Consider the number of TE modes as a function of the wave vector \(k,N(k)\). In the longitudinal direction, the mode density is \(dN_{z}/dk_{z}=L/\pi\). In the transverse direction, using the approximation of the Bessel functions of the first kind, we obtain the density of TM or TE modes
\[\frac{dN_{t}}{dk_{t}}\approx\frac{2\rho^{2}k_{t}}{\pi} \tag{A1}\]
Integrating \(N_{t}\) and \(N_{z}\) over k, with constraint \(k^{2}=k_{t}^{2}+k_{z}^{2}\), one finds the total number of modes to be
\[N(k)=\frac{2\rho^{2}Lk^{3}}{3\pi^{2}} \tag{A2}\]
Recasting \(N(k)\) as \(N(\nu)\) with \(k=2\pi\nu/c\) (set \(c=1\)) and taking the derivative with respect to \(\nu\), we now obtain the TE mode density as a function of frequency \(\nu\)
\[n(\nu)=\frac{dN}{d\nu}\approx 16\pi\rho^{2}L\nu^{2} \tag{A3}\]
We now define excessive mode crossings as more than 15% of the total cavity tuning range. For example, a haloscope experiment such as the HAYSTAC has a tuning range of about 2 GHz, and 15% of that corresponds to one major mode crossing every 300 MHz 1. Taking a cavity with a loaded quality factor of \(Q_{L}\sim 5000\) and a bandwidth of \(\Delta\nu\sim 1\)MHz as an example [16], this requires tuning about 300 times the cavity bandwidth before hitting a mode crossing, which is a reasonable goal for a practical experiment that wants to cover a wide range. This can be written as
Footnote 1: In practice, the interval of mode crossings given by Eq.A5 will certainly be worse because of the longitudinal symmetry breaking, such as imperfect alignment of the rods, machining flatness tolerance, etc.
\[n(\nu)(300\Delta\nu)=n(\nu)(300\frac{\nu}{Q_{L}})\lesssim 1 \tag{A4}\]
Near the resonant frequency of the TM\({}_{0n0}\) mode, \(\nu\approx X_{0n}/(2\pi\rho)\), where \(X_{0n}\) is the \(n\)th zero of the Bessel function \(J_{0}(x)\) with \(X_{01}\approx 2.4048\), \(X_{02}\approx 5.5201\), \(X_{03}\approx 8.6537\), etc. Using Eqs. A3 and A4, we find that for an experiment using TM\({}_{010}\), the aspect ratio requirement for limiting mode crossings is
\[\frac{L}{\rho}\lesssim 5,\text{ for }n=1 \tag{A5}\]
Similarly, for higher-order mode searches, assuming the cavity can tune more than 300 times the cavity bandwidth before hitting a mode crossing, the longitudinal aspect ratio bound can be expressed in a more general form
\[\frac{L}{\rho}\lesssim\frac{\pi^{2}Q_{L}}{600X_{0n}^{3}}. \tag{A6}\]
While neither Eq. A5 nor Eq. A6 is intended to serve as a strict requirement, in the context of this paper we treat them as approximate guidelines for cavity design.
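For reference, the guideline of Eq. (A6) is straightforward to evaluate; the short sketch below uses the \(Q_{L}\sim 5000\) example quoted above (the \(n=1\) case reproduces the \(L/\rho\lesssim 5\) rule of Eq. (A5) up to rounding).

```python
import math
from scipy.special import jn_zeros

Q_L = 5000.0                                  # loaded quality factor (example value)
for n in (1, 2, 3):
    X_0n = jn_zeros(0, n)[-1]                 # n-th zero of J0
    bound = math.pi**2 * Q_L / (600.0 * X_0n**3)
    print(f"TM_0{n}0 search: L/rho <~ {bound:.2f}")
```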
## Appendix B Cavity Equivalent Circuit Model
The impedance-matching approach is an alternative to the energy approach shown in Sec.II for understanding the effect of dielectrics in a cavity for axion searches. To illustrate impedance-matching, it is convenient to represent the properties of a cavity by its equivalent RLC circuit as shown in Fig.7. The inductor \(L\) and the capacitor \(C\) represent the conduction current path and the displacement current path of a cavity respectively2. The resistor \(R\) models the loss of the cavity, including resistive loss of the cavity wall, dielectric loss, leakage through antenna holes, etc. The axion source at \(\omega_{a}\) is represented as an AC voltage source \(V_{a}e^{i\omega_{a}t}\) and provides an ideal current \(I_{a}\). The axion source impedance is given by \(Z_{a}\), which satisfies \(\text{Re}(Z_{a})\gg R\) because of the small axion-photon coupling; it also prevents back-conversion of the axion. The axion conversion power is therefore given by
Footnote 2: Please be aware that the symbols \(L\) and \(C\) used throughout this appendix have different meanings compared to their usage in the main text.
\[P_{ax}=I_{a}^{2}Re(Z_{c}) \tag{B1}\]
where \(Z_{c}\) is the equivalent complex impedance of the cavity. A simple analysis of the circuit shows that the susceptance is \(Y_{c}=1/Z_{c}\) is given by
\[Y_{c}=\frac{1}{R+i\omega L}+i\omega C. \tag{B2}\]
The resonance occurs when the imaginary part goes to zero,
\[\omega_{a}=\sqrt{\frac{1}{LC}-\frac{R^{2}}{L^{2}}}\approx\frac{1}{\sqrt{LC}} \tag{B3}\]
and the approximation is valid for a high \(Q\), and hence a low-loss inductance with \(R\ll\omega_{a}L\). The real part of the admittance is then
\[Y_{c}=\frac{R}{R^{2}+(\omega_{a}L)^{2}}\approx\frac{R}{(\omega_{a}L)^{2}} \tag{B4}\]
or
\[Z_{c}\approx\frac{1}{R}\frac{L}{C}. \tag{B5}\]
For a fixed axion frequency, \(LC\) is constant. Inserting a dielectric with relative permittivity \(\varepsilon_{d}\) increases the capacitance to \(C^{\prime}=\varepsilon_{d}C\); therefore we must reduce the inductance to \(L^{\prime}=L/\varepsilon_{d}\), so that \(L^{\prime}C^{\prime}=(L/\varepsilon_{d})(\varepsilon_{d}C)=LC\). We then obtain \(Z_{c}\propto\varepsilon_{d}^{-2}\). From Eq. B1 and taking into account the reduction in electric field from the dielectric material, the axion conversion power is \(P_{ax}\propto\varepsilon_{d}^{-3}\), which matches the result from Sec.II. (We could have also reduced the cross-sectional area of the capacitor to increase the resonance frequency, which reduces the volume by a factor of \(\varepsilon_{d}\), resulting in a reduction in conversion power by a factor of \(\varepsilon_{d}^{2}\); however, keeping the inductance constant does not model the change in cavity properties introduced by the dielectric.)
When \(L\) is reduced to keep the frequency fixed, in the above circuit analysis we left out an important effect. As most of \(R\) is due to the surface resistance of the inductance, if we reduce \(L\) by removing turns from a coil, noting \(L\propto n^{2}\) where \(n\) is the number of turns, we can surmise that \(R\) scales as \(n\), the length
Figure 7: Equivalent circuit of a resonant mode of a cavity. The axion source is represented by an AC voltage source with impedance \(Z_{a}\).
of wire in the coil. Because \(L\) must be reduced by a factor \(1/\varepsilon_{d}\), the number of turns scales as \(1/\sqrt{\varepsilon_{d}}\), and thus \(R\to R/\sqrt{\varepsilon_{d}}\). Therefore,
\[Z_{c}\rightarrow\frac{\sqrt{\varepsilon_{d}}}{R}\frac{L}{\varepsilon_{d}^{2}C}\propto\varepsilon_{d}^{-3/2} \tag{B6}\]
Including the reduction in the electric field, \(P_{ax}\propto\varepsilon_{d}^{-5/2}\) which now does not correspond to the result in Sec.II.
The circuit model provides a physical understanding of the origin of the reduction in \(\mathcal{F}\) caused by the presence of the dielectric material in the cavity. In essence, that presence decreases the efficiency of the impedance matching of the cavity to the axion source by increasing \(C\) and reducing \(L\), an effect that is compensated to some degree by the decrease in \(R\) that comes with reducing \(L\). Though there is no exact mapping between the lumped elements and the physical cavity, the results are nonetheless compelling.
|
2305.09578 | Deep Fourier Residual method for solving time-harmonic Maxwell's
equations | Solving PDEs with machine learning techniques has become a popular
alternative to conventional methods. In this context, Neural networks (NNs) are
among the most commonly used machine learning tools, and in those models, the
choice of an appropriate loss function is critical. In general, the main goal
is to guarantee that minimizing the loss during training translates to
minimizing the error in the solution at the same rate. In this work, we focus
on the time-harmonic Maxwell's equations, whose weak formulation takes H(curl)
as the space of test functions. We propose a NN in which the loss function is a
computable approximation of the dual norm of the weak-form PDE residual. To
that end, we employ the Helmholtz decomposition of the space H(curl) and
construct an orthonormal basis for this space in two and three spatial
dimensions. Here, we use the Discrete Sine/Cosine Transform to accurately and
efficiently compute the discrete version of our proposed loss function.
Moreover, in the numerical examples we show a high correlation between the
proposed loss function and the H(curl)-norm of the error, even in problems with
low-regularity solutions. | Jamie M. Taylor, Manuela Bastidas, David Pardo, Ignacio Muga | 2023-05-16T16:24:30Z | http://arxiv.org/abs/2305.09578v1 | # Deep Fourier Residual method for solving time-harmonic Maxwell's equations
###### Abstract
Solving PDEs with machine learning techniques has become a popular alternative to conventional methods. In this context, Neural networks (NNs) are among the most commonly used machine learning tools, and in those models, the choice of an appropriate loss function is critical. In general, the main goal is to guarantee that minimizing the loss during training translates to minimizing the error in the solution at the same rate. In this work, we focus on the time-harmonic Maxwell's equations, whose weak formulation takes \(H_{0}(\mathrm{curl},\Omega)\) as the space of test functions. We propose a NN in which the loss function is a computable approximation of the dual norm of the weak-form PDE residual. To that end, we employ the Helmholtz decomposition of the space \(H_{0}(\mathrm{curl},\Omega)\) and construct an orthonormal basis for this space in two and three spatial dimensions. Here, we use the Discrete Sine/Cosine Transform to accurately and efficiently compute the discrete version of our proposed loss function. Moreover, in the numerical examples we show a high correlation between the proposed loss function and the \(H(\mathrm{curl})\)-norm of the error, even in problems with low-regularity solutions.
## 1 Introduction
Partial Differential Equations (PDEs) are essential tools for modeling and simulating various scientific and industrial problems, in particular, they form the backbone of modern physics. Herein, we concentrate our attention on Maxwell's equations. This set of equations describes the propagation of electromagnetic waves through different media. Commonly used methods to calculate the solution of Maxwell's problems range from exact methods [19, 21] to numerical approximations, such as Finite Differences, Finite Elements (FEM), and Discontinuous Galerkin [8, 9, 26, 29, 36]. Nevertheless, exact solutions can only be obtained in rare scenarios, and proposing a proper numerical technique that is conformal, accurate, and efficient is often challenging.
Using machine learning techniques to approximate the solution of Maxwell's equations is an attractive alternative to classical approaches. Some of the most widely used machine learning tools in the context of PDEs are Neural Networks (NNs) [6, 13, 27, 28, 30, 33]. These architectures have shown promising results when tackling complex nonlinear systems of equations that underlie physical phenomena. For instance, we highlight their potential to solve parametric PDEs [20, 25], enhance classical numerical methods [2] and solve problems in the presence of singularities or sharp gradients [7, 35].
Some popular methods like PINNs (Physics-Informed Neural Networks) [22, 30] enforce a NN to satisfy the strong formulation of a PDE by implementing a numerically tractable norm of the strong-form residual as a loss function. Even though PINNs can approximate the solutions of many physical problems, they are inaccurate when the weak solution does not satisfy the strong-form equation. As a result, PINNs are severely
constrained in many applications that naturally produce low-regularity solutions. Maxwell's equations, for example, may admit a solution in the functional space \(H(\mathrm{curl},\Omega)\setminus H^{2}(\Omega)\) (or even in \(H(\mathrm{curl},\Omega)\setminus H^{1}(\Omega)\) as shown in, e.g., [32]) since the smoothness of the solution depends on the regularity of the domain, the sources, and the boundary conditions. In these situations, PINNs may not be applicable.
In contrast, VPINNs (Variational Physics-Informed Neural Networks) [23, 24] use the residual of the weak formulation of a PDE in the loss function. In this approach, it is essential to select an appropriate set of test functions and define a computable loss that controls the error of the solution, which is generally non-trivial.
An ideal loss function would be the energy norm of the error. Since the error function is unavailable in practice (one would need the exact solution), one typically resorts to minimizing the dual norm of the weak-form PDE residual, that is \(\mathcal{R}:H\to H^{\prime}\), where \(H\) is a Hilbert space of test functions and \(H^{\prime}\) its dual. In many practical cases of linear PDEs, if \(u^{*}\) is the exact solution of a PDE, then, there exist constants \(0<\gamma<M\), such that
\[\frac{1}{M}\|\mathcal{R}(u)\|_{H^{\prime}}\leq\|u-u^{*}\|_{H}\leq\frac{1}{ \gamma}\|\mathcal{R}(u)\|_{H^{\prime}}, \tag{1.1}\]
where \(u\) is an approximation to the solution \(u^{*}\)[37]. It is clear from (1.1) that the dual norm of the residual \(\|\mathcal{R}(u)\|_{H^{\prime}}\) is equivalent to the energy norm of the error, and so it is an appropriate choice of loss function, i.e., \(\mathcal{L}(u):=\|\mathcal{R}(u)\|_{H^{\prime}}\). Unfortunately, it is in general very challenging to evaluate \(\|\cdot\|_{H^{\prime}}\).
For similar problems based on \(H^{1}\) test function spaces, the authors of [37] proposed the Deep Fourier Residual (DFR) method. They employed a numerical method for approximating the dual norm of residuals corresponding to PDEs with \(H^{1}\) test function spaces via a spectral representation of the dual norm, which may be implemented using the Fast Fourier Transform (FFT).
This paper follows the same spirit; we extend the ideas of [37] and numerically implement the dual norm as a loss function to solve PDEs with \(H_{0}(\mathrm{curl},\Omega)\) test function spaces, motivated by the time-harmonic Maxwell's equations. Similarly to the DFR method for PDEs with \(H^{1}\) test function spaces, here the key challenge is to construct an appropriate orthonormal basis for \(H_{0}(\mathrm{curl},\Omega)\). To find such a basis, we use the classical Helmholtz decomposition of a vector field [29] and construct a complete set of basis functions for the space \(H_{0}(\mathrm{curl},\Omega)\) on product domains in two and three spatial dimensions. The strength of the DFR method lies in solving problems with low regularity, where PINNs-like methods based on strong formulations fail. Moreover, our choice of loss function bounds the \(H(\mathrm{curl})\)-norm of the error of the solution. In this way, minimizing the loss during training implies the reduction of the error solution at the same rate. We provide numerical examples that demonstrate the correlation between the proposed loss function and the \(H(\mathrm{curl})\)-norm of the error. In the specific case of Maxwell's equations, the DFR method produces accurate results on heterogeneous and discontinuous media, as well as strong correlations between the loss and \(H(\mathrm{curl})\)-norm of the error during training.
Notice that, besides our work, other studies employ the combination of Fourier basis functions and machine learning techniques. This is usually called Physics-informed Spectral Learning (PiSL) [15, 16]. Nevertheless, these techniques are associated with data analysis and in contrast to our study, they are situated within the framework of physics-informed statistical learning.
Our proposed DFR method faces significant challenges that are similar to those encountered in [37]. Here, we only consider product domains, to profit from the results in [10], and Dirichlet-type boundary conditions. Different strategies must be used to tackle problems with non-trivial geometries and general boundary conditions. Specifically, using the DFR method when the problem involves general geometries is challenging since constructing an orthonormal basis for the space \(H_{0}(\mathrm{curl},\Omega)\) is not straightforward. Another important limitation of our technique is that the constants bounding the norm of the error, in (1.1), diverge in certain cases, e.g., when one considers materials with a large variation in parameters or frequencies close to a resonant frequency of the system. These technical complications are seen as stability issues intrinsic to the PDE similar to those encountered in traditional approaches such as FEM [12], where the accuracy of the error estimates deteriorates under the same conditions.
The remainder of this work is organized as follows. In Section 2, we state Maxwell's equations and the weak formulation of the problem. There, we also motivate the choice of the dual norm of the residual operator as a natural loss function for training NNs in the case of Maxwell's equations. Later, we discuss the relevance of constructing an appropriate orthonormal basis for the space \(H_{0}(\mathrm{curl},\Omega)\). The details of generating such an orthonormal basis, using Helmholtz decomposition in two and three spatial dimensions,
are described in Section 3, with the technical details deferred to Appendix A. In Section 3, we also present the test functions in 2D and 3D for the simple case of square and cubic domains, with the corresponding calculations presented in Appendix B. Finally, in Sections 5 and 6, we present numerical experiments, conclusions, and directions for future research.
## 2 Problem statement
Consider a domain \(\Omega\subset\mathbb{R}^{n}\), with \(n=2\) or \(3\), whose boundary \(\Gamma:=\partial\Omega\) is polyhedral and connected. Given an impressed field \(\mathbf{E}^{I}\), and an electric density current source \(\mathbf{J}\), we look for an electric field \(\mathbf{E}\), and a magnetic field \(\mathbf{H}\) solving the so-called macroscopic linear Maxwell's equations in a time-harmonic form
\[\begin{split}\mathrm{curl}(\mathbf{E})-i\omega\mu\mathbf{H}&=0\quad\text{in }\Omega,\qquad\text{(Faraday's Law)},\\ \mathrm{curl}(\mathbf{H})+i\omega\epsilon\mathbf{E}&=\mathbf{J}\quad\text{in }\Omega,\qquad\text{(Ampere's Law)},\\ \mathbf{E}\times\mathbf{n}&=\mathbf{E}^{I}\quad\text{on }\Gamma,\end{split} \tag{2.1}\]
where \(i\) is the imaginary unit, \(\mathbf{n}\) denotes the outward unit normal vector of \(\Omega\), \(\omega\in\mathbb{R}\) is the angular frequency, and \(\mu\) and \(\epsilon\) are space-dependent functions standing for the magnetic permeability and electrical permittivity, respectively. These functions generally may be tensor-valued, but we consider here only the scalar case.
The formulation (2.1) arises by considering the time-dependent Maxwell's equations under the following Ansatz on the electric and magnetic fields
\[\mathbf{E}(x,t)=\mathfrak{R}(e^{i\omega t}\mathbf{E}(x)),\text{ and }\mathbf{H}(x,t)= \mathfrak{R}(e^{i\omega t}\mathbf{H}(x)).\]
In (2.1), the dependence on time \(t>0\) is implicit. The model (2.1) is completed with the following Gauss' Laws for magnetic and electric fields
\[\begin{split}\mathrm{div}(\mu\mathbf{H})&=0\quad\text{in }\Omega,\\ \mathrm{div}(\epsilon\mathbf{E})&=\rho\quad\text{in }\Omega,\end{split} \tag{2.2}\]
where \(\rho\) is the density of free charge.
For the sake of simplicity, in this work we only consider the case when \(\omega\) is non-zero. Notice that when \(\omega\neq 0\) the equations in (2.2) are consequences of (2.1) and the continuity equation
\[i\omega\rho+\mathrm{div}(\mathbf{J})=0,\]
which relates the rate of change of the charge density to the divergence of the current density.
We point out that we have only considered Dirichlet-type boundary conditions. Other strategies must be implemented for different boundary conditions. Whilst the following discussion will only be limited to the case of homogeneous boundary conditions, as our approach is based on an analysis on the space of test functions, rather than trial functions, the extension to non-homogeneous boundary conditions is straightforward as the test function space remains unchanged.
### Preliminaries
We start by defining relevant operators and function spaces. Herein, the definitions are regarded as classical and can be found in [18]. First, we denote by \(\mathcal{D}(\Omega)\) the space of smooth functions with compact support in \(\Omega\) and by \(\mathcal{D}^{\prime}(\Omega)\) the space of distributions. We let \(L^{p}(\Omega)\) be the space of \(p\)-integrable real-valued functions equipped with the usual norm.
For smooth functions, we define the divergence, gradient and curl according to their usual definitions (see [34]). In 2D, we define the curl and its adjoint as
\[\mathrm{curl}: [\mathcal{D}(\Omega)]^{2}\ni\mathbf{\phi} \mapsto \partial_{y}\mathbf{\phi}_{1}-\partial_{x}\mathbf{\phi}_{2}\in\mathcal{D}( \Omega), \tag{2.3}\] \[\mathrm{curl}^{*}: \mathcal{D}(\Omega)\ni\phi \mapsto \begin{pmatrix}-\partial_{y}\phi\\ \partial_{x}\phi\end{pmatrix}\in[\mathcal{D}(\Omega)]^{2}.\]
The curl operator in 2D and 3D can then be extended as mappings from appropriate \(L^{2}\) spaces to \(\mathcal{D}^{\prime}(\Omega)\) by duality. We then define the function space \(H(\mathrm{curl},\Omega)\) consisting of functions in \([L^{2}(\Omega)]^{n}\) whose curl, interpreted in the sense of distributions, is in \([L^{2}(\Omega)]^{n^{\prime}}\), i.e.,
\[H(\mathrm{curl},\Omega):=\{\mathbf{u}\in[L^{2}(\Omega)]^{n}:\mathrm{curl}(\mathbf{u}) \in[L^{2}(\Omega)]^{n^{\prime}}\},\]
where \(n^{\prime}=1\) if \(n=2\) and \(n^{\prime}=3\) if \(n=3\). The space \(H(\mathrm{curl},\Omega)\) is a Hilbert space with inner product given by
\[(\mathbf{u},\mathbf{v})_{H(\mathrm{curl},\Omega)}=\int_{\Omega}\mathrm{curl}(\mathbf{u}) \cdot\mathrm{curl}(\mathbf{v})+\mathbf{u}\cdot\mathbf{v}\,d\mathbf{x}\qquad\forall\mathbf{u},\mathbf{v }\in H(\mathrm{curl},\Omega). \tag{2.4}\]
For any bounded Lipschitz domain \(\Omega\subset\mathbb{R}^{n}\) with boundary \(\Gamma\) and outward normal \(\mathbf{n}\), the mapping \(\gamma_{t}:\mathcal{C}^{1}(\bar{\Omega})\to L^{2}(\Gamma)\) with \(\gamma_{t}(\mathbf{u})=\mathbf{u}|_{\Gamma}\times\mathbf{n}\) can be uniquely extended to the continuous tangential trace operator, \(\gamma_{t}:H(\mathrm{curl},\Omega)\to H^{-\frac{1}{2}}(\Gamma,\mathbb{R}^{d})\) (see [5])1. We then define the space
Footnote 1: In two dimensions, the cross product is interpreted as the scalar quantity \(\mathbf{u}\times\mathbf{v}=\mathbf{v}_{2}\mathbf{u}_{1}-\mathbf{u}_{2}\mathbf{v}_{1}\).
\[H_{0}(\mathrm{curl},\Omega):= \{\mathbf{u}\in H(\mathrm{curl},\Omega):\,\gamma_{t}(\mathbf{u})=0\}.\]
### Weak formulation
There are multiple weak formulations for the Maxwell system, all based on the general idea of minimizing the functional that represents the energy of an electromagnetic field. Notice that in (2.1) one can eliminate \(\mathbf{H}\) or \(\mathbf{E}\) from each of the equations. Assuming \(\mu\) and \(\epsilon\) are real-valued, bounded and non-zero functions, \(\mathbf{J}\in[L^{2}(\Omega)]^{n}\), and \(\mathbf{E}^{I}=\mathbf{0}\), the weak formulation corresponding to the electric field in the problem (2.1) is: Find \(\mathbf{E}\in H_{0}(\mathrm{curl},\Omega)\) satisfying
\[\int_{\Omega}\mu^{-1}\mathrm{curl}(\mathbf{E})\cdot\mathrm{curl}(\mathbf{\phi})- \omega^{2}\epsilon\mathbf{E}\cdot\mathbf{\phi}\,d\mathbf{x}=\int_{\Omega}i\omega\mathbf{J} \cdot\mathbf{\phi}\,d\mathbf{x}\qquad\forall\mathbf{\phi}\in H_{0}(\mathrm{curl},\Omega). \tag{2.5}\]
An analogous weak form exists for the magnetic field \(\mathbf{H}\). Notice that Gauss' Laws in (2.2) are satisfied weakly by considering test functions \(\mathbf{\phi}=\nabla u\) for \(u\in H_{0}^{1}(\Omega)\) in (2.5).
### Residual minimization
The residual operator corresponding to the weak form (2.5) is \(\mathcal{R}:H\to H^{\prime}\) with \(H=H_{0}(\mathrm{curl},\Omega)\) and \(H^{\prime}\) being its dual. This weak residual operator may be expressed in the general form
\[\langle\mathcal{R}(\mathbf{E}),\mathbf{\phi}\rangle_{H^{\prime}\times H}=b(\mathbf{E},\bm {\phi})-\ell(\mathbf{\phi}), \tag{2.6}\]
where \(\ell\in H^{\prime}\) is
\[\ell(\mathbf{\phi})=\int_{\Omega}i\omega\mathbf{J}\cdot\mathbf{\phi}\,d\mathbf{x}\qquad\forall \mathbf{\phi}\in H,\]
and \(b:H\times H\to\mathbb{R}\) is the following bilinear form
\[b(\mathbf{E},\mathbf{\phi})=\int_{\Omega}\mu^{-1}\mathrm{curl}(\mathbf{E})\cdot\mathrm{ curl}(\mathbf{\phi})-\omega^{2}\epsilon\mathbf{E}\cdot\mathbf{\phi}\,d\mathbf{x}\qquad\forall\mathbf{ \phi}\in H. \tag{2.7}\]
The existence and uniqueness of a solution of (2.5), for \(\omega\) outside of a countable set of resonant frequencies, is proved in, e.g. [29, Chapter 4] and [26, Theorem 4.32]. Moreover, using the reasoning in [14, Section 25.3], we know that the solution for the variational problem exists and is unique if and only if the following bounds apply
\[\begin{split}\|\mathcal{R}(\mathbf{E})\|_{H^{\prime}}=\sup_{\mathbf{\phi}\in H\backslash\{0\}}\frac{|\langle\mathcal{R}(\mathbf{E}),\mathbf{\phi}\rangle_{H^{\prime}\times H}|}{\|\mathbf{\phi}\|_{H}}&\leq M\|\mathbf{E}-\mathbf{E}^{*}\|_{H},\\ \|\mathcal{R}(\mathbf{E})\|_{H^{\prime}}=\sup_{\mathbf{\phi}\in H\backslash\{0\}}\frac{|\langle\mathcal{R}(\mathbf{E}),\mathbf{\phi}\rangle_{H^{\prime}\times H}|}{\|\mathbf{\phi}\|_{H}}&\geq\gamma\|\mathbf{E}-\mathbf{E}^{*}\|_{H},\end{split} \tag{2.8}\]
where \(\gamma\) and \(M\) are positive constants depending on \(\mu\), \(\omega\) and \(\epsilon\), and \(\mathbf{E}^{*}\) denotes the exact solution of (2.5). We emphasize that the coercive case of \(\epsilon<0\) is mathematically interesting, as it implies that the bilinear form (2.7) becomes equivalent to the inner product on \(H(\operatorname{curl},\Omega)\).
On the other hand, for any \(\mathcal{F}\in H^{\prime}\), by the Riesz representation theorem, there exists some \(\mathbf{u}_{\mathcal{F}}\in H\) with \(\mathcal{F}(\mathbf{v})=(\mathbf{u}_{\mathcal{F}},\mathbf{v})_{H}\) for all \(\mathbf{v}\in H\) and \(\|\mathcal{F}\|_{H^{\prime}}=\|\mathbf{u}_{\mathcal{F}}\|_{H}\). Furthermore, if \((\Phi_{k})_{k\in\mathcal{I}}\) is an orthonormal basis of \(H\), with \(\mathcal{I}\) denoting a set of indices, then, by using the generalized Parseval's identity, we have that the dual norm of any \(\mathcal{F}\in H^{\prime}\) can be expressed as
\[\|\mathcal{F}\|_{H^{\prime}}^{2}=\|\mathbf{u}_{\mathcal{F}}\|_{H}^{2}\overset{ \text{Parseval}}{=}\sum_{k\in\mathcal{I}}(\mathbf{u}_{\mathcal{F}},\Phi_{k})_{H}^{ 2}=\sum_{k\in\mathcal{I}}\mathcal{F}(\Phi_{k})^{2}. \tag{2.9}\]
From (2.6) and (2.9) we obtain an expression for the dual norm of the residual as
\[\|\mathcal{R}(\mathbf{E})\|_{H^{\prime}}^{2}=\sum_{k\in\mathcal{I}}\langle \mathcal{R}(\mathbf{E}),\Phi_{k}\rangle_{H^{\prime}\times H}^{2}. \tag{2.10}\]
According to (2.10), determining a set of orthonormal basis functions \(\Phi_{k}\) is all that is required to calculate the dual norm of the residual. Whilst this is generally a non-trivial task, in the following section, we will construct such a set of basis functions in simplified geometries.
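To make the construction concrete, the following sketch outlines how the discretized loss (2.10) could be assembled on the unit square: each residual component (2.6) is approximated by midpoint quadrature, and the squared components are summed. The trial field, source, and the single test function in the dummy call are placeholders (the source is taken real-valued for readability); an actual implementation would supply the network output and the orthonormal basis constructed in Section 3.

```python
import numpy as np

N = 64                                      # quadrature points per dimension
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
mu, eps, omega = 1.0, 1.0, np.pi            # assumed material data and frequency

def residual_component(E, curlE, phi, curlphi, source):
    """<R(E), phi> = int (1/mu) curl(E) curl(phi) - omega^2 eps E.phi - source.phi dx."""
    integrand = (curlE * curlphi / mu
                 - omega**2 * eps * (E[0] * phi[0] + E[1] * phi[1])
                 - (source[0] * phi[0] + source[1] * phi[1]))
    return integrand.sum() * h * h

def dfr_loss(E, curlE, test_fns, source):
    """Truncated Parseval sum (2.10): sum of squared residual components."""
    return sum(residual_component(E, curlE, phi, curlphi, source) ** 2
               for phi, curlphi in test_fns)

# Dummy call: zero trial field, constant source, and one illustrative test
# function with vanishing tangential trace on the boundary of the unit square.
zero = np.zeros_like(X)
E0, curlE0 = (zero, zero), zero
J = (zero, np.ones_like(X))
phi = (zero, np.sin(np.pi * X) * np.sin(np.pi * Y))
curlphi = -np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)    # curl as defined in (2.3)
print(dfr_loss(E0, curlE0, [(phi, curlphi)], J))
```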
## 3 A set of basis functions for \(H_{0}(\operatorname{curl},\Omega)\)
In order to find basis functions for \(H_{0}(\operatorname{curl},\Omega)\), we seek an orthonormal eigenbasis for the differential operator corresponding to the inner product (2.4). More details of these ideas can be found in [26, Chapter 4]. The inner product (2.4) is naturally associated with the differential operator \((1+\operatorname{curl}\)-\(\operatorname{curl}\)). So, we consider the weak eigenpairs \((\lambda_{k},\Phi_{k})\in\mathbb{R}\times H_{0}(\operatorname{curl},\Omega)\) solving
\[\begin{split}\int_{\Omega}\operatorname{curl}(\Phi_{k})\cdot \operatorname{curl}(\mathbf{v})+\Phi_{k}\cdot\mathbf{v}\,d\mathbf{x}&= \lambda_{k}\int_{\Omega}\Phi_{k}\cdot\mathbf{v}\,d\mathbf{x}\qquad\forall\mathbf{v}\in H_{ 0}(\operatorname{curl},\Omega),\\ \|\Phi_{k}\|_{H(\operatorname{curl},\Omega)}&=1.\end{split} \tag{3.1}\]
In strong form, we may write
\[(1+\operatorname{curl}\text{-}\text{curl})\Phi_{k} =\lambda_{k}\Phi_{k}\quad\text{ in }\Omega, \tag{3.2}\] \[\Phi_{k}\times\mathbf{n} =0\quad\quad\text{ on }\Gamma.\]
If the inverse of \((1+\operatorname{curl}\)-\(\operatorname{curl})\) were to be compact and self-adjoint on \(H_{0}(\operatorname{curl},\Omega)\), then, the application of the Hilbert-Schmidt theorem to the inverse would provide the existence of an eigenbasis for the operator itself and a complete eigenbasis of \(H_{0}(\operatorname{curl},\Omega)\). Nevertheless, we notice that, when restricted to functions of the form \(\mathbf{v}=\nabla u\) for \(u\in H_{0}^{1}(\Omega)\), the inverse of the operator \((1+\operatorname{curl}\)-\(\operatorname{curl})\) is the identity and thus not compact. However, this technical issue can be resolved using the Helmholtz decomposition of the space \(H_{0}(\operatorname{curl},\Omega)\), i.e., we decompose the problem (3.1) into two sub-problems. We omit the specifics of this reasoning and refer to [3, Chapter 6] and [11] for more details.
From (3.1), one obtains that the eigenvalue \(\lambda_{k}=1\) has an infinite-dimensional eigenspace, that is, the null space of the curl operator. In simply connected domains, we have that \(\operatorname{curl}(\mathbf{v})=0\) implies \(\mathbf{v}=\nabla u\) for some \(u\in H^{1}(\Omega)\). Moreover, if \(u\in H_{0}^{1}(\Omega)\), then \(\nabla u\) is parallel to the unit normal vector \(\mathbf{n}\) on \(\partial\Omega\), which means that \(\nabla u\in H_{0}(\operatorname{curl},\Omega)\). Thus, we identify a large space of eigenvectors with eigenvalue \(\lambda_{k}=1\), \(\nabla H_{0}^{1}(\Omega)\subset H_{0}(\operatorname{curl},\Omega)\) defined as
\[\nabla H_{0}^{1}(\Omega):=\{\nabla u:u\in H_{0}^{1}(\Omega)\}.\]
We note that equipping \(H_{0}^{1}(\Omega)\) with the inner product \((u,v)_{H_{0}^{1}(\Omega)}=\int_{\Omega}\nabla u\cdot\nabla v\,d\mathbf{x}\), the space \(\nabla H_{0}^{1}(\Omega)\), as a subspace of \(H_{0}(\operatorname{curl},\Omega)\), is isometric to \(H_{0}^{1}(\Omega)\), i.e., \((\nabla u,\nabla v)_{H(\operatorname{curl},\Omega)}=(u,v)_{H_{0}^{1}(\Omega)}\). Consequently, the space \(\nabla H_{0}^{1}(\Omega)\) forms a closed subspace of \(H_{0}(\operatorname{curl},\Omega)\) (see [26, Lemma 4.20]), and we can employ the following orthogonal decomposition of \(H_{0}(\operatorname{curl},\Omega)\)
\[H_{0}(\operatorname{curl},\Omega)=X_{0}(\Omega)\oplus\nabla H_{0}^{1}(\Omega), \tag{3.3}\]
where \(X_{0}(\Omega):=(\nabla H^{1}_{0}(\Omega))^{\perp}\). Notice that for a function \(\mathbf{v}\in X_{0}(\Omega)\), one necessarily has that
\[(\mathbf{v},\nabla u)_{H(\mathrm{curl},\Omega)}=0\qquad\forall u\in H^{1}_{0}( \Omega),\]
meaning \(\mathrm{div}(\mathbf{v})=0\) weakly. That is to say that vector fields in \(H_{0}(\mathrm{curl},\Omega)\) can be decomposed into two parts: a curl-free component and a divergence-free component. Finding an orthonormal basis for \(H_{0}(\mathrm{curl},\Omega)\) reduces to finding an orthonormal basis for each component. We consider each of these in turn.
### A set of basis functions for \(\nabla H^{1}_{0}(\Omega)\)
To find a basis for the space \(\nabla H^{1}_{0}(\Omega)\), we use the fact that the differential operator \(\nabla\) defines an isometry between \(H^{1}_{0}(\Omega)\) and \(\nabla H^{1}_{0}(\Omega)\), viewed as a subset of \(H_{0}(\mathrm{curl},\Omega)\). The gradients of any orthonormal basis of \(H^{1}_{0}(\Omega)\) thus define an orthonormal basis of \(\nabla H^{1}_{0}(\Omega)\subset H_{0}(\mathrm{curl},\Omega)\).
From classical spectral theory, the following proposition defines an orthonormal basis for \(\nabla H^{1}_{0}(\Omega)\) in terms of the homogeneous-Dirichlet eigenvectors of \(-\Delta\) in \(\Omega\).
**Proposition 3.1**.: _Let \(\Omega\) be a bounded, simply connected, and Lipschitz domain in \(\mathbb{R}^{n}\), where \(n=2\) or \(3\). There exists an orthonormal basis of \(H^{1}_{0}(\Omega)\), consisting of non-zero homogeneous-Dirichlet eigenvectors of \(-\Delta\) in \(\Omega\), \((\phi_{k})_{k\in\mathcal{I}}\), for a countable index set \(\mathcal{I}\). Then, the sequence \((\mathbf{\phi}_{k})_{k\in\mathcal{I}}\), defined as_
\[\mathbf{\phi}_{k}:=\frac{\nabla\phi_{k}}{\|\nabla\phi_{k}\|_{[L^{2}(\Omega)]^{n}}},\]
_forms an orthonormal basis for \(\nabla H^{1}_{0}(\Omega)\subset H_{0}(\mathrm{curl},\Omega)\)._
Appendix A contains the proof of Proposition 3.1.
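As a concrete illustration on the unit square (the square-domain case mentioned in Section 1), the sketch below builds the normalized gradients of the Dirichlet eigenfunctions \(\sin(m\pi x)\sin(n\pi y)\) and verifies numerically that they are orthonormal in \(H(\mathrm{curl},\Omega)\); since their curl vanishes, the inner product reduces to the \(L^{2}\) one.

```python
import numpy as np
from itertools import product

N = 128
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")

def grad_basis(m, n):
    # phi_mn = sin(m pi x) sin(n pi y);  ||grad phi_mn||_{L^2} = pi sqrt(m^2+n^2) / 2
    scale = 2.0 / (np.pi * np.hypot(m, n))
    gx = m * np.pi * np.cos(m * np.pi * X) * np.sin(n * np.pi * Y)
    gy = n * np.pi * np.sin(m * np.pi * X) * np.cos(n * np.pi * Y)
    return scale * gx, scale * gy

modes = list(product(range(1, 4), repeat=2))          # (m, n) with 1 <= m, n <= 3
fields = [grad_basis(m, n) for m, n in modes]
G = np.array([[np.sum(a[0] * b[0] + a[1] * b[1]) * h * h for b in fields] for a in fields])
print("max deviation from identity:", np.abs(G - np.eye(len(modes))).max())
```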
### A set of basis functions for \(X_{0}(\Omega)\)
For constructing the eigenbasis for the space \(X_{0}(\Omega)\), we resume the prior discussion on the eigenvectors of the operator \((1+\mathrm{curl}\)-\(\mathrm{curl})\). We recall that \(X_{0}(\Omega)\) is compactly embedded into \(L^{2}(\Omega)\) (see [26, Theorem 4.23]), which ensures that the \((1+\mathrm{curl}\)-\(\mathrm{curl})\) operator admits a compact and self-adjoint inverse when restricted to \(X_{0}(\Omega)\). Then, the eigenbasis of the operator forms a complete and orthonormal basis of the space \(X_{0}(\Omega)\). Moreover, when restricted to divergence-free vector fields, the \((1+\mathrm{curl}\)-\(\mathrm{curl})\) operator reduces to \((1-\Delta)\), which is suggestive of the fact that one may construct eigenvectors of \((1+\mathrm{curl}\)-\(\mathrm{curl})\) via eigenvectors of the negative Laplacian, which we perform in the following propositions. Here, we divide the construction of this eigenbasis into two scenarios based on the dimensionality of \(\Omega\).
**Proposition 3.2**.: _Let \(\Omega\) be a bounded, simply connected, and Lipschitz domain in \(\mathbb{R}^{2}\). There exists an orthonormal basis of \(H^{1}(\Omega)\), consisting of non-constant homogeneous-Neumann eigenvectors of \(-\Delta\) in \(\Omega\), \((\phi_{k})_{k\in\mathcal{I}}\), for a countable index set \(\mathcal{I}\). Then, the sequence \((\mathbf{\psi}_{k})_{k\in\mathcal{I}}\), defined as_
\[\mathbf{\psi}_{k}:=\frac{\mathrm{curl}^{*}(\phi_{k})}{\|\mathrm{curl}^{*}(\phi_{k} )\|_{H(\mathrm{curl},\Omega)}},\]
_where \(\mathrm{curl}^{*}\) is as defined in (2.3), forms an orthonormal basis for \(X_{0}(\Omega)\subset H_{0}(\mathrm{curl},\Omega)\)._
The proof of Proposition 3.2 is detailed in Appendix A.
Finding an orthonormal basis for the space \(X_{0}(\Omega)\) in 3D is more complex than in the 2D case. Here, we restrict the construction of basis functions for the space \(X_{0}(\Omega)\) to Cartesian product domains, i.e., domains defined as the Cartesian product of a simply connected domain in \(\mathbb{R}^{2}\) and a closed interval in \(\mathbb{R}\). In this particular case, we can construct eigenvectors of the \((1+\mathrm{curl}\)-\(\mathrm{curl})\) operator in a similar fashion, according to differential operators acting upon scalar-valued eigenfunctions of the Laplacian with appropriate boundary conditions.
In such geometries, the basis functions of \(X_{0}(\Omega)\) in three dimensions come in two distinct modes, called TM (Transverse Magnetic) and TE (Transverse Electric) modes [26]. The construction of such an eigenbasis was developed in [10]. There, the authors demonstrate that the TE and TM modes defined below constitute a complete and orthonormal basis of \(X_{0}(\Omega)\).
Without loss of generality, we assume that a Cartesian product domain \(\Omega\subset\mathbb{R}^{3}\) is of the form \(\Omega^{*}\times I\), with \(\Omega^{*}\subset\mathbb{R}^{2}\) simply connected and \(I\subset\mathbb{R}\) an interval. We refer to the coordinate direction corresponding to the interval \(I\) as the distinguished direction. For this specific definition of \(\Omega\), the \(z\)-coordinate is the distinguished axis of the Cartesian product domain, but it is worth noting that the following derivation is direction-independent.
Considering the scope of this work, we refer to [10] for a more detailed explanation and in Appendix A we give further details of the properties of the basis functions for \(X_{0}(\Omega)\) in 3D.
**Proposition 3.3**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a Cartesian product domain such that \(\Omega=\Omega^{*}\times I\), where \(\Omega^{*}\subset\mathbb{R}^{2}\) is a simply connected domain, and \(I\subset\mathbb{R}\) is an interval. Given two sets of indices \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\), consider the following sets of functions:_
* _The non-zero functions_ \((p_{k})_{k\in\mathcal{I}_{1}}\) _forming a complete set of eigenvectors of_ \(-\Delta\) _in_ \(\Omega\) _with homogeneous-Dirichlet boundary conditions on_ \(\overline{\Omega^{*}}\times\partial I\) _and homogeneous-Neumann boundary conditions on_ \(\overline{I}\times\partial\Omega^{*}\)_._
* _The non-zero functions_ \((q_{h})_{h\in\mathcal{I}_{2}}\) _forming a complete set of eigenvectors of_ \(-\Delta\) _in_ \(\Omega\) _with homogeneous-Neumann boundary conditions on_ \(\overline{\Omega^{*}}\times\partial I\) _and homogeneous-Dirichlet boundary conditions on_ \(\overline{I}\times\partial\Omega^{*}\)_._
_Then, we define the sequences of vector fields \((\boldsymbol{\psi}_{k}^{\rm TM})_{k\in\mathcal{I}_{1}}\) and \((\boldsymbol{\psi}_{h}^{\rm TE})_{h\in\mathcal{I}_{2}}\in X_{0}(\Omega)\), via_
\[\boldsymbol{\psi}_{k}^{\rm TM}:=\frac{\mathrm{curl}(p_{k}\boldsymbol{e})}{ \|\mathrm{curl}(p_{k}\boldsymbol{e})\|_{H(\mathrm{curl},\Omega)}},\text{ and }\quad \boldsymbol{\psi}_{h}^{\rm TE}:=\frac{\mathrm{curl}(\mathrm{curl}(q_{h} \boldsymbol{e}))}{\|\mathrm{curl}(\mathrm{curl}(q_{h}\boldsymbol{e}))\|_{H( \mathrm{curl},\Omega)}},\]
_for each \(k\in\mathcal{I}_{1}\) and \(h\in\mathcal{I}_{2}\), and where \(\boldsymbol{e}\) is the unit vector in the distinguished direction. Then, the union of the two sequences \((\boldsymbol{\psi}_{k}^{\rm TM})_{k\in\mathcal{I}_{1}}\) and \((\boldsymbol{\psi}_{h}^{\rm TE})_{h\in\mathcal{I}_{2}}\) forms an orthonormal basis for \(X_{0}(\Omega)\subset H_{0}(\mathrm{curl},\Omega)\)._
Note that the construction of basis functions for the space \(X_{0}(\Omega)\) is valid, for instance, in three-dimensional rectangular, cubic, or cylindrical domains, provided the eigenbasis of the Laplacian in \(\Omega^{*}\) is known. The calculations are similar in all of these cases. In Tables 1 and 2 we show the resulting basis functions for the spaces \(\nabla H_{0}^{1}(\Omega)\) and \(X_{0}(\Omega)\) on \(n\)-dimensional cubes \(\Omega=[0,\pi]^{n}\) for \(n=2,3\). For clarity, we detail the construction of the eigenbasis on \(n\)-dimensional cubes in Appendix B. An extension to more general rectangular domains with distinct side lengths is trivial, requiring only straightforward yet tedious calculations, and is therefore omitted.
## 4 The DFR method
In this section we outline the structure of the Neural Networks that we will employ and the fundamental principle underlying the DFR method, which is the construction of a discretized and computable loss function.
### Neural Networks
A neural network is a mathematical model comprising multiple compositions of simple functions, called layers. Specifically, a NN is a non-linear function \(\mathcal{N}(\boldsymbol{x};\boldsymbol{W},\boldsymbol{b})\) parametrized by a set of weights \(\boldsymbol{W}\) and
biases \(\mathbf{b}\), with \(\mathbf{x}\in\mathbb{R}^{d}\) being the input vector. We restrict our attention to fully-connected feed-forward NNs. In this architecture, the weights \(\mathbf{W}\) are represented as a collection of dense matrices \(\mathbf{W}_{1},\mathbf{W}_{2},...,\mathbf{W}_{L}\), where \(\mathbf{W}_{\text{j}}\in\mathbb{R}^{d_{\text{j}}\times d_{\text{j}-1}}\) is the weight matrix for the layer j and \(d_{\text{j}}\) denotes the number of nodes on each layer. And the set of biases \(\mathbf{b}\) is a collection of vectors \(\mathbf{b}_{1},\mathbf{b}_{2},...,\mathbf{b}_{L}\), where \(\mathbf{b}_{\text{j}}\in\mathbb{R}^{d_{\text{j}}}\) is the bias vector for the layer j.
The output of each layer of the network is computed as
\[\mathbf{A}_{\text{j}}=\sigma_{\text{j}}(\mathbf{W}_{\text{j}}\mathbf{A}_{\text{j}-1}+\mathbf{ b}_{\text{j}})\]
where \(\sigma_{\text{j}}\) is a nonlinear activation function, \(\mathbf{A}_{0}=\mathbf{x}\) and \(\sigma_{L}\) is the identity function. The weights and biases are obtained via a gradient-based optimization algorithm applied to a loss function, whose gradients are efficiently calculated via backpropagation. This process adjusts the parameters to minimize a specified loss function, which in our case is a discretized and computable approximation to the dual norm of the PDE residual, as defined in the following section.
We impose homogeneous Dirichlet-type boundary conditions on our candidate solutions by using a cutoff function \(\xi:\overline{\Omega}\to\mathbb{R}^{d\times d}\). Specifically, we define the approximation of the solution of (2.1) as \(\mathbf{E}=\xi\mathbf{\tilde{E}}\), where \(\mathbf{\tilde{E}}\) is the output of the fully-connected feed-forward neural network \(\mathcal{N}(\mathbf{x};\mathbf{W},\mathbf{b})\) and \(\xi\) is smooth and non-trainable. The function \(\xi\) gives matrices that are positive definite in \(\Omega\), and enforces the tangential operator to be zero on the boundary, i.e., the constraint \(\gamma_{t}(\mathbf{E})=0\) on \(\Gamma\).
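To make this construction concrete, the following minimal Python sketch (not the authors' implementation) builds such a network and applies one possible diagonal cutoff on \(\Omega=[0,\pi]^{2}\). The specific choice \(\xi=\operatorname{diag}(\sin y,\sin x)\) is an assumption used only for illustration: it is positive inside \(\Omega\) and makes the tangential trace of \(\mathbf{E}\) vanish on \(\partial\Omega\).

```python
import numpy as np
import tensorflow as tf

# Plain fully-connected network N(x; W, b): hidden tanh layers, identity output layer.
def make_network(width=20, depth=5, out_dim=2):
    layers = [tf.keras.layers.Dense(width, activation="tanh") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(out_dim))  # sigma_L is the identity
    return tf.keras.Sequential(layers)

network = make_network()

def candidate_solution(xy):
    """E = xi * E_tilde on Omega = [0, pi]^2.

    Illustrative diagonal cutoff: E_1 vanishes on y = 0, pi and E_2 vanishes
    on x = 0, pi, so the tangential trace gamma_t(E) is zero on the boundary.
    """
    x, y = xy[:, 0:1], xy[:, 1:2]
    e_tilde = network(xy)                            # raw NN output, shape (n, 2)
    xi = tf.concat([tf.sin(y), tf.sin(x)], axis=1)   # diagonal entries of xi(x, y)
    return xi * e_tilde

pts = tf.constant(np.random.rand(8, 2) * np.pi, dtype=tf.float32)
print(candidate_solution(pts).shape)                 # (8, 2)
```

Any smooth, non-trainable \(\xi\) with these properties plays the same role; the one above is simply convenient on the unit square used in the experiments.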
### The discretized loss
In this section, we construct a discretized and computable loss function \(\mathcal{L}\) as an approximation of the dual norm of the residual (2.10). For the sake of simplicity and exposition, we restrict the details in this section to squares, i.e., \(\Omega=[0,\pi]^{2}\) using the basis functions in Table 1. The upcoming calculations are similar for other Cartesian product domains in 2D and 3D.
Notice that calculating the integrals in \(\langle\mathcal{R}(\mathbf{E}),\Phi_{k}\rangle_{H^{\prime}\times H}\) may be costly since it requires the evaluation of many functions and their derivatives. However, we underline that the basis functions for the space \(H_{0}(\operatorname{curl},\Omega)\) are specified in terms of Laplacian's eigenbasis and thus both the basis functions and their derivatives are described in terms of sines and cosines. This feature motivates the use of efficient methods for approximating such integrals, which are naturally based on the Fast Fourier Transform (FFT). In this regard, applying Discrete Sine/Cosine transformations (DST/DCT) is a well-known strategy that significantly reduces computational complexity when applying a \(N\) point midpoint rule for \(N\) basis functions. Specifically, using DST/DCT as a quadrature rule reduces the amount of required calculations from \(O(N^{2})\) to \(O(N\log(N))\). In our case, the Discrete Sine/Cosine transforms appear naturally when one applies the
mid-point integration rule to the integrals appearing in the residual. Here, we use the type II Sine/Cosine transforms defined in [4, Section 4.2]. Each transform is represented by an \(N\times N\) matrix as
\[(S_{N}^{II})_{\mathrm{ij}} :=\sqrt{\frac{2}{N}}\sigma_{\mathrm{i}}\sin\left(\frac{\pi}{N} \left(\mathrm{j}+\frac{1}{2}\right)(\mathrm{i}+1)\right)\text{ with }\sigma_{\mathrm{i}}=\left\{\begin{array}{cc} \frac{1}{\sqrt{2}}&\mathrm{i}=N-1,\\ 1&\mathrm{i}\neq N-1,\end{array}\right.\] \[(C_{N}^{II})_{\mathrm{ij}} :=\sqrt{\frac{2}{N}}\sigma_{\mathrm{i}}^{\prime}\cos\left(\frac{ \pi}{N}\left(\mathrm{j}+\frac{1}{2}\right)\mathrm{i}\right)\text{ with }\sigma_{\mathrm{i}}^{\prime}=\left\{ \begin{array}{cc}\frac{1}{\sqrt{2}}&\mathrm{i}=0,\\ 1&\mathrm{i}\neq 0,\end{array}\right.\]
where \(\mathrm{i},\mathrm{j}=0,\ldots,N-1\).
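As a quick illustration (not part of the method itself), the following NumPy snippet builds the two matrices exactly as defined above and checks that they are orthogonal, which is what allows them to act as an exact change to the sine/cosine basis inside the quadrature.

```python
import numpy as np

def dst2_matrix(N):
    """Orthonormal type-II DST matrix (S_N^II)_{ij} as defined above."""
    i = np.arange(N)[:, None]   # frequency index
    j = np.arange(N)[None, :]   # sample index
    sigma = np.where(i == N - 1, 1.0 / np.sqrt(2.0), 1.0)
    return np.sqrt(2.0 / N) * sigma * np.sin(np.pi / N * (j + 0.5) * (i + 1))

def dct2_matrix(N):
    """Orthonormal type-II DCT matrix (C_N^II)_{ij} as defined above."""
    i = np.arange(N)[:, None]
    j = np.arange(N)[None, :]
    sigma = np.where(i == 0, 1.0 / np.sqrt(2.0), 1.0)
    return np.sqrt(2.0 / N) * sigma * np.cos(np.pi / N * (j + 0.5) * i)

N = 8
S, C = dst2_matrix(N), dct2_matrix(N)
assert np.allclose(S @ S.T, np.eye(N))  # rows are orthonormal
assert np.allclose(C @ C.T, np.eye(N))
```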
Given cutoff frequencies (Fourier modes) \(N>0\) and \(M>0\), we aim to approximate \(\langle\mathcal{R}(\mathbf{E}),\Phi_{k}\rangle_{H^{\prime}\times H}\), where \(k=(k_{1},k_{2})\) is such that \(0\leq k_{1}\leq N\) and \(0\leq k_{2}\leq M\). Moreover, the indices \(k\) satisfy the conditions in Table 1, and we let \(\mathcal{I}\) denote the set of all appropriate indices. For each \(k\in\mathcal{I}\), the basis functions \(\Phi_{k}\) are \(\Phi_{k}=\mathbf{\phi}_{k}+\mathbf{\psi}_{k}\), with \(\mathbf{\phi}_{k}\) and \(\mathbf{\psi}_{k}\) as defined in Table 1. Next, we consider the mid-point integration rule, that is,
\[\int_{\Omega}f\,d\mathbf{x}\approx\frac{\pi^{2}}{NM}\sum_{\mathrm{i}=0}^{N-1}\sum _{\mathrm{j}=0}^{M-1}f(x_{\mathrm{i}},y_{\mathrm{j}}), \tag{4.1}\]
where the integration points are \(x_{\mathrm{i}}=\frac{2\mathrm{i}+1}{2N}\pi\) and \(y_{\mathrm{j}}=\frac{2\mathrm{j}+1}{2M}\pi\). Applying (4.1) in (2.6), we approximate the integrals appearing in the residual as
\[\begin{split}\int_{\Omega}\mu^{-1}\mathrm{curl}(\mathbf{E})\cdot \mathrm{curl}(\Phi_{k})\,d\mathbf{x}&\approx\sum_{\mathrm{i}=0}^{N-1 }\sum_{\mathrm{j}=0}^{M-1}\frac{\pi^{2}}{\sqrt{NM}}\mu^{-1}\mathrm{curl}(\mathbf{E} )(x_{\mathrm{i}},y_{\mathrm{j}})\,\alpha_{k}\left(C_{N}^{II}\right)_{k_{1}- \mathrm{i}\mathrm{j}}\left(C_{M}^{II}\right)_{k_{2}-\mathrm{i}\mathrm{j}}\\ &=:\mathcal{R}_{1k}(\mu^{-1}\mathrm{curl}(\mathbf{E}))\\ \int_{\Omega}\omega^{2}\epsilon\mathbf{E}\cdot\Phi_{k}\,d\mathbf{x}& \approx\sum_{\mathrm{i}=0}^{N-1}\sum_{\mathrm{j}=0}^{M-1}\frac{\pi^{2}}{\sqrt{ NM}}\omega^{2}\epsilon\,\mathbf{E}(x_{\mathrm{i}},y_{\mathrm{j}})\cdot\mathbf{\alpha}_{k}^{ \prime}\,\mathbf{C}_{\mathrm{ij}}^{k}\mathbf{S}_{\mathrm{ij}}^{k}\\ &=:\mathcal{R}_{2k}(\omega^{2}\epsilon\,\mathbf{E})\\ \int_{\Omega}i\omega\mathbf{J}\cdot\Phi_{k}\,d\mathbf{x}& \approx\sum_{\mathrm{i}=0}^{N-1}\sum_{\mathrm{j}=0}^{M-1}\frac{\pi^{2}}{\sqrt{ NM}}i\omega\mathbf{J}(x_{\mathrm{i}},y_{\mathrm{j}})\cdot\mathbf{\alpha}_{k}^{\prime} \,\mathbf{C}_{\mathrm{ij}}^{k}\mathbf{S}_{\mathrm{ij}}^{k}\\ &=\mathcal{R}_{2k}(i\omega\mathbf{J})\end{split} \tag{4.2}\]
where \(\alpha_{k}=\frac{c_{k}(k_{1}k_{2}-k_{1}^{2})}{\sqrt{|k|^{4}+|k|^{2}}}\) and \(\mathbf{\alpha}_{k}^{\prime}=\left(\frac{2k_{1}}{\pi|k|}+\frac{c_{k}k_{2}}{\sqrt{| k|^{4}+|k|^{2}}},\frac{2k_{2}}{\pi|k|}-\frac{c_{k}k_{1}}{\sqrt{|k|^{4}+|k|^{2}}} \right)^{\!t}\mathbb{I}\), with \(\mathbb{I}\) being the identity matrix. The matrices \(\mathbf{C}_{\mathrm{ij}}^{k}\) and \(\mathbf{S}_{\mathrm{ij}}^{k}\) contain the cosine and sine transformations as follows
\[\mathbf{C}_{\mathrm{ij}}^{k}=\begin{pmatrix}\left(C_{N}^{II}\right)_{k_{1}- \mathrm{i}}&0\\ 0&\left(C_{M}^{II}\right)_{k_{2}-\mathrm{i}\mathrm{j}}\end{pmatrix}\text{ and }\mathbf{S}_{\mathrm{ij}}^{k}=\begin{pmatrix}\left(S_{M}^{II}\right)_{k_{2}- \mathrm{i}}&0\\ 0&\left(S_{N}^{II}\right)_{k_{1}-\mathrm{i}\mathrm{j}}\end{pmatrix}.\]
Notice that, to evaluate the curl of the candidate solution \(\mathbf{E}\) one needs to use automatic differentiation as described in [1]. Finally, our discretized loss is
\[\mathcal{L}(\mathbf{E}):=\sqrt{\sum_{k\in\mathcal{I}}|\mathcal{R}_{1k}(\mu^{-1}\mathrm{curl}(\mathbf{E}))-\mathcal{R}_{2k}(\omega^{2}\epsilon\,\mathbf{E})-\mathcal{R}_{2k}(i\omega\mathbf{J})|^{2}}. \tag{4.3}\]
We point out that it is also possible to employ a different number of integration points and basis functions (modes), but we omit the details here. Notice that using more integration points than Fourier modes is expected to reduce integration errors.
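For clarity, the two computational ingredients just mentioned can be sketched as follows: the scalar curl of the candidate solution obtained via automatic differentiation, and the final reduction of the residual coefficients into the loss (4.3). This is only an illustration: the function names are ours, `model` is assumed to map points \((x,y)\) to the two components of \(\mathbf{E}\), and the reduction assumes the squared-coefficient form of the dual norm.

```python
import numpy as np
import tensorflow as tf

def scalar_curl_2d(model, xy):
    """2D scalar curl, curl(E) = dE2/dx - dE1/dy, via automatic differentiation."""
    xy = tf.convert_to_tensor(xy)
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(xy)
        E = model(xy)              # shape (n, 2)
        E1, E2 = E[:, 0], E[:, 1]
    dE1 = tape.gradient(E1, xy)    # row k holds (dE1/dx, dE1/dy) at point k
    dE2 = tape.gradient(E2, xy)
    del tape
    return dE2[:, 0] - dE1[:, 1]

def dfr_loss(residual_coeffs):
    """Loss (4.3): square root of the sum of squared residual coefficients.

    `residual_coeffs` collects, for every admissible mode k, the complex value
    R_1k(mu^-1 curl E) - R_2k(omega^2 eps E) - R_2k(i omega J) computed as in (4.2).
    """
    return np.sqrt(np.sum(np.abs(residual_coeffs) ** 2))
```

In practice, the curl values at the midpoint grid feed the DST/DCT-based sums in (4.2), whose outputs are exactly the `residual_coeffs` reduced above.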
## 5 Numerical experiments
In this section, we present four numerical experiments that illustrate the main capabilities and some of the limitations of the DFR method. We use _Tensorflow 2.8_ and implement a feed-forward fully-connected
NN that consists of five hidden layers, each with 20 neurons with a tanh activation function. In our implementation, Adam serves as the optimizer. Through the use of a _Callback_, we allow the optimizer to dynamically modify the learning rate based on the decay of the loss and reject iteration steps that result in an increase in loss (see [38]). We choose a starting learning rate of \(10^{-4}\). In addition, we use a validation set and compare the training and validation losses every iteration. This method aids in detecting overfitting, as mentioned in [31].
### Case 1. Smooth solution in 2D
Let \(\Omega=[0,\pi]^{2}\) and consider the variational form: find \(\mathbf{E}\in H_{0}(\mathrm{curl},\Omega)\) satisfying
\[\int_{\Omega}\mathrm{curl}(\mathbf{E})\cdot\mathrm{curl}(\mathbf{\phi})+\mathbf{E}\cdot \mathbf{\phi}\,d\mathbf{x}=\int_{\Omega}\tilde{\mathbf{J}}\cdot\mathbf{\phi}\,d\mathbf{x}\qquad \forall\mathbf{\phi}\in H_{0}(\mathrm{curl},\Omega). \tag{5.1}\]
Here, \(\tilde{\mathbf{J}}\) is chosen such that the exact solution is \(\mathbf{E}^{*}(x,y)=(xy(y-\pi),xy(x-\pi))^{t}\). Notice that, the bilinear form in (5.1) is precisely the inner product on \(H(\mathrm{curl},\Omega)\) and thus the dual norm of the PDE residual and the \(H(\mathrm{curl})\)-norm of the error are equal, in the sense that the constants \(M\) and \(\gamma\) in (2.8) are equal to 1. Whilst this physically does not correspond to the standard Maxwell's equations, we include this as a test of the mathematical accuracy of the method.
We take \(N=M=100\) integration points for training the NN and 117 integration points in each direction for validation. Moreover, we select 100 modes in both training and validation.
In Figure 1(a) we show the evolution of the loss on the training and validation data sets. After \(10^{4}\) iterations, both losses stabilize and converge to a limiting value. On the other hand, we show in Figure 1(b) the contribution of the spaces \(\nabla H_{0}^{1}(\Omega)\) and \(X_{0}(\Omega)\) to the squared loss on the training set. We let \(\mathcal{L}_{\nabla H_{0}^{1}}\) and \(\mathcal{L}_{X_{0}}\) denote the parts of \(\mathcal{L}(\mathbf{E})^{2}\) computed by using only the basis functions of the corresponding subspaces.
In Figure 2 we show the relationship between the losses and the relative error of the solution during training and validation. We define the relative error in terms of the \(H(\mathrm{curl})\)-norm of the error as:
\[\mathcal{E}(\mathbf{E}):=\frac{\|\mathbf{E}-\mathbf{E}^{*}\|_{H(\mathrm{curl},\Omega)}}{ \|\mathbf{E}^{*}\|_{H(\mathrm{curl},\Omega)}}\]
and we always measure this error on the validation set. In Figure 2 we include a straight line with slope one and highlight the linear relationship between the loss and the relative error, as expected in view of (2.8). We conclude that the proposed loss is an accurate approximation of the \(H(\mathrm{curl})\)-norm of the error.
Figure 3 shows the obtained solution, the curl of the solution, and the corresponding errors calculated pointwise.
Figure 1: The evolution of the loss in Case 1.
Figure 3: Solution and errors for the model Case 1.
Figure 2: The correlation between the loss \(\mathcal{L}(\mathbf{E})\) and the relative error of the solution \(\mathcal{E}(\mathbf{E})\) during training and validation in Case 1.
### Case 2. Discontinuous parameters in 2D
We take \(\Omega=[0,\pi]^{2}\). We define \(\Omega_{0}=\{(x,y):f_{+}(x,y)<1\}\) with \(f_{+}(x,y)=\left(x-\frac{\pi}{2}\right)^{2}+\left(y-\frac{\pi}{2}\right)^{2}\), and select \(\mu\) and \(\epsilon\) to be piecewise continuous, such that
\[\mu(x,y)= \mu_{1}\mathbf{1}_{\Omega_{0}}+\mu_{2}(1-\mathbf{1}_{\Omega_{0}}),\] \[\epsilon(x,y)= \epsilon_{1}\mathbf{1}_{\Omega_{0}}+\epsilon_{2}(1-\mathbf{1}_{ \Omega_{0}}),\]
with \(\mu_{1}=3,\mu_{2}=1\), \(\epsilon_{1}=1\) and \(\epsilon_{2}=3\). With these particular parameters, we will consider two PDEs, one corresponding to the coercive variational form, and a particular case of Maxwell's equations. In both cases, we consider an exact solution \(\mathbf{E}^{*}=(E_{1}^{*},E_{2}^{*})^{t}\) defined as:
\[E_{1}^{*}(x,y)= \left\{\begin{array}{cc}-\mu_{1}k_{1}\left(1-f_{-}(x,y)\right) \left(y-\frac{\pi}{2}\right)&(x,y)\in\Omega_{0}\\ -\mu_{2}k_{2}\left(1-f_{-}(x,y)\right)\left(r^{2}-f_{-}(x,y)\right)\left(y- \frac{\pi}{2}\right)&\text{else}\end{array}\right. \tag{5.2}\] \[E_{2}^{*}(x,y)= \left\{\begin{array}{cc}-\mu_{1}k_{1}\left(1-f_{-}(x,y)\right) \left(x-\frac{\pi}{2}\right)&(x,y)\in\Omega_{0}\\ -\mu_{2}k_{2}\left(1-f_{-}(x,y)\right)\left(r^{2}-f_{-}(x,y)\right)\left(x- \frac{\pi}{2}\right)&\text{else}\end{array}\right.\]
where \(f_{-}(x,y)=\left(x-\frac{\pi}{2}\right)^{2}-\left(y-\frac{\pi}{2}\right)^{2}\), \(k_{1}=1,k_{2}=35\) and \(r=6\). In particular, we note that the solution admits discontinuities in both the vector field itself and the curl across the \(\partial\Omega_{0}\), and thus the solution is in \(H(\text{curl},\Omega)\setminus H^{1}(\Omega)\).
### Case 2.1. Coercive variational form
First, we consider the variational form: find \(\mathbf{E}\in H_{0}(\text{curl},\Omega)\) satisfying
\[\int_{\Omega}\mu^{-1}\text{curl}(\mathbf{E})\cdot\text{curl}(\mathbf{\phi})+\epsilon \mathbf{E}\cdot\mathbf{\phi}\,d\mathbf{x}=\int_{\Omega}\tilde{\mathbf{J}}\cdot\mathbf{\phi}\,d\bm {x}\qquad\forall\mathbf{\phi}\in H_{0}(\text{curl},\Omega). \tag{5.3}\]
Here, \(\tilde{\mathbf{J}}\) is chosen such that the exact solution is (5.2). Notice that the exact solution does not satisfy the strong form of the equation due to interface conditions at the discontinuities.
In this case, as the loss requires the integration of discontinuous functions, we expect more significant integration errors during training. Thus, we employ a larger number of integration points than modes to mitigate this problem. Specifically, we take \(N=M=200\) integration points for training the NN and 234 points in each direction for validation. In both cases, we use 150 modes.
The evolution of the loss on the training and validation data sets is shown in Figure 4(a). Figure 4(b) shows the contribution of the spaces \(\nabla H_{0}^{1}(\Omega)\) and \(X_{0}(\Omega)\) to the squared loss on the training set, denoted \(\mathcal{L}_{\nabla H_{0}^{1}}\) and \(\mathcal{L}_{X_{0}}\), respectively. The largest loss is attributed to the space \(X_{0}(\Omega)\). This means that any enhancements
Figure 4: The evolution of the loss in Case 2.1.
to the basis functions in this domain will have a major impact on lowering the overall loss. Further research in this area could include parameterizing the number of basis functions related to each space separately.
Figure 5 shows the obtained solution, the curl of the solution and the corresponding errors. As expected, the maximum errors are located close to the discontinuities of the parameters, which is inevitable in our implementation as we are using smooth NNs to approximate discontinuous functions.
The relationship between losses and the relative error in the solution at each iteration is shown in Figure 6. In the asymptotic regime, we obtain a linear relationship between the loss and the error. Since (5.3) is related to the inner product of \(H(\operatorname{curl},\Omega)\), in this case we can use the Riesz representation theorem to estimate the equivalence constants \(M\) and \(\gamma\) in (2.8). A simple calculation gives us \(\frac{1}{M}=\frac{1}{3}\) and \(\frac{1}{\gamma}=3\). Figure 6 exhibits the expected oscillation between these two parallel lines.
Now, we remark on the importance of accurately calculating the integrals in (4.3). Figure 7 shows the loss evolution and the correlation between the \(H(\operatorname{curl})\)-norm of the error and the loss when fewer integration points are used than in Case 2.1. There we use \(N=M=100\) integration points for training the NN and \(120\) points in each direction for validation. In Figure 7(a) we observe a divergence in the training and validation losses after roughly \(2000\) iterations, owing to integration errors, highlighting the need for accurate integration when solutions are of lower regularity.
Figure 5: Solution and errors for the model Case 2.1.
### Case 2.2. The physical variational form
Now, we modify Case 2 and seek \(\mathbf{E}\in H_{0}(\mathrm{curl},\Omega)\) satisfying
\[\int_{\Omega}\mu^{-1}\mathrm{curl}(\mathbf{E})\cdot\mathrm{curl}(\mathbf{\phi})-\omega^ {2}\epsilon\mathbf{E}\cdot\mathbf{\phi}\,d\mathbf{x}=\int_{\Omega}\tilde{\mathbf{J}}\cdot\mathbf{ \phi}\,d\mathbf{x},\]
with \(\omega=1.25\). This modification corresponds to the weak formulation of the time-harmonic Maxwell's equations in (2.5) with positive discontinuous parameters \(\mu\) and \(\epsilon\).
In contrast to Case 2.1, Figure 8 shows that the loss decreases more slowly and that the relation between the loss and the error is not linear. This effect is a clear consequence of the inclusion of the frequency \(\omega\), which modifies the bounds of the error in (2.8).
### Case 3. Smooth solution in 3D
Take \(\Omega=[0,\pi]^{3}\). We consider the variational form: find \(\mathbf{E}\in H_{0}(\mathrm{curl},\Omega)\) satisfying
\[\int_{\Omega}\mu^{-1}\mathrm{curl}(\mathbf{E})\cdot\mathrm{curl}(\mathbf{\phi})- \omega^{2}\epsilon\mathbf{E}\cdot\mathbf{\phi}\,d\mathbf{x}=\int_{\Omega}\tilde{\mathbf{J}} \cdot\mathbf{\phi}\,d\mathbf{x}\qquad\forall\mathbf{\phi}\in H_{0}(\mathrm{curl},\Omega). \tag{5.4}\]
Figure 6: The correlation between the loss \(\mathcal{L}(\mathbf{E})\) and the relative error in the solution \(\mathcal{E}(\mathbf{E})\) during training and validation in Case 2.1.
Figure 7: The evolution of the loss and the correlation between the loss \(\mathcal{L}(\mathbf{E})\) and the relative error in the solution \(\mathcal{E}(\mathbf{E})\) during training and validation in Case 2.1 when using an inaccurate integration rule.
We choose \(\mu\) and \(\epsilon\) to be constant and equal to \(1\), and \(\omega=1.5\). Here, \(\tilde{\mathbf{J}}\) is such that the exact solution is
\[\mathbf{E}^{*}(x,y,z)=\begin{bmatrix}\sin(y)\sin(z)\sin(\omega x)\\ \sin(x)\sin(z)\sin(\omega y)\\ \sin(x)\sin(y)\sin(\omega z)\end{bmatrix}. \tag{5.5}\]
Here, we impose homogeneous-Dirichlet boundary conditions on \(\partial\Omega\). We take a partition of \(50\) integration points in each direction for training the NN and \(60\) points for validation. Moreover, we use \(50\) modes in both training and validation.
Similarly to the 2D cases, Figure 9(a) shows the evolution of the loss on the training and validation data sets. After \(10^{5}\) iterations, the losses stabilize and reach values of about \(10^{-3}\). Figure 9(b) shows the contribution of the spaces \(\nabla H^{1}_{0}(\Omega)\) and \(X_{0}(\Omega)\) to the squared loss on the training set.
Finally, we illustrate the correlation between the loss and the relative error in the solution during training and validation in Figure 10. In the asymptotic regime, the relationship between the loss and the error is linear.
Figure 8: The evolution of the loss and the correlation between the loss \(\mathcal{L}(\mathbf{E})\) and the relative error in the solution \(\mathcal{E}(\mathbf{E})\) during training and validation in Case 2.2.
Figure 9: The evolution of the loss in Case 3.
## 6 Conclusions
We extended the principles in [37] and implemented the DFR method for solving Maxwell's equations. In this case, we select the dual norm of the residual as the loss function of a NN solving Maxwell's problem. We rely on the weak formulation of Maxwell's electric field and the Helmholtz decomposition of the space \(H_{0}(\text{curl},\Omega)\). We proposed orthonormal basis functions for each sub-space and used them to construct a computable loss function. To lower the computational cost of estimating integrals, we apply DST/DCT transformations in our discretized loss function. In two and three spatial dimensions, the numerical examples show a linear association between the loss and the \(H(\text{curl})\)-norm of the error. We included examples with discontinuous parameters and noticed the importance of avoiding overfitting and integration errors.
We note that the DFR discussed here suffers from the curse of dimensionality and that the implementation differs in 2D and 3D. Additionally, the DFR in 2D can only be applied to general domains if the eigenbasis of the Laplacian on those geometries is known. The 3D version is restricted to Cartesian product domains, i.e., rectangular or cylindrical domains built from the Laplacian's eigenbasis in 2D, which again must be known in order to implement the method. Future work will look at the method's scalability and investigate the use of subdomain-based local test functions. By doing so, we will consider more general geometries.
## 7 Acknowledgements
Jamie M. Taylor is supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Agency of Research through "BCAM Severo Ochoa" accreditation of excellence SEV-2017-0718 and through the project (PID2020-114189RB-I00 / AEI / 10.13039 / 501100011033). David Pardo has received funding from: the Spanish Ministry of Science and Innovation projects with references TED2021-132783B-I00, PID2019-108111RB-I00 (FEDER/AEI) and PDC2021-121093-I00 (MCIN / AEI / 10.13039/501100011033/Next Generation EU), the "BCAM Severo Ochoa" accreditation of excellence CEX2021-001142-S / MICIN / AEI / 10.13039/ 501100011033; the Spanish Ministry of Economic and Digital Transformation with Misiones Project IA4TES (MIA.2021.M04.008 / NextGenerationEU PRTR); and the Basque Government through the BERC 2022-2025 program, the Elkartek project SIGZE (KK-2021/00095), and the Consolidated Research Group MATHMODE (IT1456-22) given by the Department of Education. Ignacio Muga is supported by the Chilean National Agency for Research & Development through the Fondecyt Project #1230091.
|
2306.08336 | Global-Local Processing in Convolutional Neural Networks | Convolutional Neural Networks (CNNs) have achieved outstanding performance on
image processing challenges. Actually, CNNs imitate the typically developed
human brain structures at the micro-level (Artificial neurons). At the same
time, they distance themselves from imitating natural visual perception in
humans at the macro architectures (high-level cognition). Recently it has been
investigated that CNNs are highly biased toward local features and fail to
detect the global aspects of their input. Nevertheless, the literature offers
limited clues on this problem. To this end, we propose a simple yet effective
solution inspired by the unconscious behavior of the human pupil. We devise a
simple module called Global Advantage Stream (GAS) to learn and capture the
holistic features of input samples (i.e., the global features). Then, the GAS
features were combined with a CNN network as a plug-and-play component called
the Global/Local Processing (GLP) model. The experimental results confirm that
this stream improves the accuracy with an insignificant additional
computational/temporal load and makes the network more robust to adversarial
attacks. Furthermore, investigating the interpretation of the model shows that
it learns a more holistic representation similar to the perceptual system of
healthy humans. | Zahra Rezvani, Soroor Shekarizeh, Mohammad Sabokrou | 2023-06-14T08:08:08Z | http://arxiv.org/abs/2306.08336v1 | # Global-Local Processing in Convolutional Neural Networks
###### Abstract
Convolutional Neural Networks (CNNs) have achieved outstanding performance on image processing challenges. Actually, CNNs imitate the typically developed human brain structures at the micro-level (Artificial neurons). At the same time, they distance themselves from imitating natural visual perception in humans at the macro architectures (high-level cognition). Recently it has been investigated that CNNs are highly biased toward local features and fail to detect the global aspects of their input. Nevertheless, the literature offers limited clues on this problem. To this end, we propose a simple yet effective solution inspired by the unconscious behavior of the human pupil. We devise a simple module called Global Advantage Stream (GAS) (\(\mathcal{G}\)) to learn and capture the holistic features of input samples (i.e., the global features). Then, the GAS features were combined with a CNN network as a plug-and-play component called the Global/Local Processing (GLP) model. The experimental results confirm that this stream improves the accuracy with an insignificant additional computational/temporal load and makes the network more robust to adversarial attacks. Furthermore, investigating the interpretation of the model shows that it learns a more holistic representation similar to the perceptual system of healthy humans 1.
Footnote 1: Source code is available here
## 1 Introduction
Deep learning methods, as a cutting edge of artificial intelligence, are trained by filtering information through multiple hidden layers. Current DNNs can mimic the human brain at the micro-level (neuronal level) but fail to deal with macro-level network behavior (cognitive level).
[Baker et al., 2018] show that deep convolutional neural networks tend to rely on texture information rather than on the general shape. They trained CNNs to categorize images using artificial images with misleading textures and suggest that texture plays a vital role in CNNs. They claimed that deep learning systems have no sensitivity to the overall shape of images and showed that benchmark CNNs cannot distinguish bounding contours of objects. In contrast, humans attend to the features of the general shape before considering local features such as texture [Hermann et al., 2020]. The stimuli in the natural world have an inherently hierarchical architecture: the general form, or global level, and the detailed texture, or local level. Global/Local processing (GLP) is one of the important early debates in psychology about the human perceptual system throughout the past four decades and is still an ongoing challenge. The Global Precedence Effect (GPE), as a modern version of Gestalt theory, claims that individuals process global features more readily and faster than local details [Navon, 1977]. GPE has been investigated in a series of experiments with hierarchical compound stimuli consisting of a global letter/shape formed by the configuration of local letters/shapes (see Fig. 1), where the local and global levels of the stimuli are independent. This phenomenon is responsible for the ability to generalize in humans. So, it helps us perceive the forest before the trees and categorize objects correctly, despite the differences in their details [Navon, 1977].
Pupil diameter is subconsciously controlled by the brain in response to environmental stimuli. As the pupil shrinks, the reflection of images is focused on the fovea (located in the center of the retina), which is made up mostly of cone photoreceptor cells. These cells are responsible for receiving high-level features, or details. In contrast, the area around the fovea is filled with rod cells. These cells are responsible for low-level features and have a low spatial acuity. As the pupil opens, the reflection of the environment is received by both cell types. The pupil is primarily regulated by prevailing light levels but is also modulated by perceptual and attentional factors. [Sabatino DiCriscio et al., 2018] found through psychophysical experiments with hierarchical stimuli that individuals have a characteristic constriction of the pupil waveform during the selection of local information relative to global information. They indicate that pupil changes may serve as a visual filtering mechanism important for attentional
Figure 1: Examples of hierarchical compound stimuli, that are commonly used to evaluate the ability to detect at the general and partial levels separately in diagnostic applications. They were used to design the global advantage layer in experiment 1.
selection. This work represented the first characterization of pupil response in the context of selective attention and suggested that mechanisms underlying the earliest stages of visual processing could be relevant for perception and visual selection. Also, it has been observed that children with ASD showed pupil constriction as a response to images of faces (Anderson et al., 2006), whereas neurotypical children showed pupil dilation in response to the same stimuli (de Vries et al., 2021). There is further evidence that pupillometry reliably tracks inter-individual differences in perceptual style as a biomarker, and that individuals with typically developed perception distribute attention to both surfaces in a more global, holistic style (Turi et al., 2018). Recently, it has been shown that the Vision Transformer (ViT) performs better at modeling the holistic features of images (Dosovitskiy et al., 2020). ViT splits the images into fixed-size patches, embeds them, and feeds them to a Transformer Encoder (TE) (Dosovitskiy et al., 2020). TE was inspired by vanilla transformers introduced for NLP tasks (Vaswani et al., 2017). (Aldahdoodi et al., 2021) assessed such models as more robust to adversarial examples. Nevertheless, ViT models are computationally expensive.
In this paper, a Global Advantage Stream (GAS) is added to increase the accuracy and robustness of common CNNs. The purpose of this stream is to provide these networks with a holistic view, which not only increases their accuracy in categorizing images but also dramatically improves their resistance to common attacks. The novelty of this study is that, unlike previous state-of-the-art research, the design of GAS is completely inspired by the subconscious function of the human pupil. Also, the function of this stream is very similar to early therapeutic intervention methods using robots in educating autistic children, which facilitate decoding of the overall features of the perceptual environment by removing details and help them return to the typical processing route based on the GPE observed in neurotypical individuals. The main contributions of this paper are:
* The presented model, unlike CNNs, can consider global features in addition to local ones. The method inspired by the subconscious function of the human pupil extracts both sets of features simultaneously. Feature sets were concatenated to classify the images based on both global form and detail texture. The existence of global features in the feature bank empowers the model to follow the top-down attention strategy in addition to the bottom-up attention approach.
* It has been shown that the proposed method is both more accurate and more robust than CNNs, while imposing only an insignificant additional computational/time load on the CNN model. Also, the proposed method has better interpretability than CNNs. It has been shown that the proposed model has better performance and, because of its holistic view, is more resistant to common adversarial attacks. Furthermore, better explainability according to the XAI method confirms better localization of the whole object in the images instead of focusing on a local detailed part.
## 2 Method
The main objective of this paper is, inspired by subconscious human behavior, to force the deep neural network to learn both global and local representations. This makes the model more accurate and robust. The new model, called the Global/Local Processing (GLP) model, is composed of two main components (i.e., streams) that are concatenated in parallel: (1) the local stream (\(\mathcal{L}\)) and (2) the quick global stream (\(\mathcal{G}\)). \(\mathcal{L}\) is a conventional CNN that inherently learns the local features and complex local patterns. To compensate for the inability of such models to learn holistic features, we introduce the \(\mathcal{G}\) stream. \(\mathcal{G}\) is composed of the GAS module, which is described below. In short, this module is made of a smart filter followed by two convolutional layers (feature maps). GAS is responsible for capturing global features, inspired by the subconscious function of the human pupil.
Fig. 2 shows the overall schema of the proposed method. The inside information of \(\mathcal{G}\) and \(\mathcal{L}\) components and training procedure is explained in the following subsections.
### GAS: Global Advantage Stream
The goal of GAS is to extract global features. In fact, this stream is inspired by the subconscious function of the human pupil. During focus, the environment projects only onto the fovea (populated exclusively by cones with high spatial acuity), but in normal situations with dilated pupils, most of the ambient light is received
Figure 2: Overview of the proposed method (GLP model). Global and local features are extracted through separated streams and then all the features concatenate to classify the images.
by rods with low spatial acuity. Fig. 3 (a) displays the frequency distribution of these two cell types relative to the distance from the fovea.
In the GAS layer, firstly, a smart low-pass filter in the frequency domain is applied to the input image, aiming to attenuate high-frequency noise. The most important point about the layer is that the cut-off parameter (\(\alpha\)) is set automatically according to each input. To find the proper value of alpha smartly, we used the entropy criterion. The amount of uncertainty in an entire probability distribution is quantified using the Shannon entropy. The entropy is calculated from the following Equ. 1, where \(\mathcal{I}(\mathcal{X})\) is defined as the self-information of an event \(\mathcal{X}=x\)[Goodfellow et al., 2016]. It has been observed that by increasing the radius of the Gaussian low-pass filter, the image entropy first slightly increases; after this phase, the routine continues in reverse, and with further increases of this parameter, the entropy changes in the opposite direction. According to this finding, as a next step, we find the value of \(\alpha\) that maximizes the entropy of the input (filtered image) based on Equ. 2. Interestingly, by smart filtering selectively with this value, all the local information is faded, and instead, the global structure of the image is more readily detected. The value of the optimum \(\alpha\) varies based on each image structure and size (see Fig. 3 (b)). In GAS, after removing the local details smartly, there are two feature map layers followed by two layers for pooling and batch normalization (see Fig. 2).
\[H(x)=\mathbb{E}_{x\sim P}[\mathcal{I}(\mathcal{X})]=-\,\mathbb{E}_{x\sim P}[\log P(x)] \tag{1}\]
\[\alpha^{*}=\arg\max_{\alpha}H(filteredImage(X,\alpha)) \tag{2}\]
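A minimal NumPy sketch of this entropy-driven selection is given below. It is only an illustration: the Gaussian transfer function, the grey-scale range \([0,1]\), the 256-bin histogram, and the \(\alpha\) grid are our assumptions and need not match the exact implementation.

```python
import numpy as np

def gaussian_lowpass(img, alpha):
    """Filter a grey-scale image (values in [0, 1]) with a Gaussian low-pass of radius alpha."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    mask = np.exp(-dist2 / (2.0 * alpha ** 2))        # assumed Gaussian transfer function
    out = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    return np.clip(out, 0.0, 1.0)

def shannon_entropy(img, bins=256):
    """Entropy of the grey-level histogram (Eq. 1)."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def smart_filter(img, alphas=np.linspace(1.0, 60.0, 60)):
    """Pick the cut-off alpha* that maximizes the entropy of the filtered image (Eq. 2)."""
    entropies = [shannon_entropy(gaussian_lowpass(img, a)) for a in alphas]
    best = alphas[int(np.argmax(entropies))]
    return gaussian_lowpass(img, best), best
```

The output of `smart_filter` would then be passed to the two convolutional feature-map layers of GAS.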
### Global/Local processing Model
The GLP Model is designed in such a way that global features are obtained from one stream, and local features are obtained from another stream. Afterward, they are concatenated with each other as an ultimate feature bank. Finally, there is a fully connected layer to map the features to the output category. As shown in Fig. 2, the local stream consists of a pre-trained CNN. As we discussed in the Introduction (Section 1), CNNs extract features based on local details in images, so most of them could be considered as the local stream (\(\mathcal{L}\)) in our model. On the other hand, the global stream (\(\mathcal{G}\)) is made up of a GAS module.
\(\mathcal{G}+\mathcal{L}\)**Training:** The training of this hybrid model is done in several steps as follows:
1. For \(\mathcal{L}\) layer weights, we use the pre-trained weights of common CNN models, and local features are calculated using them.
2. The GAS module is trained on training data to extract global features.
3. Then, the two feature sets are concatenated, and the feature bank has been completed.
4. Finally, the fully connected layer is trained so that it can perform the classification with the best accuracy.
In the following section, we will show how GAS works to extract global features in the first experiment 3.1. Then we represent how this new stream improves the accuracy and robustness of the benchmark models in the next section in the second experiment 3.2. After that, we demonstrate that the interpretability of the model increases via an Explainable AI method (see Fig. 5).
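Putting the pieces together, a minimal PyTorch sketch of the two-stream assembly described in steps 1–4 above is given below. The channel widths, kernel sizes, and the 64-dimensional global feature size are illustrative assumptions (the paper only specifies two convolutional feature maps with pooling and batch normalization), and `x_filtered` stands for the smart-filtered input produced by the GAS filter.

```python
import torch
import torch.nn as nn
from torchvision import models

class GASStream(nn.Module):
    """Global stream: (smart-filtered) image -> two conv feature maps -> pooled vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, out_dim, 5, stride=2, padding=2), nn.BatchNorm2d(out_dim), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.features(x).flatten(1)          # (B, out_dim)

class GLPModel(nn.Module):
    """Two-stream model: local CNN features + global GAS features -> FC classifier."""
    def __init__(self, num_classes=101):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")  # torchvision >= 0.13
        backbone.fc = nn.Identity()                  # keep the 512-d local feature vector
        self.local_stream = backbone
        self.global_stream = GASStream(out_dim=64)
        self.classifier = nn.Linear(512 + 64, num_classes)
    def forward(self, x, x_filtered):
        feats = torch.cat([self.local_stream(x), self.global_stream(x_filtered)], dim=1)
        return self.classifier(feats)

model = GLPModel()
x = torch.randn(2, 3, 224, 224)
print(model(x, x).shape)   # torch.Size([2, 101]); in practice x_filtered is the smart-filtered input
```

The same wrapper works with an InceptionV3 backbone by swapping the pre-trained model and the local feature dimension.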
## 3 Results
To evaluate the proposed method, we have designed two experiments. Firstly, we investigated how GAS extracts the global shape using the Navon dataset. We also showed that commonly used CNNs could not succeed in this simple task. Then,
Figure 3: a) A constricted pupil reflects all the ambient light onto the fovea, covered by cone receptors with high acuity. But when the pupil dilates, image reflections are received simultaneously by rod receptors with low acuity, based on the cells' distribution in the human eye. Rod cells are more responsible for peripheral vision [Wandell, 1995]. b) Examples of smart filtering visualization. **First column:** input images, **Second column:** diagrams of changes in the amount of entropy by increasing the \(\alpha\), **Third column:** the optimum \(\alpha\) that maximizes the value of entropy in the filtered image, and **last column:** outputs of applying the proposed smart filtering using the optimum \(\alpha\).
using an XAI algorithm, we presented how GAS extracts exactly the global shape, while the others focus only on local details.
We also evaluated the GLP model accuracy using the Caltech101 dataset [Fei-Fei et al., 2004]. Furthermore, we evaluated the robustness of our model against adversarial attacks. Moreover, to demonstrate the interpretability of our method, visualizations of feature maps were presented.
We have exploited Gradient-weighted Class Activation Mapping (Grad-CAM) [Selvaraju et al., 2017] to showcase the interpretability of the models in both experiments. Grad-CAM highlights the most important areas in the image by computing the gradient of the classification score with respect to the final convolutional layer.
All experiments are implemented using PyTorch [Paszke et al., 2019] and performed using an NVIDIA GeForce RTX 2060 GPU.
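For reference, a compact Grad-CAM along these lines can be written with forward/backward hooks; the sketch below is ours, not the visualization code used in the experiments, and the target layer `model.layer4[-1]` in the usage comment is a hypothetical choice for a ResNet18 backbone.

```python
import torch
import torch.nn.functional as F

class GradCAM:
    """Minimal Grad-CAM: heatmap = ReLU( sum_k w_k * A_k ), with w_k the pooled gradients."""
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.acts, self.grads = None, None
        target_layer.register_forward_hook(self._save_acts)
        target_layer.register_full_backward_hook(self._save_grads)

    def _save_acts(self, module, inputs, output):
        self.acts = output.detach()

    def _save_grads(self, module, grad_input, grad_output):
        self.grads = grad_output[0].detach()

    def __call__(self, x, class_idx=None):
        scores = self.model(x)                                   # (1, num_classes)
        idx = int(scores.argmax(dim=1)) if class_idx is None else class_idx
        self.model.zero_grad()
        scores[0, idx].backward()
        weights = self.grads.mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
        cam = F.relu((weights * self.acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # (1, 1, H, W) in [0, 1]

# Example (hypothetical target layer): cam = GradCAM(model, model.layer4[-1])(img.unsqueeze(0))
```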
### GAS module design and comparison
In this experiment, we have compared the performance of the GAS model with two commonly used CNN networks, Resnet18 [He et al., 2016] and InceptionV3 [Szegedy et al., 2015], to classify the Navon compound stimuli dataset [Navon, 1977] based on the global/local shape. We trained the GAS network and fine-tuned pre-trained Resnet18 and Inception-v3 models with simple augmented shape images (3000 images in two categories) to recognize the shape of circles and squares. Then, we evaluated all the networks with the Navon compound stimuli dataset based on the local/global level. The initial learning rate for training the GAS network and fine-tuning the pre-trained models is equal to \(2e^{-3}\) with a decay rate. Also, we used the stochastic gradient descent (SGD) optimizer with a momentum equal to 0.9 and a batch size of 64 for all the models. All the information about the computational and time load has been summarized in Appendix A.
**Navon Dataset**[Navon, 1977] Navon is a set of compound stimuli with independent information at the local vs. global level (see Fig. 1). This experiment used geometrical shapes (circle vs. square) for both levels in different sizes, sparsity, and line width (4152 hierarchical images). The dataset has been used to test the ability to detect shapes at the global/local level separately. The Navon dataset and the dataset of simple shape images have been included in the supplementary material.
**Results**. The results of this experiment are summarized in Table 1. It lists the top-1 model accuracy on the Navon dataset for both local and global image shapes. As illustrated, GAS outperforms the two other CNNs in global shape detection, while Resnet18 and InceptionV3 obtain better performance for local shape detection.
**Visualization**. Fig. 4 presents the visualized feature maps in the global shape detection task. These visualizations confirm the power of GAS as a global feature extractor, while the CNN models fail to localize the global shape and only highlight the local features.
**Discussion**. In this experiment, we confirm that
1. The commonly used heavy CNNs perform weakly in global form detection but more powerfully in local shape detection and detailed texture extraction.
2. The GAS module, despite its higher speed (according to Appendix 5), is more powerful in extracting global features, inspired by the subconscious function of the human pupil.
3. Visualizations confirm the global detection ability in GAS, unlike the other CNNs.
In the next experiment, we improve the CNNs by combining the GAS module with them as an extra parallel stream. Then, we evaluate the new hybrid model in terms of accuracy and robustness.
### GLP model evaluation and explainability
In the second experiment, we aimed to evaluate the GLP model's classification accuracy and its robustness in facing adversarial attacks compared to the mentioned CNNs on the Caltech101 dataset. For comparison, we train a GLP model (called GA-Resnet) using a pre-trained Resnet18 as the \(\mathcal{L}\) stream, and our pre-trained GAS model (GAS-299) on the Caltech101 dataset as the \(\mathcal{G}\) stream, also in the same way for comparing our GLP model (called GA-Inception) with the pre-trained InceptionV3. Similar to the previous experiment, we trained the GLP network and fine-tuned pre-trained Resnet18 and Inception-v3 models by the initial Learning rate \(2e^{-3}\) with a decay rate, the SGD optimizer, a batch size of 8, and with the standard input images size \(299\times 299\) for InceptionV3, GLP-299, and GA-Inception models, also \(224\times 224\) for Resnet18, GLP-224, and GA-Resnet models.
**Caltech101**[Fei-Fei et al., 2004] is a well-known dataset for object classification which consists of \(\sim 9K\) images belonging to 101 classes (e.g., "starfish", "dolphin" and "umbrella" etc.) and a background clutter class that contains different objects from the 101 categories. To evaluate our approach, we did not use the background images and split the rest of the 101 classes of images into the train(60%), validation(20%), and test(20%).
**Adversarial Attacks** For evaluating robustness we apply two common adversarial attacks called Fast Gradient Sign Method (FGSM) [Goodfellow et al., 2014]
\begin{table}
\begin{tabular}{l l l} \hline Model & Acc on Local & Acc on Global \\ \hline Resnet18 & **85.24** & 53.65 \\ \hline InceptionV3 & **75.6** & 62.15 \\ \hline Global Advantage Stream & 56.42 & **86.28** \\ \hline \end{tabular}
\end{table}
Table 1: Top1 Accuracy (%) of the models in global/local shape detection tasks on Navon dataset
and Projected Gradient Descent (PGD) (Kurakin et al., 2018). To employ attacks, we used the CleverHans (Papernot et al., 2016) which is a Python library for adversarial attacks. Both FGSM and PGD are categorized in White box attacks which means that the attacker has access to the model's parameters.
FGSM attack, which first is introduced by (Goodfellow et al., 2014) is a simple yet effective method by using the gradients of a CNN to generate adversarial images. As Equ. 3 is defined, for an input image \(x\), FGSM computes the loss of the model prediction regarding the actual class label, then calculates the gradients of the loss with respect to the input image, and uses the sign of the gradients to create the new adversarial image \(Adv(x)\) which maximizes the loss. For a given input image \(x\), the adversarial image is generated as follows:
\[Adv(x)=x+\epsilon\cdot\mathrm{sign}(\nabla_{x}\mathcal{J}(\theta,x,y)) \tag{3}\]
where \(y\) is the actual label of the input, \(\epsilon\) is used to ensure the perturbations are small enough not to be detected by human eyes but large enough to fool the CNN, \(\mathcal{J}\) is the model loss function, and \(\theta\) denotes the model parameters.
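A from-scratch PyTorch sketch equivalent to this attack is given below for clarity; it is not the CleverHans call used in the experiments, and the clamping to \([0,1]\) assumes inputs scaled to that range.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM (Eq. 3): x_adv = x + eps * sign( grad_x J(theta, x, y) )."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)   # keep the perturbed image in a valid range
    return x_adv.detach()
```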
Figure 4: Comparison of visualization on the Navon dataset global test images. The top row shows input images, and the rest of the rows depict the visualization results of GAS, InceptionV3, and Resnet18, respectively
PGD attack generates new adversarial images in an iterative scheme. Following Equ. 4, PGD tries to maximize the loss of the CNN model on an input image \(x\) while finding a perturbation smaller than \(\epsilon\). Besides defining \(\epsilon\) as the maximum perturbation size, it is required to determine a metric to calculate the distance from the adversarial image \(Adv(x)\) to the input image \(x\). This metric ensures that the output adversarial example is not perceptibly different from humans. Among the various \(L_{p}\) norms \(p=2\) or \(p=\infty\) are the most common-use (Carlini and Wagner, 2017). The PGD attack is formulated as follows:
\[Adv(x)_{i}=CLIP_{x,\epsilon}\left(Adv(x)_{i-1}+\lambda\,\mathrm{sign}(\nabla_{x}\mathcal{J}(\cdot))\right);\qquad Adv(x)_{0}=x_{original}, \tag{4}\]
where \(i\) denotes the iteration index, \(CLIP\) is an operation that clips \(x\) back to the permissible set, \(\lambda\) is the step size, and \(\mathcal{J}\) is the model loss function.
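Similarly, a hedged sketch of the \(L_{\infty}\) PGD loop is shown below; the step size \(\lambda=\epsilon/3\) follows the text, while the number of iterations and the \([0,1]\) image range are our assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=10, step_size=None):
    """L_inf PGD (Eq. 4): repeated signed-gradient steps, projected back to the eps-ball."""
    step_size = eps / 3.0 if step_size is None else step_size   # lambda = eps / 3, as in the text
    x_adv = x.clone().detach()
    for _ in range(steps):                                       # number of steps: assumed here
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)             # CLIP_{x, eps}: project to the ball
            x_adv = x_adv.clamp(0.0, 1.0)                        # stay in the valid image range
    return x_adv.detach()
```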
To evaluate the robustness of the GLP model, we investigated the accuracy changes by increasing the perturbation size for the FGSM attack(\(\epsilon\)) and the maximum perturbation size(\(\epsilon\)) for the PGD attack. Following (Reddy et al., 2020), we reported the accuracy of our GLP model compared to the fine-tuned InceptionV3 and Resnet18 models on various \(\epsilon=[0,0.001,0.005,0.01,0.05,0.1,0.15,0.5]\) for both attacks. We calculated the accuracy as 1 - (naturally misclassified images + adversarial misclassified examples) since we run the adversarial attacks only on images that were not naturally misclassified. All the experiments are repeated for five iterations and the average accuracy was reported. For the PGD attack, we report \(L_{\infty}\) PGD results in our experiments. Also, we set the step size \(\lambda\) to \(\epsilon/3\) since it allows the PGD attack to reach the edge of the permissible set and explore the boundary as much as having a reasonable computation time.
**Results**. For training the GLP model, first, the GAS module was trained.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline & 0 & 0.001 & 0.005 & 0.01 & 0.05 & 0.1 & 0.15 & 0.5 \\ \hline Resnet18 & 86.41 & 74.72 & 32.89 & 17.56 & 7.55 & 7.89 & 9.45 & 10.02 \\ \hline GAS-Resnet18 & **91.24** & **86.52** & **61.12** & **41.30** & **25.27** & **24.54** & **24.77** & **15.44** \\ \hline InceptionV3 & 90.50 & 88.13 & 72.24 & 61.06 & 42.86 & 41.61 & 42.74 & 36.22 \\ \hline GAS-InceptionV3 & **92.40** & **91.76** & **83.12** & **71.37** & **53.51** & **48.47** & **47.01** & **40.67** \\ \hline \end{tabular}
\end{table}
Table 2: Top1 Accuracy (%) on FGSM attack for different \(\epsilon\)
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline & 0 & 0.001 & 0.005 & 0.01 & 0.05 & 0.1 & 0.15 & 0.5 \\ \hline Resnet18 & 86.41 & 74.72 & 32.89 & 17.56 & 7.55 & 7.89 & 9.45 & 10.02 \\ \hline GAS-Resnet18 & **91.24** & **86.52** & **61.12** & **41.30** & **25.27** & **24.54** & **24.77** & **15.44** \\ \hline InceptionV3 & 90.50 & 88.13 & 72.24 & 61.06 & 42.86 & 41.61 & 42.74 & 36.22 \\ \hline GAS-InceptionV3 & **92.40** & **91.76** & **83.12** & **71.37** & **53.51** & **48.47** & **47.01** & **40.67** \\ \hline \end{tabular}
\end{table}
Table 3: Top1 Accuracy (%) on PGD attack for different \(\epsilon\)
The extracted global features were concatenated with local features extracted by pre-trained CNN to classify the images. The results in Table 4 show that the GAS model can consistently improve the performance of the CNN methods significantly. In other words, the results validate the importance of global features in the object classification task.
Also, the top1 accuracy results in Table 2 and Table 3 indicate that the GLP model is overall more robust than the CNNs against both FGSM and PGD attacks. In addition, the diagrams of the top5 and top10 accuracy comparison are shown in Appendix B for further comparisons.
**Visualization** In order to visualize the GLP model feature maps with Grad-CAM, and since this method requires a convolution layer for extracting feature maps, we provide an extra convolution layer after the GLP network and right before the concatenation operation of the GLP and pre-trained CNN networks. This extra layer equalizes the size of the last convolution layer of GLP with the last convolution layer of the pre-trained network. Thus, after training this new architecture, we are capable of visualizing the feature maps of our GLP model using the Grad-CAM method. Fig. 5 depicts the comparison of Grad-CAM visualization between the CNNs and the modified version with the GAS module. The remarkable ability of the GAS network to localize objects' boundaries, together with the power of CNN networks, provides a more precise object localization for two-stream networks compared to using a single CNN network.
**Discussion** In this experiment, we modified commonly used CNNs with a GAS module to improve them with an extra quick holistic view. According to the results, this module not only improved the accuracy of the models but also made them more resistant to attacks. In fact, this module, inspired by the subconscious function of the human pupil as an early function in perception, helped these CNNs, as local processors, to become more holistic in the same way as humans, and made the models more accurate, more robust, and more explainable.
## 4 Conclusion
The main goal of this paper is to develop CNNs with an extra quick holistic view. To this end, we first introduce a new module called GAS to extract global features. The main idea behind this module is a smart filtering layer. This layer, inspired by the subconscious function of the human pupil, attenuates high-frequency details using a low-pass filter. The cut-off parameter is chosen smartly so that the total entropy of the whole filtered image is at its maximum. This new hybrid model (the GLP model) has both sets of local and global features to detect images correctly.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Resnet18 & GA-Resnet & InceptionV3 & GA-Inception \\ \hline
86.41 & **91.24** & 90.50 & **92.40** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Top1 Accuracy (%) on Caltech101 dataset
The new model not only has better performance in image classification, but the robustness of the model has also increased. Also, it has been shown that the explainability of the model is improved. GPE, as a strong hypothesis in cognitive psychology, is not limited to the visual modality. It holds in all modalities, such as audition or language processing. So, as future work, it is suggested to improve the models by adding a holistic view in other modalities and applications. It is hoped that these improvements not only increase the performance and robustness of the models but also help deep learning methods approach their main essence by imitating humans at a higher level of cognition.
## 5 Compliance with Ethical Standards
We hereby declare that there are no conflicts of interest with respect to the research presented in this paper. Furthermore, we affirm that we have not received any external funding or financial support for this research. It is important to note that this article does not include any studies involving human participants conducted by any of the authors.
Figure 5: Comparison of visualization with Grad-CAM. In each column, the visualization results and the predicted label of the corresponding model are shown for three input images. |
2304.09651 | Non-Archimedean Vertex Algebras | Foundations of the theory of vertex algebras are extended to the
non-Archimedean setting. | Victor G. Kac | 2023-04-19T13:43:17Z | http://arxiv.org/abs/2304.09651v1 | # Non-Archimedean Vertex Algebras
###### Abstract.
Foundations of the theory of vertex algebras are extended to the non-Archimedean setting.
## 0. Introduction
This is a joint work with Vladimir G. Berkovich.
The notion of a vertex algebra was introduced by Borcherds [Bo] in terms of infinitely many bilinear products \(a_{(n)}b\), \(n\in{\bf Z}\), satisfying a cubic identity, similar to, but much more complicated than, the Jacobi identity for a Lie algebra. This notion is a rigorous definition of the chiral part of a 2-dimensional conformal field theory, studied intensively by physicists and mathematicians since the landmark paper of Belavin, Polyakov and Zamolodchikov [BPZ].
Much simpler axioms were used in the book [K], where it was demonstrated that they are equivalent to the Borcherds identity. These axioms are those of the paper [FKRW], which were inspired by the paper of Goddard [G]. They are formulated in terms of fields \(a(z)=\sum_{n\in{\bf Z}}a_{(n)}z^{-n-1}\), which are the generating series of the operators of \(n^{\rm th}\) multiplication \(a_{(n)}\), where \(a_{(n)}b=0\) for \(n\gg 0\).
In more detail, a vertex algebra is a vector space \(V\) (space of states) over a unital commutative ring \(K\), endowed with a vector \(|0\rangle\) (vacuum vector), an endomorphism \(T\) (infinitesimal translation operator), and a linear map of \(V\) to the space of fields (the state-field correspondence) \(a\mapsto a(z)=\sum_{n\in{\bf Z}}a_{(n)}z^{-n-1}\), where \(a_{(n)}\in{\rm End}(V)\), \(a_{(n)}b=0\) for each \(b\in V\), if \(n\gg 0\), subject to the following axioms (\(a\), \(b\in V\)):
1. (_vacuum_) \(T\,|0\rangle=0\), \(|0\rangle\,(z)=I_{V}\), \(a(z)\,|0\rangle\in a+zV[[z]]\);
2. (_translation covariance_) \([T,a(z)]=\frac{d}{dz}a(z)\);
3. (_locality_) \((z-w)^{N}[a(z),b(w)]=0\) for \(N\gg 0\).
In fact, as explained in [K], these axioms form the essential part of the Wightman axioms of an arbitrary QFT [W].
In the present paper we develop the notion of a non-Archimedean vertex algebra, along the lines of [K], which generalizes the notion of a vertex algebra. See Definition 2.1.1, where they are called vertex \(K\)-algebras. Namely, we take for \(K\) a commutative non-Archimedean Banach ring and for \(V\) a Banach \(K\)-module. (In the case of "ordinary" vertex algebras one takes the trivial Banach norm on \(K\) and on \(V\).) Furthermore the condition on \(a(z)\) to be a field is replaced by
\[\lim_{n\to+\infty}a_{(n)}b=0\mbox{ for each }b\in V,\]
the vacuum and translation covariance axioms remain the same, but the locality axiom generalizes to
\[\lim_{N\to+\infty}(z-w)^{N}[a(z),b(w)]=0.\]
Finally, the field-state correspondence
\[a(z)\mapsto a(z)\left|0\right>|_{z=0}\]
is not surjective in general, but it is injective, with image \(V^{\prime}\) (Lemma 2.1.2), so that in general the state-field correspondence is an unbounded operator with the domain \(V^{\prime}\). Thus, we cannot identify the spaces of states and of fields (like in [FR]), as is always done in the case of ordinary vertex algebras.
A non-Archimedean vertex algebra \(V\) is called admissible if the field-state homomorphism of Banach \(K\)-modules is admissible. In this case \(V^{\prime}\) is a closed submodule of \(V\) and the state-field homomorphism is bounded. In particular, if \(K\) is a field with a non-trivial valuation, \(V\) is admissible if and only if \(V=V^{\prime}\).
In the first part of the paper we develop the theory of formal distributions and fields in the non-Archimedean Banach setting. In particular, we prove the Decomposition Theorem for any local formal distribution in two indeterminates (Lemma 1.5.3); we define the \(n^{\text{th}}\) product \(a(z)_{(n)}b(z)\) of two fields for each \(n\in\mathbf{Z}\), and establish their properties, in particular, Dong's Lemma 1.8.7.
In the second part we define a non-Archimedean vertex algebra over a non-Archimedean commutative Banach ring \(K\), which is torsion free as an abelian group (Definition 2.1.1), and prove their basic properties. In particular, we prove Goddard's uniqueness theorem (Lemma 2.2.1), the \(n^{\text{th}}\) product identity (Lemma 2.7.2 ii), and the Borcherds identity (Corollary 2.2.5). Furthermore, we establish the Extension Theorem (Theorem 2.4.1), which along with the ring extension lemma (Lemma 2.4.4) allows us to construct examples of non-Archimedean vertex algebras. In §2.5 we discuss their construction, associated to Lie (super)algebras. Based on this, we construct in Subsection 2.6 the free boson and fermion vertex algebras, and the universal Virasoro and affine vertex algebras.
Finally in §2.7 we introduce the notion of a non-Archimedean Lie conformal algebra of radius \(r>0\) (Definition 2.7.1), and show that for a non-Archimedean vertex algebra the space of fields carries a structure of a non-Archimedean Lie conformal algebra of radius \(r\), defined by the \(\lambda\)-bracket
\[[a(z)_{\lambda}b(z)]=\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}\left(a(z)_{(n)}b (z)\right),\]
where
\[\lim_{n\to+\infty}r^{n}||a(z)_{(n)}b(z)||/n!=0,\]
see Proposition 2.7.2. Here we assume that the base Banach ring \(K\) contains \(\mathbf{Q}\), and we let \(r=1\) (resp. \(r=|p|^{\frac{1}{p-1}}\)) if the valuation \(|.|\) of \(\mathbf{Q}\), induced from \(K\) is trivial (resp. \(p\)-adic).
When our paper was nearing completion, we learned about the paper [FM], where \(p\)-adic vertex algebras are introduced and studied. Their context is more restrictive, since they assume that the base ring \(K\) is a \(p\)-adic field and that the state-field correspondence is bijective. On the other hand, they consider a connection of their theory to \(p\)-adic modular forms.
## 1. Formal distributions and fields
### Non-Archimedean Banach algebras and modules (see [B] for details)
A _non-Archimedean Banach ring_ is a unital ring \(A\) complete with respect to a non-Archimedean Banach norm. Recall that the latter is a function
\(||\ ||:A\to{\bf R}_{+}\) satisfying the following axioms: (1) \(||a||=0\) if and only if \(a=0\); (2) \(||ab||\leq||a||\cdot||b||\); and (3) \(||a+b||\leq\max\{||a||,||b||\}\). An example of a non-Archimedean norm is the _trivial_ one \(||\ ||_{0}\), which takes value \(1\) at all nonzero elements. Any abstract ring is complete with respect to the trivial norm, i.e., it can be considered as a non-Archimedean ring.
A _Banach \(A\)-module_ is a left \(A\)-module \(V\) complete with respect to a _Banach norm_, i.e., a function \(||\ ||:V\to{\bf R}_{+}\) satisfying the following axioms: (1) \(||v||=0\) if and only if \(v=0\); (2) \(||av||\leq||a||\cdot||v||\) for all \(a\in A\); and (3) \(||u+v||\leq\max\{||u||,||v||\}\). A _Banach \(A\)-algebra_ is a non-Archimedean Banach ring \(B\) provided with a homomorphism of rings \(\varphi:A\to B\) such that \(||\varphi(a)||\leq||a||\) for all \(a\in A\).
_Throughout the text, K is a fixed commutative non-Archimedean Banach ring, and all modules considered are K-modules or Banach K-modules, unless otherwise stated._
Given Banach \(K\)-modules \(U\) and \(V\), a \(K\)-linear operator \(T:U\to V\) is said to be _bounded_ if there exists a positive constant \(C\) such that \(||Tu||\leq C||u||\) for all \(u\in U\). The infinum of such \(C\)'s is called the _norm_ of \(T\) and denoted by \(||T||\). Banach \(K\)-modules form an additive category in which sets of morphisms are the spaces \({\rm Hom}(U,V)={\rm Hom}_{K}(U,V)\) of all bounded \(K\)-linear operators from \(U\) to \(V\), called _homomorphisms of Banach \(K\)-modules_. Such a space, provided with the above norm, is a Banach \(K\)-module. If \(V=K\), it is denoted by \(U^{*}\) and, if \(U=V\), it is denoted by \({\rm End}(V)\). Notice that \({\rm End}(V)\) is a Banach \(K\)-algebra, and to provide a Banach \(K\)-module \(V\) with the structure of a Banach \(A\)-module for a Banach \(K\)-algebra \(A\) is the same as to provide \({\rm End}(V)\) with the structure of a Banach \(A\)-algebra. Given Banach \(A\)-modules \(U\) and \(V\), the space \({\rm Hom}_{A}(U,V)\) of bounded homomorphisms of \(A\)-modules is a closed \(K\)-submodule of \({\rm Hom}(U,V)\). Of course, if \(A\) is commutative, it is again a Banach \(A\)-module.
Any closed \(K\)-submodule \(U\) of a Banach \(K\)-module \(V\) is a Banach \(K\)-module with respect to the induced norm. In this case, the quotient \(V/U\) is a Banach \(K\)-module with respect to the quotient norm (the norm of a coset is the infimum of norms of elements from this coset). A homomorphism of Banach \(K\)-modules \(\varphi:U\to V\) is said to be _admissible_ if the Banach norm on \({\rm Im}(\varphi)\), induced from \(V\), is equivalent to the norm induced from \({\rm Coim}(\varphi)=U/{\rm Ker}(\varphi)\) by the algebraic isomorphism \({\rm Coim}(\varphi)\widetilde{\to}{\rm Im}(\varphi)\). In this case, \({\rm Im}(\varphi)\) is complete with respect to the norm, induced from \(V\), i.e., it is closed in \(V\) and the above algebraic isomorphism is in fact an isomorphism of Banach \(K\)-modules. A bijective homomorphism of Banach \(K\)-modules is an isomorphism (in the category of Banach \(K\)-modules) if and only if it is admissible. Obviously, any isometry is an admissible homomorphism.
Given Banach \(K\)-modules \(U\), \(V\) and \(W\), a \(K\)-bilinear map \(\varphi:U\times V\to W\) is _bounded_ if there is a positive constant \(C\) with \(||\varphi(u,v)||\leq C||u||\cdot||v||\) for all \(u\in U\) and \(v\in V\). The _completed tensor product_ of Banach \(K\)-modules \(U\) and \(V\) is a Banach \(K\)-module \(U\widehat{\otimes}_{K}V\) provided with a bounded \(K\)-bilinear map \(U\times V\to U\widehat{\otimes}_{K}V\) such that any bounded \(K\)-bilinear map \(U\times V\to W\) goes through a unique bounded \(K\)-linear operator \(U\widehat{\otimes}_{K}V\to W\). It is clear that the completed tensor product is unique up to a unique isomorphism, and it is constructed as follows. Consider the usual tensor product \(U\otimes_{K}V\) (over \(K\)), and provide it with a real valued function \(||\ ||\) as follows: \(||x||=\inf\max\limits_{i}||u_{i}||\cdot||v_{i}||\), where the infimum is taken over all presentations of \(x\in U\otimes V\) in the form of a finite sum \(\sum_{i}u_{i}\otimes v_{i}\). This function is a semi-norm (i.e., it possesses the properties (2) and (3) of norms),
and it gives rise to a norm on the quotient of \(U\otimes_{K}V\) by its kernel. Then \(U\widehat{\otimes}_{K}V\) is the completion of the quotient with respect to that norm. Notice that, if \(V\) is a Banach \(K^{\prime}\)-module for a commutative non-Archimedean Banach \(K\)-algebra \(K^{\prime}\), then \(U\widehat{\otimes}_{K}V\) has the structure of a Banach \(K^{\prime}\)-module (defined in the evident way).
**Remark 1.1.1**.: (i) The function \(||\ ||^{\prime}:K\to{\bf R}_{+}\), defined by \(||a||^{\prime}=\sup_{b\neq 0}\frac{||ab||}{||b||}\) (it is the norm of the operator of multiplication by \(a\)), is a Banach norm, equivalent to the norm \(||\ ||\). For example, \(||1||^{\prime}=1\). Similarly, for any Banach \(K\)-module \(V\), the function \(||\ ||^{\prime}:V\to{\bf R}_{+}\), defined by \(||v||^{\prime}=\sup_{a\neq 0}\frac{||av||}{||a||}\) is a Banach norm on \(V\), equivalent to its norm \(||\ ||\), and one has \(||av||^{\prime}\leq||a||^{\prime}\cdot||v||^{\prime}\). Thus, we may always assume that the norm of elements from the image of \({\bf Z}\) in \(K\) is at most one.
(ii) A non-Archimedean Banach norm on a commutative ring \(A\) is said to be a (real) _valuation_ if it is multiplicative, i.e., \(||ab||=||a||\cdot||b||\) for all \(a,b\in A\). For example, by the Ostrowski theorem, any valuation on \({\bf Z}\) is either trivial, or \(p\)-adic for some prime integer \(p\). The latter means that \(|p|<1\). In this case, \(|n|=|p|^{e}\), where \(e\) is the maximal integer such that \(p^{e}\) divides \(n\), and the completion of \({\bf Z}\) is the ring of \(p\)-adic integers \({\bf Z}_{p}\).
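For instance, for \(p=2\) with the common normalization \(|2|=\frac{1}{2}\) (any value \(|2|<1\) would do; this particular choice is made only for illustration), one has \[|12|_{2}=|2^{2}\cdot 3|_{2}=\tfrac{1}{4}\,\qquad|8+4|_{2}=|12|_{2}=\tfrac{1}{4}\leq\max\{|8|_{2},|4|_{2}\}=\tfrac{1}{4}\,\] which illustrates both multiplicativity and the non-Archimedean inequality.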
### Examples of Banach modules and algebras
For a Banach \(K\)-module \(V\), let \(V[[z]]\) denote the Banach \(K\)-module of all formal power series \(f=\sum_{n=0}^{\infty}v_{n}z^{n}\) with \(||f||=\sup_{n}||v_{n}||<\infty\), and let \(V\{z\}\) denote the Banach \(K\)-submodule of \(V[[z]]\) consisting of the series with \(v_{n}\to 0\) as \(n\to\infty\). Similarly, let \(V[[z^{\pm 1}]]\) denote the Banach \(K\)-module of all formal power series \(f=\sum_{n\in{\bf Z}}v_{n}z^{n}\) with \(||f||=\sup_{n}||v_{n}||<\infty\), and let \(V\{z^{\pm 1}\}\) denote the Banach \(K\)-submodule of \(V[[z^{\pm 1}]]\) consisting of the series with \(v_{n}\to 0\) as \(n\to\pm\infty\). We also introduce an intermediate Banach \(K\)-submodule \(V[[z]]\subset V((z))\subset V[[z^{\pm 1}]]\) that consists of the series \(f=\sum_{n\in{\bf Z}}v_{n}z^{n}\in V[[z^{\pm 1}]]\) with \(v_{n}\to 0\) as \(n\to-\infty\).
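To illustrate the difference between these spaces, take for concreteness \(K=V=\mathbf{Q}_{p}\) with its \(p\)-adic norm (an arbitrary choice, made only for this example). Then \[\sum_{n\geq 0}z^{n}\in V[[z]]\setminus V\{z\}\,\qquad\sum_{n\geq 0}p^{n}z^{n}\in V\{z\}\,\qquad\sum_{n\in\mathbf{Z}}p^{|n|}z^{n}\in V\{z^{\pm 1}\}\,\] while \(\sum_{n<0}p^{-n}z^{n}+\sum_{n\geq 0}z^{n}\) lies in \(V((z))\) but not in \(V\{z^{\pm 1}\}\), since its coefficients tend to zero as \(n\to-\infty\) but not as \(n\to+\infty\).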
Notice that for any Banach \(K\)-module \(V\) the canonical bounded \(K\)-linear operator \(V\widehat{\otimes}K\{z\}\to V\{z\}\) is an isomorphism, but the similar operators \(V\widehat{\otimes}K[[z]]\to V[[z]]\) and \(V\widehat{\otimes}K((z))\to V((z))\) are not.
One can iterate the above constructions. For example, both Banach \(K\)-modules \(V[[z^{\pm 1}]][[w^{\pm 1}]]\) and \(V[[w^{\pm 1}]][[z^{\pm 1}]]\) (resp. \(V\{z^{\pm 1}\}\{w^{\pm 1}\}\) and \(V\{w^{\pm 1}\}\{z^{\pm 1}\}\)) are canonically identified and denoted by \(V[[z^{\pm 1},w^{\pm 1}]]\) (resp. \(V\{z^{\pm 1},w^{\pm 1}\}\)). But \(V((z))((w))\) and \(V((w))((z))\) are different Banach \(K\)-submodules of \(V[[z^{\pm 1},w^{\pm 1}]]\). Their intersection, denoted by \(V((z,w))\), consists of all series \(f=\sum_{m,n\in{\bf Z}}v_{mn}z^{m}w^{n}\) from \(V[[z^{\pm 1},w^{\pm 1}]]\) with \(v_{mn}\to 0\) as \(\min(m,n)\to-\infty\).
Assume now that the above \(V\) is a Banach \(K\)-algebra \(A\). Then all of the introduced spaces are Banach \(A\)-modules. Moreover, the spaces \(A[[z]]\), \(A\{z\}\), \(A\{z^{\pm 1}\}\) and \(A((z))\) are Banach \(K\)-algebras, \(A((z))\) is a Banach \(A\{z^{\pm 1}\}\)-algebra and, if \(V\) is a Banach \(A\)-module, \(V[[z^{\pm 1}]]\) and \(V((z))\) are Banach \(A\{z^{\pm 1}\}\)-modules. Notice that although \(A[[z^{\pm 1},w^{\pm 1}]]\) is not an algebra, the product \(f(z)g(w)\) of any pair of elements \(f(z)\in A[[z^{\pm 1}]]\) and \(g(w)\in A[[w^{\pm 1}]]\) is well defined, and one has \(||f(z)g(w)||\leq||f(z)||\cdot||g(w)||\).
For example, if the norms on \(K\) and \(A\) are trivial, \(A\{z^{\pm 1}\}\) is the algebra of Laurent polynomials \(A[z^{\pm 1}]\), \(A((z))\) is the algebra of Laurent power series (i.e., those with at most a finite number of nonzero coefficients at the monomials of negative degree), and \(A[[z^{\pm 1}]]\) is the space of bilateral formal power series in \(z\).
### A Banach \(K\)-module \(V((z,w))\{(z-w)^{-1}\}\)
Let \(V\) be a Banach \(K\)-module.
**Lemma 1.3.1**.: The following is true:
1. the operator of multiplication by \(z-w\) on \(V[[w^{\pm 1}]]((z))\) is an isometry;
2. \(\cap_{n=0}^{\infty}(z-w)^{n}V((z,w))=0\);
3. the operator of multiplication by \(u(z-w)-1\) on \(V((z,w))\{u\}\) is an isometry.
Proof.: (i) If \(f=\sum_{n\in\mathbf{Z}}f_{n}(w)z^{-n}\in V[[w^{\pm 1}]]((z))\), then \(||f||=\sup_{n}||f_{n}||\) and \(f_{n}\to 0\) as \(n\to+\infty\). One has \((z-w)f=\sum_{n\in\mathbf{Z}}(f_{n+1}-wf_{n})z^{-n}\) and, therefore,
\[||(z-w)f||=\sup_{n}||f_{n+1}-wf_{n}||\.\]
Assume that \(f\neq 0\). Given \(\varepsilon>0\), let \(m\) be maximal with the property \(||f_{m}||>||f||-\varepsilon\) and \(||f_{n}||\leq||f||-\varepsilon\) for all \(n\geq m+1\). Then \(||wf_{m}||=||f_{m}||\) and, therefore, \(||f_{m+1}-wf_{m}||=||f_{m}||>||f||-\varepsilon\), i.e., \(||(z-w)f||>||f||-\varepsilon\). Since the latter is true for every \(\varepsilon>0\), the required fact follows.
(ii) For \(n\in\mathbf{Z}\), let \(L_{n}\) be the bounded \(K\)-linear operator
\[V((z,w))\to V((z)):f=\sum_{i,j\in\mathbf{Z}}v_{ij}z^{i}w^{j}\mapsto\sum_{i\in \mathbf{Z}}v_{i,n-i}z^{i}\.\]
One can easily see that \(||f||=\sup_{n}||L_{n}f||\) and \(L_{n+1}((z-w)f)=(z-1)L_{n}(f)\). Thus, to verify the required fact, it suffices to show that \(\cap_{n=0}^{\infty}(z-1)^{n}V((z))=0\). Given an element \(f=\sum_{i\in\mathbf{Z}}v_{i}z^{i}\in V((z))\), let \(l(f)\) and \(r(f)\) be the minimal and maximal \(i\) with \(||v_{i}||=||f||\), and set \(c(f)=r(f)-l(f)\). One can easily see that \(l((z-1)f)=l(f)\) and \(r((z-1)f)=r(f)+1\) and, therefore, \(c((z-1)f)=c(f)+1\). It follows that an element \(f\in V((z))\) can be divisible by at most \(c(f)\)-th power of \(z-1\).
(iii) If \(f=\sum_{n=0}^{\infty}f_{n}(z,w)u^{n}\in V((z,w))\{u\}\), then \(f_{n}\to 0\) as \(n\to+\infty\) and \(||f||=\max_{n}||f_{n}||\). One has \((u(z-w)-1)f=-f_{0}+\sum_{n=1}^{\infty}((z-w)f_{n-1}-f_{n})u^{n}\) and, therefore
\[||(u(z-w)-1)f||=\max\{||f_{0}||,\max_{n\geq 1}||(z-w)f_{n-1}-f_{n}||\}\.\]
If \(||f||=||f_{0}||\), the statement is trivial. Assume that \(||f||>||f_{0}||\), and let \(m\) be maximal with the property \(||f||=||f_{m}||>||f_{n}||\) for all \(n\geq m+1\). By (i), \(||(z-w)f_{m}||=||f_{m}||\) and, therefore, \(||(z-w)f_{m}-f_{m+1}||=||f_{m}||=||f||\). This implies the required fact.
Let \(V((z,w))\{(z-w)^{-1}\}\) denote the quotient of \(V((z,w))\{u\}\) by the closed \(K\)-submodule \((u(z-w)-1)V((z,w))\{u\}\). It is a Banach \(K\)-module with respect to the quotient norm. We denote by \((z-w)^{-1}\) the image of \(u\) in it. Of course, if the norm on \(V\) is trivial, then \(V((z,w))\{(z-w)^{-1}\}=V((z,w))[(z-w)^{-1}]\).
**Corollary 1.3.2**.: Every element \(f\in V((z,w))\{(z-w)^{-1}\}\) has a unique representation in the form
\[f(z,w)=h(z,w)+\sum_{n=0}^{\infty}\frac{g_{n}(w)}{(z-w)^{n+1}}\,\]
where \(h\in V((z,w))\) and \(g_{n}\in V((w))\) are such that \(g_{n}\to 0\) as \(n\to\infty\). One also has \(||f||=\max\{||h||,\max_{n}||g_{n}||\}\).
There are the following bounded homomorphisms of Banach \(K\)-modules
\[V((z))((w))\stackrel{{ i_{z,w}}}{{\longleftarrow}}V((z,w))\{(z-w)^{-1}\}\stackrel{{ i_{w,z}}}{{\longrightarrow}}V((w))((z))\,\]
whose restrictions to \(V((z,w))\) are the canonical ones and which take \((z-w)^{n}\) for \(n\in{\bf Z}\) to its decompositions in the domains \(|z|>|w|\) and \(|z|<|w|\), respectively. For example, one has
\[i_{z,w}\left(\frac{1}{z-w}\right)=\sum_{n=0}^{\infty}w^{n}z^{-n-1}\ \ \text{and}\ \ i_{w,z}\left(\frac{1}{z-w}\right)=-\sum_{n=-1}^{-\infty}w^{n}z^{-n-1}\.\]
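As a quick consistency check, multiplying the first expansion by \(z-w\) telescopes back to \(1\): \[(z-w)\sum_{n=0}^{\infty}w^{n}z^{-n-1}=\sum_{n=0}^{\infty}w^{n}z^{-n}-\sum_{n=0}^{\infty}w^{n+1}z^{-n-1}=1\,\] and the analogous computation for \(i_{w,z}\left(\frac{1}{z-w}\right)\) also gives \(1\), as it must, since both homomorphisms send \((z-w)^{-1}\) to an inverse of \(z-w\).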
We denote by \(\Delta\) the operator (see SS1.5)
\[V((z,w))\{(z-w)^{-1}\}\to V[[z^{\pm 1},w^{\pm 1}]]:f\mapsto i_{z,w}(f)-i_{w,z}(f )\.\]
Notice that this operator commutes with multiplication by \(z-w\).
**Remarks 1.3.3**.: (i) The correspondence \(z\mapsto z\), \(w\mapsto-w\) extends in the evident way to an isometric automorphism of the Banach \(K\)-module \(V((z,w))\). This automorphism takes \(z-w\) to \(z+w\) and extends to an isometric isomorphism \(V((z,w))\{(z-w)^{-1}\}\widetilde{\to}V((z,w))\{(z+w)^{-1}\}:f(z,w)\mapsto f(z,-w)\). One can therefore apply the constructions of this subsection and, in particular, the operators \(i_{z,w}\) and \(i_{w,z}\) to the Banach \(K\)-module \(V((z,w))\{(z+w)^{-1}\}\).
(ii) The elements \(i_{z,w}(f)\) and \(i_{w,z}(f)\) are often denoted by \(f_{>}\) and \(f_{<}\), respectively.
### Derivation and residue operators
For a Banach \(K\)-module \(V\), the derivation \(\partial_{z}:V[[z^{\pm 1}]]\to V[[z^{\pm 1}]]\) is a bounded \(K\)-linear operator of norm one. More generally, for every \(i\geq 0\), there is a well defined bounded \(K\)-linear operator of norm one \(\partial_{z}^{(i)}=\frac{1}{i!}\partial_{z}^{i}:V[[z^{\pm 1}]]\to V[[z^{\pm 1}]]\)
\[\partial_{z}^{(i)}(\sum_{n\in{\bf Z}}v_{n}z^{n})=\sum_{n\in{\bf Z}}{n\choose i }v_{n}z^{n-i}\.\]
Notice that the operators \(\partial_{z}^{(i)}\) commute between themselves and preserve the Banach \(K\)-submodules \(V[[z]]\), \(V((z))\) and \(V\{z^{\pm 1}\}\). In particular, they induce operators on \(V((z,w))\), and the latter are extended to bounded \(K\)-linear operators on \(V((z,w))\{(z-w)^{-1}\}\), which commute with the operators \(i_{z,w}\), \(i_{w,z}\), and \(\Delta\). Since \(\frac{1}{(z-w)^{i+1}}=\partial_{w}^{(i)}(\frac{1}{z-w})\), it follows that
\[i_{z,w}\left(\frac{1}{(z-w)^{i+1}}\right)=\sum_{n\geq 0}{n\choose i}w^{n-i}z^{-n- 1}\ \text{and}\]
\[i_{w,z}\left(\frac{1}{(z-w)^{i+1}}\right)=-\sum_{n\leq-1}{n\choose i}w^{n-i}z^ {-n-1}\.\]
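For example, for \(i=1\) the first of these formulas reads \[i_{z,w}\left(\frac{1}{(z-w)^{2}}\right)=\sum_{n\geq 1}n\,w^{n-1}z^{-n-1}\,\] which is indeed \(\partial_{w}\) applied term-by-term to the expansion \(\sum_{n\geq 0}w^{n}z^{-n-1}\) of \(\frac{1}{z-w}\).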
Furthermore, let \(\text{Res}_{z}\) denote the bounded \(K\)-linear operator of norm one (the _residue_ operator)
\[V[[z^{\pm 1}]]\to V:f=\sum_{n\in{\bf Z}}v_{n}z^{n}\mapsto\text{Res}_{z}f(z)=v_{-1}\.\]
Notice that \(\text{Res}_{z}\circ\partial_{z}^{(i)}=0\) for all \(i\geq 1\). If \(V\) is a Banach \(A\)-module, the residue operator defines a bounded \(K\)-bilinear pairing
\[A\{z^{\pm 1}\}\times V[[z^{\pm 1}]]\to V:(\varphi,f)\mapsto\text{Res}_{z}( \varphi f)\.\]
**Lemma 1.4.1**.: The above pairing gives rise to an isometric isomorphism of Banach \(K\)-modules
\[V[[z^{\pm 1}]]\widetilde{\to}\mathrm{Hom}_{A}(A\{z^{\pm 1}\},V)\.\]
Proof.: This homomorphism takes an element \(v(z)=\sum_{j\in\mathbf{Z}}v_{j}z^{j}\in V[[z^{\pm 1}]]\) to the homomorphism \(\chi_{v}:A\{z^{\pm 1}\}\to V\), which sends an element \(\varphi(z)=\sum_{i\in\mathbf{Z}}a_{i}z^{i}\in A\{z^{\pm 1}\}\) to the element \(\sum_{i+j=-1}a_{i}v_{j}\). It follows that \(||\chi_{v}(\varphi(z))||\leq||v(z)||\cdot||\varphi(z)||\), i.e., \(||\chi_{v}||\leq||v(z)||\). The homomorphism in the opposite direction takes a homomorphism \(\chi:A\{z^{\pm 1}\}\to V\) to the element \(v_{\chi}(z)=\sum_{j\in\mathbf{Z}}\chi(z^{-j-1})z^{j}\). Both homomorphisms are inverse to each other, and \(||v_{\chi}(z)||=\sup_{j}\{||\chi(z^{-j-1})||\}=||\chi||\).
Because of Lemma 1.4.1, elements of \(V[[z^{\pm 1}]]\) are called _formal distributions_ on the space of _test functions_\(A\{z^{\pm 1}\}\). The operator \(\mathrm{Res}_{z}:V((z,w))\to V((w))\) is naturally extended to a bounded homomorphism of \(K((w))\)-modules
\[\mathrm{Res}_{z}:V((z,w))\{(z-w)^{-1}\}\to V((w)):f\mapsto\mathrm{Res}_{z}(i_ {z,w}(f))\.\]
For example, \(\mathrm{Res}_{z}(\frac{1}{z-w})=1\).
For \(m\geq 1\), consider the bounded homomorphism of \(K\)-modules
\[V[[z]]\ \stackrel{{\psi_{m}}}{{\to}}\ V^{m}\oplus V[[z]]:f(z)\mapsto(f(0),(\partial_{z}f)(0),\dots,(\partial_{z}^{(m-1)}f)(0);\partial_{z}^{(m)}f(z))\.\]
**Lemma 1.4.2**.: Assume that \(V\) is torsion free as an abelian group. Then
1. the homomorphism \(\psi_{m}\) is injective;
2. if in addition, \(K\) is a \(\mathbf{Q}\)-algebra and the norm on \(K\) induces the trivial norm on \(\mathbf{Q}\), then \(\psi_{m}\) is an isometric isomorphism.
Proof.: If \(f(z)=\sum_{n=0}^{\infty}u_{n}z^{n}\), then
\[\psi_{m}(f(z))=(u_{0},u_{1},\dots,u_{m-1};\sum_{n=m}^{\infty}{n\choose m}u_{n} z^{n-m})\.\]
Thus, if \(g(z)=\sum_{n=0}^{\infty}v_{n}z^{n}\), then \(u_{n}=v_{n}\) for \(0\leq n\leq m-1\) and \({n\choose m}(u_{n}-v_{n})=0\) for \(n\geq m\). Then the assumption in (i) implies that \(u_{n}=v_{n}\) for \(n\geq m\). The additional assumption in (ii) implies that \(||\alpha v||=||v||\) for all \(v\in V\) and all nonzero \(\alpha\in\mathbf{Q}\), and this gives the required fact.
### Delta-function
The formal _delta-function_\(\delta(z-w)\) is the element of \(K[[z^{\pm 1},w^{\pm 1}]]\) defined by \(\delta(z-w)=\Delta(\frac{1}{z-w})\), i.e.,
\[\delta(z-w)=\sum_{n\in\mathbf{Z}}z^{n}w^{-n-1}\.\]
Since \((z-w)\cdot\frac{1}{z-w}=1\), it follows that \((z-w)\delta(z-w)=0\). More generally, for each \(i\geq 0\) one has
\[\partial_{w}^{(i)}\delta(z-w)=\Delta\left(\frac{1}{(z-w)^{i+1}}\right)=\sum_{n \in\mathbf{Z}}{n\choose i}w^{n-i}z^{-n-1}\]
and \((z-w)^{i+1}\partial_{w}^{(i)}\delta(z-w)=0\). One also has
\[\mathrm{Res}_{z}((z-w)^{i}\partial_{w}^{(j)}\delta(z-w))=\mathrm{Res}_{z}\left(\Delta\left(\frac{1}{(z-w)^{j-i+1}}\right)\right)=\delta_{i,j}\.\]
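For instance, the identity \((z-w)\delta(z-w)=0\) noted above can be verified directly on coefficients: \[(z-w)\sum_{n\in\mathbf{Z}}z^{n}w^{-n-1}=\sum_{n\in\mathbf{Z}}z^{n+1}w^{-n-1}-\sum_{n\in\mathbf{Z}}z^{n}w^{-n}=\sum_{m\in\mathbf{Z}}z^{m}w^{-m}-\sum_{m\in\mathbf{Z}}z^{m}w^{-m}=0\.\]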
Let \(V\) be a Banach \(K\)-module.
**Lemma 1.5.1**.: If an element \(f\in V[[z^{\pm 1},w^{\pm 1}]]\) can be represented in the form \(f=\sum_{i=0}^{\infty}g_{i}(w)\partial_{w}^{(i)}\delta(z-w)\), where \(g_{i}(w)\) are elements of \(V[[w^{\pm 1}]]\) such that \(g_{i}(w)\to 0\) as \(i\to\infty\), then \(g_{i}(w)=\operatorname{Res}_{z}((z-w)^{i}f(z,w))\) and \(||f||=\max_{i}||g_{i}||\).
Proof.: The formula for \(g_{i}(w)\) follows from the equality before the formulation; in particular, such a representation of \(f(z,w)\) is unique. Since \(||\partial_{w}^{(i)}\delta(z-w)||\leq 1\), then \(||f(z,w)||\leq\max_{i}||g_{i}(w)||\), and since \(g_{i}(w)=\operatorname{Res}_{z}((z-w)^{i}f(z,w))\), it follows that \(||g_{i}(w)||\leq||f(z,w)||\).
Let \(V[[w^{\pm 1}]]\{\delta(z-w)\}\) denote the closed \(K\)-submodule of \(V[[z^{\pm 1},w^{\pm 1}]]\) consisting of the elements that can be represented in the form of Lemma 1.5.1. Then the lemma implies that such a representation is unique. Let also \(V((w))\{\delta(z-w)\}\) denote the subspace of \(V[[w^{\pm 1}]]\{\delta(z-w)\}\) consisting of the elements for which \(g_{i}(w)\in V((w))\) for all \(i\geq 0\).
**Corollary 1.5.2**.: For the operator \(\Delta:V((z,w))\{(z-w)^{-1}\}\to V[[z^{\pm 1},w^{\pm 1}]]\), one has \(\operatorname{Ker}(\Delta)=V((z,w))\) and \(\operatorname{Im}(\Delta)=V((w))\{\delta(z-w)\}\).
**Lemma 1.5.3**.: (Decomposition theorem) The following properties of an element \(f(z,w)\in V[[z^{\pm 1},w^{\pm 1}]]\) are equivalent:
1. \(f\in V[[w^{\pm 1}]]\{\delta(z-w)\}\);
2. \((z-w)^{N}f(z,w)\to 0\) as \(N\to\infty\).
Proof.: (a)\(\Longrightarrow\)(b) If \(f\) is represented in the form of (a), then
\[(z-w)^{N}f(z,w)=\sum_{i=N}^{\infty}g_{i}(w)(z-w)^{N}\partial_{w}^{(i)}\delta(z -w)\]
and, therefore, \(||(z-w)^{N}f(z,w)||\leq\max_{i\geq N}||g_{i}||\to 0\) as \(N\to\infty\).
(b)\(\Longrightarrow\)(a) For \(i\geq 0\), we set \(g_{i}(w)=\operatorname{Res}_{z}((z-w)^{i}f(z,w))\). The assumption implies that \(g_{i}\to 0\) as \(i\to\infty\). We can therefore replace \(f(z,w)\) by \(f(z,w)-\sum_{i=0}^{\infty}g_{i}(w)\partial_{w}^{(i)}\delta(z-w)\) and assume that \(\operatorname{Res}_{z}((z-w)^{i}f(z,w))=0\) for all \(i\geq 0\). In this case, _we claim that \(f(z,w)=0\)_. Indeed, the latter easily implies that \(f(z,w)\in V[[w^{\pm 1}]][[z]]\) and, therefore, the claim follows from Lemma 1.3.1(i).
**Remark 1.5.4**.: Since the operator of multiplication by \(z-w\) is isometric on \(V[[w^{\pm 1}]]((z))\) (Lemma 1.3.1(i)), a nonzero element of \(V[[z^{\pm 1},w^{\pm 1}]]\) that possesses the equivalent properties of Lemma 1.5.3, cannot lie in \(V[[w^{\pm 1}]]((z))\).
### Mutual locality of formal distributions
**Definition 1.6.1**.: A _Banach Lie \(K\)-algebra_ is a Banach \(K\)-module \(\mathfrak{g}\) provided with the structure of a Lie \(K\)-algebra with the property \(||[u,v]||\leq||u||\cdot||v||\) for all \(u,v\in\mathfrak{g}\).
For example, for any Banach \(K\)-module \(V\), \(\operatorname{End}(V)\) is a Banach Lie \(K\)-algebra. Let \(\mathfrak{g}\) be a Banach Lie \(K\)-algebra.
**Definition 1.6.2**.: Formal distributions \(a(z),b(z)\in\mathfrak{g}[[z^{\pm 1}]]\) are said to be _mutually local_ (or just _local_) if
\[(z-w)^{N}[a(z),b(w)]\to 0\text{ as }N\to\infty\.\]
**Example 1.6.3**.: Suppose that formal distributions \(a(z)\) and \(b(z)\) do not depend on \(z\), i.e., they are just elements \(a,b\in\mathfrak{g}\). Then \(a\) and \(b\) are mutually local if and only if they commute, i.e., \([a,b]=0\).
It is convenient to express the decomposition in \(z\) of a formal distribution \(a(z)\in\mathfrak{g}[[z^{\pm 1}]]\) in the form \(a(z)=\sum_{n\in\mathbf{Z}}(a(z))_{(n)}z^{-n-1}\). For brevity, one also writes \(a_{(n)}=(a(z))_{(n)}\in\mathfrak{g}\), i.e.,
\[a(z)=\sum_{n\in\mathbf{Z}}a_{(n)}z^{-n-1}\.\]
**Lemma 1.6.4**.: For every pair of mutually local formal distributions \(a(z),b(z)\in\mathfrak{g}[[z^{\pm 1}]]\), the following is true:
1. there is a unique decomposition \[[a(z),b(w)]\ =\ \sum_{i=0}^{\infty}(a(w)_{(i)}b(w))\partial_{w}^{(i)}\delta(z-w)\,\] where \(a(w)_{(i)}b(w)\in\mathfrak{g}[[w^{\pm 1}]]\); moreover, one has \[a(w)_{(i)}b(w)=\operatorname{Res}_{z}((z-w)^{i}[a(z),b(w)])\,\] \[a(w)_{(i)}b(w)\to 0\ \text{as}\ i\to\infty\ \text{and}\ ||[a(z),b(w)]||=\max_{i\geq 0}\{||a(w)_{(i)}b(w)||\};\]
2. for every \(m,n\in\mathbf{Z}\), one has \[[a_{(m)},b_{(n)}]\ =\ \sum_{i=0}^{\infty}\binom{m}{i}(a(z)_{(i)}b(z))_{(m+n-i)}\ ;\]
3. for every \(m\geq 0\), the formal distributions \(\partial_{z}^{(m)}a(z)\) and \(b(z)\) are mutually local.
Proof.: The statement (i) follows from the definition and Lemmas 1.5.1 and 1.5.3, and the statement (ii) follows from (i).
(iii) We prove the statement by induction on \(m\). If \(m=0\), it is true by the assumption. Assume that \(m\geq 1\) and the statement is true for all smaller values. Since multiplication by \(z-w\) in \(\mathfrak{g}[[z^{\pm 1},w^{\pm 1}]]\) does not increase the norm, it suffices to show that, for every \(\varepsilon>0\), there exists a sufficiently large \(N\) such that the norm of the element \((z-w)^{N}[\partial_{z}^{(m)}a(z),b(w)]\) is at most \(\varepsilon\). Let \(n\) be such that for all \(0\leq i\leq m-1\) and \(j\geq n\) the norm of the element \((z-w)^{j}[\partial_{z}^{(i)}a(z),b(w)]\) is at most \(\varepsilon\). One has
\[\partial_{z}^{(m)}((z-w)^{N}[a(z),b(w)]) = (z-w)^{N}[\partial_{z}^{(m)}a(z),b(w)]\] \[+\sum_{i=1}^{m}\binom{N}{i}(z-w)^{N-i}[\partial_{z}^{(m-i)}a(z),b (w)]\.\]
If \(N\geq m+n\) then, by the induction, the left hand side and the second summand on the right hand side have norm at most \(\varepsilon\). It follows that the same is true for the first summand.
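As an illustration of formula (ii) of Lemma 1.6.4, suppose (purely hypothetically) that \(a(z)_{(i)}b(z)=0\) for all \(i\geq 2\); then the infinite sum reduces to two terms: \[[a_{(m)},b_{(n)}]=(a(z)_{(0)}b(z))_{(m+n)}+m\,(a(z)_{(1)}b(z))_{(m+n-1)}\.\]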
### Quantum fields and normally ordered products
Let \(V\) be a Banach \(K\)-module.
**Definition 1.7.1**.: A formal distribution
\[a(z)=\sum_{n\in{\bf Z}}a_{(n)}z^{-n-1}\in{\rm End}(V)[[z^{\pm 1}]]\]
is called an _\({\rm End}(V)\)-valued quantum field_ (or just a _field_) if, for any \(v\in V\), one has \(a(z)v\in V((z))\) (i.e., \(a_{(n)}(v)\to 0\) as \(n\to+\infty\)).
For example, all elements of \({\rm End}(V)((z))\) are fields and, in particular, the identity operator \(I_{V}\in{\rm End}(V)\) is a field. It is easy to see that the space \({\rm End}(V)\langle\langle z\rangle\rangle\) of all fields is closed in \({\rm End}(V)[[z^{\pm 1}]]\) and is preserved by the operators \(\partial_{z}^{(n)}\), \(n\geq 0\).
**Lemma 1.7.2**.: If \(a(z)\) and \(b(z)\) are fields, then for every \(n\geq 0\), the element \(a(w)_{(n)}b(w)={\rm Res}_{z}((z-w)^{n}[a(z),b(w)])\) (see Lemma 1.6.4(i)) is a field.
Proof.: If \(a(w)_{(n)}b(w)=\sum_{p\in{\bf Z}}c_{(p)}w^{-p-1}\), one has
\[c_{(p)}=\sum_{i=0}^{n}(-1)^{n-i}{n\choose i}[a_{(i)},b_{(p+n-i)}]\.\]
It follows that for every vector \(v\in V\) one has
\[||c_{(p)}(v)||\leq\max_{0\leq i\leq n}\{||a_{(i)}||\cdot||b_{(p+n-i)}(v)||,||b _{(p+n-i)}(a_{(i)}(v))||\}\.\]
The latter number evidently tends to zero as \(p\) tends to \(+\infty\).
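For instance, for the two lowest values of \(n\) the formula for \(c_{(p)}\) above specializes to \[c_{(p)}=[a_{(0)},b_{(p)}]\ \ \text{for}\ n=0\,\qquad c_{(p)}=[a_{(1)},b_{(p)}]-[a_{(0)},b_{(p+1)}]\ \ \text{for}\ n=1\.\]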
For an element \(a(z)\in{\rm End}(V)[[z^{\pm 1}]]\) as above, we set
\[a(z)_{+}=\sum_{n\leq-1}a_{(n)}z^{-n-1}\ {\rm and}\ a(z)_{-}=\sum_{n\geq 0}a_{(n) }z^{-n-1}\.\]
The _normally ordered product_ of two fields \(a(z),b(z)\in{\rm End}(V)\langle\langle z\rangle\rangle\) is defined by
\[:\!a(z)b(z)\!:\ =a(z)_{+}b(z)+b(z)a(z)_{-}\.\]
For example, for any field \(a(z)\), one has \(:\!a(z)I_{V}\!:\,=:\!I_{V}a(z)\!:\,=a(z)\).
**Lemma 1.7.3**.: The normally ordered product \(:\!a(z)b(z)\!:\) is a well defined element of \({\rm End}(V)\langle\langle z\rangle\rangle\) whose norm is at most \(||a(z)||\cdot||b(z)||\).
Proof.: One has
\[a(z)_{+}b(z)=\sum_{p\in{\bf Z}}c_{p}z^{-p-1}\ {\rm and}\ b(z)a(z)_{-}=\sum_{p \in{\bf Z}}d_{p}z^{-p-1}\,\]
where
\[c_{p}=\sum_{\begin{subarray}{c}m+n=p-1\\ m\leq-1\end{subarray}}a_{(m)}b_{(n)}\ \ {\rm and}\ \ d_{p}=\sum_{ \begin{subarray}{c}m+n=p-1\\ m\geq 0\end{subarray}}b_{(n)}a_{(m)}\.\]
Although both infinite sums do not converge with respect to the Banach norm on \({\rm End}(V)\), they do converge to bounded linear operators in the weak topology. Let us show this, for example, for the operator \(c_{p}\). Given a vector \(v\in V\), one has \(||(a_{(m)}b_{(n)})(v)||\leq||a_{(m)}||\cdot||b_{(n)}(v)||\). Since \(m+n=p-1\) and \(m\leq-1\), the latter number tends to zero as \(n\to+\infty\), and so \(c_{p}(v)\) is well defined. Furthermore, given \(\varepsilon>0\), there exists \(n_{0}\) such that \(||b_{(n)}(v)||\leq\varepsilon\) for all \(n\geq n_{0}\). It follows that
\[||c_{p}(v)||\leq\max\{\max_{p\leq n<n_{0}}||a_{(p-1-n)}||\cdot||b_{(n)}||\cdot ||v||,\max_{n\geq n_{0}}||a_{(p-1-n)}||\varepsilon\}\]
and, therefore, \(||c_{p}||\leq||a(z)||\cdot||b(z)||\)
**Lemma 1.7.4**.: For every \(m\geq 0\), one has
\[\partial_{z}^{(m)}(:\!a(z)b(z)\!:)=\sum_{i=0}^{m}:\!\partial_{z}^{(i)}a(z) \partial_{z}^{(m-i)}b(z)\!:.\]
Proof.: The statement easily follows from the fact that the operator \(\partial_{z}^{(m)}\) commutes with the operators \(a(z)\mapsto a(z)_{+}\) and \(a(z)\mapsto a(z)_{-}\).
**Lemma 1.7.5**.: Let \(U\) and \(V\) be Banach \(K\)-modules, and let \(a(z)\in\operatorname{End}(U)\langle\langle z\rangle\rangle\) and \(b(z)\in\operatorname{End}(V)\langle\langle z\rangle\rangle\). Then
1. for every \(n\in\mathbf{Z}\), the sum \(\sum_{i+j=n-1}a_{(i)}\otimes b_{(j)}\) converges in the weak topology of \(U\widehat{\otimes}_{K}V\) to a bounded operator \(c_{(n)}\in\operatorname{End}(U\widehat{\otimes}_{K}V)\) of norm at most \(||a(z)||\cdot||b(z)||\);
2. for every \(x\in U\widehat{\otimes}_{K}V\), \(c_{(n)}(x)\to 0\) as \(n\to+\infty\) and, therefore, the sum \(\sum_{n\in\mathbf{Z}}c_{(n)}z^{-n-1}\) is an element of \(\operatorname{End}(U\widehat{\otimes}_{K}V)\langle\langle z\rangle\rangle\), denoted by \(a(z)\widehat{\otimes}b(z)\);
3. if \(a^{\prime}(z),a^{\prime\prime}(z)\in\operatorname{End}(U)\langle\langle z\rangle\rangle\) and \(b^{\prime}(z),b^{\prime\prime}(z)\in\operatorname{End}(V)\langle\langle z\rangle\rangle\) are pairs of mutually local fields, then the fields \(a^{\prime}(z)\widehat{\otimes}b^{\prime}(z)\) and \(a^{\prime\prime}(z)\widehat{\otimes}b^{\prime\prime}(z)\) in \(\operatorname{End}(U\widehat{\otimes}V)\langle\langle z\rangle\rangle\) are mutually local.
Proof.: (i) Let \(x\otimes y\) be a nonzero element of \(U\otimes_{K}V\). Since \(a(z)\) and \(b(z)\) are fields, it follows that, for any \(\varepsilon>0\), there exists \(N\geq 1\) such that, for all \(i,j\geq N\), one has \(||a_{(i)}(x)||\leq\frac{\varepsilon}{||b(z)||\cdot||y||}\) and \(||b_{(j)}(y)||\leq\frac{\varepsilon}{||a(z)||\cdot||x||}\). Thus, if \(i\geq N\), one has
\[||(a_{(i)}\otimes b_{(j)})(x\otimes y)||\leq||a_{(i)}(x)||\cdot||b_{(j)}(y)|| \leq\frac{\varepsilon}{||b(z)||\cdot||y||}\cdot||b_{(j)}(y)||\leq\varepsilon\,\]
and the similar inequality holds for \(j\geq N\). This implies that the sum considered converges weakly and induces a bounded operator \(c_{(n)}\) on \(U\widehat{\otimes}_{K}V\) of norm at most \(||a(z)||\cdot||b(z)||\), i.e., (i) is true.
(ii) If \(n\geq 2N+1\) and \(i+j=n-1\), then either \(i\) or \(j\) is at least \(N\), and so \(||c_{(n)}(x\otimes y)||\leq\varepsilon\). This implies (ii).
(iii) The commutator \([a^{\prime}(z)\widehat{\otimes}b^{\prime}(z),a^{\prime\prime}(w)\widehat{\otimes}b^{\prime\prime}(w)]\), considered as an element of \(\operatorname{End}(U\widehat{\otimes}V)[[z^{\pm 1},w^{\pm 1}]]\), is the image of the following element
\[[a^{\prime}(z),a^{\prime\prime}(w)]\otimes b^{\prime}(z)b^{\prime\prime}(w)+a^ {\prime\prime}(w)a^{\prime}(z)\otimes[b^{\prime}(z),b^{\prime\prime}(w)]\.\]
Since the norms of the elements \(b^{\prime}(z)b^{\prime\prime}(w)\) and \(a^{\prime\prime}(w)a^{\prime}(z)\) are bounded by a constant, we see that the norm of the commutator, multiplied by \((z-w)^{N}\), tends to zero as \(N\) tends to infinity.
Lemma 1.7.5 implies that, for Banach \(K\)-modules \(U\) and \(V\), there is a well defined bounded homomorphism of Banach \(K\)-modules
\[\operatorname{End}(U)\langle\langle z\rangle\rangle\widehat{\otimes} \operatorname{End}(V)\langle\langle z\rangle\rangle\to\operatorname{End}(U \widehat{\otimes}V)\langle\langle z\rangle\rangle:a(z)\otimes b(z)\mapsto a(z) \widehat{\otimes}b(z)\,\]
and one has \(||a(z)\widehat{\otimes}b(z)||\leq||a(z)||\cdot||b(z)||\).
### \(n\)-th products of quantum fields
Let \(V\) be a Banach \(K\)-module. By Lemma 1.7.2, for each \(n\geq 0\), the \(n\)-th product \(a(z)_{(n)}b(z)\) of fields is a field. One can extend these \(K\)-bilinear operations on the space of fields to arbitrary \(n\in\mathbf{Z}\). Namely, if \(n=-m-1<0\), we set
\[a(z)_{(n)}b(z)=\ :\!(\partial_{z}^{(m)}a(z))b(z)\!:.\]
(The latter is a field, by Lemma 1.7.3 and the fact that the space \(\operatorname{End}(V)\langle\langle z\rangle\rangle\) is preserved by the operators \(\partial_{z}^{(m)}\), \(m\geq 0\).) For example, for any field \(a(z)\), one has \(a(z)_{(n)}I_{V}=0\) for \(n\geq 0\), and \(a(z)_{(n)}I_{V}=\partial_{z}^{(m)}a(z)\), for \(n=-m-1<0\). Notice that one always has \(||a(z)_{(n)}b(z)||\leq||a(z)||\cdot||b(z)||\).
**Example 1.8.1**.: As in Example 1.6.3, suppose that fields \(a(z)\) and \(b(z)\) do not depend on \(z\), i.e., they are just operators \(a,b\in\operatorname{End}(V)\). Then their \(n\)-th product is the composition \(ab\), if \(n=-1\), and zero, if \(n\neq-1\).
**Lemma 1.8.2**.: For every \(n\in\mathbf{Z}\), one has
\[a(w)_{(n)}b(w)=\operatorname{Res}_{z}(a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w, z}((z-w)^{n}))\.\]
Proof.: By the definition, if \(n\geq 0\), then \(a(w)_{(n)}b(w)=g_{n}(w)\) for \(g_{n}(w)=\operatorname{Res}_{z}((z-w)^{n}[a(z),b(w)])\). This coincides with the right hand side of the required formula. If \(n=-m-1<0\), the formula follows from the following equalities:
\[\operatorname{Res}_{z}\left(a(z)i_{z,w}\left(\frac{1}{(z-w)^{m+1}}\right) \right)=\partial_{w}^{(m)}a(w)_{+}\text{ and }\]
\[\operatorname{Res}_{z}\left(a(z)i_{w,z}\left(\frac{1}{(z-w)^{m+1}}\right) \right)=\partial_{w}^{(m)}a(w)_{-}\,\]
which easily follow from the formulas for \(i_{z,w}\left(\frac{1}{(z-w)^{m+1}}\right)\) and \(i_{w,z}\left(\frac{1}{(z-w)^{m+1}}\right)\) established in §1.4.
**Corollary 1.8.3**.: If fields \(a(z)\) and \(b(z)\) are mutually local, then for each \(n\in\mathbf{Z}\), one has
\[a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w,z}((z-w)^{n})=\sum_{j=0}^{\infty}(a(w) _{(n+j)}b(w))\partial_{w}^{(j)}\delta(z-w)\.\]
Proof.: If an integer \(N\) is large enough so that \(N+n\geq 0\), then the element on the left hand side multiplied by \((z-w)^{N}\) is equal to \((z-w)^{N+n}[a(z),b(w)]\). Since \(a(z)\) and \(b(z)\) are mutually local, the latter tends to zero as \(N\to+\infty\). This means that the element on the left hand side satisfy the condition (b) of Lemma 1.5.3 and, therefore, this element is of the form \(\sum_{j=0}^{\infty}g_{j}(w)\partial_{w}^{(j)}\delta(z-w)\) with \(g_{j}(w)\to 0\) as \(j\to\infty\). By Lemma 1.5.1, the element \(g_{j}(w)\) is equal to
\[\operatorname{Res}_{z}((a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w,z} ((z-w)^{n}))(z-w)^{j})\] \[= \operatorname{Res}_{z}(a(z)b(w)i_{z,w}((z-w)^{n+j})-b(w)a(z)i_{w,z}((z-w)^{n+j}))\] \[= a(w)_{(n+j)}b(w)\.\]
The required statement follows.
**Lemma 1.8.4**.: For any pair of fields \(a(z)\) and \(b(z)\) and every \(n\in\mathbf{Z}\), the following is true:
1. \(\partial_{z}(a(z)_{(n)}b(z))=(\partial_{z}a(z))_{(n)}b(z)+a(z)_{(n)}\partial_ {z}b(z)\ ;\)
2. if \(T\) is a bounded \(K\)-linear endomorphism of \(V\) such that \([T,a(z)]=\partial_{z}a(z)\) and \([T,b(z)]=\partial_{z}b(z)\), then \[[T,a(z)_{(n)}b(z)]=Ta(z)_{(n)}b(z)+a(z)_{(n)}Tb(z)\.\]
Proof.: (i) If \(n<0\), the equality follows from the definition and the similar equality for normally ordered products from Lemma 1.7.4 (for \(m=1\)). Suppose \(n\geq 0\). Then \(a(w)_{(n)}b(w)=\operatorname{Res}_{z}((z-w)^{n}[a(z),b(w)])\), and \(\partial_{w}(a(w)_{(n)}b(w))\) is equal to
\[\operatorname{Res}_{z}(a(z)\partial_{w}((z-w)^{n}b(w))-\partial_{w}((z-w)^{n}b(w)a(z)))\] \[= \operatorname{Res}_{z}(a(z)\partial_{w}((z-w)^{n})b(w)-\partial_{w}((z-w)^{n})b(w)a(z))\] \[+ \operatorname{Res}_{z}(a(z)(z-w)^{n}\partial_{w}b(w)-(z-w)^{n}\partial_{w}b(w)a(z))\.\]
Substituting the equality \(\partial_{w}((z-w)^{n})=-\partial_{z}((z-w)^{n})\) and then the equality \(\operatorname{Res}_{z}(a(z)\partial_{z}((z-w)^{n}))=-\operatorname{Res}_{z}( \partial_{z}a(z)(z-w)^{n})\), we get the required fact.
(ii) By Lemma 1.8.2, the left hand side is equal to
\[\operatorname{Res}_{z}(Ta(z)b(w)i_{z,w}((z-w)^{n})-a(z)b(w)Ti_{z,w}((z-w)^{n})\] \[- Tb(w)a(z)i_{w,z}((z-w)^{n})+b(w)a(z)Ti_{w,z}((z-w)^{n}))\] \[= \operatorname{Res}_{z}(Ta(z)b(w)i_{z,w}((z-w)^{n})-a(z)Tb(w)i_{z,w}((z-w)^{n})\] \[+ a(z)Tb(w)i_{z,w}((z-w)^{n})-a(z)b(w)Ti_{z,w}((z-w)^{n})\] \[- Tb(w)a(z)i_{w,z}((z-w)^{n})+b(w)Ta(z)i_{w,z}((z-w)^{n})\] \[- b(w)Ta(z)i_{w,z}((z-w)^{n})+b(w)a(z)Ti_{w,z}((z-w)^{n}))\] \[= \operatorname{Res}_{z}([T,a(z)]b(w)i_{z,w}((z-w)^{n})-b(w)[T,a(z)]i_{w,z}((z-w)^{n}))\] \[+ \operatorname{Res}_{z}(a(z)[T,b(w)]i_{z,w}((z-w)^{n})-[T,b(w)]a(z)i_{w,z}((z-w)^{n})).\]
The assumption implies that the latter expression is equal to the right hand side of the required equality.
A field \(a(z)\) is said to be _translation covariant_ with respect to an endomorphism \(T\in\operatorname{End}(V)\) if \([T,a(z)]=\partial_{z}a(z)\).
**Corollary 1.8.5**.: If fields \(a(z)\) and \(b(z)\) are translation covariant with respect to an endomorphism \(T\in\operatorname{End}(V)\), then so are their \(n\)-th products \(a(z)_{(n)}b(z)\) for all \(n\in\mathbf{Z}\).
**Lemma 1.8.6**.: Assume that for \(a(z)\) and \(b(z)\) and for some element \(v\in V\), one has \(a(z)v,b(z)v\in V[[z]]\). If \(c(z)=a(z)_{(n)}b(z)\) for \(n\in\mathbf{Z}\), then
1. \(c(z)v\in V[[z]]\);
2. \(c_{(-1)}v=a_{(n)}(b_{(-1)}v)\).
Proof.: (i) By Lemma 1.8.2, \(c(w)v\) is equal to
\[c(w)v=\operatorname{Res}_{z}(a(z)b(w)i_{z,w}((z-w)^{n}))v-\operatorname{Res}_{z }(b(w)a(z)i_{w,z}((z-w)^{n}))v\.\]
Since \(a(z)v\in V[[z]]\) and the series \(i_{w,z}((z-w)^{n})\) has no negative powers of \(z\), the second summand is equal to zero. Since \(b(w)v\in V[[w]]\) and the element \(i_{z,w}((z-w)^{n})\) has no negative powers of \(w\), the first summand lies in \(V[[w]]\). The statement follows.
(ii) We can substitute the value \(w=0\) into the above equality, and we get \(c(0)v=\operatorname{Res}_{z}(a(z)b(0)z^{n})v=a_{(n)}(b(0)v)\). Since \(c(z)v,b(z)v\in V[[z]]\), one has \(c(0)v=c_{(-1)}v\) and \(b(0)v=b_{(-1)}v\), and the required equality follows.
**Lemma 1.8.7**.: (Dong's Lemma) If \(a(z)\), \(b(z)\) and \(c(z)\) are pairwise mutually local fields, then for each \(n\in\mathbf{Z}\) the fields \(a(z)_{(n)}b(z)\) and \(c(z)\) are mutually local.
Proof.: By Lemma 1.8.2, it suffices to show that the element \((w-u)^{N}P\) tends to zero as \(N\to\infty\), where
\[P = (a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w,z}((z-w)^{n}))c(u)\] \[-\;c(u)(a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w,z}((z-w)^{n}))\] \[= [a(z)b(w),c(u)]i_{z,w}((z-w)^{n})-[b(w)a(z),c(u)]i_{w,z}((z-w)^{n})\]
Given \(\varepsilon>0\), let \(k\) be such that \(k\geq\max\{n,-n\}\) and for all \(m\geq k\) the elements \((z-w)^{m}[a(z),b(w)]\), \((z-u)^{m}[a(z),c(u)]\) and \((w-u)^{m}[b(w),c(u)]\) are of norm at most \(\varepsilon\). For \(N=3k\) we have
\[(w-u)^{3k}=(w-u)^{k}\sum_{m=0}^{2k}\binom{2k}{m}(w-z)^{m}(z-u)^{2k-m}\.\]
To verify the required fact, we evaluate separately summands in \((w-u)^{3k}P\) for \(0\leq m\leq k\) and for \(k+1\leq m\leq 2k\).
(1) If \(0\leq m\leq k\), we substitute to the formula for \(P\) the equalities
\[[a(z)b(w),c(u)]=a(z)[b(w),c(u)]+[a(z),c(u)]b(w)\mbox{ and }\]
\[[b(w)a(z),c(u)]=b(w)[a(z),c(u)]+[b(w),c(u)]a(z)\,\]
and deduce from the assumptions that the element \((w-u)^{k}(z-u)^{2k-m}P\) is of norm at most \(\varepsilon\max\{||a(z)||,||b(w)||\}\).
(2) If \(k+1\leq m\leq 2k\), we use the equality
\[P=[[a(z),b(w)],c(u)]i_{z,w}((z-w)^{n})+[b(w)a(z),c(u)]\Delta((z-w)^{n})\.\]
The product of the first summand by \((z-w)^{m}\) is of norm at most \(\varepsilon||c(u)||\). If \(n\geq 0\), then \(\Delta((z-w)^{n})=0\), and so the second summand is equal to zero. If \(n=-p-1<0\), then \(\Delta((z-w)^{n})=\partial_{w}^{(p)}\delta(z-w)\). Since \((z-w)^{p+1}\partial_{w}^{(p)}\delta(z-w)=0\) (see §1.5) and \(k\geq-n=p+1\), it follows that the second summand vanishes after multiplication by \((w-z)^{m}\).
### The closed \(K\)-submodule generated by mutually local fields
Let \(\mathcal{F}\) be a family of mutually local fields. The \(K\)_-submodule generated by \(\mathcal{F}\)_ is the minimal \(K\)-submodule \(\mathcal{F}_{\min}\) of \(\operatorname{End}(V)\langle\langle z\rangle\rangle\), which contains all fields from \(\mathcal{F}\cup\{I_{V}\}\) and is preserved by all of the \(n\)-products. The _closed \(K\)-submodule generated by \(\mathcal{F}\)_ is the closure \(\overline{\mathcal{F}}_{\min}\) of \(\mathcal{F}_{\min}\) in \(\operatorname{End}(V)\langle\langle z\rangle\rangle\).
**Lemma 1.9.1**.: In the above situation, the following is true:
1. all fields from \(\overline{\mathcal{F}}_{\min}\) are mutually local;
2. \(\overline{\mathcal{F}}_{\min}\) is preserved by all of the \(n\)-products;
3. given \(v\in V\), if \(\varphi(z)v\in V[[z]]\) for all \(\varphi(z)\in\mathcal{F}\), then the same holds for all fields \(\varphi(z)\in\overline{\mathcal{F}}_{\min}\);
4. if all fields from \(\mathcal{F}\) are translation covariant with respect to an endomorphism \(T\in\operatorname{End}(V)\), then the same holds for all fields \(\varphi(z)\in\overline{\mathcal{F}}_{\min}\).
Proof.: Dong's Lemma 1.8.7 implies mutual locality of all fields from \(\mathcal{F}_{\min}\). It is also clear that the property (ii) extends to \(\mathcal{F}_{\min}\) and the properties (iii)-(iv) extend to \(\overline{\mathcal{F}}_{\min}\). Thus, in order to prove (i) and (ii), we may replace \(\mathcal{F}\) by \(\mathcal{F}_{\min}\).
Let \(\{a^{i}(z)\}_{i\geq 1}\) and \(\{b^{i}(z)\}_{i\geq 1}\) be sequences of fields from \(\mathcal{F}\) that converge to fields \(a(z)\) and \(b(z)\), respectively. For (i) (resp. (ii)), we have to show that \(a(z)\) and
\(b(z)\) are mutually local (resp. the sequence \(a^{i}(z)_{(n)}b^{i}(z)\) converges to \(a(z)_{(n)}b(z)\)). For each \(i\geq 1\), one has
\[[a(z),b(w)]-[a^{i}(z),b^{i}(w)]=[a(z)-a^{i}(z),b(w)]+[a^{i}(z),b(w)-b^{i}(w)]\]
(resp. \(a(z)_{(n)}b(z)-a^{i}(z)_{(n)}b^{i}(z)=(a(z)-a^{i}(z))_{(n)}b(z)+a^{i}(z)_{(n)}( b(z)-b^{i}(z))\)).
Given \(\varepsilon>0\), we can find \(i\geq 1\) such that \(||a(z)-a^{i}(z)||<\varepsilon\), \(||b(z)-b^{i}(z)||<\varepsilon\), and \(||a^{i}(z)||=||a(z)||\). Then the norms of the two summands on the right hand side are less than \(\varepsilon\max\{||a(z)||,||b(z)||\}\). This immediately implies (ii). Furthermore, we can find \(N\geq 1\) such that, for all \(j\geq N\), one has \(||(z-w)^{j}[a^{i}(z),b^{i}(w)]||<\varepsilon\). Multiplication by \((z-w)^{j}\) does not increase the norms of the above two summands, and the norm of the second summand on the left hand side, multiplied by \((z-w)^{j}\), is less than \(\varepsilon\). It follows that \(||(z-w)^{N}[a(z),b(w)]||\to 0\) as \(N\to\infty\).
## 2. Vertex \(K\)-algebras
### Definition of a non-Archimedean vertex algebra
In this section, we assume that the commutative Banach ring \(K\) as well as all Banach \(K\)-modules considered are torsion free as abelian groups.
**Definition 2.1.1**.: A _vertex \(K\)-algebra_ is a collection of data:
1. (_space of states_) a torsion free Banach \(K\)-module \(V\);
2. (_vacuum vector_) an element \(|0\rangle\in V\);
3. (_translation operator_) a bounded \(K\)-linear operator \(T:V\to V\) with \(T|0\rangle=0\);
4. (_space of fields_) a closed \(K\)-submodule \(\operatorname{Fld}(V)\subset\operatorname{End}(V)\langle\langle z\rangle\rangle\) that contains the identity operator \(I_{V}\in\operatorname{End}(V)\);
with the following properties:
1. \(\varphi(z)|0\rangle\in V[[z]]\) for all \(\varphi(z)\in\operatorname{Fld}(V)\);
2. all fields from \(\operatorname{Fld}(V)\) are translation covariant with respect to \(T\);
3. all fields from \(\operatorname{Fld}(V)\) are mutually local;
4. \(\operatorname{Fld}(V)\) is preserved under all of the \(n\)-th products;
5. for the image \(V^{\prime}\) of the following bounded homomorphism \[\operatorname{Fld}(V)\to V:\varphi(z)\mapsto\varphi_{(-1)}|0\rangle\,\] its \(K\)-saturation \(\{v\in V\big{|}\lambda v\in V^{\prime}\) for a nonzero \(\lambda\in K\}\) is dense in \(V\).
For brevity, the above object is referred to as a vertex \(K\)-algebra \(V\). The bounded homomorphism from (V.5) is denoted by \(fs\) (field-state).
**Lemma 2.1.2**.: The field-state homomorphism \(fs:\operatorname{Fld}(V)\to V\) is injective.
Proof.: Let \(\varphi(z)\in\operatorname{Fld}(V)\) be a field from the kernel of the homomorphism \(fs\), and let \(\psi(z)\in\operatorname{Fld}(V)\). Since \(\varphi(z)\) and \(\psi(z)\) are mutually local, it follows that
\[(z-w)^{N}\varphi(z)\psi(w)|0\rangle-(z-w)^{N}\psi(w)\varphi(z)|0\rangle\to 0 \text{ as }N\to+\infty\.\]
The property (V.2) means that \([T,\varphi_{(n)}]=(-n)\varphi_{(n-1)}\) and, since \(T|0\rangle=0\), it follows that \(T\varphi_{(n)}|0\rangle=(-n)\varphi_{(n-1)}|0\rangle\) for all \(n\in\mathbf{Z}\). Furthermore, since \(fs(\varphi(z))=\varphi_{(-1)}|0\rangle=0\) and \(V\) is torsion free as an abelian group, it follows, by induction, that \(\varphi_{(n)}|0\rangle=0\) for all \(n<0\). Finally, by the property (V.1), \(\varphi_{(n)}|0\rangle=0\) for all \(n\geq 0\). Thus, \(\varphi(z)|0\rangle=0\) and, therefore, the second summand of the above expression is zero. It follows that \((z-w)^{N}\varphi(z)\psi(w)|0\rangle\to 0\) as \(N\to+\infty\). Finally,
since \(\psi(w)|0\rangle\in V[[w]]\), we can substitute \(w=0\) in the left hand side, and we get \(z^{N}\varphi(z)\psi_{(-1)}|0\rangle\to 0\) as \(N\to+\infty\). Since the norm of the element considered is equal to the norm of \(\varphi(z)\psi_{(-1)}|0\rangle\), it follows that the latter element is equal to zero. Since \(V\) is torsion free as a \(K\)-module, the property (V.5) then implies that \(\varphi(z)=0\).
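To spell out the first steps of the induction used above: since \(T|0\rangle=0\) and \([T,\varphi_{(n)}]=(-n)\varphi_{(n-1)}\), one gets \[\varphi_{(-2)}|0\rangle=T\varphi_{(-1)}|0\rangle=0\ \ \text{and}\ \ 2\varphi_{(-3)}|0\rangle=T\varphi_{(-2)}|0\rangle=0\,\] and torsion freeness of \(V\) as an abelian group gives \(\varphi_{(-3)}|0\rangle=0\), and so on for all \(n<0\).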
Lemma 2.1.2 implies that there is a bijective homomorphism of \(K\)-modules
\[V^{\prime}\to\operatorname{Fld}(V):a\mapsto Y(a,z)=\sum_{n\in\mathbf{Z}}a_{(n )}z^{-n-1}\,\]
which is inverse to \(fs\) and denoted by \(sf\) (state-field). It can be viewed as an unbounded operator \(sf:V\to\operatorname{Fld}(V)\) with the domain \(V^{\prime}\). The \(K\)-module \(V^{\prime}\) is complete with respect to the Banach norm, defined by \(||a||^{\prime}=||Y(a,z)||\), the homomorphism \(sf:(V^{\prime},||\ ||^{\prime})\to\operatorname{Fld}(V)\) is isometric, and the canonical embedding \((V^{\prime},||\ ||^{\prime})\to V\) is bounded.
**Definition 2.1.3**.: A vertex \(K\)-algebra \(V\) is said to be _admissible_ if the field-state homomorphism of Banach \(K\)-modules \(fs:\operatorname{Fld}(V)\to V\) is admissible.
In this case, \(V^{\prime}\) is a closed \(K\)-submodule of \(V\) and the state-field operator \(sf:V^{\prime}\to\operatorname{Fld}(V)\) is bounded. If \(K\) is a non-Archimedean field with nontrivial valuation, then by the Banach open mapping theorem, a vertex \(K\)-algebra \(V\) is admissible if and only if \(V^{\prime}=V\).
**Example 2.1.4**.: Let \(V\) be a vertex \(K\)-algebra, whose translation operator \(T\) is zero. Then all fields in \(\operatorname{Fld}(V)\) do not depend on \(z\) and, by Examples 1.6.3 and 1.8.1, \(\operatorname{Fld}(V)\) is a commutative Banach \(K\)-subalgebra of \(\operatorname{End}(V)\) and, by the property (V.5), the \(K\)-saturation of the image \(V^{\prime}\) of the field-state homomorphism \(\operatorname{Fld}(V)\to V:\varphi\mapsto fs(\varphi)=\varphi|0\rangle\) is dense in \(V\). Conversely, given a torsion free Banach \(K\)-module \(V\) and an element \(|0\rangle\in V\), any commutative Banach \(K\)-subalgebra of \(\operatorname{End}(V)\) with the latter property defines the structure of a vertex \(K\)-algebra on \(V\) with zero translation operator. For example, let \(K\) be the field of \(p\)-adic numbers \(\mathbf{Q}_{p}\), \(V=\mathbf{Q}_{p}\{x\}\), and \(|0\rangle=1+px+p^{2}x^{2}+\ldots\). Let also \(\operatorname{Fld}(V)\) be the commutative Banach \(K\)-subalgebra of \(\operatorname{End}(V)\) consisting of the operators that correspond to infinite sequences \(\varphi=(\lambda_{0},\lambda_{1},\ldots)\) of elements of \(\mathbf{Q}_{p}\) with \(||\varphi||=\sup_{n}\{|\lambda_{n}|\}<\infty\) and act on \(V\) as follows: \(\varphi(\sum_{n=0}^{\infty}\alpha_{n}x^{n})=\sum_{n=0}^{\infty}\lambda_{n} \alpha_{n}x^{n}\). Then \(V\) is a vertex \(K\)-algebra, which is not admissible. Indeed, for \(\varphi\) as above, one has \(fs(\varphi)=\sum_{n=0}^{\infty}\lambda_{n}p^{n}x^{n}\). Thus, if for \(n\geq 0\), \(\varphi_{n}\) is the operator defined by \(\varphi_{n}(x^{m})=\delta_{m,n}x^{m}\), one has \(||\varphi_{n}||=1\) and \(||fs(\varphi_{n})||=|p|^{n}\). The latter tends to zero when \(n\) tends to infinity, i.e., the operator \(fs\) is not admissible.
Let again \(V\) be an arbitrary vertex \(K\)-algebra. It follows from the definition of the operator \(sf\) that, for any \(a\in V^{\prime}\), one has \(a=a_{(-1)}|0\rangle\). In particular, if \(|0\rangle=0\), then \(V=0\). If \(|0\rangle\neq 0\), then \(||a||\leq|||0\rangle||\cdot||a||^{\prime}\) and \(|||0\rangle||^{\prime}=1\). Furthermore, by the property (V.2), \(Ta_{(n)}|0\rangle=-na_{(n-1)}|0\rangle\) and, therefore, \(Ta=a_{(-2)}|0\rangle\).
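Applying \(T\) once more (a small computation anticipating Lemma 2.1.5 below), one gets \[T^{2}a=Ta_{(-2)}|0\rangle=2a_{(-3)}|0\rangle\,\ \text{i.e.}\ \ \tfrac{1}{2}T^{2}a=a_{(-3)}|0\rangle\,\] which is the case \(n=2\) of Lemma 2.1.5(i).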
**Lemma 2.1.5**.: For every \(n\geq 0\), the following is true:
1. there is a well defined \(K\)-linear operator \(\frac{1}{n!}T^{n}:V^{\prime}\to V^{\prime}\), and one has \(\frac{1}{n!}T^{n}=T^{(n)}\) for the \(K\)-linear operator \(T^{(n)}:V^{\prime}\to V^{\prime}\), defined by \(T^{(n)}a=a_{(-n-1)}|0\rangle\);
2. there is a well defined \(K\)-linear operator \(e^{zT}:V^{\prime}\to V^{\prime}[[z]]\), defined by \(e^{zT}a=\sum_{n=0}^{\infty}\frac{z^{n}}{n!}T^{n}a\), and one has \(e^{zT}a=Y(a,z)|0\rangle\);
3. if the operators of multiplication by integers on \(V\) are admissible, then the operators \(\frac{1}{n!}T^{n}\) are extended to bounded operators \(\overline{V}^{\prime}\to\overline{V}^{\prime}\) on the closure \(\overline{V}^{\prime}\) of \(V^{\prime}\) in \(V\).
Proof.: The equality \(Ta_{(n)}|0\rangle=-na_{(n-1)}|0\rangle\) implies that
\[T^{m}a_{(n)}|0\rangle=(-1)^{m}n(n-1)\cdot\ldots\cdot(n-m+1)a_{(n-m)}|0\rangle\]
for all \(m\geq 0\) and, in particular, \(T^{m}a_{(-1)}|0\rangle=m!a_{(-m-1)}|0\rangle\). Since \(a_{(-1)}|0\rangle=a\) and \(V\) is torsion free as an abelian group, it follows that \(\frac{1}{m!}T^{m}a=a_{(-m-1)}|0\rangle\). This implies (i). Since \(||T^{(m)}a||\leq||a||^{\prime}\cdot||0\rangle||\), the element \(e^{zT}a\) lies in \(V^{\prime}[[z]]\), and the statement (ii) follows. Under the assumption of (iii), there exists a constant \(C>0\) such that \(||v||\leq C||m!v||\) for all \(v\in V\). This implies that \(||T^{(m)}||\leq C||T^{m}||\), and (iii) follows.
**Remarks 2.1.6**.: (i) The structure of a vertex \(K\)-algebra on \(V\) defines, for every \(n\in\mathbf{Z}\), a \(K\)-bilinear operation \(V^{\prime}\times V\to V:(a,b)\mapsto a_{(n)}b\). It follows from the definition that, for each pair \((a,b)\in V^{\prime}\times V\), one has \(||a_{(n)}b||\leq||a||^{\prime}\cdot||b||\) and \(a_{(n)}b\to 0\) as \(n\to+\infty\). Recall that, if \(V\) is admissible, then \(V^{\prime}\) is closed in \(V\) and the norms \(||\ ||^{\prime}\) and \(||\ ||\) on it are equivalent.
(ii) If \(K\) is a field, the \(K\)-saturation of \(V^{\prime}\) coincides with \(V^{\prime}\).
(iii) If the Banach norms on \(K\) and \(V\) are trivial, then any vertex \(K\)-algebra is admissible but \(V^{\prime}\) does not necessarily coincide with \(V\) (see §2.6).
**Definition 2.1.7**.: A _homomorphism of vertex \(K\)-algebras_\(U\to V\) is a homomorphism of Banach \(K\)-modules \(\varphi:U\to V\) such that
* \(\varphi(|0\rangle)=|0\rangle\);
* \(\varphi\circ T=T\circ\varphi\);
* \(\varphi(U^{\prime})\subset V^{\prime}\);
* \(\varphi(a_{(n)}b)=\varphi(a)_{(n)}\varphi(b)\) for all \(a\in U^{\prime}\), \(b\in U\), and \(n\in\mathbf{Z}\);
* the induced homomorphism \(\operatorname{Fld}(U)\to\operatorname{Fld}(V):Y(a,z)\mapsto Y(\varphi(a),z)\) is bounded.
**Definition 2.1.8**.: An _ideal of a vertex \(K\)-algebra_\(V\) is a closed \(K\)-submodule \(J\subset V\) such that \(V/J\) is a torsion free \(K\)-module, \(TJ\subset J\) and \(\psi_{(n)}J\subset J\) for all \(\psi(z)=\sum_{n\in\mathbf{Z}}\psi_{(n)}z^{-n-1}\in\operatorname{Fld}(V)\) and \(n\in\mathbf{Z}\).
**Lemma 2.1.9**.: Given an ideal \(J\) of a vertex \(K\)-algebra \(V\), the quotient \(V/J\) has a canonical structure of a vertex \(K\)-algebra for which the canonical map \(\pi:V\to V/J\) is a homomorphism of vertex \(K\)-algebras.
Proof.: The vertex \(K\)-algebra structure on \(V/J\) is defined by the vacuum vector \(\pi(|0\rangle)\), the translation operator \(\pi(T)\), and the space of fields \(\operatorname{Fld}(V/J)\), which is the closure of the \(K\)-module of \(\operatorname{End}(V/J)\)-valued fields of the form \(\pi(\psi(z))=\sum_{n\in\mathbf{Z}}\pi(\psi_{(n)})z^{-n-1}\) for \(\psi(z)\in\operatorname{Fld}(V)\). (Here \(\pi(T)\) and \(\pi(\psi_{(n)})\) are the operators on \(V/J\), induced by \(T\) and \(\psi_{(n)}\), respectively.) Indeed, the latter \(K\)-module obviously possesses the properties (V.1)-(V.5) and, by Lemma 1.9.1, the same properties hold for its closure \(\operatorname{Fld}(V/J)\). This implies the claim.
Lemma 2.1.9 and injectivity of the state-field homomorphisms of vertex \(K\)-algebras imply that, for each element \(a\in J\cap V^{\prime}\), one has \(a_{(n)}V\subset J\). Notice also that an ideal \(J\) of \(V\) is nontrivial (i.e., \(J\neq V\)) if and only if \(|0\rangle\not\in J\).
### Basic properties of vertex algebras
Let \(V\) be a vertex \(K\)-algebra.
**Lemma 2.2.1**.: (Goddard's Uniqueness Theorem) Suppose that a field \(X(z)\) is mutually local with respect to all fields from \(\operatorname{Fld}(V)\) and such that \(X(z)|0\rangle=\varphi(z)|0\rangle\) for some \(\varphi(z)\in\operatorname{Fld}(V)\). Then \(X(z)=\varphi(z)\).
Proof.: First of all, replacing \(X(z)\) by \(X(z)-\varphi(z)\), we may assume that \(X(z)|0\rangle=0\), and our purpose is to show that in this case \(X(z)=0\). By the assumption, for every \(\psi(z)\in\operatorname{Fld}(V)\) and \(N\geq 0\), we have
\[(z-w)^{N}[X(z),\psi(w)]|0\rangle=(z-w)^{N}X(z)\psi(w)|0\rangle\]
and, by the vacuum axiom, the right hand side is an element of \(V[[z^{\pm 1},w]]\). By Lemma 1.3.1(i), multiplication by \((z-w)^{N}\) on \(V[[z^{\pm 1},w]]\) is an isometry and, therefore, one has
\[||(z-w)^{N}[X(z),\psi(w)]|0\rangle||=||X(z)\psi(w)|0\rangle||\geq||X(z)\psi_{(-1)}|0\rangle||\.\]
By the locality assumption, the left hand side tends to zero as \(N\to\infty\). It follows that \(X(z)b=0\) for all \(b\in V^{\prime}\). Since the \(K\)-saturation of \(V^{\prime}\) is dense in \(V\), we get \(X(z)=0\).
**Lemma 2.2.2**.: (i) The \(K\)-module \(\operatorname{Fld}(V)\) is preserved by the operators \(\partial_{z}^{(n)}\), and \(\partial_{z}^{(n)}Y(a,z)=Y(T^{(n)}a,z)\) for all \(a\in V^{\prime}\) and \(n\geq 0\);
(ii) \(Y(a,z)_{(n)}Y(b,z)=Y(a_{(n)}b,z)\) for all \(a,b\in V^{\prime}\) and \(n\in\mathbf{Z}\).
Proof.: (i) One has \(\partial_{z}^{(m)}Y(a,z)=Y(a,z)_{(n)}I_{V}\) for \(n=-m-1<0\), and since \(\operatorname{Fld}(V)\) contains \(I_{V}\) and is preserved by the \(n\)-product, it follows that \(\partial_{z}^{(m)}Y(a,z)\in\operatorname{Fld}(V)\). Furthermore, we apply Lemma 2.2.1 to the field \(X(z)=\partial_{z}^{(m)}Y(a,z)\). By Lemma 1.6.4(iii), it is mutually local with respect to all fields from \(\operatorname{Fld}(V)\). One also has
\[X(z)|0\rangle=\partial_{z}^{(m)}\left(\sum_{n=0}^{\infty}T^{(n)}az^{n}\right) =\sum_{n=0}^{\infty}{m+n\choose m}T^{(m+n)}az^{n}=\sum_{n=0}^{\infty}T^{(n)}( T^{(m)}a)z^{n}\.\]
The latter is \(Y(T^{(m)}a,z)|0\rangle\), and so Lemma 2.2.1 implies that \(X(z)=Y(T^{(m)}a,z)\).
(ii) The equality follows from Lemmas 1.8.6(ii) and 2.1.2.
**Corollary 2.2.3**.: For all \(a,b\in V^{\prime}\) and \(n\in\mathbf{Z}\), one has
\[T(a_{(n)}b)=(Ta)_{(n)}b+a_{(n)}(Tb)\.\]
Proof.: Applying the operator \(fs\) to the equality of Lemma 1.8.4(i) and using the equalities of Lemma 2.2.2(ii), we get the required fact.
**Corollary 2.2.4**.: For every pair \((a,b)\in V^{\prime}\times V\) and every \(n\in\mathbf{Z}\), one has \(||a_{(n)}b||\leq||a||^{\prime}\cdot||b||\). If in addition, \(b\in V^{\prime}\), then \(||a_{(n)}b||^{\prime}\leq||a||^{\prime}\cdot||b||^{\prime}\).
Proof.: One has \(||a_{(n)}b||\leq||a_{(n)}||\cdot||b||\leq||a||^{\prime}\cdot||b||\). If \(b\in V^{\prime}\), then \(a_{(n)}b\in V^{\prime}\) as well, and one has
\[||a_{(n)}b||^{\prime}=||Y(a_{(n)}b,z)||\leq||Y(a,z)||\cdot||Y(b,z)||=||a||^{ \prime}\cdot||b||^{\prime}\.\]
The statement follows.
**Corollary 2.2.5**.: (Borcherds identity) For all \(a,b\in V^{\prime}\) and \(n\in\mathbf{Z}\), setting \(a(z)=Y(a,z)\) and \(b(z)=Y(b,z)\), one has
\[a(z)b(w)i_{z,w}((z-w)^{n})-b(w)a(z)i_{w,z}((z-w)^{n})=\sum_{j=0}^{\infty}(a_{(n +j)}b)(w)\partial_{w}^{(j)}\delta(z-w)\.\]
Proof.: The identity follows from the equality of Corollary 1.8.3 in which, by Lemma 2.2.2(ii), \(a(w)_{(n+j)}b(w)\) is replaced by \((a_{(n+j)}b)(w)\).
**Corollary 2.2.6**.: The \(K\)-submodule of \(\mathrm{End}(V)\), generated by the operators \(a_{(n)}\) for all \(a\in V^{\prime}\) and \(n\in\mathbf{Z}\), is a Lie subalgebra of \(\mathrm{End}(V)\), denoted by \(\mathrm{Lie}_{V}\).
Proof.: The statement follows from Lemmas 1.6.4(ii) and 2.2.2(ii).
**Lemma 2.2.7**.: For all \(a,b\in V^{\prime}\), one has:
* \(e^{wT}Y(a,z)e^{-wT}=i_{z,w}Y(a,z+w)\);
* (Skewsymmetry) \(Y(a,z)b=e^{zT}Y(b,-z)a\).
Proof.: (i) Let \(f(z,w)\) and \(g(z,w)\) denote the left and right hand sides of the equality. One has \(f(z,0)=Y(a,z)=g(z,0)\). Furthermore,
\[\partial_{w}f(z,w)=Te^{wT}Y(a,z)e^{-wT}-e^{wT}Y(a,z)Te^{-wT}=[T,f(z,w)].\]
On the other hand, by the translation covariance axiom, one has \(\partial_{w}g(z,w)=[T,g(z,w)]\), and the required fact follows from Lemma 1.4.2(i).
(ii) Mutual locality of \(Y(a,z)\) and \(Y(b,z)\) implies that
\[(z-w)^{N}Y(a,z)e^{wT}b-(z-w)^{N}Y(b,w)e^{zT}a\to 0\ \text{as}\ N\to+\infty\.\]
By (i), the second summand is equal to
\[(z-w)^{N}e^{zT}e^{-zT}Y(b,w)e^{zT}a=e^{zT}i_{w,z}((z-w)^{N}Y(b,w-z))a\.\]
Since \(Y(b,w-z)\in\mathrm{End}(V)\langle\langle w-z\rangle\rangle\), it follows that
\[(z-w)^{N}Y(b,w-z)-i_{w,z}((z-w)^{N}Y(b,w-z))\to 0\ \text{as}\ N\to+\infty\]
and, therefore, one has
\[(z-w)^{N}Y(a,z)e^{wT}b-(z-w)^{N}e^{zT}Y(b,w-z)a\to 0\ \text{as}\ N\to+\infty\.\]
Substituting \(w=0\) and using the fact that multiplication by \(z^{N}\) is an isometric operator, we get the required fact.
**Corollary 2.2.8**.: (i) For all \(a,b\in V^{\prime}\) and \(n\in\mathbf{Z}\), one has
\[a_{(n)}b=-\sum_{m=0}^{\infty}(-1)^{m+n}T^{(m)}(b_{(m+n)}a)\ ;\]
(ii) for all \(a(z),b(z)\in\mathrm{Fld}(V)\), one has
\[a(z)_{(n)}b(z)=-\sum_{m=0}^{\infty}(-1)^{m+n}\partial_{z}^{(m)}(b(z)_{(m+n)}a( z))\.\]
Proof.: The statement (i) straightforwardly follows from Lemma 2.2.7(ii). The statement (ii) follows from (i) and Lemma 2.2.2(ii).
In the same way Corollary 2.2.5 implies two other forms of the Borcherds identity.
**Corollary 2.2.9**.: (i) For all \(a,b,c\in V^{\prime}\) and \(m,n,k\in\mathbf{Z}\), one has
\[\sum_{j=0}^{\infty}\binom{m}{j}(a_{(n+j)}b)_{(m+k-j)}c=\sum_{j=0}^{\infty}(-1)^{j}\binom{n}{j}\left(a_{(m+n-j)}(b_{(k+j)}c)-(-1)^{n}b_{(n+k-j)}(a_{(m+j)}c)\right);\]
(ii) the identity from (i) holds if we replace all \(a,b,c\in V^{\prime}\) by \(a(z),b(z),c(z)\in\operatorname{Fld}(V)\).
Note that Corollary 2.2.8 follows from Corollary 2.2.9 by letting \(c=|0\rangle\), \(m=-1\) and \(k=0\).
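For instance, the verification runs as follows (it is not spelled out above). With \(c=|0\rangle\), \(m=-1\), \(k=0\), one has \(\binom{-1}{j}=(-1)^{j}\) and, by Lemma 2.1.5(i), \((a_{(n+j)}b)_{(-1-j)}|0\rangle=T^{(j)}(a_{(n+j)}b)\), so the left hand side of 2.2.9(i) becomes \(\sum_{j=0}^{\infty}(-1)^{j}T^{(j)}(a_{(n+j)}b)\). On the right hand side, \(b_{(k+j)}|0\rangle=0\) for all \(j\geq 0\), while \(a_{(m+j)}|0\rangle=a_{(j-1)}|0\rangle\) vanishes for \(j\geq 1\) and equals \(a\) for \(j=0\); hence only the \(j=0\) term survives, and the right hand side equals \((-1)^{n+1}b_{(n)}a\). Solving for \(b_{(n)}a\) gives the formula of Corollary 2.2.8(i) with the roles of \(a\) and \(b\) exchanged.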
### Commutative vertex \(K\)-algebras
**Lemma 2.3.1**.: The following properties of a vertex \(K\)-algebra \(V\) are equivalent:
(a) \(\varphi(z)\in\operatorname{End}(V)[[z]]\) for all \(\varphi(z)\in\operatorname{Fld}(V)\);
(b) \(\varphi(z)\in\operatorname{End}(V)((z))\) for all \(\varphi(z)\in\operatorname{Fld}(V)\);
(c) \([\varphi(z),\psi(w)]=0\) for all \(\varphi(z),\psi(z)\in\operatorname{Fld}(V)\).
Proof.: The implication (a)\(\Longrightarrow\)(b) is trivial. Suppose (b) is true. Then for all \(\varphi(z),\psi(z)\in\operatorname{Fld}(V)\), one has \([\varphi(z),\psi(w)]\in\operatorname{End}(V)[[w^{\pm 1}]]((z))\). Since the fields \(\varphi(z)\) and \(\psi(z)\) are mutually local, \((z-w)^{N}[\varphi(z),\psi(w)]\to 0\) as \(N\to\infty\). But the multiplication by \(z-w\) on \(\operatorname{End}(V)[[w^{\pm 1}]]((z))\) is an isometry, by Lemma 1.3.1. This implies (c). Finally, assume (c) is true. Then \(\varphi(z)\psi(w)|0\rangle\in V[[z^{\pm 1},w]]\) and \(\psi(w)\varphi(z)|0\rangle\in V[[z,w^{\pm 1}]]\). Since both vectors are equal, it follows that they both are contained in \(V[[z,w]]\). Substituting \(w=0\), we get \(\varphi(z)\psi_{(-1)}|0\rangle\in V[[z]]\) for all \(\psi(z)\in\operatorname{Fld}(V)\). Since the \(K\)-saturation of the image \(V^{\prime}\) of the homomorphism \(fs\) is dense in \(V\), we get \(\varphi(z)\in\operatorname{End}(V)[[z]]\).
A vertex \(K\)-algebra \(V\) that possesses the equivalent properties of Lemma 2.3.1 is called _commutative_. For example, the vertex \(K\)-algebras with zero translation operator (from Example 2.1.4) are commutative.
**Definition 2.3.2**.: A _strongly commutative Banach \(K\)-algebra_ is a quadruple \((V,|0\rangle,T,A)\) consisting of a Banach torsion free \(K\)-module \(V\), an element \(|0\rangle\in V\), a bounded operator \(T\in\operatorname{End}(V)\) with \(T|0\rangle=0\), and a Banach \(K\)-subalgebra \(A\subset\operatorname{End}(V)[[z]]\) such that
(C.1) \([a(z),b(w)]=0\) for all \(a(z),b(z)\in A\);
(C.2) \(A\) is preserved by the operators \(\partial_{z}^{(n)}\) for all \(n\geq 0\);
(C.3) \([T,a(z)]=\partial_{z}a(z)\) for all \(a(z)\in A\);
(C.4) if \(V^{\prime}\) is the image of the homomorphism \(A\to V:a(z)\mapsto a(0)|0\rangle\), then the \(K\)-saturation of \(V^{\prime}\) is dense in \(V\).
Notice that the homomorphism in (C.4) is injective (cf. Lemma 2.1.2). Indeed, suppose that for \(a(z)=\sum_{n=0}^{\infty}a^{(n)}z^{n}\in A\), one has \(a(0)|0\rangle=a^{(0)}|0\rangle=0\). The property (C.3) implies that \(Ta^{(n)}|0\rangle=(n+1)a^{(n+1)}|0\rangle\) for all \(n\geq 0\) and, by induction, we get \(a^{(n)}|0\rangle=0\) for all \(n\geq 0\), i.e., \(a(z)|0\rangle=0\). Then for every \(b(z)\in A\), one has \(a(z)b(w)|0\rangle=b(w)a(z)|0\rangle=0\). It follows that \(a(z)b(0)|0\rangle=0\) for all \(b(z)\in A\), i.e., \(a(z)V^{\prime}=0\), and the property (C.4) implies that \(a(z)=0\). Thus, we can denote by \(a(z)\) the preimage of an element \(a\in V^{\prime}\) in \(A\).
A morphism of such objects \((V,|0\rangle,T,A)\to(U,|0\rangle,T,B)\) is a bounded \(K\)-linear homomorphism \(\varphi:V\to U\) which takes \(|0\rangle\) to \(|0\rangle\) and \(V^{\prime}\) to \(U^{\prime}\), commutes with \(T\), and the induced map \(V^{\prime}\to B:a\mapsto\varphi(a)(z)\) gives rise to a bounded homomorphism of Banach \(K\)-algebras \(A\to B\).
**Lemma 2.3.3**.: The correspondence \(V\mapsto(V,|0\rangle,T,\operatorname{Fld}(V))\) gives rise to an equivalence between the category of commutative vertex \(K\)-algebras and the category of strongly commutative Banach \(K\)-algebras.
Proof.: First of all, we notice that for any fields \(\varphi(z),\psi(z)\in\operatorname{End}(V)[[z]]\), one has \(\varphi(z)_{(n)}\psi(z)=0\) for \(n\geq 0\), and \(\varphi(z)_{(n)}\psi(z)=(\partial_{z}^{(m)}\varphi(z))\psi(z)\) for \(n=-m-1<0\). Let \(V\) be a commutative vertex \(K\)-algebra. Then the property (C.1) holds, by Lemma 2.3.1, and (C.3) and (C.4) hold by (V.3) and (V.5), respectively. The above remark implies that \(\operatorname{Fld}(V)\) is preserved by multiplication and by the operators \(\partial_{z}^{(n)}\), i.e., \((V,|0\rangle,T,\operatorname{Fld}(V))\) is a strongly commutative Banach \(K\)-algebra. Conversely, if the latter holds, then all of the properties (V.1)-(V.5) evidently hold, and so \(V\) is a commutative vertex \(K\)-algebra. That the correspondence considered induces an equivalence of categories follows easily from the definitions of morphisms in both categories.
**Corollary 2.3.4**.: Let \(V\) be a commutative vertex \(K\)-algebra. Then
(i) the \(K\)-bilinear operation \(V^{\prime}\times V\to V:(a,b)\mapsto a\cdot b=Y(a,0)b\) defines the structure of a commutative Banach \(K\)-algebra on \((V^{\prime},||\ ||^{\prime})\) with the unit element \(|0\rangle\), and the structure of a Banach module on \(V\) over the latter;
(ii) for every pair \((a,b)\in V^{\prime}\times V\), one has \(Y(a,z)b=(e^{zT}a)\cdot b\);
(iii) the operator \(T\) on \((V^{\prime},||\ ||^{\prime})\) is a derivation with respect to the above multiplication and of norm at most one.
In the admissible case, \(V^{\prime}\) is a Banach \(K\)-algebra with respect to the norm \(||\ ||\) since it is equivalent to the norm \(||\ ||^{\prime}\).
Proof.: (i) Let \(a,b\in V^{\prime}\). Since \(b=Y(b,0)|0\rangle\), we have \(a\cdot b=Y(a,0)Y(b,0)|0\rangle\) and, by the property (c) of Lemma 2.3.1, \(a\cdot b=b\cdot a\). Furthermore, the equality \(Y(a,0)Y(b,0)=Y(b,0)Y(a,0)\) gives the equality \(a\cdot(b\cdot c)=b\cdot(a\cdot c)\) for all \(a,b,c\in V^{\prime}\), and we get \(a\cdot(b\cdot c)=a\cdot(c\cdot b)=c\cdot(a\cdot b)=(a\cdot b)\cdot c\). If \(a\in V^{\prime}\) and \(b\in V\), then \(||a\cdot b||\leq||a||^{\prime}\cdot||b||\). If in addition \(b\in V^{\prime}\), then \(||a\cdot b||^{\prime}=||a_{(-1)}b||^{\prime}\leq||a||^{\prime}\cdot||b||^{\prime}\), by Corollary 2.2.4.
(ii) Since \(Y(a,z)=\sum_{n=0}^{\infty}T^{(n)}az^{n}\), one has
\[Y(a,z)b=\sum_{n=0}^{\infty}(T^{(n)}a)bz^{n}=\sum_{n=0}^{\infty}Y(T^{(n)}a,0) bz^{n}=\sum_{n=0}^{\infty}((T^{(n)}a)\cdot b)z^{n}=(e^{zT}a)\cdot b\,\]
and the statement (ii) follows.
(iii) For \(a,b\in V^{\prime}\), Lemma 2.2.3 implies that
\[T(a\cdot b)=T(a_{(-1)}b)=(Ta)_{(-1)}b+a_{(-1)}(Tb)=(Ta)\cdot b+a\cdot(Tb)\,\]
i.e., \(T\) is a derivation. Using (i), we get \(||Ta||^{\prime}=||a_{(-2)}|0\rangle||^{\prime}\leq||a||^{\prime}\cdot|||0\rangle||^{\prime}\leq||a||^{\prime}\) (if \(|0\rangle\neq 0\), the second inequality is an equality). Thus, the operator \(T\) on \((V^{\prime},||\ ||^{\prime})\) is of norm at most one.
**Examples 2.3.5**.: (i) Let \(A\) be a commutative Banach \(K\)-algebra provided with a system of bounded \(K\)-linear operators \(\{T^{(n)}\}_{n\geq 0}\) such that \(T^{(0)}=I_{A}\) and \(T^{(n)}(a\cdot b)=\sum_{i+j=n}T^{(i)}a\cdot T^{(j)}b\) for all \(n\geq 0\) and \(a,b\in A\). Assume that the set \(A^{\prime}=\{a\in A\ \big{|}\ \text{there exists }C>0\text{ with }||T^{(n)}a||\leq C||a||\text{ for all }n\geq 0\}\) is dense in \(A\). Then \(A\) is a commutative vertex \(K\)-algebra with \(Y(a,z)\), defined for \(a\in A^{\prime}\) as multiplication
by \(e^{zT}a=\sum_{n=0}^{\infty}T^{(n)}az^{n}\). If there exists \(C>0\) with \(||T^{(n)}||\leq C\) for all \(n\geq 0\), then \(A\) is admissible.
(ii) The commutative Banach \(K\)-algebra \(K\{t^{\pm 1}\}\), provided with the system of operators \(T^{(n)}=\partial_{t}^{(n)}\), \(n\geq 0\), is an admissible commutative vertex \(K\)-algebra.
(iii) For a positive real number \(r\), let \(K\{r^{-1}t\}\) denote the commutative Banach \(K\)-algebra of formal power series \(f=\sum_{n=0}^{\infty}\lambda_{n}t^{n}\) with \(|\lambda_{n}|r^{n}\to 0\) as \(n\to\infty\), provided with the Banach norm \(||f||=\max_{n}\{|\lambda_{n}|r^{n}\}\). The derivation operator \(\partial_{t}\) has the norm \(r^{-1}\) and, for \(n\geq 0\), the operator \(T^{(n)}=\partial_{t}^{(n)}\) has the norm \(r^{-n}\). Thus, if \(r\geq 1\), then \(K\{r^{-1}t\}\) is an admissible commutative vertex \(K\)-algebra. If \(r<1\), then \(K\{r^{-1}t\}\) is not admissible (see Example 2.4.3).
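The norm computation in (iii) can be seen on monomials: \(\partial_{t}^{(n)}t^{m}=\binom{m}{n}t^{m-n}\), and since the norm of a series is the maximum of the norms of its terms, \(||\partial_{t}^{(n)}||=\sup_{m\geq n}|\binom{m}{n}|\,r^{-n}=r^{-n}\). A small numerical illustration follows (not part of the text; the radius \(r=0.5\) and the choice of a \(p\)-adic base field with \(p=3\) are arbitrary assumptions).

```python
from math import comb

def abs_p(x, p):
    """p-adic absolute value of a nonzero integer x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return float(p) ** (-v)

r, p = 0.5, 3   # assumed radius r < 1 and an assumed p-adic base field
for n in range(6):
    # ||d^(n) t^m|| / ||t^m|| = |binom(m, n)|_p * r^(-n); the sup over m >= n is attained at m = n
    sup = max(abs_p(comb(m, n), p) * r ** (-n) for m in range(n, n + 41))
    print(n, sup, r ** (-n))
```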
### Extension Theorem
Suppose we are given a torsion free Banach \(K\)-module \(V\), an element \(|0\rangle\in V\), a bounded endomorphism \(T\in\operatorname{End}(V)\), and a collection of fields
\[\mathcal{F}=\left\{a^{j}(z)=\sum_{n\in\mathbf{Z}}a^{j}_{(n)}z^{-n-1}\right\} _{j\in J}\subset\operatorname{End}(V)\langle\langle z\rangle\rangle\.\]
Let \(V_{\mathcal{F}}\) be the minimal \(K\)-submodule of \(V\) that contains the element \(|0\rangle\) and is preserved under \(K\)-linear endomorphisms of the form \(v\mapsto a^{j}_{(n)}v\) for \(j\in J\) and \(n\in\mathbf{Z}\). Furthermore, suppose that the above data possess the following properties:
(E.1) (vacuum axiom) \(T|0\rangle=0\) and \(a^{j}(z)|0\rangle\in V[[z]]\) for all \(j\in J\);
(E.2) (translation covariance) \([T,a^{j}(z)]=\partial_{z}a^{j}(z)\) for all \(j\in J\);
(E.3) (locality) the fields \(a^{j}(z)\in\mathcal{F}\) are mutually local;
(E.4) (completeness) the \(K\)-saturation of the \(K\)-submodule \(V_{\mathcal{F}}\) is dense in \(V\).
**Theorem 2.4.1**.: In the above situation, there is a vertex \(K\)-algebra structure on \(V\) in which \(\operatorname{Fld}(V)\) is the closed \(K\)-submodule of \(\operatorname{End}(V)\langle\langle z\rangle\rangle\), generated by \(\mathcal{F}\).
Proof.: Validity of the properties (V.1)-(V.4) from Definition 2.1.1 follows from Lemma 1.9.1, and validity of the property (V.5) from the same definition follows from Lemma 1.8.6(ii).
Let \(U\) be a vertex \(K\)-algebra, and let \(V\) be a vertex \(K^{\prime}\)-algebra for a commutative non-Archimedean Banach \(K\)-algebra \(K^{\prime}\). The completed tensor product \(U\widehat{\otimes}_{K}V\) is a Banach \(K^{\prime}\)-module. Assume that \(K^{\prime}\) and \(U\widehat{\otimes}_{K}V\) possess the property from the beginning of this section, i.e., they are torsion free abelian groups, and assume that \(U\widehat{\otimes}_{K}V\) is a torsion free \(K^{\prime}\)-module.
Notice that the valuation on \(\mathbf{Z}\) for \(K^{\prime}\) is not necessarily the same as for \(K\). For example, they do not coincide for \(K=\mathbf{Z}\) with the trivial valuation and \(K^{\prime}=\mathbf{Z}_{p}\), the ring of \(p\)-adic integers, with a \(p\)-adic valuation.
**Corollary 2.4.2**.: In the above situation, the completed tensor product \(U\widehat{\otimes}_{K}V\) is a vertex \(K^{\prime}\)-algebra with the vacuum vector \(|0\rangle\widehat{\otimes}|0\rangle\), the translation operator \(T\widehat{\otimes}I_{V}+I_{U}\widehat{\otimes}T\), and the space of fields \(\operatorname{Fld}(U\widehat{\otimes}_{K}V)\), which is the closed \(K\)-submodule of \(\operatorname{End}(U\widehat{\otimes}_{K}V)\langle\langle z\rangle\rangle\), generated by fields of the form \(\varphi(z)\widehat{\otimes}\psi(z)\) for \(\varphi(z)\in\operatorname{Fld}(U)\) and \(\psi(z)\in\operatorname{Fld}(V)\).
**Example 2.4.3**.: Assume that \(r<1\) in Example 2.3.5. Then the norm of \(\partial_{t}^{(n)}\) tends to infinity together with \(n\). We apply Theorem 2.4.1 to the Banach space \(V=K\{r^{-1}t\}\), the vacuum element \(1\in K\{r^{-1}t\}\), the translation operator \(T=\partial_{t}\), and the collection of fields \(\mathcal{F}=\{e^{zT}a\big{|}a\in K[t]\}\). The properties (E.1)-(E.3) clearly
hold, and the property (E.4) holds because the ring of polynomials \(K[t]\) is dense in \(K\{r^{-1}t\}\). Thus, \(K\{r^{-1}t\}\) is a commutative vertex \(K\)-algebra, which is not admissible.
In all of the examples of vertex \(K\)-algebras, considered in the next subsections, we start from an ordinary vertex algebra \(V\) over a commutative ring \(R\), which is torsion free as an abelian group, with both \(R\) and \(V\) provided with the trivial Banach norm. Then for each injective homomorphism \(R\to K\) to a commutative non-Archimedean ring \(K\), by Corollary 2.4.2, we get a non-Archimedean vertex \(K\)-algebra \(V_{K}=V\widehat{\otimes}_{R}K\).
**Lemma 2.4.4**.: In the above situation, assume that \(V\) is a free \(R\)-module. Then the following is true:
1. if the field-state homomorphism \(fs:\operatorname{Fld}(V)\to V\) is bijective, the field-state homomorphism \(\operatorname{Fld}(V_{K})\to V_{K}\) is an isometric isomorphism;
2. if the Banach norm on \(K\) induces the trivial norm on \(R\), then the field-state homomorphism \(\operatorname{Fld}(V_{K})\to V_{K}\) is an isometric injection.
In particular, in both cases the vertex \(K\)-algebra \(V_{K}\) is admissible.
Proof.: Let \(\{e_{i}\}_{i\in I}\) be a basis of \(V\) over \(R\). Each element \(v\in V_{K}\) has a unique representation as a sum \(v=\sum_{i\in I}\lambda_{i}e_{i}\) with \(\lambda_{i}\in K\) and \(\lambda_{i}\to 0\) with respect to the filter of complements of finite subsets of \(I\), and one has \(||v||=\max_{i\in I}\{||\lambda_{i}||\}\). This implies that the canonical homomorphisms \(V\to V_{K}\) and \(\operatorname{End}(V)\to\operatorname{End}(V_{K})\) are isometric. Furthermore, let \(U\) be the \(K\)-submodule of \(V_{K}\) consisting of finite sums \(v=\sum_{i\in I}\lambda_{i}e_{i}\). Then \(U\) is dense in \(V_{K}\) in the case of (i), and its \(\mathbf{Z}\)-saturation is dense in \(V_{K}\) in the case of (ii).
(i) For an element \(v=\sum_{i\in I}\lambda_{i}e_{i}\in U\), we set \(\varphi(z)=\sum_{i\in I}\lambda_{i}sf(e_{i})\in\operatorname{End}(V_{K})((z))\). Since \(fs(\varphi(z))=v\), one has \(||v||\leq||\varphi(z)||\). On the other hand, since \(||sf(e_{i})||=1\), one has \(||\varphi(z)||\leq\max_{i\in I}\{||\lambda_{i}||\}=||v||\). This implies that \(||fs(\varphi(z))||=||\varphi(z)||\), and the required statement follows.
(ii) For each nonzero element \(v=\sum_{i\in I}\lambda_{i}e_{i}\in U\) there exists \(N\geq 1\) with \(Ne_{i}\in V^{\prime}\) for all \(i\in I\) with \(\lambda_{i}\neq 0\) and, in particular, \(Nv\in V^{\prime}\). We set \(\varphi(z)=\sum_{i\in I}\lambda_{i}sf(Ne_{i})\). Then \(\varphi(z)\in\operatorname{Fld}(V_{K})\) and \(fs(\varphi(z))=N\cdot\sum_{i\in I}\lambda_{i}e_{i}\). By the assumption, \(||Ne_{i}||=1\) and, therefore, one has
\[||\varphi(z)||\leq\max_{i\in I}\{||\lambda_{i}||\}=||fs(\varphi(z))||\leq|| \varphi(z)||\.\]
Thus, \(||fs(\varphi(z))||=||\varphi(z)||\). This implies the required fact.
### Vertex \(K\)-algebra associated to Lie algebras
Let \(R\) be a commutative ring, torsion free as an abelian group, let \(\mathfrak{g}\) be a Lie \(R\)-algebra, torsion free as an \(R\)-module, let \(T\) be a derivation of \(\mathfrak{g}\), and let \(\mathcal{F}\) be a collection of mutually local _formal distributions_
\[\mathcal{F}=\left\{a^{j}(z)=\sum_{n\in\mathbf{Z}}a^{j}_{(n)}z^{-n-1}\right\}_{ j\in J}\subset\mathfrak{g}[[z^{\pm 1}]]\.\]
The action of \(T\) on \(\mathfrak{g}\) extends naturally to an action on the space of formal distributions \(\mathfrak{g}[[z^{\pm 1}]]\). Let \(\mathcal{F}_{T}\) denote the minimal \(R\)-submodule of \(\mathfrak{g}[[z^{\pm 1}]]\) that contains \(\mathcal{F}\) and is invariant under the action of \(T\). Suppose also that the above data possess the following properties:
(L.1) the \(R\)-saturation of the \(R\)-submodule of \(\mathfrak{g}\), generated by the elements \(a^{j}_{(n)}\) for all \(j\in J\) and \(n\in\mathbf{Z}\), coincides with \(\mathfrak{g}\);
(L.2) \(T(a^{j}(z))=\partial_{z}a^{j}(z)\) for all \(j\in J\);
(L.3) for every \(n\geq 0\), the \(n\)-th product of any pair of formal distributions from \(\mathcal{F}\) lies in \(\mathcal{F}_{T}\).
Let \(\mathfrak{g}_{-}\) be the \(R\)-submodule of \(\mathfrak{g}\), generated by the elements \(a^{j}_{(n)}\) for all \(j\in J\) and \(n\geq 0\). It follows from the property (L.2) and Lemma 1.6.4(ii) that \(\mathfrak{g}_{-}\) is a \(T\)-invariant Lie \(R\)-subalgebra of \(\mathfrak{g}\) (the _annihilation subalgebra_). Let \(V\) be the quotient \(U(\mathfrak{g})/\mathbf{a}\) of the universal enveloping \(R\)-algebra \(U(\mathfrak{g})\) of \(\mathfrak{g}\) by the \(R\)-saturation \(\mathbf{a}\) of the left ideal \(U(\mathfrak{g})\mathfrak{g}_{-}\).
Notice that left multiplication by elements of \(\mathfrak{g}\) in \(U(\mathfrak{g})\) induces an \(R\)-linear action of \(\mathfrak{g}\) on \(V\). This gives rise to a homomorphism \(\mathfrak{g}[[z^{\pm 1}]]\to\operatorname{End}(V)[[z^{\pm 1}]]\), and it takes the collection \(\mathcal{F}\) to a collection of mutually local formal distributions \(\mathcal{F}\big{|}_{V}\) in \(\operatorname{End}(V)[[z^{\pm 1}]]\). Notice also that the derivation \(T\) on \(\mathfrak{g}\) induces an action of \(T\) on \(V\).
**Lemma 2.5.1**.: In the above situation, formal distributions in \(\mathcal{F}\big{|}_{V}\) are in fact \(\operatorname{End}(V)\)-valued quantum fields, i.e., \(\mathcal{F}\big{|}_{V}\subset\operatorname{End}(V)\langle\langle z\rangle\rangle\).
Notice that since the valuation on \(V\) is trivial, then \(\operatorname{End}(V)\langle\langle z\rangle\rangle=\operatorname{End}(V)((z))\).
Proof.: Step 1. Let \(F\) be the \(R\)-submodule of formal distributions \(a(z)\in\mathfrak{g}[[z^{\pm 1}]]\) with \(a_{(n)}\in\mathfrak{g}_{-}\) for all \(n\geq 0\). By the definition, one has \(\mathcal{F}\subset F\), and the property (L.2) implies that \(\mathcal{F}_{T}\subset F\). From the property (L.3) it then follows that, for every \(i\geq 0\) and every pair of formal distributions \(a(z),b(z)\in\mathcal{F}\), one has \(a(z)_{(i)}b(z)\in F\) and, in particular, \((a(z)_{(i)}b(z))_{(k)}\in\mathfrak{g}_{-}\) for all \(k\geq 0\).
Step 2. _For any pair \(a(z),b(z)\in\mathcal{F}\) and \(n\in\mathbf{Z}\), one has_
\[[a(z),b_{(n)}]\in\mathfrak{g}((z))+\mathfrak{g}_{-}[[z^{\pm 1}]]\subset \mathfrak{g}[[z^{\pm 1}]]\.\]
Indeed, by Lemma 1.6.4(ii), one has
\[[a(z),b_{(n)}]=\sum_{m\in\mathbf{Z}}[a_{(m)},b_{(n)}]z^{-m-1}=\sum_{m\in \mathbf{Z}}\sum_{i=0}^{\infty}\binom{m}{i}(a(z)_{(i)}b(z))_{(m+n-i)}z^{-m-1}\.\]
Since \(a(z)\) and \(b(z)\) are mutually local, there exists \(N\geq 0\) with \(a(z)_{(i)}b(z)=0\) for all \(i>N\). Thus, for \(m\geq 0\) the coefficient at \(z^{-m-1}\) is equal to
\[\sum_{i=0}^{\min(m,N)}\binom{m}{i}(a(z)_{(i)}b(z))_{(m+n-i)}\.\]
If \(n\geq 0\), then \(m+n-i\geq 0\) for all \(0\leq i\leq m\) and, by Step 1, that coefficient belongs to \(\mathfrak{g}_{-}\). If \(n=-k-1\) for \(k\geq 0\), then for \(m\geq N+k+1\), one has \(m+n-i\geq 0\) and, by Step 1, the coefficient at \(z^{-m-1}\) belongs to \(\mathfrak{g}_{-}\).
Step 3. _The statement of the lemma is true._ Indeed, by the property (L.1), it suffices to show that, for any \(b(z)\in\mathcal{F}\) and any element \(x\in U(\mathfrak{g})\) of the form \(x=a^{j_{1}}_{(n_{1})}\cdots a^{j_{s}}_{(n_{s})}\), one has \(b(z)x\in U(\mathfrak{g})((z))+U(\mathfrak{g})\mathfrak{g}_{-}[[z^{\pm 1}]]\). If \(s=0\), then \(x=1\) and the required fact follows from the inclusion \(b(z)\in F\) (see Step 1). Assume that \(s\geq 1\) and the required fact is true for smaller values of \(s\). Then \(x=a^{j}_{(n)}y\) for \(y\) of the same form with a smaller value of \(s\). One has
\[b(z)x=[b(z),a^{j}_{(n)}]y+a^{j}_{(n)}b(z)y\.\]
By Step 2, the first summand on the right hand side belongs to \(U(\mathfrak{g})((z))+U(\mathfrak{g})\mathfrak{g}_{-}[[z^{\pm 1}]]\) and, by the induction hypothesis, the same holds for the second summand.
**Corollary 2.5.2**.: In the above situation, there is a vertex \(R\)-algebra structure on \(V\) in which the vacuum vector \(|0\rangle\) is the image of \(1\) in \(V\), the endomorphism \(T\) of \(V\) is induced by the derivation \(T\) on \(\mathfrak{g}\), and \(\operatorname{Fld}(V)\) is the \(R\)-submodule of \(\operatorname{End}(V)\langle\langle z\rangle\rangle\), generated by \(\mathcal{F}\big{|}_{V}\).
The vertex \(R\)-algebra from Corollary 2.5.2 is denoted by \(V(\mathfrak{g},\mathcal{F})\).
### Examples of vertex \(K\)-algebras
#### Free boson vertex \(K\)-algebra
Let \(B=\mathbf{Z}[x_{1},x_{2},\ldots]\) be the ring of polynomials over \(\mathbf{Z}\) in the variables \(x_{1},x_{2},\ldots\) with \(\mathbf{Z}\) and \(B\) provided with the trivial Banach norm. Let also \(|0\rangle=1\), \(T=\sum_{i=2}^{\infty}(i-1)x_{i}\frac{\partial}{\partial x_{i-1}}:B\to B\), and \(\mathcal{F}=\{a(z)\}\) for \(a(z)=\sum_{n\in\mathbf{Z}}a_{(n)}z^{-n-1}\), defined by \(a_{(n)}=n\frac{\partial}{\partial x_{n}}\) for \(n>0\), \(a_{(n)}=x_{-n}\) for \(n<0\), and \(a_{(0)}=0\). These data possess the properties (E.1)-(E.4) from §2.4 and so, by Theorem 2.4.1, they give rise to an admissible vertex \(\mathbf{Z}\)-algebra structure on \(B\). Notice that \(B\) is a free abelian group that consists of _finite_ sums \(\sum_{\mu}\lambda_{\mu}x^{\mu}\), taken over sequences of non-negative integers \(\mu=(\mu_{1},\mu_{2},\ldots)\) all but finitely many of which are equal to zero, and where \(x^{\mu}=\prod_{i=1}^{\infty}x_{i}^{\mu_{i}}\) and \(\lambda_{\mu}\in\mathbf{Z}\). Notice also that, since the \(\mathbf{Z}\)-submodule \(B_{\mathcal{F}}\) from (E.4) coincides with \(B\), the field-state homomorphism \(fs:\operatorname{Fld}(B)\to B\) is bijective. Thus, by Lemma 2.4.4(i), for any \(K\), \(B_{K}=B\widehat{\otimes}_{\mathbf{Z}}K\) is an admissible vertex \(K\)-algebra for which the field-state homomorphism \(fs:\operatorname{Fld}(B_{K})\to B_{K}\) is an isometric isomorphism.
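The locality required in (E.3) comes from the Heisenberg relations \([a_{(m)},a_{(n)}]=m\delta_{m+n,0}\) on \(B\), which give \([a(z),a(w)]=\partial_{w}\delta(z-w)\), killed by \((z-w)^{2}\). These relations are not written out above; the following small sympy check (an illustration, not part of the text) verifies them on a sample polynomial.

```python
import sympy as sp

xs = sp.symbols("x1:7")  # x1, ..., x6 is plenty for indices |n| <= 3

def a(n, f):
    """Action of the coefficient a_(n) of the free boson field on a polynomial f."""
    if n == 0:
        return sp.Integer(0)
    if n > 0:
        return n * sp.diff(f, xs[n - 1])   # a_(n) = n d/dx_n
    return sp.expand(xs[-n - 1] * f)        # a_(n) = multiplication by x_{-n}

f = xs[0]**2 * xs[2] + 3 * xs[1]            # an arbitrary test polynomial
for m in range(-3, 4):
    for n in range(-3, 4):
        if m == 0 or n == 0:
            continue
        comm = sp.expand(a(m, a(n, f)) - a(n, a(m, f)))
        expected = (m if m + n == 0 else 0) * f
        assert sp.expand(comm - expected) == 0
print("[a_(m), a_(n)] = m*delta_{m+n,0} verified on the test polynomial")
```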
Sometimes in the literature, the free boson vertex algebra is presented in a slightly different form. Namely, let \(B^{t}=\mathbf{Z}[y_{1},y_{2},\ldots]\) be a similar ring of polynomials, and let \(|0\rangle=1\), \(T=\sum_{i=2}^{\infty}iy_{i}\frac{\partial}{\partial y_{i-1}}:B^{t}\to B^{t}\), and \(\mathcal{F}^{t}=\{b(z)\}\) for \(b(z)=\sum_{n\in\mathbf{Z}}b_{(n)}z^{-n-1}\), defined by \(b_{(n)}=\frac{\partial}{\partial y_{n}}\) for \(n>0\), \(b_{(n)}=-ny_{-n}\) for \(n<0\), and \(b_{(0)}=0\). Notice that these data possess the same properties (E.1)-(E.4) from §2.4 and so, by Theorem 2.4.1, they give rise to an admissible vertex \(\mathbf{Z}\)-algebra structure on \(B^{t}\). But in this case, the field-state homomorphism \(fs:\operatorname{Fld}(B^{t})\to B^{t}\) is not surjective. There is a homomorphism of vertex \(\mathbf{Z}\)-algebras \(B\to B^{t}:x_{i}\mapsto iy_{i}\), which becomes an isomorphism after tensoring with \(\mathbf{Q}\).
In any case, by Corollary 2.4.2, for any \(K\), \(B^{t}_{K}\) is a vertex \(K\)-algebra. By Lemma 2.4.4(ii), if the valuation on the image of \(\mathbf{Z}\) in \(K\) is trivial, then \(B^{t}_{K}\) is admissible. Assume now that the valuation \(|\ |\) on the image of \(\mathbf{Z}\) in \(K\) is \(p\)-adic. _We claim that in this case \(B^{t}_{K}\) is not admissible._ Indeed, one has \(||b_{(n)}||=1\) for \(n>0\) and \(||b_{(n)}||=|n|\) for \(n<0\). The space \(\operatorname{Fld}(B^{t}_{K})\) contains also the fields \(\partial_{z}^{(m)}b(z)\) for all \(m\geq 1\). One has
\[\partial_{z}^{(m)}b(z)=\sum_{n\in\mathbf{Z}}(-1)^{m}\binom{m+n}{m}b_{(n)}z^{-m -n-1}\]
and, therefore, \(fs(\partial_{z}^{(m)}b(z))=b_{(-m-1)}|0\rangle=(m+1)y_{m+1}\). The norm of the latter element is equal to \(|m+1|\). Consider the above element for \(m=p^{k}-1\) with \(k\geq 1\). For this value of \(m\), one has \(||fs(\partial_{z}^{(m)}b(z))||=|p|^{k}\). On the other hand, the binomial coefficient in the above sum for \(n=p^{k}\) is equal to \(\binom{m+p^{k}}{p^{k}}=\binom{2p^{k}-1}{p^{k}}\) which, by Kummer's theorem, is not divisible by \(p\) (no carries occur when adding \(p^{k}-1\) and \(p^{k}\) in base \(p\)). Since \(||b_{(p^{k})}||=1\), this implies that \(||\partial_{z}^{(m)}b(z)||=1\). Since \(|p|^{k}\to 0\) as \(k\to\infty\), we see that the operator \(fs\) is not admissible and, therefore, the vertex \(K\)-algebra \(B_{K}^{t}\) is not admissible.
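A quick numerical spot-check of the two norm computations used in this argument (an illustration, not part of the text):

```python
from math import comb

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

for p in (2, 3, 5):
    for k in (1, 2, 3):
        m = p**k - 1
        assert vp(m + 1, p) == k                    # ||fs(d^(m) b(z))|| = |p|^k
        assert vp(comb(m + p**k, p**k), p) == 0     # binom(2p^k - 1, p^k) is a p-adic unit
print("norm computations confirmed for p = 2, 3, 5 and k = 1, 2, 3")
```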
#### Free fermion vertex \(K\)-algebra
For simplicity of exposition, we have so far been considering spaces of states with the trivial parity \(p=\overline{0}\). The discussion easily extends to superspaces. Namely, \(V=V_{\overline{0}}\oplus V_{\overline{1}}\), where \(V_{\overline{j}}\) for \(\overline{j}\in{\bf Z}/2{\bf Z}\) are \(K\)-modules, and the parity \(p\) is defined by \(p(v)=\overline{j}\) for \(v\in V_{\overline{j}}\). In the vertex algebra definition, one assumes that \(|0\rangle\in V_{\overline{0}}\), \(TV_{\overline{j}}\subset V_{\overline{j}}\), and the commutator is understood in the "super" sense. The free fermion algebra is the most important example of such an object.
Let \(F={\bf Z}[\xi_{1},\xi_{2},\ldots]\) be the Grassmann superalgebra over \({\bf Z}\) in \(\xi_{1},\xi_{2},\ldots\) with \(\xi_{i}\xi_{j}=-\xi_{j}\xi_{i}\), \(\xi_{i}^{2}=0\), and \(p(\xi_{i})=\overline{1}\). Both \({\bf Z}\) and \(F\) are provided with the trivial norm. Let also \(|0\rangle=1\) and \(T=\sum_{i=1}^{\infty}i\xi_{i+1}\frac{\partial}{\partial\xi_{i}}\), where \(\frac{\partial}{\partial\xi_{i}}\) is the odd derivation (i.e., \(\frac{\partial(fg)}{\partial\xi_{i}}=\frac{\partial f}{\partial\xi_{i}}g+(-1)^{p(f)}f\frac{\partial g}{\partial\xi_{i}}\)) such that \(\frac{\partial\xi_{i}}{\partial\xi_{j}}=\delta_{i,j}\). Finally, let \({\mathcal{F}}=\{\varphi(z)\}\) for \(\varphi(z)=\sum_{n\in{\bf Z}}\varphi_{(n)}z^{-n-1}\), defined by \(\varphi_{(n)}=\frac{\partial}{\partial\xi_{n+1}}\) for \(n\geq 0\), and \(\varphi_{(n)}=\xi_{-n}\) for \(n<0\). These data possess the properties (E.1)-(E.4) from §2.4 and so, by Theorem 2.4.1, they give rise to an admissible vertex \({\bf Z}\)-algebra structure on \(F\). Notice that \(F\) is the free abelian group that consists of _finite_ sums \(\sum_{\nu}\alpha_{\nu}\xi^{\nu}\), taken over finite subsets \(\nu=\{\nu_{1}<\ldots<\nu_{m}\}\subset{\bf Z}_{>0}\), and where \(\alpha_{\nu}\in{\bf Z}\), \(\xi^{\nu}=\xi_{\nu_{1}}\wedge\ldots\wedge\xi_{\nu_{m}}\), and \(\xi^{\emptyset}=1\). Notice also that, in this case, the field-state homomorphism \(fs\) is a bijection. Thus, by Lemma 2.4.4(i), for any \(K\), \(F_{K}\) is an admissible vertex \(K\)-algebra for which the field-state homomorphism \(fs:{\rm Fld}(F_{K})\to F_{K}\) is an isometric isomorphism.
#### Universal Virasoro vertex \(K\)-algebra
The Virasoro algebra is a Lie algebra Vir over the ring \(R={\bf Z}[\frac{1}{2}]\), the localization of \({\bf Z}\) by powers of \(2\), with free generators \(L_{n}\) for \(n\in{\bf Z}\) and \(C\) satisfying the following relations
\[[L_{m},L_{n}]=(m-n)L_{m+n}+\delta_{m,-n}\frac{m^{3}-m}{12}C,\ [C,L_{m}]=0\ \mbox{for all}\ m,n\in{\bf Z}\.\]
(Both \(R\) and Vir are provided with the trivial norm.) Setting \(L_{(n)}=L_{n-1}\), consider the formal distribution \(L(z)=\sum_{n\in{\bf Z}}L_{(n)}z^{-n-1}\). The above relations can be written in the equivalent form
\[[L(z),L(w)]=\partial_{w}L(w)\delta(z-w)+2L(w)\partial_{w}\delta(z-w)+\frac{C}{2}\partial_{w}^{(3)}\delta(z-w)\.\]
It follows that \((z-w)^{4}[L(z),L(w)]=0\) and, therefore, \(L(z)\) is mutually local with itself. The property (L.1) for \({\mathcal{F}}=\{L(z),C\}\) clearly holds. If \(T={\rm ad}(L_{-1})\), then \(T(L_{(n)})=[L_{-1},L_{n-1}]=-nL_{(n-1)}\) and, therefore, \(T(L(z))=\partial_{z}L(z)\), i.e., the property (L.2) holds. The above equality implies that \(L(z)_{(0)}L(z)=\partial_{z}L(z)\), \(L(z)_{(1)}L(z)=2L(z)\), \(L(z)_{(3)}L(z)=\frac{C}{2}\), and \(L(z)_{(i)}L(z)=0\) for all other values of \(i\geq 0\). This implies that the property (L.3) also holds. Thus, by Corollary 2.5.2, we get a vertex \(R\)-algebra \({\rm VIR}=V({\rm Vir},\{L(z),C\})\) called the _universal Virasoro vertex algebra_.
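As a sanity check on the defining relations (an illustration, not part of the text), the following short script brute-forces the Jacobi identity for the bracket above on a range of generators, using exact rational arithmetic.

```python
from fractions import Fraction
from itertools import product

def bracket(x, y):
    """Virasoro bracket on formal sums, encoded as dicts {basis element: coefficient};
    basis elements are ('L', n) and the central element ('C',)."""
    out = {}
    def add(key, coeff):
        out[key] = out.get(key, Fraction(0)) + coeff
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        if a == ('C',) or b == ('C',):
            continue  # C is central
        m, n = a[1], b[1]
        add(('L', m + n), ca * cb * (m - n))
        if m + n == 0:
            add(('C',), ca * cb * Fraction(m**3 - m, 12))
    return {k: v for k, v in out.items() if v != 0}

def jacobi(a, b, c):
    total = {}
    for t in (bracket(bracket(a, b), c), bracket(bracket(b, c), a), bracket(bracket(c, a), b)):
        for k, v in t.items():
            total[k] = total.get(k, Fraction(0)) + v
    return {k: v for k, v in total.items() if v != 0}

def L(n):
    return {('L', n): Fraction(1)}

for m, n, k in product(range(-4, 5), repeat=3):
    assert jacobi(L(m), L(n), L(k)) == {}
print("Jacobi identity holds on all sampled triples")
```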
Notice that the annihilation subalgebra of Vir is \({\rm Vir}_{-}=\oplus_{n\geq-1}RL_{n}\). By the commutation relations, it follows that \({\rm VIR}\) is a free \(R\)-module that consists of _finite_ sums \(\sum_{m,\mu}\lambda_{m,\mu}C^{m}L^{\mu}\), taken over sequences of non-negative integers
\((m,\mu)=(m,\mu_{1},\mu_{2},\ldots)\) all but finitely many of which are equal to zero, and where \(L^{\mu}=\prod_{i=1}^{\infty}L^{\mu_{i}}_{-i-1}\) and \(\lambda_{m,\mu}\in R\). Notice also that the field-state homomorphism \(fs:\operatorname{Fld}(\operatorname{VIR})\to\operatorname{VIR}\) is a bijection. Thus, by Lemma 2.4.4, for any \(K\) in which \(2\) is invertible, \(\operatorname{VIR}_{K}\) is an admissible vertex \(K\)-algebra for which the field-state homomorphism \(\operatorname{Fld}(\operatorname{VIR}_{K})\to\operatorname{VIR}_{K}\) is an isometric isomorphism. Notice that
\[\operatorname{VIR}_{K}=\{f=\sum\lambda_{m,\mu}C^{m}L^{\mu}\big{|}\lambda_{m, \mu}\in K\text{ and }||\lambda_{m,\mu}||\to 0\text{ as }m+\langle\mu\rangle\to\infty\}\,\]
where the sum is taken over \((m,\mu)\) as above and \(\langle\mu\rangle=\sum_{i=1}^{\infty}i\mu_{i}\). The Banach norm on \(\operatorname{VIR}_{K}\) is defined by \(||f||=\max_{m,\mu}\{|\lambda_{m,\mu}|\}\).
**Lemma 2.6.1**.: (i) If \(c\) is such that \(|c|\leq 1\), then \((C-c)\operatorname{VIR}_{K}\) is a nontrivial ideal of \(\operatorname{VIR}_{K}\), and the quotient \(\operatorname{VIR}_{K}^{c}=\operatorname{VIR}_{K}/(C-c)\operatorname{VIR}_{K}\) is an admissible vertex \(K\)-algebra, whose field-state homomorphism \(fs:\operatorname{Fld}(\operatorname{VIR}_{K}^{c})\to\operatorname{VIR}_{K}^{c}\) is a bijection;
(ii) If \(c\) is invertible in \(K\) and \(|c|>1\), then \((C-c)\operatorname{VIR}_{K}=\operatorname{VIR}_{K}\);
Proof.: (i) Consider the Banach subspace of \(\operatorname{VIR}_{K}\)
\[W=\{f=\sum_{\mu}\lambda_{\mu}L^{\mu}\big{|}\lambda_{\mu}\in K\text{ and }||\lambda_{\mu}||\to 0\text{ as }\langle\mu\rangle\to\infty\}\,\]
where the sum is taken over sequences of non-negative integers \(\mu=(\mu_{1},\mu_{2},\ldots)\) as above, and \(||f||=\max_{\mu}\{||\lambda_{\mu}||\}\). Then there is a canonical isometric isomorphism \(\operatorname{VIR}_{K}\widetilde{\to}W\{C\}\), and we can work with \(W\{C\}\) instead of \(\operatorname{VIR}_{K}\). The multiplication by \(C-c\) on \(W\{C\}\) is the composition of the isometric automorphism \(W\{C\}\to W\{C\}\) that is identical on \(W\) and takes \(C\) to \(C-c\) and of the multiplication by \(C\). This implies the claim.
(ii) If \(c\) is invertible and \(|c|>1\), the series \(1+\frac{C}{c}+\frac{C^{2}}{c^{2}}+\ldots\) lies in \(K\{C\}\) and its product with \(1-\frac{C}{c}\) is equal to one. Since \(K\{C\}\subset\operatorname{VIR}_{K}\), the claim follows.
The algebra \(\operatorname{VIR}_{K}^{c}\) is called the _universal Virasoro vertex \(K\)-algebra with central charge \(c\)_.
#### Universal affine vertex \(K\)-algebra
Let \(\mathfrak{g}\) be a Lie \(R\)-algebra, free of finite rank over \(R=\mathbf{Z}[\frac{1}{N}]\) for an integer \(N\geq 1\), and assume \(\mathfrak{g}\) is provided with a non-degenerate invariant symmetric \(R\)-bilinear form \(\mathfrak{g}\times\mathfrak{g}\to R:(a,b)\mapsto(a|b)\) (invariance meaning \(([a,b]|c)=(a|[b,c])\)). The latter gives rise to the associated Kac-Moody affinization \(\widehat{\mathfrak{g}}=\mathfrak{g}[t^{\pm 1}]+R\mathrm{K}\), defined by the relations
\[[at^{m},bt^{n}]=[a,b]t^{m+n}+m\delta_{m,-n}(a|b)\mathrm{K},\ [\mathrm{K},at^{m}]=0\]
for all \(a,b\in\mathfrak{g}\) and \(m,n\in\mathbf{Z}\). For each \(a\in\mathfrak{g}\), consider the formal distribution \(a(z)=\sum_{n\in\mathbf{Z}}(at^{n})z^{-n-1}\). Then the above relations can be written in the equivalent form
\[[a(z),b(w)]=[a,b](w)\delta(z-w)+(a|b)\partial_{w}\delta(z-w)\mathrm{K},\ [ \mathrm{K},a(z)]=0\.\]
It follows that \((z-w)^{2}[a(z),b(w)]=0\) and, therefore, \(\mathcal{F}=\{a(z)\}_{a\in\mathfrak{g}}\cup\{\mathrm{K}\}\) is a collection of mutually local formal distributions. The property (L.1) for \(\mathcal{F}\) clearly holds. Since \(-\partial_{t}(a(z))=\partial_{z}a(z)\), the property (L.2) holds for \(T=-\partial_{t}\). The above equality implies that \(a(z)_{(0)}b(z)=[a,b](z)\), \(a(z)_{(1)}b(z)=(a|b)\mathrm{K}\), and \(a(z)_{(i)}b(z)=0\) for \(i\geq 2\). This implies that the property (L.3) also holds. Thus, by Corollary 2.5.2, we get a vertex \(R\)-algebra \(V(\widehat{\mathfrak{g}},\mathcal{F})\) called the _universal affine vertex
algebra_ associated to the pair \((\mathfrak{g},(\ |\ ))\). Notice that the field-state homomorphism \(\operatorname{Fld}(V(\widehat{\mathfrak{g}},\mathcal{F}))\to V(\widehat{\mathfrak{g}},\mathcal{F})\) is a bijection.
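That the relations above define a Lie algebra structure on \(\widehat{\mathfrak{g}}\) uses the invariance of the form \((\ |\ )\). The following small script (an illustration, not part of the text) brute-forces the Jacobi identity for the affinization of \(\mathfrak{g}=\mathfrak{sl}_{2}\) with its trace form on a range of generators.

```python
from fractions import Fraction
from itertools import product

# sl2 structure constants and trace form in the basis e, f, h
BR = {('e', 'f'): {'h': 1}, ('f', 'e'): {'h': -1},
      ('h', 'e'): {'e': 2}, ('e', 'h'): {'e': -2},
      ('h', 'f'): {'f': -2}, ('f', 'h'): {'f': 2}}
FORM = {('e', 'f'): 1, ('f', 'e'): 1, ('h', 'h'): 2}

def bracket(x, y):
    """Affinization bracket; elements are dicts whose keys are (a, m) for a*t^m, or 'K'."""
    out = {}
    def add(key, coeff):
        out[key] = out.get(key, Fraction(0)) + coeff
    for (ka, ca), (kb, cb) in product(x.items(), y.items()):
        if ka == 'K' or kb == 'K':
            continue  # K is central
        (a, m), (b, n) = ka, kb
        for g, s in BR.get((a, b), {}).items():
            add((g, m + n), ca * cb * s)
        if m + n == 0:
            add('K', ca * cb * m * FORM.get((a, b), 0))
    return {k: v for k, v in out.items() if v != 0}

def jacobi(a, b, c):
    total = {}
    for t in (bracket(bracket(a, b), c), bracket(bracket(b, c), a), bracket(bracket(c, a), b)):
        for k, v in t.items():
            total[k] = total.get(k, Fraction(0)) + v
    return {k: v for k, v in total.items() if v != 0}

def X(a, m):
    return {(a, m): Fraction(1)}

for (a, m), (b, n), (c, k) in product(product('efh', range(-2, 3)), repeat=3):
    assert jacobi(X(a, m), X(b, n), X(c, k)) == {}
print("Jacobi identity holds for all sampled elements of the affinization")
```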
The annihilation subalgebra of \(\widehat{\mathfrak{g}}\) is \(\widehat{\mathfrak{g}}_{-}=\mathfrak{g}[t]\). Fix a basis \(\{e_{1},\ldots,e_{p}\}\) of \(\mathfrak{g}\) over \(R\) and, for each \(1\leq i\leq p\) and \(n\geq 1\), set \(x_{i,n}=e_{i}t^{-n}\in\widehat{\mathfrak{g}}\). Then \(V(\widehat{\mathfrak{g}},\mathcal{F})\) is a free \(R\)-module and, as an \(R\)-module, it is isomorphic to the ring of polynomials \(R[\mathrm{K},x_{i,n}]_{1\leq i\leq p,n\geq 1}\). A basis of the latter over \(R\) is formed by the monomials \(\mathrm{K}^{m}\cdot x^{\mu}\), where \(m\geq 0\), \(\mu=(\mu_{1},\ldots,\mu_{p})\), each \(\mu_{i}\) is an infinite sequence of non-negative integers \((\mu_{i,1},\mu_{i,2},\ldots)\) all but finitely many of which are equal to zero, and \(x^{\mu}=x_{1}^{\mu_{1}}\cdot\ldots\cdot x_{p}^{\mu_{p}}\) with \(x_{i}^{\mu_{i}}=\prod_{n=1}^{\infty}x_{i,n}^{\mu_{i,n}}\).
Assume now that \(N\) is invertible in the commutative Banach ring \(K\). By Lemma 2.4.4(i), \(V_{K}(\widehat{\mathfrak{g}},\mathcal{F})=V(\widehat{\mathfrak{g}},\mathcal{ F})\widehat{\otimes}_{R}K\) is an admissible vertex \(K\)-algebra, called the _universal affine vertex \(K\)-algebra associated to \((\mathfrak{g},(\ |\ ))\)_. One has
\[V_{K}(\widehat{\mathfrak{g}},\mathcal{F})=\{f=\sum\lambda_{m,\mu}\mathrm{K}^{ m}x^{\mu}|\lambda_{m,\mu}\in K\text{ and }\lambda_{m,\mu}\to 0\text{ as }m+\langle\mu\rangle\to\infty\}\,\]
where \(m\) and \(\mu\) are as above and \(\langle\mu\rangle=\sum_{i=1}^{p}\sum_{n=1}^{\infty}in\mu_{i,n}\). The following statement is verified in the same way as Lemma 2.6.1.
**Lemma 2.6.2**.: (i) If \(|k|\leq 1\), then \((\mathrm{K}-k)V_{K}(\widehat{\mathfrak{g}},\mathcal{F})\) is a nontrivial closed subspace of \(V_{K}(\widehat{\mathfrak{g}},\mathcal{F})\), and the quotient \(V_{K}^{k}(\widehat{\mathfrak{g}})=V_{K}(\widehat{\mathfrak{g}},\mathcal{F})/ (\mathrm{K}-k)V_{K}(\widehat{\mathfrak{g}},\mathcal{F})\) is an admissible vertex \(K\)-algebra whose field-state homomorphism \(fs:\operatorname{Fld}(V_{K}^{k}(\widehat{\mathfrak{g}}))\to V_{K}^{k}( \widehat{\mathfrak{g}})\) is a bijection;
(ii) if \(k\) is invertible in \(K\) and \(|k|>1\), then \((\mathrm{K}-k)V_{K}(\widehat{\mathfrak{g}},\mathcal{F})=V_{K}(\widehat{ \mathfrak{g}},\mathcal{F})\).
The algebra \(V_{K}^{k}(\widehat{\mathfrak{g}})\) is called the _universal affine vertex \(K\)-algebra of level \(k\)_.
For example, assume that \(K=R=\mathbf{Z}\) and \(\mathfrak{g}\) is the commutative Lie algebra \(\mathbf{Z}\), provided with the bilinear form \((a|b)=ab\). Then \(V_{K}^{1}(\widehat{\mathfrak{g}})\) is canonically isomorphic to the free boson vertex \(K\)-algebra \(B_{K}\).
In all of the above examples the Lie algebra in question is naturally \(\mathbf{Z}\)-graded. Namely, the Virasoro algebra (resp. affine Lie algebra) is graded by letting \(\deg L_{n}=n\) (resp. \(\deg at^{n}=n\) for \(a\in\mathfrak{g}\)). These gradings induce \(\mathbf{Z}\)-gradings on the corresponding universal vertex \(K\)-algebras. Since the highest component in these gradings of a singular vector is a singular vector, it follows that the quotient of these vertex \(K\)-algebras by the maximal graded ideal is a simple vertex \(K\)-algebra. The same argument shows that the free boson and fermion vertex \(K\)-algebras are simple.
### Vertex \(K\)-algebras and Lie conformal \(K\)-algebras
For a Banach \(K\)-module \(V\) and a positive real number \(r\), one denotes by \(V\{r^{-1}\lambda\}\) the Banach \(K\)-module of formal power series \(v=\sum_{n=0}^{\infty}\lambda^{n}v_{n}\) with \(v_{n}\in V\) and \(||v_{n}||r^{n}\to 0\) as \(n\to\infty\), provided with the Banach norm \(||v||=\max_{n}\{||v_{n}||r^{n}\}\). Notice that there is an isometric isomorphism \(K\{r^{-1}\lambda\}\widehat{\otimes}_{K}V\widetilde{\to}V\{r^{-1}\lambda\}\). If \(K\) is a non-Archimedean field, \(K\{r^{-1}\lambda\}\) is the algebra of functions analytic on the closed disc in the affine line over \(K\) of radius \(r\) with center at zero.
**Definition 2.7.1**.: A _Lie conformal \(K\)-algebra of radius \(r\)_ is a Banach \(K\)-module \(L\), provided with a bounded \(K\)-linear endomorphism \(T:L\to L\) and a bounded \(K\)-linear homomorphism \((\lambda\)_-bracket_)
\[L\widehat{\otimes}_{K}L\to L\{r^{-1}\lambda\}:a\otimes b\mapsto[a_{\lambda}b]\]
with the following properties:
(L.1) (_sesquilinearity_) \([Ta_{\lambda}b]=-\lambda[a_{\lambda}b]\), \([a_{\lambda}Tb]=(\lambda+T)[a_{\lambda}b]\);
(L.2) (_skew-symmetry_) \([b_{\lambda}a]=-[a_{-\lambda-T}b]\);
(L.3) (_Jacobi identity_) \([a_{\lambda}[b_{\mu}c]]=[[a_{\lambda}b]_{\lambda+\mu}c]+[b_{\mu}[a_{\lambda}c]]\).
The right hand side in (L.2) should be understood as follows: if \([a_{\lambda}b]=\sum_{n=0}^{\infty}\lambda^{n}x_{n}\), then \([a_{-\lambda-T}b]=\sum_{n=0}^{\infty}(-\lambda-T)^{n}x_{n}\).
Suppose now that our commutative Banach ring \(K\) contains the field of rational numbers \(\mathbf{Q}\). Then the valuation on \(\mathbf{Q}\), induced by that on \(K\), is either \(p\)-adic for a prime number \(p\), or trivial. In the former case, we set \(r_{p}=|p|^{\frac{1}{p-1}}\), and in the latter case, we set \(p=1\) and \(r_{1}=1\). In the following statements, \(V\) is a vertex \(K\)-algebra.
**Proposition 2.7.2**.: The space of fields \(\mathrm{Fld}(V)\) is a Lie conformal \(K\)-algebra of radius \(r_{p}\) with respect to the derivation \(\partial_{z}\) and the \(\lambda\)-bracket, defined by
\[[\varphi(w)_{\lambda}\psi(w)]=\mathrm{Res}_{z}(e^{\lambda(z-w)}[\varphi(z), \psi(w)])=\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}(\varphi(w)_{(n)}\psi(w))\.\]
Proof.: Step 1. _The second equality is true._ Indeed, the right hand side is equal to \(\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}\mathrm{Res}_{z}((z-w)^{n}[\varphi(z),\psi(w)])\) and, by the definition in Lemma 1.6.4(i), the latter is equal to \(\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}(\varphi(z)_{(n)}\psi(z))\).
Step 2. _The sum on the right hand side of the above equality lies in \(\mathrm{Fld}(V)\{r_{p}^{-1}\lambda\}\)._ Indeed, if \(p=1\), then \(|n!|=1\), and since the fields \(\varphi(z)\) and \(\psi(z)\) are mutually local, then \(||\varphi(z)_{(n)}\psi(z)||\to 0\) as \(n\to\infty\). This implies that the coefficient at \(\lambda^{n}\) tends to zero as \(n\) goes to infinity. Suppose \(p\) is a prime. Then the degree of \(p\) in the integer \(n!\) is at most \([\frac{n}{p}]+[\frac{n}{p^{2}}]+\ldots\leq\frac{n}{p-1}\). It follows that \(|n!|\geq|p|^{\frac{n}{p-1}}\) and, therefore, \(\frac{r_{p}^{n}}{|n!|}\leq 1\). Since the fields \(\varphi(z)\) and \(\psi(z)\) are mutually local, the claim follows.
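The estimate \(|n!|\geq|p|^{\frac{n}{p-1}}\) used in Step 2 amounts to Legendre's formula \(v_{p}(n!)=\sum_{i\geq 1}[\frac{n}{p^{i}}]\leq\frac{n}{p-1}\); a quick numerical spot-check (an illustration, not part of the text):

```python
def vp_factorial(n, p):
    """Exponent of p in n!, computed via Legendre's formula."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

for p in (2, 3, 5, 7):
    assert all(vp_factorial(n, p) <= n / (p - 1) for n in range(1, 2000))
print("v_p(n!) <= n/(p-1) holds on the sampled range")
```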
Step 3. _The equalities (L.1) hold._ Indeed, one has
\[[\partial_{w}\varphi(w)_{\lambda}\psi(w)]=e^{-\lambda w}\mathrm{Res}_{z}([e^{ \lambda z}\partial_{z}\varphi(z),\psi(w)])\.\]
Since \(e^{\lambda z}\partial_{z}\varphi(z)=\partial_{z}(e^{\lambda z}\varphi(z))- \lambda e^{\lambda z}\varphi(z)\) and \(\mathrm{Res}_{z}(\partial_{z}(e^{\lambda z}\varphi(z)))=0\), the first equality follows. Furthermore, one has
\[[\varphi(w)_{\lambda}\partial_{w}\psi(w)]=\mathrm{Res}_{z}(e^{\lambda z}[ \varphi(z),e^{-\lambda w}\partial_{w}\psi(w)])\.\]
Since \(e^{-\lambda w}\partial_{w}\psi(w)=\lambda e^{-\lambda w}\psi(w)+\partial_{w}( e^{-\lambda w}\psi(w))\), the second equality follows.
Step 4. _The equality (L.2) holds._ Indeed, \([\varphi(z)_{-\lambda-\partial_{z}}\psi(z)]\) is equal to
\[\sum_{n=0}^{\infty}\frac{(-\lambda-\partial_{z})^{n}}{n!}( \varphi(z)_{(n)}\psi(z))=\sum_{n=0}^{\infty}(-1)^{n}\sum_{i=0}^{n}\frac{1}{i!} \lambda^{i}\partial_{z}^{(n-i)}(\varphi(z)_{(n)}\psi(z))\] \[= \sum_{i=0}^{\infty}\frac{\lambda^{i}}{i!}\sum_{m=0}^{\infty}(-1)^{ m+i}\partial_{z}^{(m)}(\varphi(z)_{(m+i)}\psi(z))\.\]
By Corollary 2.2.8(ii), the inner sum is equal to \(-\psi(z)_{(i)}\varphi(z)\) and, therefore, the whole expression is equal to \(-[\psi(z)_{\lambda}\varphi(z)]\).
Step 5. _The equality (L.3) holds._ Indeed, by Step 1, the left hand side of (L.3) is equal to
\[[\varphi(z)_{\lambda}[\psi(z)_{\mu}\chi(z)]]=\mathrm{Res}_{z}\mathrm{Res}_{y}(e^ {\lambda(z-w)+\mu(y-w)}[\varphi(z),[\psi(y),\chi(w)]])\.\]
By the classical Jacobi identity, one has
\[[\varphi(z),[\psi(y),\chi(w)]]=[[\varphi(z),\psi(y)],\chi(w)]+[\psi(y),[\varphi(z),\chi(w)]]\.\]
Thus, the left hand side of (L.3) is a sum of
\[\operatorname{Res}_{y}(e^{(\lambda+\mu)(y-w)}[\operatorname{Res}_{z}(e^{\lambda(z -y)}[\varphi(z),\psi(y)]),\chi(w)])\,\]
which is equal to \([[\varphi(z)_{\lambda}\psi(z)]_{\lambda+\mu}\chi(z)]\), and of
\[\operatorname{Res}_{y}(e^{\mu(y-w)}[\psi(y),\operatorname{Res}_{z}(e^{\lambda( z-w)}[\varphi(z),\chi(w)])])\,\]
which is equal to \([\psi(z)_{\mu}[\varphi(z)_{\lambda}\chi(z)]]\). The required equality follows.
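For example (a worked illustration, not spelled out in the text), for the Virasoro field of §2.6 the \(n\)-products computed there (\(L(z)_{(0)}L(z)=\partial_{z}L(z)\), \(L(z)_{(1)}L(z)=2L(z)\), \(L(z)_{(3)}L(z)=\frac{C}{2}\), all other products with \(n\geq 0\) being zero) give
\[[L(w)_{\lambda}L(w)]=\partial_{w}L(w)+2\lambda L(w)+\frac{\lambda^{3}}{12}C\,\]
which is the familiar \(\lambda\)-bracket of the Virasoro Lie conformal algebra.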
**Corollary 2.7.3**.: If \(V\) is admissible, the subspace \(V^{\prime}\) is a Lie conformal \(K\)-algebra of radius \(r_{p}\) with respect to the translation operator \(T\) and the \(\lambda\)-bracket, defined by
\[[a_{\lambda}b]=\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}(a_{(n)}b)\.\]
Proof.: Since \(V\) is admissible, the field-state homomorphism \(fs:\operatorname{Fld}(V)\to V^{\prime}\) is an isomorphism of Banach \(K\)-modules. The statement follows from Proposition 2.7.2 and Lemma 2.2.2.
**Example 2.7.4**.: Let \(V\) be a commutative vertex \(K\)-algebra. Then \(\operatorname{Fld}(V)\subset V[[z]]\), and since \(\varphi(z)_{(n)}\psi(z)=0\) for all \(\varphi(z),\psi(z)\in V[[z]]\) and \(n\geq 0\), it follows that the \(\lambda\)-bracket on the associated Lie conformal \(K\)-algebra is zero.
|
2304.06204 | A Flexible Piezoresistive/Self-Capacitive Hybrid Force and Proximity
Sensor to Interface Collaborative Robots | Force and proximity sensors are key in robotics, especially when applied in
collaborative robots that interact physically or cognitively with humans in
real unstructured environments. However, most existing sensors for use in
robotics are limited by: 1) their scope, measuring single parameters/events and
often requiring multiple types of sensors, 2) being expensive to manufacture,
limiting their use to where they are strictly necessary and often compromising
redundancy, and 3) have null or reduced physical flexibility, requiring further
costs with adaptation to a variety of robot structures. This paper presents a
novel mechanically flexible force and proximity hybrid sensor based on
piezoresistive and self-capacitive phenomena. The sensor is inexpensive and
easy to apply even on complex-shaped robot structures. The manufacturing
process is described, including controlling circuits, mechanical design, and
data acquisition. Experimental trials featuring the characterisation of the
sensor were conducted, focusing on both force-electrical resistance and
self-capacitive proximity response. The sensor's versatility, flexibility,
thinness (1 mm thickness), accuracy (reduced drift) and repeatability
demonstrated its applicability in several domains. Finally, the sensor was
successfully applied in two distinct situations: hand guiding a robot (by touch
commands), and human-robot collision avoidance (by proximity detection). | Diogo Fonseca, Mohammad Safeea, Pedro Neto | 2023-04-13T00:45:29Z | http://arxiv.org/abs/2304.06204v1 | A Flexible Piezoresistive/Self-Capacitive Hybrid Force and Proximity Sensor to Interface Collaborative Robots
###### Abstract
Force and proximity sensors are key in robotics, especially when applied in collaborative robots that interact physically or cognitively with humans in real unstructured environments. However, most existing sensors for use in robotics are limited by: 1) their scope, measuring single parameters/events and often requiring multiple types of sensors, 2) being expensive to manufacture, limiting their use to where they are strictly necessary and often compromising redundancy, and 3) have null or reduced physical flexibility, requiring further costs with adaptation to a variety of robot structures. This paper presents a novel mechanically flexible force and proximity hybrid sensor based on piezoresistive and self-capacitive phenomena. The sensor is inexpensive and easy to apply even on complex-shaped robot structures. The manufacturing process is described, including controlling circuits, mechanical design, and data acquisition. Experimental trials featuring the characterisation of the sensor were conducted, focusing on both force-electrical resistance and self-capacitive proximity response. The sensor's versatility, flexibility, thinness (1 mm thickness), accuracy (reduced drift) and repeatability demonstrated its applicability in several domains. Finally, the sensor was successfully applied in two distinct situations: hand guiding a robot (by touch commands), and human-robot collision avoidance (by proximity detection).
Keywords: Force sensing resistor, force sensor, hybrid, piezoresistive sensor, proximity sensor, robotics.
## I Introduction
Advanced robots increasingly require more sensors to perceive their surroundings. Robot sensors are key to safe and intuitive human-robot interaction (HRI) as well as robot autonomy. It is often the case, however, that existing conventional sensors do not fit specific robots or application requirements. Conventional sensors are often unable to detect events of interest either due to insufficient scope, accuracy and/or robustness in working conditions. For example, vision sensors are constrained by light and suffer from occlusions. Other sensors may have excessive latency or limited repeatability. Frequently, researchers and engineers look for hybrid single sensor devices that can measure a wide scope of variables to detect events of interest, while being easy to install on a robot's structure (or in its surroundings). These sensors are, most often, not available. Existing devices frequently require costly alterations to reach hybrid, integrated solutions and they are usually not mechanically flexible, which further hampers their integration on structures with complex geometries.
The robotics and artificial intelligence markets are rapidly growing and sensors have a leading role in their success [1]. Sensors become particularly important in a context where robots operate in more unstructured environments, often requiring interaction with humans. Tactile/force and proximity sensing are much-desired features for robot safety and interaction [2]. The ISO/TS 15066 standard for collaborative robot safety significantly depends on the capability to detect the presence of humans or objects within the robot's surroundings, mainly for speed and separation monitoring [3]. In this context, hybrid force and proximity sensors, particularly ones that are easy to install on pre-existing robot systems, become extremely desirable.
This paper proposes a novel mechanically flexible and self-adhesive piezoresistive/self-capacitive hybrid force and proximity sensor that is both inexpensive and easy to apply even on complex-shaped surfaces. The development process is described here step by step, including design, manufacturing, complementary circuitry, and signal processing. Experiments featuring sensor characterisation were conducted, focusing on both its piezoresistive and self-capacitive operation modes. Finally, the proposed sensor is demonstrated in two distinct situations: 1) hand guiding a robot (by touch commands), and 2) collision avoidance (by proximity detection). From this work resulted the following contributions:
1. A novel piezoresistive/self-capacitive hybrid force and proximity sensor;
2. The sensor's structure is mechanically flexible and self-adhesive making it easy to install even on complex-shaped structures;
3. The sensor and its complementary circuitry are easy to manufacture, highly scalable and inexpensive (the manufacturing process is explained here step by step);
4. The sensor provides reasonable accuracy and repeatability concerning force and proximity measurements. The presence of a human hand is detected at distances up to 100 mm away from an 8 cm\({}^{2}\) sensor while the minimum detectable force is 0.5 N and the single point repeatability is about 11%;
5. The sensor's versatility, flexibility and accuracy demonstrated its potential as an interface for collaborative robots.
### _Related Work_
Dual-mode sensors (capacitive and inductive) have demonstrated the ability to measure tactile pressure of up to 330 kPa and sense proximity with a good spatial resolution (3 mm) at distances of up to 150 mm [4]. More recently, a dual proximity sensor combining the principles of inductive and capacitive proximity sensing was demonstrated, although with no pressure sensitivity [5]. Our proposed sensor requires significantly less complex signal processing and complementary hardware than those presented in both [4] and [5], albeit featuring a lower proximity resolution (30 mm). A soft Force Sensing Resistor (FSR) sensor capable of three-axial force measurements has been presented with interesting results, but is incapable of proximity detection [6]. Recent research has also shown the potential of using commercially available FSRs for proximity sensing, taking advantage of the electrodes already present in the FSR to perform capacitive proximity sensing [7]. A method for eliminating blind areas in piezoresistive sensor arrays has been presented, with moderate success [8]. A tactile sensor for smart prosthetics, based on giant magneto-impedance material has shown very interesting results [9]. Its scalability however may be limited, and proximity detection is not featured. Smart conductive textiles are increasingly used in wearable electronics. Proximity measurements have been demonstrated by measuring the electrostatic capacity in conductive flexible fabrics [10]. Recently, conductive elastomers were studied to address the challenges of sensing deformable soft structures [11]. Printable and stretchable elastic conductors, with initial conductivity of up to 738 Scm\({}^{-1}\), have been shown to endure up to 215% strain while maintaining a conductivity of 182 Scm\({}^{-1}\)[12]. Programmable sensor networks have been integrated modularly in the form of a tape (flexible electronics substrate) [13], achieving considerably higher precision than our proposed sensor. Our solution is, however, over 200 times less expensive (by area), easier to manufacture, and easier to scale. In a recent study, contact forces and proximity were measured by arrays of proximity sensors in flexible printed circuit boards embedded in silicone [14]. A robot skin composed of modularized hexagonally shaped cells is proposed in [15]. Each cell includes a set of sensors reporting vibrations (3D accelerometer), pressure (three capacitive force sensors), pre-touch (optical proximity sensor), and temperature (two temperature sensors). This skin has a wider scope than our proposed solution (which cannot measure neither temperature nor acceleration). Once again, both [14] and [15] achieve higher resolutions particularly in the proximity detection, but with limited mechanical flexibility and manufacturing costs two to three orders of magnitude higher than the sensor proposed in this study. Electrostatic ultrathin flexible pressure sensors were successfully demonstrated in health-monitoring applications such as breathing monitoring [16] and finger manipulation monitoring [17]. Flexible pressure sensors based on organic-transistor-driven active matrices have shown great potential in the past two decades. Features ranging from optical transparency and bending-insensitivity [18] to simultaneous pressure and temperature mapping [19] have been demonstrated with promising results. All these solutions are significantly more complex to manufacture, although low-cost production should be achievable through high-volume production processes.
### _Background and Design Principles_
When a conductive material is deformed, electrical resistance across it changes. Volume resistivity is often considered constant, making electrical resistance solely a function of geometric dimensions. This is the underlying principle behind various sensors, such as resistive strain gauges [20]. However, semiconductive polymer composites such as Caplinq's Linqstat or 3M's Velostat behave differently. They feature a suspension of conductive filler particles randomly dispersed in a nonconductive polymeric matrix. Under strain, those particles move closer together in random movements known as Micro-Brownian Motion [21], Fig. 1 a). This effect increases the number of contacting particles, creating more paths for electricity to flow through. Simultaneously, non-contacting particles become, on average, separated by shorter distances, elevating the number of electrons tunnelling through the non-conductive matrix between them [22]. Both mechanisms contribute to a great reduction of the material's electrical resistivity under strain.
Proximity sensors are usually based on optical, ultrasonic, or capacitive technology. We were aiming for a thin, highly scalable, and flexible sensor, so both ultrasonic and optical solutions were considered impractical. The self-capacitance phenomenon was deemed adequate to meet our design criteria. In Fig. 1 b) a simple RC circuit is shown where, after an initial transient period, a steady state is reached. The potential across the capacitor's terminals opposes the power supply voltage and, at that point, no electric current flows through the circuit. A similar circuit may be considered, Fig. 1 c), where after a certain time, both the power supply and the capacitor will also tend to reach symmetric potentials. The resistor R, wired in series, increases the transient period (note that the time constant is directly proportional to the circuit's resistance). Consider now a third circuit, Fig. 1 d), in which electrostatic energy is stored not inside a capacitor, but in an electric field created between an electrode and a nearby conductive surface, such as a human hand. Two digital pins in a microcontroller board are connected to the ends of the circuit. The first pin is turned high (+5V, in the case of an Arduino Uno), and the second pin is set to input (without pullup). A counter variable is incremented in a loop until the potential reaching the second pin triggers a high state (at around 2.5V). The final value of this counter variable will be directly proportional to the circuit's time constant and, in turn, to the circuit's capacitance (considering R to be constant). Higher resistance values increase the transient period, allowing for more sensitive measurements at the expense of longer response times. Capacitance is inversely proportional to the distance between the hand and the electrode, but it can no longer be determined by the parallel-plate capacitor formula as a plethora of other variables become involved (skin conductivity, air humidity, interference from the surrounding environment, etc.). Voltage measurements are also dependent on the reference ground values, which may vary. This method is relatively imprecise for distance measuring, but it enables reliable human presence detection, is easy to implement, and requires only basic hardware, while also being highly scalable.

Fig. 1: a) Compression of a semiconductive polymer, b) Resistor-Capacitor (R-C) circuit, c) R-C circuit with common ground, d) Self-capacitive proximity sensing circuit.
## II Piezoresistive/Self-Capacitive Sensor
### _Design_
The development of the proposed piezoresistive/self-capacitive hybrid sensor was driven by the following design goals:
1. Easy and inexpensive manufacturability, with minimal specialized tooling;
2. Good tactile resolution;
3. Force sensing range of up to 15 N;
4. Human presence detection at distances up to 100 mm;
5. Mechanical flexibility and reduced thickness.
Two sensor variants were developed: one featuring an FSR array with 16 prexels (i.e., unitary pressure-sensitive elements) of 48 mm\({}^{2}\) each, arranged in a 2 \(\times\) 8 configuration, and a second, higher-resolution one, featuring 64 prexels of 81 mm\({}^{2}\) each, arranged in an 8 \(\times\) 8 configuration. The blind area of each sensor can be practically eliminated by design (ensuring short distances between adjacent prexels) or by application of the connected structure-based method presented by L. Wang [8]. We made our sensors compatible with Wang's method by designing a continuous layer of piezoresistive material (Fig. 2). The area of each prexel was made similar to that of the tip of a human finger, ensuring adequate spatial resolution for tactile robot control (8.8 mm periodicity in the 8 \(\times\) 8 version). A final design consideration was the area of the self-capacitive electrode to guarantee effective proximity detection. The maximum detectable distance should be of the order of the size of the electrode. We implemented an 8 cm\({}^{2}\) electrode in the 64 prexel sensor, aiming for our goal of a 100 mm proximity detection range.
### _Manufacturing_
The manufacturing process is composed of nine steps, including assembly of all the sensor's elements, Fig. 2. These steps are presented here for the 16 prexel variant, being analogous for the 64 prexel one:
1. Copper tape and Velostat pieces were cut to form the sensor's internal elements. These include 8 prexel column electrodes, 2 prexel row electrodes, 1 Velostat layer and 1 self-capacitive copper electrode, Fig. 3 a);
2. Enamelled copper wires were soldered to each copper electrode, Fig. 3 b). It is important to avoid the flow of solder to areas that will be in direct contact with the semiconductive polymer. Solder build-ups outside those areas should also be kept to the bare minimum, as they will create pressure concentration zones and promote layer separation, decreasing the sensor's low-pressure sensitivity through a sharp increase in the electrical contact resistance between layers. Finally, all copper parts were thoroughly cleaned with isopropyl alcohol, both before and after the soldering process;
3. To make the sensor's base layer, a strip of double-sided tape was unrolled on top of a flat surface with its adhesive side up and held in place with strips of painter's tape. We used a pencil to mark the measurements, Fig. 4 a);
4. The 8 column electrodes were glued to the base layer, Fig. 4 b). Their enamelled wires were also twisted and glued to the base layer beneath, Fig. 4 c). All wires must be inside the sensor's design dimensions;
5. The Velostat layer was laid on top of the 8 column electrodes, Fig. 4 d). It is essential to ensure proper placement at the first attempt. Unsticking and relocating the polymer may result in contamination of its surface with adhesive residues;
6. A second strip of double-sided tape was placed on the workbench with its uncovered sticky side up. This will be the sensor's middle insulating layer, separating the FSR array's row electrodes from the self-capacitive electrode. After appropriate measurements, both of the array's row electrodes were glued in place, Fig. 4 e);
7. The middle insulating layer, with the 2 previously installed electrodes, was placed on top of the Velostat layer and the lower cover of the adhesive was removed, Fig. 4 f);
8. The self-capacitive electrode was then installed, and all wires were arranged as flat as possible to reduce the risk of layer delamination, Fig. 4 g). A heat shrink tube was installed to provide further insulation and surface protection to the enamelled wires, Fig. 4 h);
9. The sensor was cut to shape and covered with its final layer of polycoated matt cloth tape which will provide surface protection, Fig. 4 i). The fully assembled prototype is shown in Fig. 4 j).
### _Data Acquisition System_
The Data Acquisition System (DAQ) is composed of three main components:
1. A custom-made Arduino shield;
2. The Arduino itself, which runs the sensor's transfer functions and communicates with an external serial client;
3. An external computer, which receives sensor data and interfaces with the robot's controller.
The shield, circuit in Fig. 5, features two 3-bit multiplexers (STMicroelectronics M74HC4051) to address the column-row pairs in the FSR array. This enables a maximum array size of \(2^{3}\,\times\,2^{3}\,=64\) prexels. Higher-bit multiplexers will enable exponentially larger sensor arrays. We used a voltage divider circuit with a reference resistor of 1 k\(\Omega\) to measure the electrical resistance across each prexel. A 1 M\(\Omega\) resistor, connected between the sending and receiving pins of the proximity sensing circuit, allows for adequate proximity sensitivity.
The Arduino (Uno) was used to: 1) establish the connection to the serial client, 2) measure the transient period of the self-capacitive circuit (and apply a transfer function to determine the approximate distance in mm), 3) address both multiplexers to connect to each prexel in the FSR array, 4) measure voltage drop across each prexel, calculate corresponding resistance and apply a transfer function to determine the force being applied, 5) send all values to the serial client, and 6) return to point 2).
Fig. 4: a) Base layer, b) Glueing the column electrodes of the FSR array, c) Wire management (twisted and glued to the base layer beneath), d) Semiconductive polymer installation, e) Lower and upper halves of the FSR array, before assembly, f) Completed FSR array, g) Self-capacitive electrode, h) Heat shrink tube installation, i) Cutting the sensor assembly to final dimensions, j) Completed Sensors.

Self-capacitance measurements were performed using the algorithm developed by Paul Badger and released to the public domain in the form of the Arduino "CapacitiveSensor" Library (version v0.4) [23]. In short, a counter variable is first set to zero. Then, the send pin is set to high (+5V) and the counter is incremented in a loop until the self-capacitive electrode's potential reaches +2.5V, causing the receive pin's state to turn high. This charge cycle is then repeated a predefined number \(n\) of times to smooth out high-frequency noise (we found 70 to be appropriate), with the counter variable being further incremented in each cycle. The increment frequency of the counter variable was determined to be \(f=270\) kHz, so each unitary counter value corresponds to approximately 3.7 \(\mu\)s. The counter's final value is then used as a relative measure of transient time and thus a relative measure of capacitance, according to:
\[C=\frac{-\Delta t}{R\times\ln(1/2)}=\frac{-counter}{n\times f\times R\times\ln(1/2)} \tag{1}\]
where \(C\) is the capacitance, \(\Delta t\) is the transient period, \(R\) is the value of the resistor connected in series with the self-capacitive electrode, \(n\) is the number of charge cycles and \(f\) is the frequency at which the counter variable is incremented during each charge cycle. Every time someone approaches the sensor, its self-capacitance increases, resulting in higher transient times and thus higher counter values.
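For illustration, the conversion from the raw counter value to a capacitance estimate via Eq. (1) can be sketched as follows. This is a host-side Python sketch rather than the actual Arduino firmware; the values of \(R\), \(n\), and \(f\) are the ones quoted in the text.

```python
import math

def counter_to_capacitance(counter, R=1e6, n=70, f=270e3):
    """Estimate the electrode capacitance from the counter value, following Eq. (1).

    counter : final counter value accumulated over n charge cycles
    R       : series resistance in ohms (1 MOhm in the DAQ shield)
    n       : number of charge cycles per reading (70 in our setup)
    f       : counter increment frequency in Hz (~270 kHz)
    """
    delta_t = counter / (n * f)              # mean transient time per cycle, in seconds
    return -delta_t / (R * math.log(0.5))    # capacitance in farads, Eq. (1)
```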
Finally, an external computer was used to receive real-time data from the Arduino using a MATLAB-based platform, allowing for simultaneous real-time robot motion control using the KUKA Sunrise Toolbox (KST) [24].
## III Sensor Characterization and Experiments
A comprehensive theoretical approach for FSR modelling was proposed in [25]. However, it requires advanced material analysis techniques which may be inaccessible to some researchers. For that reason, we propose an empirical approach. The characterisation of the self-capacitive proximity response was achieved using a semi-empirical model. Table I details some of the main specifications and characteristics of the sensor.
### _FSR Empirical Model_
#### III-A1 Force-Electrical Resistance
The force-electrical resistance relationship was evaluated using a compression test stand designed for the purpose, Fig. 6. It features a rubber pressure test tip designed to closely emulate the area and pressure distribution of a human finger touching the sensor. In the first experiment, the sensor was subjected to forces ranging from 0 to 15 N in step increments of approximately 0.4 N. The resulting values of electrical resistance were recorded and plotted in Fig. 7. The test was repeated 6 times at an average room temperature of 25\({}^{\circ}\)C. Operation at higher temperatures will increase the sensor's force sensitivity, due to the softening of the piezoresistive polymer layer. This phenomenon was not modelled in this study, and no significant variation was observed in the 18\({}^{\circ}\)C to 25\({}^{\circ}\)C range. The sensor is expected to endure operation at temperatures ranging from -50\({}^{\circ}\)C up to 80\({}^{\circ}\)C, according to specifications provided by the manufacturers of both the piezoresistive layer (3M) and the adhesive tapes forming the sensor's structure (Advance Tapes).

Fig. 5: DAQ circuit diagram (center) and assembled DAQ system (right).

Fig. 6: Experimental setup of the force-electrical resistance compression/hysteresis test.
Force-conductance data show an approximately linear behaviour, making linear regressions a good solution to obtain simple and computationally inexpensive empirical models. There are, however, some pointwise exceptions consistent across all 6 recorded data sets. These deviations occur mainly due to the layer delamination caused by manufacturing imperfections inside the sensor. To account for these deviations, high-degree polynomial regressions can be used to better fit the force-conductance data. Choosing between linear and higher-degree regressions requires a balance between computational cost and model accuracy. Also, as expected, the sensor proved to be inaccurate when subjected to forces lower than 0.5 N, mainly due to the delamination phenomena. Small pockets of air get between the semiconductive polymer and the electrodes inside the sensor, creating an unpredictable increase of contact electrical resistance. We did not take these low force/conductance values into consideration in our model, as they represent an unpredictable behaviour and were considered non-significant data. For our specific model, a high-degree polynomial was fitted to the average force-conductance values across all 6 tests, and the standard deviations for each point were calculated. Confidence intervals of 95% were also calculated, assuming multiple measures of the same force follow a normal distribution, Fig. 8.
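A minimal Python sketch of this fitting procedure is given below, assuming the six repeated tests are stored as arrays sampled on a common force grid; the polynomial degree is a free choice, since the text only specifies a "high degree" polynomial.

```python
import numpy as np

def fit_force_conductance(force, conductance_runs, degree=6):
    """Fit a polynomial to the mean force-conductance curve and compute
    pointwise 95% confidence intervals on the mean, assuming repeated
    measurements at each force follow a normal distribution.

    force            : (m,) applied forces in N (common grid for all runs)
    conductance_runs : (k, m) conductance values, one row per repeated test
    """
    mean_g = conductance_runs.mean(axis=0)
    std_g = conductance_runs.std(axis=0, ddof=1)
    model = np.poly1d(np.polyfit(force, mean_g, degree))
    half_width = 1.96 * std_g / np.sqrt(conductance_runs.shape[0])
    return model, mean_g - half_width, mean_g + half_width
```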
#### III-A2 Signal Drift
In a second experiment, a constant force was applied during a 3-hour test, to determine the sensor's signal drift. The data resulting from this test show an asymptotic decrease in electrical resistance over time, Fig. 9. Velostat's electrical resistance is mostly a function of material strain. This explains why the graph in Fig. 9 shows a similar behaviour to the stress-time graph of a viscoelastic material undergoing stress relaxation. Such behaviour can be modelled by a critically damped spring-damper system, whose position could be determined by:
\[x(t)=(x_{0}+(\dot{x}_{0}+\omega_{n}x_{0})\times t)\times e^{-\omega_{n}t} \tag{2}\]
where \(x_{0}\) is the system's initial position, \(\dot{x}_{0}\) is the system's initial velocity and \(\omega_{n}\) is the system's natural angular frequency. Adapting (2) to model the signal drift of our sensor, we have:
\[R(t)=(\Delta R+(A+B\times\Delta R)\times t)\times e^{-Bt}+R_{0}-\Delta R \tag{3}\]
where \(R_{0}\) is the initial electrical resistance, \(\Delta R\) is the difference between the initial and final electrical resistance measured, and \(A\) and \(B\) are parameters that can be determined by a numerical method. The previous force-resistance and resistance-time models were then combined, generating a complete three-dimensional empirical model, Fig. 10.
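As a sketch of how \(A\) and \(B\) can be obtained numerically, Eq. (3) can be fitted to a constant-load resistance trace with a standard least-squares routine; the starting guesses below are arbitrary and only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def drift_model(t, A, B, R0, dR):
    """Signal-drift model of Eq. (3):
    R(t) = (dR + (A + B*dR)*t) * exp(-B*t) + R0 - dR."""
    return (dR + (A + B * dR) * t) * np.exp(-B * t) + R0 - dR

def fit_drift(t, R):
    """Fit A and B to a constant-force resistance-time trace.
    R0 and dR are fixed directly from the first and last samples."""
    R0, dR = R[0], R[0] - R[-1]
    (A, B), _ = curve_fit(lambda tt, A, B: drift_model(tt, A, B, R0, dR),
                          t, R, p0=(1.0, 1e-3))   # arbitrary starting guesses
    return A, B
```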
### _FSR Dynamic Response_
#### III-B1 Hysteresis
Using the same setup shown in Fig. 6, 16 load-unload cycles were performed with forces ranging from 0 to 15 N at a rate of approximately 0.7 N/s, Fig. 11. Hysteresis errors ranged from 12 to 17%, averaging 15% across all 16 cycles. These values are in line with those of commercially available FSRs, suggesting that our sensor's additional layers and capacitive electrode do not significantly affect performance.
#### III-B2 Step Response
We have also evaluated the sensor's response to a tactile step load. The inherent elasticity of either a human finger or the rubber test-tip in the experimental setup would overshadow the sensor's response delay, so a rigid 1.1 kg wooden sphere was used instead of the rubber test-tip. The sphere was rested on top of one prexel, with a nylon cable supporting part of its weight. Then, the cable was cut so that the whole weight of the sphere was applied to the sensor. Measured conductance-time data are shown in Fig. 12. As would be expected from any FSR, conductance values in Fig. 12 are significantly higher than the ones previously observed in Fig. 7. This is due to the higher pressure created by the sphere's small contact area. The delay time (from 0% to 50% of the steady state value) is 4.8 ms and the rise time (from 10% to 90% of the steady state value) is 34.5 ms, Table I.
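The delay and rise times quoted above can be extracted from a conductance-time trace as in the following sketch, where the steady-state level is estimated from the tail of the recording (an assumption of this example):

```python
import numpy as np

def step_response_metrics(t, g):
    """Delay time (0% -> 50% of steady state) and rise time (10% -> 90%)
    for a conductance-time trace g(t) of a step load applied at t[0]."""
    g_ss = g[-max(len(g) // 10, 1):].mean()    # steady-state level from the tail
    t50 = t[np.argmax(g >= 0.50 * g_ss)]
    t10 = t[np.argmax(g >= 0.10 * g_ss)]
    t90 = t[np.argmax(g >= 0.90 * g_ss)]
    return t50 - t[0], t90 - t10               # delay time, rise time
```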
### _Self-Capacitive Proximity Response_
The 16 prexel (15 \(\times\) 200 mm\({}^{2}\)) sensor proved reasonably effective at detecting the presence of a human hand up to 50 mm away, while the larger 64 prexel variant (90 \(\times\) 90 mm\({}^{2}\)) could detect presence at 100 mm. Being hybrid sensors, featuring both FSR and self-capacitive modes, they are able to distinguish between the presence and the touch of a human hand in their surroundings. As previously discussed, proximity readings start as a dimensionless time value, the counter. A semi-empirical model will directly correlate this value with distance values, in millimetres. Three different experiments were conducted.
In the first experiment, both sensors were placed on top of a non-conductive surface without any objects in close proximity. Proximity data were then recorded. This allowed us to determine a base value for each sensor. Results were an average counter value of 1486, with a standard deviation of 29 for the 16 prexel sensor, and 1610 with a standard deviation of 9.6 for the 64 prexel variant.
In the second experiment, a human hand gradually approached the 64 prexel sensor until physical contact was established. Real-time hand distance was recorded using an electromagnetic tracker (Polhemus Liberty), which is accurate within \(\pm\) 0.5 mm. The whole process was repeated 4 times. Sensor counts plotted against tracker readings can be seen in Fig. 13. A 6th-order Butterworth low-pass filter with a cutoff frequency of 1 Hz proved effective at smoothing the signal, although higher cutoff frequencies may be required depending on the specific application. The voltage across a charging capacitor can be determined by:
\[V(t)=V_{\infty}\left(1-\exp\left(\frac{-t}{RC}\right)\right) \tag{4}\]
where \(V(t)\) is the voltage across the capacitor, \(V_{\infty}\) is the applied voltage, \(R\) and \(C\) are the circuit's resistance and capacitance, respectively.
From (4) it can be shown that the time necessary to reach a certain voltage across a capacitor is linearly dependent on its capacitance (i.e., the time constant \(\tau\propto C\)). In a parallel plate capacitor, C is inversely proportional to the distance between electrodes. It is reasonable to assume that our sensor's self-capacitance is also inversely proportional to the hand's distance from it. Finally, the value of the counter variable is proportional to time, leading to:
\[counter=n\times f\times\Delta t=\frac{a}{x}+b \tag{5}\]
where \(n\) is the number of charge cycles, \(f\) is the incrementing frequency of counter, \(\Delta t\) is the transient period, \(x\) is the distance from the hand to the sensor, \(b\) is the previously determined counter base value, and \(a\) is a calibration parameter which can be determined experimentally using sensor readings over known distances. Experiments showed \(a\approx 2b\) to be a good first approximation. Fig. 13 shows the application of (5) to experimental data.

Fig. 11: Hysteresis curve during loading and unloading (16 cycles).

Fig. 12: Dynamic step response to point load.

Fig. 10: Three-dimensional FSR model considering stress relaxation.
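A minimal Python sketch of the distance estimation is given below, combining the low-pass filtering described above with the inversion of Eq. (5); the function name and the use of SciPy are assumptions of this sketch (the original implementation runs on the Arduino and a MATLAB serial client).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def counter_to_distance(counter, b, a=None, fs=10.0):
    """Estimate hand distance (mm) from a stream of proximity counter values.

    b  : counter base value recorded with nothing in proximity range
    a  : calibration parameter of Eq. (5); a ~ 2*b is the first approximation
    fs : proximity sampling rate in Hz (10 Hz in the robot experiments)
    """
    if a is None:
        a = 2.0 * b
    # 6th-order low-pass Butterworth with 1 Hz cutoff, as used to smooth the signal
    bf, af = butter(6, 1.0, btype="low", fs=fs)
    smoothed = filtfilt(bf, af, counter)
    # invert counter = a/x + b, guarding against readings at or below the base value
    return a / np.clip(smoothed - b, 1e-6, None)
```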
The third and final experiment was similar to the second, but with several objects instead of a human hand approaching the sensor: a small anodized aluminium rod, an aluminium calliper, a steel hammer and a 500 ml plastic water bottle (full). The sensor could not reliably detect the presence of any of these objects. This represents a limitation of the proposed sensor, but some interesting side-effects arise: the sensor becomes capable of easily distinguishing the touch of a human hand from the touch of any of these objects, by comparing values between its piezoresistive and self-capacitive modes of operation.
## IV Application in Real Robotic Systems
The sensor was applied in two distinct robotic case studies: guiding a robot by hand (using tactile commands) and human-robot collision avoidance (measuring hand proximity). Owing to its flexibility, the 16 prexel sensor was installed around the robot end-effector as if it were masking tape, Fig. 14. The collaborative robot is a 7-DOF KUKA iiwa R800 equipped with the Sunrise controller and interfaced using the KST Toolbox. Data were acquired from the sensor at 10 Hz for proximity readings, and 100 Hz for tactile robot control.
### _Robot Hand Guidance_
For the robot hand-guiding application, force-sensing measurements are acquired, treated and transformed into robot motion commands, i.e., translations in Cartesian space. Proximity sensing was also used to trigger the robot hand-guiding control mode.
The 16 prexel sensor variant features a 2 \(\times\) 8 FSR array with 2 rows (A and B) and 8 columns (1, 2,..., 8), Fig. 14. Each column of the array, when touched, triggers a corresponding robot motion in x-y Cartesian space. Each finger touching the sensor spreads the pressure between one or both rows in the array (A and B), Fig. 14. Only the maximum force, applied to prexel (A, \(n\)) or prexel (B, \(n\)), is considered. Adding forces sensed across both rows would result in overmeasures. The sensor's width (15 mm) ensures that at least one of the two prexels in each column is fully covered by the human finger. To hand guide the robot along the z-axis, we calculate the pressure difference between both rows of prexels. The human finger tends to touch the sensor in the centre (the sensor's 15 mm width forces that scenario), leading to an approximately equal pressure distribution between FSR array rows. When the hand tries to push the robot upwards, or downwards, a moment is created between the skin's surface (where a traction force is applied) and the bone inside. This moment changes the balance of pressures, which can be calculated and used to indicate the human's intention to move the robot end-effector along the z-axis. The sensor is calibrated by attaching a reference frame to its structure (matrix of prexels whose position is fixed and known). The relative position of each prexel is calculated in relation to the robot's local reference frame using homogeneous transformations. The sensor is stuck around the robot's tool flange in a known position and orientation relative to the robot's end-effector reference frame, Fig. 17 b).
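A simplified sketch of this command logic is given below; the gains and the reduction of the row imbalance to a single z command are illustrative assumptions, and the mapping from the active column to a Cartesian direction depends on the calibration described above.

```python
import numpy as np

def hand_guidance_command(F, gain_xy=1.0, gain_z=1.0):
    """Translate a 2x8 force map into hand-guiding commands.

    F : (2, 8) array of prexel forces, rows A and B, columns 1..8.
    Only the maximum of each column is used for the x-y command (adding both
    rows would over-measure a finger that spans them); the A-B imbalance
    indicates the intention to move along the z-axis.
    """
    col_force = F.max(axis=0)                  # per-column force, rows collapsed
    active_col = int(np.argmax(col_force))     # column currently pressed hardest
    v_xy = gain_xy * col_force[active_col]     # magnitude of the x-y translation
    v_z = gain_z * float((F[0] - F[1]).sum())  # row imbalance -> up/down motion
    return active_col, v_xy, v_z
```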
The experimental test is shown in time-lapse, Fig. 17 b). The robot end-effector position along the x-axis is plotted against sensor force measurements in Fig. 15. This analysis along a single axis, x-axis in this case, facilitates the visualization and understanding of the system's behaviour which can be extrapolated to the other axes.
Five different users indicated they found the hand-guiding process to be more intuitive than the traditional teach-in robot programming (using the teach pendant). Three users are researchers and two are project engineers, all of them with basic skills in robotics. They were informed that the robot's end-effector could be moved by simply pushing on its tool flange (where the sensor was installed), without further explanation. As the robot started to move, all users intuitively pushed or pulled the robot flange in different directions, immediately and intuitively realising how the system behaves/operates.

Fig. 14: For x-y axis robot commands, only the maximum forces sensed across all rows of each column of the FSR array are considered, to avoid accidental overmeasures.

Fig. 13: Second experiment for the detection of a hand at different distances from the sensor. The hand is reliably detected at distances up to 100 mm.
### _Human-Robot Collision Avoidance_
In order to avoid human-robot collisions, the hand proximity to the robot was monitored using the sensor's self-capacitive operational mode. For this specific experimental test, the sensor was attached to the robot gripper on a relatively flat surface. Experiments demonstrated that it can be applied in various locations of the robot arm, on both flat and moderately curved surfaces, while maintaining effective monitoring. Fig. 17 c) shows that when the sensor detects a human hand in close proximity, a command is issued to the robot controller, immediately moving the robot to a safe, pre-determined position. The system latency is approximately 100 ms and the maximum human-robot relative speed during the test was 60 mm/s. Fig. 16 shows the distance between the hand and the robot, measured by both the proposed sensor and an electromagnetic tracker (Polhemus Liberty), as well as the robot end-effector position. All data are unfiltered.
Results demonstrate that the sensor provides human-robot relative proximity data, allowing the robot to react and avoid collisions.
### _Sensor Robustness and Shortcomings_
The sensor showed significant deviations (over 50%) in proximity measurements when placed at less than 100 mm from the robot's actuators (while the robot was moving). Preliminary simulations show a vast reduction in electrostatic potential behind the shielded side of the \(8\times 8\) sensor, Fig. 18 d). This is expected to increase the sensor's robustness to electrostatic interference originating from behind it. Although implementation of this feature requires no modification to the sensor, further development of the DAQ circuit will be required. The current circuit is unable to simultaneously address all the FSR electrodes, a feature which would be necessary to use them as active shields.
Stress concentration and pre-stress may occur when the sensor is applied to curved surfaces. The sensor was mounted on a 30 mm constant-radius curved surface. In this case, we simply tared the sensor and achieved results within the same \(\sim\)10% error margin obtained over a flat surface. Creasing prexels around sharp edges should be avoided where possible to prevent stress concentration in the piezoresistive polymer. In cases where this is not feasible, local calibration of the creased prexels must be performed, and a decrease in maximum measurable force is to be expected due to the increased stress resulting from the reduced contact area between the object/human and the edge on which the sensor is applied. When applied to complex geometry surfaces, the normal distance between a hand within proximity range and the sensor varies according to surface topography, causing self-capacitance measurements to fluctuate accordingly. This effect can be simulated using the method of moments for electrostatics [26]. To illustrate this point, the capacitance on the 8 cm\({}^{2}\) electrode was calculated with another 8 cm\({}^{2}\) grounded conductor placed within proximity range, simulating a hand at distances ranging from 10 mm up to 150 mm, Fig. 18 a). The same simulation was then repeated with the self-capacitive electrode bent 90\({}^{\circ}\) and 135\({}^{\circ}\) around a 5 mm curvature radius, Fig. 18 b) and Fig. 18 c), respectively. Results are presented in Fig. 19, where the capacitance reduction with increasing curvature becomes clear. In practical terms, this results in the sensor not being capable of accurately determining hand-to-surface distance when applied to highly curved surfaces, being however still moderately effective at detecting hand presence within its proximity range.
## V Conclusion and Future Work
This paper has presented a novel mechanically flexible piezoresistive/self-capacitive hybrid force and proximity sensor, which proved to be multifunctional, easy to fabricate, highly inexpensive and easy to apply to complex-shaped robot structures. These characteristics demonstrated its potential application in collaborative robot interfaces. Experimental results, for both the piezoresistive and self-capacitive operation modes, showed the sensor's versatility, flexibility (1 mm thickness), accuracy (reduced drift) and repeatability. Its fabrication method is simple and highly scalable while being flexible enough to accept process variations. Finally, the sensor was successfully applied in real-world scenarios, namely robot hand guiding and human-robot collision avoidance. Future work will be dedicated to testing 3D printer-based manufacturing processes based on conductive filaments and to improving the sensor's proximity range and robustness to electromagnetic interference by utilizing the aforementioned FSR electrodes as active electrostatic shields during proximity measurement cycles.

Fig. 16: Human-robot collision avoidance demonstration. The 64 prexel sensor triggers an emergency command to the robot when a human hand reaches within 100 mm of the robot's end-effector. All data are unfiltered.

Fig. 15: Data recorded during robot hand guiding demonstration. Plotted forces and end-effector position are both relative to the x-axis direction, the robot's response being analogous for the y-axis and z-axis directions.
|
2310.15481 | Detecting anomalous images in astronomical datasets | Environmental and instrumental conditions can cause anomalies in astronomical
images, which can potentially bias all kinds of measurements if not excluded.
Detection of the anomalous images is usually done by human eyes, which is slow
and sometimes not accurate. This is an important issue in weak lensing studies,
particularly in the era of large scale galaxy surveys, in which image qualities
are crucial for the success of galaxy shape measurements. In this work we
present two automatic methods for detecting anomalous images in astronomical
datasets. The anomalous features can be divided into two types: one is
associated with the source images, and the other appears on the background. Our
first method, called the Entropy Method, utilizes the randomness of the
orientation distribution of the source shapes and the background gradients to
quantify the likelihood of an exposure being anomalous. Our second method
involves training a neural network (autoencoder) to detect anomalies. We
evaluate the effectiveness of the Entropy Method on the CFHTLenS and DECaLS DR3
data. In CFHTLenS, with 1171 exposures, the Entropy Method outperforms human
inspection by detecting 12 of the 13 anomalous exposures found during human
inspection and uncovering 10 new ones. In DECaLS DR3, with 17112 exposures, the
Entropy method detects a significant number of anomalous exposures while
keeping a low false positive rate. We find that although the neural network
performs relatively well in detecting source anomalies, its current performance
is not as good as the Entropy Method. | Pedro Alonso, Jun Zhang, Xiao-Dong Li | 2023-10-24T03:17:36Z | http://arxiv.org/abs/2310.15481v1 | # Detecting anomalous images in astronomical datasets
###### Abstract
Environmental and instrumental conditions can cause anomalies in astronomical images, which can potentially bias all kinds of measurements if not excluded. Detection of the anomalous images is usually done by human eyes, which is slow and sometimes not accurate. This is an important issue in weak lensing studies, particularly in the era of large scale galaxy surveys, in which image qualities are crucial for the success of galaxy shape measurements. In this work we present two automatic methods for detecting anomalous images in astronomical datasets. The anomalous features can be divided into two types: one is associated with the source images, and the other appears on the background. Our first method, called the Entropy Method, utilizes the randomness of the orientation distribution of the source shapes and the background gradients to quantify the likelihood of an exposure being anomalous. Our second method involves training a neural network (autoencoder) to detect anomalies. We evaluate the effectiveness of the Entropy Method on the CFHTLenS and DECaLS DR3 data. In CFHTLenS, with 1171 exposures, the Entropy Method outperforms human inspection by detecting 12 of the 13 anomalous exposures found during human inspection and uncovering 10 new ones. In DECaLS DR3, with 17112 exposures, the Entropy method detects a significant number of anomalous exposures while keeping a low false positive rate. We find that although the neural network performs relatively well in detecting source anomalies, its current performance is not as good as the Entropy Method.
methods: analytical - methods: data analysis - techniques: image processing - surveys - telescopes - cosmology: gravitational lensing

Pedro Alonso, Jun Zhang, Xiao-Dong Li
## 1 Introduction
Anomalous images can be caused by atmospheric, environmental, or instrumental conditions. They can be categorized into source and background anomalies. Source anomalies are prominent in the vicinity of bright sources, distorting their shapes, whereas background anomalies generally manifest systematic wave-like patterns that distort the entire background of the CCD images. Fig. 1 and fig. 2 show several examples of source and background anomalies, respectively.
Anomalous images can be a source of systematics in many astronomical measurements. They could be a particularly important issue in weak lensing studies if mistakenly included in the dataset. This is because the cosmic shear signal is only of the order of a few percent and is highly sensitive to the quality of the images. In past surveys, the number of exposures was small enough to check each one individually and discard the anomalous ones. More recent surveys, such as SDSS (York et al., 2000), DES (The Dark Energy Survey Collaboration, 2005), CFHTLenS (Heymans et al., 2013), HSC (Aihara et al., 2018), or KiDS (de Jong et al., 2013), typically produce tens of thousands of exposures, making human inspection impractical. As a result, the development of algorithms that can automatically detect anomalous images is becoming crucial.
Anomaly detection techniques are typically classified as supervised or unsupervised. Supervised methods use labeled data to directly learn the difference between normal and anomalous data. Common supervised anomaly detection techniques include support vector machines
(SVMs: Hearst et al., 1998) and neural networks. In most realistic scenarios, however, the amount of anomalous data is generally much smaller than that of normal data, making it difficult to collect enough anomalous data for training. Therefore, anomaly detection algorithms have to rely solely on the information about normal data. Unsupervised learning algorithms first learn the common features that describe normal data and then use an anomaly detector to judge how similar new data is to the normal one. If the features of the new data are very different from those of normal data, it is classified as anomalous. Well-known unsupervised techniques used in anomaly detection include principal component analysis (PCA: Shlens, 2014), isolation forest (Liu et al., 2008), autoencoders (Rumelhart et al., 1986; Michelucci, 2022), k-Nearest Neighbors (KNN: Peterson, 2009), and K-means (MacQueen, 1967). In particular, visual anomaly detection is an active field of machine learning (see Yang et al. (2021) for a review) with applications in many fields, including astronomy (Han et al., 2022).
In this project, we present two methods to detect anomalous images in astronomical datasets. Our first method is analytical, and we call it the Entropy Method. Our second method uses a neural network called autoencoder to detect anomalies in an unsupervised way. We show the performances of the two methods and compare them. We search for anomalies in the CFHTLenS and DECaLS DR3 datasets.
The structure of this paper is as follows. §2 describes our sources of data and introduces source and background anomalies. §3 presents the Entropy Method and its results. In §4 we introduce our deep learning method and its results, and we conclude in §5.
## 2 Data and Examples of Anomalies
In this paper, our main focus is on identifying anomalous exposures within two significant astronomical datasets. First, we use data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS: Heymans et al., 2012; Erben et al., 2013), which was specifically designed for weak lensing studies as part of the CFHT Legacy Survey. Second, we use data from the third data release of the Dark Energy Camera Legacy Survey (DECaLS DR3). DECaLS is one of the imaging projects of the Dark Energy Spectroscopic Instrument (DESI: DESI Collaboration et al., 2016) and it is extensively used in many astronomical studies, including weak lensing.
CFHTLenS includes optical data in 4 fields (W1, W2, W3, W4), covering a total of 154 deg\({}^{2}\) in 5 optical bands (u, g, r, i, z), with the i-band showing better seeing conditions than the rest. We search for anomalies in the i-band of CFHTLenS, which consists of 1171 exposures. Each exposure is composed of 36 CCD images.
DECaLS DR3 includes optical data in three bands (g, r, z), covering a total of 4300 deg\({}^{2}\) in g-band, 4600 deg\({}^{2}\) in r-band, and 8100 deg\({}^{2}\) in z-band. We search for anomalies in the g, r, and z bands, which are composed of 3919, 5712, and 7481 exposures, respectively. Each exposure consists of 61 CCD images.
The i-band of CFHTLenS has undergone visual inspection, enabling us to directly benchmark the performance of our methods against human inspection. Furthermore, DECaLS DR3 provides us with a huge number of exposures to further test the performance of our methods. Combining the datasets from CFHTLenS and DECaLS DR3 forms an extensive and comprehensive collection of data, enhancing the robustness of our analysis and enabling us to draw more meaningful conclusions about anomalous exposures and weak lensing studies.
In the following we describe source and background anomalies and provide some examples from CFHTLenS and DECaLS DR3.
### Source anomalies
Fig. 1 shows examples of source anomalies, which are defined as those anomalies that distort the sources or their vicinity across entire exposures. They can be caused by various optical aberrations, such as coma, which produces sources with comet-like shapes, or astigmatism, which causes elongations on the sources. Other examples of optical aberrations that can distort the shape of sources include spherical aberration and field curvature. Thankfully, these optical aberrations have been extensively studied and modern telescopes can correct for them. Nevertheless, it is possible that most source anomalies in our data may be attributed to small shifts on the focal plane of the telescope during observations. Depending on the magnitude of this shift, it can introduce elongations on the sources (panels a and c of fig. 1) or even generate multiple source images (panel b of fig. 1). Other types of source anomalies are likely related to problems during the readout process. Vertical bright trails like those found in panels d and f of fig. 1 can be generated when the readout process starts while the shutter is still open. There are other source anomalies with very peculiar forms such as panel e of fig. 1.
Despite the diverse nature of source anomalies, we have observed that all of them exhibit a certain coherent anisotropy across the entire exposure. This anisotropy distorts the sources or their vicinity along a particular direction. This observation prompts us to utilize the
orientation of the sources across the entire exposure as our indicator for detecting source anomalies.
We perform a statistical study of the orientation of all the sources within each exposure to determine the likelihood of that exposure presenting a source anomaly. The detailed description of this methodology is given in §3.1.
### Background anomalies
Background anomalies can introduce distortions in the CCD images, and there are two main types. First, light reflections inside the telescope can cause the background to present a series of circular shapes that cover a significant part of the exposure. Sometimes this effect is visible on single CCD images (panel b of fig. 2), and, in other cases, the anomaly only become noticeable when looking at the entire exposure (panel d of fig. 2). The second type of background anomaly manifests as frequent stripes in the CCDs, likely caused by electronic noise. These stripes exhibit a preferred direction, visible either on single CCDs (panel a of fig. 2) or when looking at the entire exposure (panel c of fig. 2).
Similar to source anomaly identification, we utilize the level of statistical anisotropy of background gradients to detect background anomalies. A more detailed explanation of the method is given in §3.2.
Figure 1: Examples of source anomalies in CFHTLenS and DECaLS DR3.

Figure 2: Examples of background anomalies in DECaLS DR3.

## 3 Entropy Method for Detecting Anomalies
In §2 we have shown that both source and background anomalies distort the CCD images. In fact, these distortions can be quantified in terms of entropy. In a normal image, sources are randomly oriented, which corresponds to a state of high entropy. If the image presents a source anomaly, the majority of sources are oriented towards a particular direction. This corresponds to a state of reduced entropy. In the same way, the background gradients of a normal image should be random. Background anomalies induce spatially coherent distortions, which can significantly reduce the entropy of the background distribution. In this section we present a method that quantifies the entropy of the CCD images and uses it as an indicator of the presence of anomalies.
### Source anomalies
We quantify the entropy of the sources in terms of their orientation. We define the source orientation as the angle between the source's long axis and the x-axis, and we call it \(\theta\). For each exposure, we construct a distribution of \(\theta\), including all sources in that exposure. In a normal image, the orientation of the sources should be random, resulting in a flat \(\theta\) distribution and a state of highest entropy. In the presence of a source anomaly, however, the sources will show a preferred orientation and the \(\theta\) distribution will not be flat, corresponding to a state of reduced entropy.
We quantify the anomaly likelihood as the deviation of each \(\theta\) distribution from the mean \(\theta\) distribution of all exposures, with a higher deviation indicating a higher likelihood of that exposure presenting a source anomaly. We calculate this deviation using the Kullback-Leibler (KL) divergence (Kullback, 1959; Cover & Thomas, 1991), which is commonly used in machine learning and information theory as a measure of the distance between two probability distributions. For each exposure, we divide its \(\theta\) distribution into N bins of equal size and define its KL divergence as,
\[\mathrm{KL}(P_{i}|\overline{P})=\sum_{j=1}^{N}P_{i}(\theta_{j})\cdot\ln\frac{ P_{i}(\theta_{j})}{\overline{P}(\theta_{j})}, \tag{1}\]
where \(P_{i}(\theta_{j})\) represents the value of the \(j^{th}\) bin in the \(\theta\) distribution of the \(i^{th}\) exposure and \(\overline{P}(\theta_{j})\) represents the \(j^{th}\) bin in the mean \(\theta\) distribution of all exposures.
Since it is assumed that the number of anomalous exposures is very small compared to the size of the entire dataset, the mean \(\theta\) distribution of all exposures can be safely used as a representation of the distribution of normal exposures.
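A minimal Python sketch of Eq. (1) for one exposure is given below; the number of bins and the \(\theta\) range are assumptions of this sketch (the text only specifies N bins of equal size).

```python
import numpy as np

def exposure_kl_divergence(theta, mean_hist, bins=36, eps=1e-12):
    """KL divergence between the theta histogram of one exposure and the
    mean histogram of all exposures, following Eq. (1).

    theta     : source orientation angles of one exposure (radians)
    mean_hist : (bins,) mean theta histogram over all exposures, same binning
    """
    counts, _ = np.histogram(theta, bins=bins, range=(-np.pi / 4, np.pi / 4))
    P = counts / counts.sum()
    Q = mean_hist / mean_hist.sum()
    return float(np.sum(P * np.log((P + eps) / (Q + eps))))  # eps avoids log(0)
```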
To build the \(\theta\) distribution of each exposure, we follow these steps:
1. We center each source on a 64x64 stamp and calculate its ellipticity components as, \[e_{1}=\frac{Q_{20}-Q_{02}}{Q_{20}+Q_{02}}\] (2) and \[e_{2}=\frac{2Q_{11}}{Q_{20}+Q_{02}}.\] (3) \(Q_{ij}\) are the quadrupole moments defined as, \[Q_{ij}=\sum_{I(r)>I_{0}}I(r)x^{i}y^{j},\] (4) where \(r=\sqrt{x^{2}+y^{2}}\). The threshold \(I_{0}\) is set as \(I_{0}=0.02I(\vec{r}=0)\).
2. We calculate the angle between the long axis of the ellipse and the x axis as: \[\theta=\frac{\arctan(e2/e1)}{2}\] (5)
3. For each exposure, we create a distribution of \(\theta\) including all the sources in that exposure.
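A minimal sketch of steps 1 and 2 above for a single 64x64 stamp, assuming the source has already been centred on the stamp:

```python
import numpy as np

def source_orientation(stamp, threshold_frac=0.02):
    """Orientation angle theta of a centred source stamp, following Eqs. (2)-(5)."""
    n = stamp.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0    # coordinates relative to the centre
    I0 = threshold_frac * stamp[n // 2, n // 2]  # I0 = 0.02 * I(r=0), central pixel
    I = np.where(stamp > I0, stamp, 0.0)         # only pixels above the threshold
    Q20, Q02, Q11 = np.sum(I * x**2), np.sum(I * y**2), np.sum(I * x * y)
    e1 = (Q20 - Q02) / (Q20 + Q02)
    e2 = 2.0 * Q11 / (Q20 + Q02)
    return 0.5 * np.arctan(e2 / e1)              # Eq. (5); theta lies in (-pi/4, pi/4]
```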
To identify sources, we note that there are many possible algorithms for background removal and source identification. However, these algorithms are usually computationally expensive. For the purpose of detecting anomalies we do not need a very accurate source identification method, since the anomalies are more noticeable around the brightest sources. Therefore, we do not need to select very faint sources, which would require more sophisticated source identification models. We aim for quick and reasonably accurate background removal and source identification methods. In addition, it is important to note that, in contrast with common source identification algorithms, we do not require sources to be isolated within their stamp, meaning that other sources or bright pixels can contaminate the stamp. This is because many anomalies present multiple source images (panel b of fig. 1) or other bright pixels (panels d, e, and f of fig. 1), and this requirement would directly discard those sources and complicate the anomaly detection.
To build our background model, we select 1000 random points on the CCD image and discard the brightest 10% and the darkest 10% to avoid source pixels and defective areas. We fit the remaining points with a second-order polynomial, which forms our background model and is subtracted from the CCD image.
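A sketch of this background model in Python; the exact quadratic basis (with or without the cross term) is not specified in the text, so the full 2D quadratic used here is an assumption.

```python
import numpy as np

def fit_background(image, n_points=1000, seed=None):
    """Second-order polynomial background model fitted to randomly sampled
    pixels, after discarding the brightest 10% and darkest 10% of the sample."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    rows = rng.integers(0, H, n_points)
    cols = rng.integers(0, W, n_points)
    vals = image[rows, cols]
    keep = np.argsort(vals)[n_points // 10: -n_points // 10]
    y, x, v = rows[keep].astype(float), cols[keep].astype(float), vals[keep]
    # full 2D quadratic basis (an assumption of this sketch)
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeff, *_ = np.linalg.lstsq(A, v, rcond=None)
    Y, X = np.mgrid[0:H, 0:W].astype(float)
    B = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel(),
                         (X * Y).ravel(), (X**2).ravel(), (Y**2).ravel()])
    return (B @ coeff).reshape(H, W)  # background model to be subtracted
```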
After the background has been removed, we define each source center as a pixel that meets the following criteria:
* Its value is above \(4\sigma\) (we have explicitly tested setting higher thresholds for source selection, such as \(10\sigma\) and \(30\sigma\), which include only the brightest sources; they lead to very similar results in the detection of anomalies).
* It is brighter than all the other pixels within a 64x64 stamp centered in it.
* The number of pixels above \(2\sigma\) within that stamp is higher than 6.
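The three criteria above can be implemented as in the following sketch, where sigma is the background noise level estimated elsewhere; the edge handling is kept deliberately simple.

```python
import numpy as np

def find_source_centres(image, sigma, stamp=64):
    """Source centres on a background-subtracted CCD image: pixels above
    4*sigma that are the brightest within their stamp and whose stamp
    contains more than 6 pixels above 2*sigma."""
    half = stamp // 2
    centres = []
    for i, j in np.argwhere(image > 4.0 * sigma):
        box = image[max(i - half, 0):i + half, max(j - half, 0):j + half]
        if image[i, j] < box.max():
            continue                         # not the brightest pixel in its stamp
        if np.sum(box > 2.0 * sigma) <= 6:
            continue                         # too few bright pixels around it
        centres.append((i, j))
    return centres
```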
### Background anomalies
Ideally, the background of a CCD image should exhibit a uniform distribution, with the background gradients being random, corresponding to a state of highest entropy. Background anomalies, however, introduce coherent features in the background, such as stripes or wave-like patterns, which distort the natural pixel uniformity and reduce the entropy of the background distribution.
Similar to source anomaly detection, we search for a parameter that can quantify the change of entropy in the background. We choose this parameter to be the angle between the gradient at a given background point and the x-axis, and we call it \(\theta\). In a normal image, if we sample over a large number of background positions, the \(\theta\) distribution should be flat, since the gradients should be random. However, in an image presenting a background anomaly, the gradients exhibit certain preferred orientations, implying a reduced entropy in the \(\theta\) distribution.
The \(\theta\) distribution is built following these steps:
1. We smooth the background by applying a 10x10 mean kernel on the CCD image, to enhance the wave-like features that define the anomaly and facilitate their detection.
2. We select 10000 random points from the smoothed image. For each point at position (i, j), we calculate its gradient components as \[D_{x}=\frac{1}{2}\sum_{m=i+1}^{i+5}\sum_{n=j-5}^{j+5}\frac{(I_{m,n}-I_{2i-m,2 j-n})}{m-i}\] (6) \[D_{y}=\frac{1}{2}\sum_{m=i-5}^{i+5}\sum_{n=j+1}^{j+5}\frac{(I_{m,n}-I_{2i-m, 2j-n})}{n-j},\] (7) where \(I_{m,n}\) is the value of the pixel at position (m, n). From the gradient components we derive the angle \(\theta\) between the total gradient and the x-axis as, \[\theta=\arctan\left(\frac{D_{y}}{D_{x}}\right)\] (8) 3. Similar to source anomalies detection, we form a distribution of \(\theta\), but this time on a CCD level rather than exposure level. This is because, unlike in source anomalies, we do not expect the same preferred direction for \(\theta\) on every CCD of the exposure.
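A direct (unoptimized) Python sketch of steps 1 and 2 above; the random positions are kept away from the CCD edges so that the \(\pm 5\) pixel stencil of Eqs. (6) and (7) always fits.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def background_gradient_angles(ccd, n_points=10000, seed=None):
    """Gradient orientation angles at random background positions,
    following Eqs. (6)-(8), after 10x10 mean smoothing."""
    rng = np.random.default_rng(seed)
    I = uniform_filter(ccd, size=10)          # 10x10 mean kernel
    H, W = I.shape
    ii = rng.integers(5, H - 5, n_points)
    jj = rng.integers(5, W - 5, n_points)
    thetas = np.empty(n_points)
    for k, (i, j) in enumerate(zip(ii, jj)):
        Dx = 0.5 * sum((I[m, n] - I[2 * i - m, 2 * j - n]) / (m - i)
                       for m in range(i + 1, i + 6)
                       for n in range(j - 5, j + 6))
        Dy = 0.5 * sum((I[m, n] - I[2 * i - m, 2 * j - n]) / (n - j)
                       for m in range(i - 5, i + 6)
                       for n in range(j + 1, j + 6))
        thetas[k] = np.arctan(Dy / Dx)        # Eq. (8)
    return thetas
```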
We calculate the KL divergence (eq. 1) between the \(\theta\) distribution of each CCD and the mean \(\theta\) distribution of all CCDs in the dataset. For each exposure, we average the KL divergence of all its CCDs and use this value as our background anomaly likelihood indicator.
### Results
#### 3.3.1 CFHTLenS
Zhang et al. (2019) reported 13 anomalous exposures in the i-band of CFHTLenS: "827410" of field w1m2m2, "792617" of w3m1m0, "792436" of w3m1m2, "987104" of w3p2m3, "859948" and "859950" of w4m1p0, and all 7 exposures of w2p2p2. Applying the Entropy Method to this dataset, we can successfully detect 12 of the 13 exposures identified by Zhang et al. (2019), only failing to detect the exposure "792436" of w3m1m2 (fig. 3), which presents extremely faint anomaly patterns that are only visible around very bright sources. From the anomalies reported by Zhang et al. (2019), we detect 5 as source anomalies and 7 as background anomalies (all exposures of field w2p2p2). In addition, we detect 10 new source anomalies: "705392", "705393", and "705394" of field w3m0m1, "732383", "732384", and "732385" of field w2p3p2, "797642" and "797643" of w3p1p2, "775160" of w2p1p3, and "853686" of field w3p3m3. Fig. 4 shows a few examples of the newly detected source anomalies. These anomalies present small elongations on the sources, creating a systematic anisotropy across the entire exposure, which is detected by our method but could easily be missed by human inspection. It is also possible that these elongations could be understood as being caused by an elongated PSF. However, as we discuss in §3.3.2, we believe that these are actual anomalies.
#### 3.3.2 DECaLS DR3
With more than 17000 exposures, DECaLS DR3 forms a significantly larger dataset than CFHTLenS, thus we do not check all the exposures by eye to evaluate the performance of our method. Instead, we only check the 10% of exposures with the highest likelihood of being anomalous according to our results. We analyze each band (g, r, z) independently.
Fig. 5 shows the results for source anomalies. We show the results for the 10% of exposures with highest KL
divergence, i.e., highest likelihood of being anomalous. We divide them into 10 bins of equal size and illustrate the true positive rate (TPR) for each bin, based on eye check. As shown in §3.3.1, eye check can miss many anomalies that can be captured by our method, and therefore here we only use the eye check as a performance indicator, not as the absolute truth.
In fig. 5, dark colors represent the TPR including only prominent anomalies, i.e., easily observed by eye and undoubtedly anomalous. Light colors include cases in which the sources present a small but systematic elongation along a particular direction, which is very hard to detect by eye. As mentioned in §3.3.1, these small elongations could also be seen as caused by an elongated PSF. A strong enough PSF reconstruction technique should be able to handle these elongated sources, but weaker PSF reconstruction techniques might not be able to properly model the PSF and this could introduce bias in shear measurements. We therefore show the TPR for the most restrictive case (dark colors) and for the most conservative one (light colors). Fig. 6 presents four anomalous exposures found in DECaLS DR3, where two of them were classified as prominent anomalies and the other two as dubious.
It is important to note that most of these dubious exposures present very small elongations along the vertical direction. If these anomalies were caused by an elongated PSF, the orientation of the PSF should be, although roughly constant across the CCD image, random from exposure to exposure. The preferred vertical direction in most anomalous exposures in the dataset seems to indicate that these elongations are actually caused by anomalies (possibly due to CCD electronics) and not by an elongated PSF.
Fig. 7 shows the results for background anomalies. We plot the top 10% of exposures with the highest likelihood of being anomalous, as in source anomalies. However, in this case we do not need to distinguish between obvious and dubious anomalous exposures, since background anomalies are less ambiguous for the human eye.
## 4 Deep Learning Method for Detecting Anomalies
In this method we use a neural network called an autoencoder to detect source anomalies. We train the autoencoder to reconstruct source stamps and we use the quality of these reconstructions to quantify the likelihood of exposures presenting source anomalies. In this section we introduce the basic ideas of autoencoders, and then show our results.
Figure 4: Examples of anomalous exposures in CFHTLenS that were detected by the Entropy Method and missed by human inspection.
Figure 3: CCD patch of the exposure ’792436’ of the w3m1m2 field of CFHTLenS.
### Autoencoders
An autoencoder is a type of neural network that learns to reconstruct input data in an unsupervised manner (Rumelhart et al., 1986; Michelucci, 2022). It comprises two components: an encoder and a decoder. The encoder is a feed-forward neural network that compresses the input data into a lower-dimensional space, also known as the latent space. The decoder performs the inverse process, transforming the data from the latent space back to the input dimension. The bottleneck layer, located at the last layer of the encoder, is constrained to have fewer neurons than the previous layers, to ensure that the autoencoder does not merely copy the input data.
The power of autoencoders lies in their ability to capture the primary features of the input data, commonly known as latent features. This is akin to Principal Component Analysis (PCA), but while PCA is a linear method, autoencoders are not restricted to linear relationships in the data and can capture more complex features. Autoencoders have a wide range of applications, including image denoising, feature extraction, and anomaly detection. They are particularly effective in certain image anomaly detection problems, due to their ability to work in an unsupervised way, therefore not requiring prior knowledge about the anomalies. Based on the assumption that the amount of anomalous data is very small compared to the size of the entire dataset, autoencoders automatically learn the features that describe "normal" data.
### Training data and network
Our dataset consists of 10000 source stamps from DECaLS DR3, each of size 64x64. We include sources from the g, r, and z bands.
Figure 5: True positive rate for the 10% of exposures with highest likelihood of presenting source anomalies according to the Entropy Method. The true positive rate is based on human inspection. We divide the exposures into 10 bins of equal size, for each band (g, r, z). Dark colors include only prominent anomalies, whereas light colors also include dubious anomalies.
Figure 6: Examples of prominent and dubious anomalous exposures identified in DECaLS by the Entropy Method.
The inputs of our neural network are the flattened source stamps, with size \(4096\) \((64*64)\). The encoder is composed of three dense layers with \(128\), \(64\), and \(32\) nodes, respectively. The decoder is also composed of three dense layers with \(32\), \(64\), and \(128\) nodes, respectively. The output of the decoder has the same size as the input data. Fig. 8 presents a visual representation of our Autoencoder. Although this is a very simple model, it suffices for our purpose since the source stamps do not exhibit complex features and we aim to avoid overfitting.
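A minimal sketch of this architecture, written here with Keras for concreteness; the text does not specify the framework, activations, optimizer, or the final projection back to 4096 pixels, so those choices are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder(input_dim=64 * 64):
    """Dense autoencoder with the layer sizes quoted in the text
    (encoder 128-64-32, decoder 32-64-128, output back to input_dim)."""
    inputs = keras.Input(shape=(input_dim,))
    x = layers.Dense(128, activation="relu")(inputs)
    x = layers.Dense(64, activation="relu")(x)
    encoded = layers.Dense(32, activation="relu")(x)        # bottleneck layer
    x = layers.Dense(32, activation="relu")(encoded)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(input_dim, activation="linear")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```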
We divide our dataset into training (\(\sim 80\%\)) and validation (\(\sim 20\%\)) sets and train our network for \(500\) epochs, reaching a stable validation loss. We select the model at the epoch with lowest validation loss as our final model to make predictions.
### Results
For each exposure in DECaLS DR3, we select all its sources and we feed them one by one to our trained model. The output of the model is the reconstructed version of the input source stamp. We calculate the mean squared error (MSE) between the original stamp and its reconstructed counterpart as:
\[\text{MSE}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(\hat{x}_{ij}-x_{ij }\right)^{2}, \tag{9}\]
where \(x_{ij}\) is the (i, j) pixel of the original source stamp and \(\hat{x}_{ij}\) is the (i, j) pixel of the reconstructed source stamp. \(N=64\) is the stamp size.
We finally calculate the mean MSE of all the sources within each exposure and use that value as our source anomaly indicator. A higher mean MSE indicates a higher likelihood of that exposure being anomalous.
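The per-exposure anomaly score can then be computed as sketched below, where `model` stands for a trained autoencoder returning reconstructions of flattened stamps; the function names are illustrative.

```python
import numpy as np

def stamp_mse(stamp, reconstruction, n=64):
    """Eq. (9): mean squared error between a source stamp and its reconstruction."""
    return np.sum((reconstruction - stamp) ** 2) / n ** 2

def exposure_anomaly_score(stamps, model):
    """Mean MSE over all source stamps of one exposure; higher values are more anomalous."""
    scores = [stamp_mse(stamp, model(stamp)) for stamp in stamps]   # stamps: flattened arrays
    return float(np.mean(scores))
```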
Fig. 9 shows our results, plotted as in Figs. 5 and 7. Dark colors include only prominent anomalies, whereas light colors also include dubious ones. Our model performs well at detecting source anomalies, particularly the prominent ones. This is expected, since prominent anomalies strongly distort the sources, which then look clearly different from the sources in normal images. Dubious anomalies, however, induce only small elongations of the sources. These elongations only become problematic when we look at the entire exposure, not at a single source. Individual sources have intrinsic shapes of the same order as the elongations, so sources elongated by the anomaly simply look like normal sources, and their reconstruction is very accurate.
To conclude, although the autoencoder performs reasonably well at detecting source anomalies, it is not as effective as the Entropy Method: it fails to detect many anomalous exposures identified by the Entropy Method and shows a higher false positive rate.
## 5 Conclusions
In this study, we have presented two methods for detecting anomalous images in astronomical datasets. We tested our methods on the i-band of CFHTLenS and the g, r, and z bands of DECaLS DR3. Our first method, named the Entropy Method, is based on the fact that both source and background anomalies introduce systematic anisotropies in the CCD images. Leveraging this insight, we study the statistical orientations of sources to detect source anomalies and the statistical orientations of the gradients at background points to identify background anomalies. The Entropy Method has demonstrated excellent results in detecting anomalies in both CFHTLenS and DECaLS DR3. In CFHTLenS, it outperforms human inspection, recovering 12 of the 13 anomalous exposures previously identified by human inspection and detecting ten new ones. In DECaLS DR3, the Entropy Method has detected a substantial number of source and background anomalies while maintaining a low false positive rate. Our second method involves training an autoencoder to reconstruct source stamps and using the quality of these reconstructions as an indicator of the likelihood of exposures containing source anomalies. After evaluating this method on DECaLS DR3, we found that although the autoencoder performs relatively well, the performance of the Entropy Method is significantly better: the autoencoder fails to detect several anomalous exposures identified by the Entropy Method and exhibits a higher false positive rate.
Autoencoders are unsupervised, meaning that they do not use any information about the anomalies for training. We chose an unsupervised method to ensure a fair comparison with the Entropy Method. However, utilizing the anomalous exposures found by the Entropy Method to train a neural network in a supervised way could potentially improve anomaly detection performance. Such a network could combine the anomalous exposures from different datasets into a more general anomaly detection method, applicable to new datasets, and could be fine-tuned whenever new anomalous exposures are found. This approach is left for future work.
Overall, we believe that our research serves as a stepping stone for future advancements in anomaly detection techniques in astronomy. Our work lays the foundation for more sophisticated and efficient methods, potentially combining traditional statistical methods and machine learning methods to further enhance anomaly detection capabilities. As the field of astronomy continues to evolve with technological advancements and increasingly vast datasets, our methods are poised to play valuable roles in uncovering the secrets of the cosmos.
## 6 Acknowledgements
CFHTLenS is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
The Legacy Imaging Surveys of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO. The Photometric Redshifts for the Legacy Surveys (PRLS) catalog used in this paper was produced thanks to funding from the U.S. Department of Energy Office of Science, Office of High Energy Physics via grant DE-SC0007914.
Figure 9: True positive rate for the 10% of exposures with highest likelihood of presenting source anomalies according to our deep learning method. The true positive rate is based on human inspection. We divide the exposures into 10 bins of equal size, for each band (g, r, z). Dark colors include only prominent anomalies, whereas light colors also include dubious anomalies.
JZ is supported by the National Key Basic Research and Development Program of China (No.2018YFA0404504), and the NSFC grants (11621303, 11890691, 12073017), the science research grants from China Manned Space Project (No. CMS-CSST-2021-A01). XDL is supported by the China Manned Space Project with No. CMS-CSST-2021-A03 and No. CMS-CSST-2021-B01. The computations in this paper were run on the \(\pi\) 2.0 cluster supported by the Center of High Performance Computing at Shanghai Jiaotong University, and the Gravity supercomputer of the Astronomy Department, Shanghai Jiaotong University.
|
2306.08450 | Investigating Possible Binarity for GJ 229B | GJ 229B, the first type-T brown dwarf to be discovered, has presented a
tension between comparisons with evolutionary models and the
larger-than-expected mass and radius values derived from spectroscopic and
astrometric observations. We examine the hypothesis that GJ 229B is actually a
binary sub-stellar object by using two grid-based fits using evolutionary
models to explore the range of mass ratios of the possible binary components.
We find that the best-fit component values are most consistent with a roughly
2:1 binary mass ratio and an age range of 2-6 Gyr. The observed temperatures,
masses, and apparent radii match expected values from evolutionary models for a
binary much better than a single-object model, but more detailed observations
and modeling are needed to definitively confirm the binary hypothesis. | Alex R. Howe, Avi M. Mandell, Michael W. McElwain | 2023-06-14T11:46:59Z | http://arxiv.org/abs/2306.08450v1 | # Investigating Possible Binarity for GJ 229B
###### Abstract
GJ 229B, the first type-T brown dwarf to be discovered, has presented a tension between comparisons with evolutionary models and the larger-than-expected mass and radius values derived from spectroscopic and astrometric observations. We examine the hypothesis that GJ 229B is actually a binary sub-stellar object by using two grid-based fits using evolutionary models to explore the range of mass ratios of the possible binary components. We find that the best-fit component values are most consistent with a roughly 2:1 binary mass ratio and an age range of 2-6 Gyr. The observed temperatures, masses, and apparent radii match expected values from evolutionary models for a binary much better than a single-object model, but more detailed observations and modeling are needed to definitively confirm the binary hypothesis.
+
Footnote †: journal: ApJ Letters
## 1 Introduction
GJ 229B is noted for being the first type-T brown dwarf to be discovered (Nakajima et al., 1995)1, with a measured effective temperature well below the L/T transition, \(\sim\)1200 K. Yet its atmospheric structure and composition have been less thoroughly studied than now-more-standard benchmark objects such as the similar GJ 570D (e.g. Line et al., 2015). This may be owing to its peculiar spectrum, which shows unusually weak H\({}_{2}\)O and CH\({}_{4}\) features for its spectral type (Burgasser et al., 2006) and elevated levels of CO (Oppenheimer et al., 1998).
Footnote 1: Not to be confused with the Neptune-mass planet accompanying the host star, GJ 229Ab, sometimes written as GJ 229b (Tuomi et al., 2014).
Dynamical measurements of the mass of GJ 229B have also yielded surprising results, with the most precise recent analysis finding a value of \(71.4\pm 0.6\,M_{J}\)(Brandt et al., 2021). This measurement was based on Gaia EDR3 data combined with several other astrometric and radial velocity data sets, resulting in an uncertainty significantly less than previous results. Yet it is incompatible with evolutionary models, as an object of that mass could not cool to the observed temperature of GJ 229B within a Hubble time. Meanwhile, an independent analysis of radial velocities alone found a minimum mass of only 1.62 \(M_{J}\)(Feng et al., 2020). Reconciling such a small minimum mass with the astrometric results requires fine-tuning the orbit solution with an orientation within \(\sim\)2\({}^{\circ}\) of face-on.
A newer analysis of the combined astrometric and RV data set was undertaken with the addition of the Gaia-Hipparcos positional difference by Feng et al. (2022). This analysis found a dynamical mass for GJ 229B of \(60.42^{+2.34}_{-2.38}\,M_{J}\). This mass is somewhat more plausible in terms of fitting evolutionary models. However, it still strongly precludes fitting the observed dynamical mass, effective temperature, and luminosity (and by proxy radius) simultaneously.
Recently, we undertook a spectroscopic retrieval study of GJ 229B's atmosphere with our APOLLO atmospheric retrieval framework (Howe et al., 2022), supplemented by an analysis incorporating the Sonora-Bobcat (S-B) grid of brown dwarf evolutionary models by Marley et al. (2017). This study showed that both spectroscopic retrieval measurements and evolutionary model fits return dramatically lower masses than the dynamical fits, with a retrieved mass estimate of \(41.6\pm 3.3\,M_{J}\). We also obtained an unexpectedly large radius estimate \(1.105\pm 0.025\,R_{J}\) (which is also implied by the luminosity and effective temperature of the object), which is inconsistent with evolutionary models of either mass, both of which predict a more compact object.
With such a large disparity in mass measurements, the most natural explanation would appear to be that GJ 229B is in fact a binary brown dwarf of unequal mass, with one component contributing most of the total flux to the spectrum, while the other contributes just enough flux to boost the apparent radius. This sort of discrepancy has been observed before for other objects, with binarity suggested as a solution to anomalously large retrieved radii (Line et al., 2017) and overall flux (Burgasser et al., 2008), so a similar solution seems likely for GJ 229B. In this letter, we explore the possible parameter space for a binary GJ 229B using two different grid-fitting methods based on the S-B evolutionary models (Marley et al., 2017) and the APOLLO retrieval code, extending our analysis from Howe et al. (2022). First, we fit the observed spectrum of GJ 229B (Geballe et al., 1996; Noll et al., 1997; Schultz et al., 1998; Oppenheimer et al., 1998) to binary spectra derived from the S-B model grid directly. These atmosphere models are self-consistent, but do not account for the peculiar chemistry of GJ 229B or the super-Solar C/O ratio found by Howe et al. (2022). Second, we generate a grid of APOLLO forward models for a binary object based on values for radius, gravity, and temperature structure from the S-B evolutionary tracks, but using our retrieved molecular abundances from Howe et al. (2022). For completeness, we consider fits using both the \(M_{\rm tot}=71\,M_{J}\) and \(M_{\rm tot}=60\,M_{J}\) dynamical mass measurements.
For each of these grids, we compute goodness-of-fit statistics for comparison to the observed spectrum of GJ 229B, and we find that both grid fits return qualitatively similar results. In the \(M_{\rm tot}=71\,M_{J}\) case, we find a primary mass of 51 \(M_{J}\) (\(q=0.39\)) using both methods. The \(M_{\rm tot}=60\,M_{J}\) case shows modest differences, with \(M_{1}=46\,M_{J}\) (\(q=0.30\)) for S-B and \(M_{1}=49\,M_{J}\) (\(q=0.22\)) for APOLLO. Given this consistency between our two methods and the physical arguments against a single-object solution, we consider this to be significant evidence for a binary solution for this object.
The results from the two model grid fits are shown in Section 2, with an explanation of the limitations of our model in Section 3. We discuss the implications of our results in Section 4 and summarize our conclusions in Section 5. A full description of the dataset we use in this paper is included in Howe et al. (2022).
## 2 Grid fits based on evolutionary models
We constructed two grids of spectra for binary brown dwarfs based on the solar-metallicity evolutionary tracks of the Sonora-Bobcat model set (Marley et al., 2017). To create these spectra, we linearly interpolated the S-B tracks to a grid of 1 \(M_{J}\) in mass and 10 K in primary effective temperature. For each grid point in the resulting "S-B grid," we computed a primary spectrum and two secondary spectra by interpolating the nearest S-B spectra provided in the parameter space for two brown dwarfs of equal age and masses summing to 60 \(M_{J}\) and 71 \(M_{J}\). Likewise, for each grid point in the "APOLLO grid," we computed forward models using APOLLO for the same radii, surface gravities, and temperature profiles from the S-B models combined with our retrieved molecular abundances from Howe et al. (2022). Thus, we used the same evolutionary tracks for both grids, but two different, incomplete, but complementary chemistry models.
For each grid and each case (60 and 71 \(M_{J}\)), we summed the primary and secondary spectra together to create a grid of model binary spectra. We then computed the goodness of fit to the observed spectrum of GJ 229B using a reduced chi-squared statistic. We chose the reduced chi-squared statistic for this analysis for its relative simplicity. However, most other measures of goodness of fit such as the Bayesian likelihood, Bayesian information criterion, and APOLLO's own likelihood function produce virtually identical results because all of these measures are dominated by the error in the large number of spectral data points (which comprise nearly all of the \(>3200\) degrees of freedom) rather than other statistical factors.
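The sketch below illustrates this grid evaluation under the assumption that each primary and secondary model spectrum has already been interpolated onto the observed wavelength grid; the dictionary-based bookkeeping and the treatment of the degrees of freedom are simplifications of the actual pipeline.

```python
import numpy as np

def reduced_chi2(obs_flux, obs_err, model_flux, dof):
    """Reduced chi-squared of one model spectrum against the observation."""
    return np.sum(((obs_flux - model_flux) / obs_err) ** 2) / dof

def fit_binary_grid(obs_flux, obs_err, primary_spectra, secondary_spectra):
    """Score every grid point of the binary model grid.

    primary_spectra / secondary_spectra map grid points (e.g. (M1, Teff1)) to spectra
    sampled on the observed wavelength grid; the binary spectrum is their sum.
    """
    dof = obs_flux.size                            # dominated by the number of spectral points
    chi2_grid = {key: reduced_chi2(obs_flux, obs_err,
                                   primary_spectra[key] + secondary_spectra[key], dof)
                 for key in primary_spectra}
    best = min(chi2_grid, key=chi2_grid.get)
    # 1-sigma region adopted in the text: chi2_nu <= 1.2 * chi2_nu,min (Beringer et al. 2012)
    one_sigma = [k for k, v in chi2_grid.items() if v <= 1.2 * chi2_grid[best]]
    return best, chi2_grid, one_sigma
```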
Heat maps of the reduced chi-squared results for both grids are shown in Figure 1 for the \(M_{\rm tot}=71\,M_{J}\) case and Figure 2 for the \(M_{\rm tot}=60\,M_{J}\) case, plotting primary effective temperature versus mass and age, as well as secondary effective temperature versus mass. All of the fits were found to have \(\chi^{2}_{\nu}\geq 10\), but this is not an unexpected result given that we are not performing full retrievals (such as MCMC retrievals) on the spectrum. To assess the uncertainties in these fits, we adopt 1\(\sigma\) errors of \(\chi^{2}_{\nu}\leq 1.2\chi^{2}_{\nu,\rm min}\) based on an analysis of chi-squared statistics by Beringer et al. (2012). (See Figure 36.2 and associated chapter.) We mark reduced chi-squared contours on the plots of \(\chi^{2}_{\nu}=11\), 12, and 13. Based on our definition, the outer contour is roughly equivalent to our 1\(\sigma\) uncertainties, although they are slightly different for each local minimum.
All four grid fits result in a relatively broad minimum in \(\chi^{2}_{\nu}\) spanning a width of \(\sim\)15 \(M_{J}\). For example, for the S-B model grid, our 1\(\sigma\) uncertainties in primary mass span 44-63 \(M_{J}\) in the 71 \(M_{J}\) case and 39-54 \(M_{J}\) in the 60 \(M_{J}\) case. In all cases, the primary effective temperature falls in a significantly narrower range of \(\sim\)900-1000 K, which is consistent between the two total mass cases. Within each case, the S-B and APOLLO fits are qualitatively similar.
However, the distribution for APOLLO forward models is broader in effective temperature and occurs at slightly lower temperatures overall. This is likely due to the chemistry model used for the APOLLO model grid, which assumes a constant atmospheric composition with respect to temperature, muting the effect of changing \(T_{\rm eff}\) on the spectrum shape. Meanwhile, the realistic changes in chemistry in the S-B grid are likely to produce worse spectral fits at non-optimal temperatures, as would be expected.
The parameters of the best fits from our model grids are listed in Tables 1 and 2 for the 71 \(M_{J}\) and 60 \(M_{J}\) cases, respectively. We note that while the S-B grid has a single global minimum in reduced chi-squared, in both of our cases, the APOLLO grid produces three local minima along a line of increasing mass and temperature. For completeness, we list all three of the APOLLO minima for each case in Tables 1 and 2. These are compared with our best single-object free retrieval from Howe et al. (2022), and reduced chi-squared values are listed for all of the fits. Including the three APOLLO fits is also useful in that they are representative of our \(1\sigma\) uncertainty distributions as a whole.
In Table 2, we also include single-object fits to both of our grids, subject to the same dynamical mass constraint of \(60M_{J}\). These fits have higher reduced chi-squared values than the binary fits, so the binary solution is still favored in this case. For the 71 \(M_{J}\) case, the global minimum in \(\chi^{2}_{\nu}\) falls outside the distribution of allowed evolutionary models (and all models within the grid have very large \(\chi^{2}_{\nu}>10000\)), so we consider it to have no single-object solutions.
The most notable differences between the 60 \(M_{J}\) and 71 \(M_{J}\) cases are in the mass ratio of the binary and the inferred age of the system. The 9 \(M_{J}\) difference in mass between the two cases is partitioned roughly equally between the primary and secondary mass, with the overall effect being a significantly more extreme mass ratio in the 60 \(M_{J}\) case. The S-B best fit (which in both cases is also representative of the APOLLO best fits) in the 71 \(M_{J}\) case has a mass partition of 51 and 20 \(M_{J}\) for a mass ratio of \(q=0.392\). In the 60 \(M_{J}\) case, the mass partition is 46 and 14 \(M_{J}\), with \(q=0.304\). Given a smaller total mass, a smaller primary mass with nearly the same \(T_{\rm eff}\) has a larger radius and thus produces a larger fraction of the total flux of the binary, so a cooler and lower-mass secondary is needed to make up the difference.
Likewise, a lower mass primary would need a shorter time to cool to the same \(T_{\rm eff}\), resulting in a younger inferred age of the system. For the S-B best fit, the difference is about 1 Gyr, reducing from 4.5 Gyr in the 71 \(M_{J}\) case to 3.5 Gyr in the 60 \(M_{J}\) case. All of the fits listed in Tables 1 and 2 fall in the range of 2.6-6.1 Gyr, somewhat older than previous estimates (e.g. Nakajima et al. (2015)).
As a representative example of the spectra resulting from our grid fits, Figure 3 compares the best fit spectra from each grid (blue and gold) and their residuals (green and red) with the observed spectrum of GJ 229B in the 71 \(M_{J}\) case.
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r|r} \hline & \multicolumn{4}{c|}{Binary} & \multicolumn{3}{c}{Single} \\ \cline{2-8} & S-B & APOLLO 1 & APOLLO 2 & APOLLO 3 & S-B & APOLLO & Free Retrieval \\ \hline \hline Primary Mass (\(M_{J}\)) & \(51^{+12}_{-7}\) & \(45^{+8}_{-3}\) & \(51^{+1}_{-8}\) & \(56^{+6}_{-2}\) & No Solution & No Solution & \(41.6\pm 3.3\) \\ Primary \(T_{\rm eff}\) (K) & \(970^{+40}_{-50}\) & \(910\pm 60\) & \(950^{+20}_{-80}\) & \(960\pm 30\) & & & \(869^{+5}_{-7}\) \\ Primary Radius (\(R_{J}\)) & \(0.811^{+0.045}_{-0.045}\) & \(0.83^{+0.009}_{-0.033}\) & \(0.809^{+0.033}_{-0.007}\) & \(0.788^{+0.008}_{-0.026}\) & & & \(1.105\pm 0.025\) \\ Primary \(\log g\) & \(5.30^{+0.15}_{-0.12}\) & \(5.23^{+0.09}_{-0.05}\) & \(5.31^{+0.01}_{-0.11}\) & \(5.37^{+0.08}_{-0.02}\) & & & \(4.93^{+0.02}_{-0.03}\) \\ \hline Secondary Mass (\(M_{J}\)) & \(20^{+7}_{-12}\) & \(26^{+3}_{-8}\) & \(20^{+8}_{-1}\) & \(15^{+2}_{-6}\) & & & \\ Secondary \(T_{\rm eff}\) (K) & \(478^{+210}_{-216}\) & \(596^{+7}_{-17}\) & \(469^{+176}_{-32}\) & \(364^{+38}_{-130}\) & & & \\ Secondary Radius (\(R_{J}\)) & \(0.940^{+0.069}_{-0.039}\) & \(0.909^{+0.036}_{-0.013}\) & \(0.938^{+0.006}_{-0.040}\) & \(0.972^{+0.036}_{-0.019}\) & & & \\ Secondary \(\log g\) & \(4.77^{+0.20}_{-0.46}\) & \(4.91^{+0.08}_{-0.17}\) & \(4.77^{+0.18}_{-0.03}\) & \(4.61^{+0.08}_{-0.30}\) & & & \\ \hline Mass Ratio (\(M_{2}/M_{1}\)) & \(0.392^{+0.221}_{-0.265}\) & \(0.578^{+0.212}_{-0.238}\) & \(0.392^{+0.298}_{-0.027}\) & \(0.268^{+0.047}_{-0.122}\) & & & \\ Flux Ratio (\(F_{2}/F_{1}\)) & \(0.079^{+0.224}_{-0.070}\) & \(0.219^{+0.073}_{-0.135}\) & \(0.080^{+0.106}_{-0.001}\) & \(0.031^{+0.012}_{-0.025}\) & & & \\ Age (Gyr) & \(4.5^{+4.7}_{-1.4}\) & \(3.8^{+1.8}_{-0.5}\) & \(4.8^{+0.7}_{-1.4}\) & \(6.1^{+4.0}_{-0.6}\) & & & \(>\)1.0 \\ \(\chi^{2}_{\nu}\) & 10.37 & 11.81 & 11.14 & 11.54 & & & 8.15 \\ \hline \end{tabular}
\end{table}
Table 1: The best fit binary models from the Sonora-Bobcat (S-B) model set and our APOLLO forward model grid with a total mass of 71 \(M_{J}\). Bolometric flux ratios are calculated from the Stephan-Boltzmann law. The retrieved parameters are compared with our best fit free retrieval for a single object (Howe et al., 2022). No solutions for a single object with a global \(\chi^{2}_{\nu}\) minimum lie within either of the model grids, which we interpret as having no solution with physically plausible parameters. Uncertainties are defined by \(\chi^{2}_{\nu}\leq 1.2\chi^{2}_{\nu,{\rm min}}\) based on an analysis by Beringer et al. (2012).
Both grid fits match the observed spectrum at many wavelengths; however, they both show errors in different wavelength ranges. The S-B fit shows significant residuals in the J- and M-bands, while the APOLLO fit shows significant residuals mainly in the Y- and K-bands. Note that features in the residual spectra are exaggerated in regions of low observational uncertainties, so their magnitude is more indicative of the observational precision than actual absorption features.
## 3 Limitations of the Model
The Sonora-Bobcat (S-B) model set provides a complete grid of spectra only for brown dwarf models of solar metallicity and C/O ratio. Therefore, a detailed grid fit to an observed spectrum must make compromises with regard to the chemistry. Our S-B grid fit to GJ 229B assumes an essentially Solar composition, which is inconsistent with both the peculiar spectrum of this object and the retrieved molecular abundances from Howe et al. (2022). Our measured super-Solar C/O ratio of 1.13 is consistent with other retrievals of T dwarfs (e.g. Line et al. (2015)), but is well outside the _a priori_ parameters of generic brown dwarf model sets such as Sonora-Bobcat.
Additionally, the temperature-pressure profiles in both forward model grids are interpolated from the S-B models. They are not informed by our retrieved T-P profile from Howe et al. (2022), and in the context of the grid, it is _de facto_ a two-parameter model controlled only by \(T_{\rm eff}\) and \(\log g\). This is analogous to the parametric T-P profile used in Howe et al. (2022), which we found to return relatively poor goodness-of-fit statistics, even in a full ensemble retrieval.
Figure 1: Goodness of fit to observed spectra of GJ 229B of our binary forward model grids, with a total mass of 71 \(M_{J}\), calculated from the Sonora-Bobcat (S-B) models (Marley et al., 2017) (left) and from our APOLLO forward models (right). For the primary, the cutoff on the left edge is based on a mass greater than 50% of the total mass of the system, and the cutoff on the right is based on an age less than a Hubble time. For the secondary, the low temperature cutoff is based on the limitations of the molecular cross section tables used by APOLLO. The outermost contours are representative of our adopted 1\(\sigma\) uncertainties, although they are slightly different for each local minimum.
Meanwhile, our APOLLO grid assumes a constant molecular composition for brown dwarfs of different temperatures, as a full thermochemical model of the atmosphere is beyond the scope of APOLLO, which is designed around a free retrieval philosophy for a single object (Howe et al., 2022).
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r|r} \hline & \multicolumn{4}{c|}{Binary} & \multicolumn{4}{c}{Single} \\ \hline & S-B & APOLLO 1 & APOLLO 2 & APOLLO 3 & S-B & APOLLO & Free Retrieval \\ \hline \hline Primary Mass (\(M_{J}\)) & \(46^{+8}_{-7}\) & \(38^{+0.5}_{-2}\) & \(44^{+1}_{-3}\) & \(49^{+5}_{-3}\) & 60 & 60 & \(41.6\pm 3.3\) \\ Primary \(T_{\rm eff}\) (K) & \(960^{+30}_{-40}\) & \(900^{+20}_{-50}\) & \(930^{+10}_{-50}\) & \(950\pm 30\) & \(1000^{+10}_{-20}\) & \(970\pm 20\) & \(869^{+5}_{-7}\) \\ Primary Radius (\(R_{J}\)) & \(0.835^{+0.061}_{-0.033}\) & \(0.872^{+0.011}_{-0.002}\) & \(0.841^{+0.013}_{-0.003}\) & \(0.818^{+0.014}_{-0.023}\) & \(0.777^{+0.001}_{-0.002}\) & \(0.773^{+0.003}_{-0.002}\) & \(1.105\pm 0.025\) \\ Primary \(\log g\) & \(5.23^{+0.13}_{-0.18}\) & \(5.11^{+0.01}_{-0.08}\) & \(5.21^{+0.01}_{-0.05}\) & \(5.28^{+0.06}_{-0.04}\) & \(5.412\pm 0.002\) & \(5.416\pm 0.003\) & \(4.93^{+0.02}_{-0.03}\) \\ \hline Secondary Mass (\(M_{J}\)) & \(14^{+7}_{-8}\) & \(22^{+2}_{-0.5}\) & \(16^{+3}_{-1}\) & \(11^{+3}_{-5}\) & & & \\ Secondary \(T_{\rm eff}\) (K) & \(411^{+174}_{-169}\) & \(597^{+38}_{-7}\) & \(449^{+64}_{-30}\) & \(346^{+56}_{-108}\) & & & \\ Secondary Radius (\(R_{J}\)) & \(0.990^{+0.059}_{-0.051}\) & \(0.944^{+0.002}_{-0.018}\) & \(0.975^{+0.002}_{-0.021}\) & \(1.011^{+0.028}_{-0.024}\) & & & \\ Secondary \(\log g\) & \(4.57^{+0.30}_{-0.50}\) & \(4.81^{+0.11}_{-0.01}\) & \(4.64^{+0.09}_{-0.07}\) & \(4.45^{+0.12}_{-0.29}\) & & & \\ \hline Mass Ratio (\(M_{2}/M_{1}\)) & \(0.304^{+0.234}_{-0.193}\) & \(0.879^{+0.088}_{-0.021}\) & \(0.364^{+0.100}_{-0.030}\) & \(0.224^{+0.080}_{-0.113}\) & & & \\ Flux Ratio (\(F_{2}/F_{1}\)) & \(0.047^{+0.120}_{-0.031}\) & \(0.227^{+0.036}_{-0.001}\) & \(0.073^{+0.116}_{-0.006}\) & \(0.027^{+0.017}_{-0.020}\) & & & \\ Age (Gyr) & \(3.5^{+1.8}_{-1.0}\) & \(2.6\pm 0.3\) & \(3.4^{+0.2}_{-0.4}\) & \(4.3^{+1.4}_{-0.6}\) & \(7.0^{+0.4}_{-0.2}\) & \(7.7^{+0.5}_{-0.4}\) & \(>\)1.0 \\ \(\chi^{2}_{\nu}\) & 9.97 & 11.22 & 10.92 & 10.25 & 11.25 & 24.56 & 8.15 \\ \hline \end{tabular}
\end{table}
Table 2: Same as Table 1, except with a total mass of 60 \(M_{J}\). Single-object solutions for both grid fits are listed alongside the free retrieval from Howe et al. (2022).
Figure 2: Same as Figure 1, except with a total mass of 60 \(M_{J}\).
Therefore, neither of the model grids used in this analysis fully simulates the atmospheric chemistry of GJ 229B. A fully self-consistent chemical model with super-Solar C/O would be needed to model the spectrum of a binary brown dwarf to a greater accuracy than we have done in this paper.
The limitations of the chemical model for the APOLLO grid are especially significant for the secondary spectrum, which is much cooler than the effective temperature retrieved for a single-object fit. Thus, the atmospheric chemistry is expected to be quite different from our assumed abundances.
Figure 3: Comparison of the observed spectrum of GJ 229B (black) with best fit binary spectra from the S-B model grid (blue) and our APOLLO forward model grid (gold), in the 71 \(M_{J}\) case. Residuals for each fit are plotted on the same scale below the x-axis in green and red, respectively.
In performing the fits with the APOLLO grid, we assume that the relatively smaller flux from the secondary means that it will produce a proportionately smaller error in the overall spectrum.
An additional limitation of the APOLLO-based grid was the inability of APOLLO to model atmospheres of very cold (old and low-mass) secondaries due to the low-temperature limits of our molecular optical cross-section tables. This prevented us from fully exploring the parameter space at ages \(\gtrsim 10\) Gyr in the 71 \(M_{J}\) case and \(\gtrsim 6\) Gyr in the 60 \(M_{J}\) case. However, the contour plots in Figures 1 and 2 indicate that the best-fit models are clustered at younger ages in the region of the parameter space we did explore.
The reduced chi-squared values for our grid fits in this work are consistently significantly larger than that of our free retrieval of the single-object solution for GJ 229B in Howe et al. (2022). We attribute this to our less accurate treatment of the chemistry in both model grids, as described above, and to the simplified T-P profiles. The goodness-of-fit when performing a free retrieval using a parametric T-P profile was comparable to that of our grid fits in this work, which may be a result of both models being underdetermined to fully fit the observed spectrum.
We do not attempt a full retrieval of the prospective binary spectrum in this work, as would be required to make an accurate comparison of the goodness-of-fit between the single and binary models. The number of variables involved in retrieving properties of two independent objects from a blended spectrum is prohibitive and beyond the scope of the current version of APOLLO, and a free retrieval on a blended spectrum would likely have too many degeneracies to produce unambiguous results. Again, a full chemical model for a binary object would be needed to constrain the parameter space.
Nonetheless, we see that both of our grid fits produce similar results for best fits in \(M\)-\(T_{\rm eff}\)-space, and they also fit complementary wavelength regimes of the observed spectrum. The consistency of this result across two methods emphasizing different chemistry increases our confidence in the viability of our method to infer the properties of a binary brown dwarf in this context.
## 4 Discussion
### Physical Argument for a Binary Solution
Our investigation of binarity for GJ 229B was motivated by multiple lines of evidence that render a single-object solution improbable, if not outright unphysical, given our current understanding of brown dwarf evolution. First, the combination of observed luminosity and effective temperature, even prior to dynamical mass measurements, implies an unusually large radius for a single object. Our single-object retrieval in Howe et al. (2022) found a radius of 1.105\(\pm 0.025\)\(R_{J}\). When combined with our retrieved \(T_{\rm eff}=869^{+5}_{-7}\) K, based on the S-B models, this suggests a very young and low mass object with an age of \(<0.5\) Gyr and a mass near the deuterium-burning limit.
While this solution could fit the luminosity and effective temperature of GJ 229B, it is wildly inconsistent with the very large dynamical mass measurements made in recent years. A massive brown dwarf of 71 \(M_{J}\) would not be able to cool to the observed temperature of GJ 229B in a Hubble time, and the S-B evolutionary tracks include no valid solutions for GJ 229B with this mass. A 60 \(M_{J}\) brown dwarf could cool sufficiently in \(\sim\)7-10 Gyr. However, these solutions still fail with high confidence to replicate our retrieved radius, setting an upper bound of \(R\leq 0.77\,R_{J}\). In contrast, a binary object with an unequal flux ratio would replicate the apparent oversized-radius much more easily.
### Inferred Mass Ratio of GJ 229B in the Context of Other Brown Dwarf Binaries
Our inferred mass ratio for a potential binary GJ 229B is notable in that it is fairly unequal, with a best-fit solution of \(q=0.39\) in the 71 \(M_{J}\) case for both grid fits and an even more extreme \(q=0.30\) in the 60 \(M_{J}\) case for the S-B fit (albeit with large uncertainties). In contrast, the most notable well-constrained dynamical mass ratios measured for binary brown dwarfs are all near \(q\sim 0.8\)(Chen et al., 2022; Bedin et al., 2017; Zapatero Osorio et al., 2004). However, inferred mass ratios for other binaries based on evolutionary models are more varied, with several estimated to be near \(q\sim 0.5\)(Faherty et al., 2020; Gonzales et al., 2020; Reiners et al., 2010).
Population-level studies of brown dwarf binaries are subject to greater systematic uncertainties because dynamical masses are generally not available, forcing them to rely on evolutionary models and mass-luminosity relations. Early population studies along these lines yielded conflicting results. Burgasser et al. (2003) did a survey of T dwarf binaries and found significant numbers of systems down to \(q\sim 0.4\), based on a mass-luminosity relation. However, in a later study, Burgasser et al. (2007) estimated 77% of very-low-mass (VLM, \(<0.1\,M_{\odot}\)) binaries have \(q\geq 0.8\); but this study reported low completeness for unequal ratios of \(q\leq 0.6\), and their best-fit model suggested that mass ratios as low as
\(q\sim 0.4\) may be plausible for VLM binaries. More recently, Dupuy and Liu (2017) measured mass ratios for 19 binary brown dwarfs and found that most of them had \(q\gtrsim 0.8\), and all of them had \(q\gtrsim 0.6\).
These population studies do not show strong evidence for highly unequal binary brown dwarf mass ratios. However, some of them do extrapolate the binary brown dwarf population to be consistent with a mass ratio for GJ 229B of \(q=0.39\). More promising is that the uncertainties in our analysis allow a much more equal mass ratio, reaching as high as \(q=0.79\) for the APOLLO 1 fit in the 71 \(M_{J}\) case. This indicates that there are a significant number of solutions that are consistent with population studies, so this is not a large obstacle to a binary interpretation for GJ 229B.
### Detectability of a Binary
The definitive proof of binarity for this object would of course be direct observation, either by resolving the separate components, or by observing the reflex motion of the primary. At least one and possibly both of these methods should be able to significantly constrain the parameters of the binary. At a distance of 5.76 pc, the fact that the two components are not resolved in _HST_ images (Nakajima et al., 1995) suggests that their projected separation is \(<0.5\) AU. However, we note that adaptive optics measurements with ground-based telescopes stand a fair chance of being able to resolve the components or measure the motion of the centroid. Typical adaptive optics systems will offer a resolution as small as \(\sim\)0.2 AU at that distance, and the GRAVITY instrument on VLT may be able to resolve an order of magnitude closer (GRAVITY Collaboration et al., 2017). As for radial velocity characterization, the apparent face-on orientation of the system as a whole as derived from astrometry suggests that any signal may be weak. However, if the orbit of the binary is significantly tilted relative to their mutual orbit around GJ 229A, their orbital motion at such small separations (at least several km s\({}^{-1}\) for the primary) should be well within the capability of ground-based spectroscopy.
For both of the most recent dynamical mass measurements of GJ 229B, none of the S-B evolutionary models can replicate the dynamical mass, effective temperature, and luminosity simultaneously for a single object, whereas a binary solution does so quite easily. Therefore, we feel confident in proposing it as a viable solution to the puzzles surrounding this object. We further note that our binary fits consistently predict an older age range for the system of 2-6 Gyr. In the absence of direct dynamical or spectroscopic evidence for binarity, an asteroseismic age measurement of GJ 229A could help to further constrain the parameter space for a binary brown dwarf companion.
## 5 Conclusions
Both spectroscopic retrievals and SED considerations combined with evolutionary models consistently suggest a smaller mass for the late T-dwarf GJ 229B than the dynamical mass of \(71.4\pm 0.6\,M_{J}\) (Brandt et al., 2021), or \(60.42^{+2.34}_{-2.38}\,M_{J}\) (Feng et al., 2022), indicating that it may be a binary object. We have employed grid-based methods using the Sonora-Bobcat evolutionary models (Marley et al., 2017) and the APOLLO retrieval code (Howe et al., 2022) to estimate the mass ratio of a prospective binary GJ 229Ba and Bb. Our best fit solutions for the S-B grid fits (which have the smallest reduced chi-squared values) give a mass ratio of \(0.39^{+0.22}_{-0.27}\) for the 71 \(M_{J}\) case and \(0.30^{+0.23}_{-0.19}\) for the 60 \(M_{J}\) case. While these ratios are fairly unequal, they do overlap with the ranges inferred for resolved brown dwarf binaries in some population studies. These fits are also consistent with an intermediate age range for the system of 2-6 Gyr.
There is a significant probability that precise astrometric and/or radial velocity measurements will be able to confirm the binarity of GJ 229B. In the absence of such measurements, a blended spectrum retrieval incorporating an equilibrium chemistry model may be able to significantly refine the mass ratio estimated in this work.
## Acknowledgments
ARH was supported by an appointment to the NASA Postdoctoral Program at NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. ARH also acknowledges support by NASA under award number 80GSFC21M0002 through the CRESST II cooperative agreement. AMM acknowledges support from GSFC Sellers Exoplanet Environments Collaboration (SEEC), which is funded in part by the NASA Planetary Science Division's Internal Scientist Funding Model. This work was partially supported by the GSFC Exoplanets Spectroscopy Technologies (ExoSpec), which is part of the NASA Astrophysics Science Division's Internal Scientist Funding Model. We thank Beth Biller and Robert Haring-Kaye for helpful conversations. |
2306.07763 | NAVER LABS Europe's Multilingual Speech Translation Systems for the
IWSLT 2023 Low-Resource Track | This paper presents NAVER LABS Europe's systems for Tamasheq-French and
Quechua-Spanish speech translation in the IWSLT 2023 Low-Resource track. Our
work attempts to maximize translation quality in low-resource settings using
multilingual parameter-efficient solutions that leverage strong pre-trained
models. Our primary submission for Tamasheq outperforms the previous state of
the art by 7.5 BLEU points on the IWSLT 2022 test set, and achieves 23.6 BLEU
on this year's test set, outperforming the second best participant by 7.7
points. For Quechua, we also rank first and achieve 17.7 BLEU, despite having
only two hours of translation data. Finally, we show that our proposed
multilingual architecture is also competitive for high-resource languages,
outperforming the best unconstrained submission to the IWSLT 2021 Multilingual
track, despite using much less training data and compute. | Edward Gow-Smith, Alexandre Berard, Marcely Zanon Boito, Ioan Calapodescu | 2023-06-13T13:22:30Z | http://arxiv.org/abs/2306.07763v1 | # NAVER LABS Europe's Multilingual Speech Translation Systems
###### Abstract
This paper presents NAVER LABS Europe's systems for Tamasheq-French and Quechua-Spanish speech translation in the IWSLT 2023 Low-Resource track. Our work attempts to maximize translation quality in low-resource settings using multilingual parameter-efficient solutions that leverage strong pre-trained models. Our primary submission for Tamasheq outperforms the previous state of the art by 7.5 BLEU points on the IWSLT 2022 test set, and achieves 23.6 BLEU on this year's test set, outperforming the second best participant by 7.7 points. For Quechua, we also rank first and achieve 17.7 BLEU, despite having only two hours of translation data. Finally, we show that our proposed multilingual architecture is also competitive for high-resource languages, outperforming the best unconstrained submission to the IWSLT 2021 Multilingual track, despite using much less training data and compute.
## 1 Introduction
The vast majority of speech pipelines are developed for _high-resource_ languages, a small percentage of languages that have ample amounts of annotated data available Joshi et al. (2020). However, the assessment of systems' performance based only on high-resource settings can be problematic, since it fails to reflect the real-world performance these approaches will have in diverse and smaller datasets. Moreover, as around half of the world's languages are considered to be not only _low-resource_, but also from oral tradition (i.e., without a written form), there is an urgent need for speech technology that can operate robustly in such _low-resource settings_Bird (2011). In this context, the _IWSLT conference1_ proposes low-resource speech translation (ST) challenges that allow the speech community to realistically benchmark ST approaches using diverse and representative datasets. This paper describes NAVER LABS Europe's (NLE) submission to two of the language pairs from the IWSLT 2023 Agarwal et al. (2023) Low-Resource Track: Tamasheq-French (_Taq-Fr_) and Quechua-Spanish (_Que-Es_).
Footnote 1: [https://iwslt.org/](https://iwslt.org/)
Most successful approaches for tackling scenarios where ST data is scarce perform transfer learning across languages and modalities, leveraging multilingual pre-trained models for both speech and text Anastasopoulos et al. (2022). However, due to the large number of parameters of current Transformer-based Vaswani et al. (2017) approaches, training such systems is computationally expensive and not accessible to everyone. **NLE's submission focuses on a multilingual parameter-efficient training solution that allows us to leverage strong pre-trained speech and text models to maximize performance in low-resource languages.**
We present new SOTA results for the _Taq-Fr_ pair (17 hours of training data) that represent a 57% BLEU increase compared to the results achieved by Khurana et al. (2022) in the IWSLT 2022 post-evaluation period.2 This same system achieves 23.6 BLEU on the IWSLT 2023 test set, an improvement of 7.71 BLEU over the second best result submitted this year. We also present SOTA results in the unconstrained setting for the _Que-Es_ pair (2 hours of training data), while maintaining most of the performance on the _Taq-Fr_ pair. In addition, to showcase the usefulness of our parameter-efficient multilingual solution, we evaluate it on the high-resource setting of the IWSLT 2021 Multilingual Task (Anastasopoulos et al., 2021). We find that our approach outperforms the best IWSLT 2021 submission (FAIR, Tang et al., 2021), despite training considerably fewer parameters (-64%), and using substantially
less training data and compute.
This paper is organized as follows. We first describe the architecture and training settings of our multilingual ST systems in Section 2. We next list the resources we use in Section 3. Section 4 presents our results in both low and high-resource settings. Lastly, we highlight the zero-shot potential of our approach in Section 5 and present our concluding remarks in Section 6.
## 2 System Description
In this work we focus on a parameter-efficient training solution that allows us to feed the features from a pre-trained speech representation model into a pre-trained multilingual MT model, producing translations from both speech and text in multilingual settings. This setting also allows us to leverage automatic speech recognition (ASR; i.e., speech-to-transcript) data. The general architecture is presented in Figure 1. The architecture is considered _parameter-efficient_ because only a small portion of its parameters is trained (the bottom encoder layers and small adapter layers).
Architecture.We initialize our models with a pre-trained multilingual MT model, which we adapt to the ST task by inputting features extracted with a frozen pre-trained speech representation model. The MT model is also frozen, except for the bottom 2 or 3 encoder layers and small adapter modules (those introduced by Bapna and Firat (2019), with bottleneck dimension 64) added after each encoder and decoder layer. As we show in our results, the fine-tuned encoder layers are able to map the speech features into the representation space of the pre-trained MT model and the adapters can help with domain adaptation (and possibly help alleviate the length mismatch). At inference, this model can be used for MT with very little memory overhead: the convolutional layers and adapters are disabled, and the bottom encoder layers are swapped with those of the initial pre-trained model.
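The sketch below illustrates the three trainable components in PyTorch: a bottleneck adapter of dimension 64, a strided convolutional front-end that maps speech features to the MT model's embedding size, and selective unfreezing of the bottom encoder layers. Attribute names such as `mt_model.encoder.layers` are placeholders for whichever pre-trained MT implementation is used, not a specific API.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter (Bapna and Firat, 2019) inserted after each Transformer layer."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, bottleneck), nn.ReLU(),
            nn.Linear(bottleneck, dim),
        )

    def forward(self, x):
        return x + self.block(x)                  # residual connection

def speech_frontend(feat_dim, model_dim, num_conv_layers=1):
    """Strided convolutions: each layer halves the length of the speech feature sequence.

    Expects input of shape (batch, feat_dim, time)."""
    layers, in_dim = [], feat_dim
    for _ in range(num_conv_layers):
        layers += [nn.Conv1d(in_dim, model_dim, kernel_size=5, stride=2, padding=2), nn.GELU()]
        in_dim = model_dim
    return nn.Sequential(*layers)

def freeze_all_but_bottom_encoder_layers(mt_model, num_trainable=3):
    """Freeze the pre-trained MT model except its bottom encoder layers."""
    for p in mt_model.parameters():
        p.requires_grad = False
    for layer in mt_model.encoder.layers[:num_trainable]:
        for p in layer.parameters():
            p.requires_grad = True
```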
Training settings.We train on 4 V100 GPUs (80GB) for up to 200 000 updates, with a maximum batch size of 4 000 source features (or 80 seconds of audio) and accumulated gradients over two batches.3 We sample language pairs with a temperature of 3.4 We validate every 5 000 updates and perform early stopping on valid BLEU for the language pair(s) of interest, with a patience of 5, averaging model weights across the last 3 checkpoints.5 We find best results using a single convolutional layer with stride 2, which downsamples the sequence of speech features by a factor of 2. The other hyperparameters are listed in Appendix Section A.1.
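The language-pair sampling with temperature 3 follows the usual rescaled-frequency scheme; a small sketch with made-up utterance counts is shown below.

```python
import numpy as np

def sampling_probabilities(sizes, temperature=3.0):
    """p_i proportional to (n_i / N) ** (1 / T): up-samples low-resource pairs."""
    sizes = np.asarray(sizes, dtype=float)
    probs = (sizes / sizes.sum()) ** (1.0 / temperature)
    return probs / probs.sum()

# Hypothetical utterance counts for (Fr-En, Es-En, Taq-Fr):
print(sampling_probabilities([120_000, 100_000, 5_000]))
```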
Figure 1: An illustration of our multilingual ST architecture as described in Section 2. The bold arrow path corresponds to the speech-to-text training path. At decoding time, we can choose between producing speech-to-text or text-to-text translations. Figure best seen in color.
## 3 Resources
### Pre-trained Speech Representation Models
We experiment with different versions of two speech representation models: HuBERT Hsu et al. (2021) and wav2vec 2.0 (Baevski et al., 2020). We do not fine-tune these models in any of our configurations, but instead use them as feature extractors (see Figure 1). Because of this, our models are sensitive to the layer we extract features from. Pasad et al. (2021) argue that, for wav2vec 2.0 models that are not fine-tuned on ASR, speech features from _middle_ layers tend to have a higher abstraction from the speech signal, which is beneficial to downstream tasks. The results from Boito et al. (2022) seem to confirm this observation holds for low-resource ST. To the best of our knowledge, there is no similar investigation for HuBERT models.6
Footnote 6: We hypothesize that layer selection is less important for HuBERT architectures due to the multi-iteration approach that increases signal abstraction at each iteration.
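As an illustration of this frozen feature-extraction setup, the snippet below retrieves hidden states from a chosen Transformer layer of a wav2vec 2.0 checkpoint through HuggingFace `transformers`; the checkpoint name is a stand-in, and layer 8 reflects the layer choice discussed in Section 4.1.1.

```python
import torch
from transformers import Wav2Vec2Model

# Stand-in checkpoint: any wav2vec 2.0 Base model has 12 Transformer layers of width 768.
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

def extract_features(waveform, layer=8):
    """Return frozen features from the output of Transformer layer `layer`."""
    with torch.no_grad():
        outputs = model(waveform, output_hidden_states=True)
    # hidden_states[0] is the Transformer input; hidden_states[layer] is the output
    # of the `layer`-th Transformer block.
    return outputs.hidden_states[layer]

# waveform: (batch, num_samples) float tensor sampled at 16 kHz
features = extract_features(torch.randn(1, 16000), layer=8)
print(features.shape)   # roughly (1, 49, 768): one feature vector every 20 ms
```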
Table 1 presents the speech representation models we experiment with. The _Tamasheq_ model is a monolingual wav2vec 2.0 Base model trained on 243 h of Tamasheq speech. The _Niger-Mali_ model is a wav2vec 2.0 Base model trained on the same Tamasheq speech data plus 111 h of French, 109 h of Fulfulde, 100 h of Hausa, and 95 h of Zarma. This gives 658 h in total. The data for both models is sourced from the Niger-Mali audio collection (Boito et al., 2022). The unreleased _mHuBERT-Tamasheq_ model uses this same audio collection for training, while also including Common Voice (Ardila et al., 2020) data in four other languages (English, French, Arabic, and Kabyle), resulting in 5 069 h of speech. _XLSR-53_ (56k hours) and _XLS-R_ (500k hours) are massively multilingual wav2vec 2.0 Large models covering 53 and 128 languages, respectively. Neither of these two multilingual models has seen Tamasheq or Quechua speech during training.7
Footnote 8: With NLLB, 44k tokens are enough for a 100% coverage of the training data (mTEDx, TED-LIUM, Quechua, Tamasheq), or 35k when restricting to our _Taq-Fr_ setting. This represents a reduction of more than 200M parameters.
### Pre-trained Multilingual MT Models
To initialize our ST models, we first experimented with mBART for many-to-many translation (mBART50NN; Tang et al., 2020), but found the NLLB-200 models (Costa-jussa et al., 2022) to give better results. We experiment with the dense NLLB models of various sizes: the distilled 600M-parameter and 1.3B-parameter versions, and the 3.3B-parameter version. We end up using the larger versions in our submissions (1.3B and 3.3B). Note that NLLB covers 202 languages, including Tamasheq and Quechua, which is not the case for mBART. At the same model size, despite covering more languages, NLLB is also a stronger machine translation model overall than mBART. Also, unlike mBART, it is not English-centric.
Contrary to Tang et al. (2021), we keep the original mBART or NLLB vocabularies of size 250k and do not train any embeddings. Instead, like Berard et al. (2021), we find that it is possible to filter the vocabulary at test time to only cover the languages of interest, significantly reducing the memory footprint of the model with a minor reduction in performance.8 We can also filter the vocabulary and embeddings before ST fine-tuning and achieve the same performance as with the full vocabulary without needing to train any embeddings. See Table 14 in Appendix for a comparison of these approaches. In order to study the zero-shot translation capabilities of our models (i.e., translating to languages and language pairs unseen at training), we do not apply vocabulary filtering to the configurations presented in the main paper.
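A simple sketch of this vocabulary filtering is given below: only the subword ids actually needed for the languages of interest are kept, and the (tied) embedding matrix is sliced accordingly. The function and variable names are illustrative and deliberately independent of any specific NLLB implementation.

```python
import torch

def filter_vocabulary(embedding_weight, kept_token_ids):
    """Slice an embedding matrix down to a subset of token ids.

    Returns the reduced weight matrix and an old-id -> new-id mapping."""
    kept_token_ids = sorted(set(kept_token_ids))
    index = torch.tensor(kept_token_ids, dtype=torch.long)
    old_to_new = {old: new for new, old in enumerate(kept_token_ids)}
    return embedding_weight[index].clone(), old_to_new

# Example: a 250k-entry embedding table reduced to the ~44k tokens that cover the
# training data (the id list here is a placeholder).
full_weight = torch.randn(250_000, 1024)
used_ids = range(44_000)
small_weight, mapping = filter_vocabulary(full_weight, used_ids)
print(small_weight.shape)   # torch.Size([44000, 1024])
```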
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **\# params** & \begin{tabular}{c} **Transformer** \\ **layers** \\ \end{tabular} &
\begin{tabular}{c} **Feature** \\ **dimension** \\ \end{tabular} \\ \hline
**Tamasheq**Boito et al. (2022) & 95M & 12 & 768 \\
**Niger-Mali**Boito et al. (2022) & 95M & 12 & 768 \\
**mHuBERT-Tamasheq** & 95M & 12 & 768 \\ \hline
**XLSR-53**Conneau et al. (2021) & 317M & 24 & 1024 \\
**XLS-R**Babu et al. (2022) & 317M & 24 & 1024 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Speech representation models. The top portion presents _Tamasheq-dedicated_ models, while the bottom lists large _general purpose_ multilingual models.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Task** & **Source** & **Target** & **hours:minutes** & **\# utterances** \\ \hline ASR & Quechua & Quechua & 51:39 & 8,301 \\ \hline ST & Quechua & Spanish & 2:42 & 698 \\ ST & Tamasheq & French & 15:43 & 5,025 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Speech Translation (ST) and Speech Recognition (ASR) data provided by the organizers (train+valid). The ASR data is outside of the constrained setting.
### Datasets
We tackle the low-resource setting by building multilingual systems that utilize both ASR and ST data in the languages of interest (Tamasheq and Quechua), and in high-resource directions whose target language is of interest (French and Spanish). Note that we also include X\(\rightarrow\)English data, as we initially planned to participate in the Irish-English task. Including more data in high-resource languages has several advantages. Firstly, it has a regularization effect that prevents us from immediately overfitting the low-resource training data. Secondly, this enables knowledge transfer from common target languages and from similarly-sounding source languages.9 Thirdly, as we build multilingual ST systems by mapping the speech representation vectors into the same space as the multilingual MT model, our goal is to produce a model that is _as multilingual as possible_, not specializing in one specific language. Our results show that training on multiple languages at once achieves this effect, while also producing good zero-shot ST results.
Footnote 9: Manual inspection revealed that audio from both datasets presents some degree of target language borrowing (e.g., Spanish words present in the Quechua speech, French words present in the Tamasheq speech).
Table 2 presents statistics for the datasets provided by the IWSLT 2023 organizers. The _Que-Es_ dataset10 is an unreleased dataset prepared for this year's challenge. It corresponds to a translated subset of the Quechua ASR data ("Siminchik") from Cardenas et al. (2018). The _Taq-Fr_ dataset was introduced by Boito et al. (2022). Table 3 presents statistics for the datasets in high-resource languages. English ASR data comes from TEDLIUMv2 (Rousseau et al., 2014), and the other data comes from mTEDx (Salesky et al., 2021). Appendix Table 15 lists the datasets used in each of our submissions. In Section 4.3, we also run experiments in the setting of the IWSLT 2021 Multilingual Task to measure how good our approach is on high-resource languages. The datasets used for this setting are presented in Appendix Table 10.
Footnote 10: We are aware the dataset reference is _Que-Spa_. We chose to use the ISO 639-1 two letters abbreviation for Spanish for consistency with the other datasets used in this work.
## 4 Experiments and Results
All our submissions to the low-resource ST task are in the _unconstrained_ setting, due to the use of pre-trained models and to training on data in other languages. The datasets used in each submission are listed in Appendix Table 15. This section is organized as follows. We present our _Taq-Fr_ results (4.1) with a detailed ablation study justifying our architectural choices. We then present our _Que-Es_ results (4.2). Lastly, we evaluate and analyze our approach in a high-resource setting (4.3).
### Tamasheq-French Results
We submit two systems that have _Taq-Fr_ as the only low-resource language pair (**primary** and **contrastive 1**). Additionally, we take our primary submission for _Que-Es_, which has also been trained on _Taq-Fr_, and submit this as **contrastive 2**. The top portion of Table 4 gives the test BLEU scores, and the top portion of Appendix Table 11 presents the valid BLEU scores. Table 12 shows statistics (average and standard deviation) over multiple runs when applicable.
We selected the best of three training runs (by validation _Taq-Fr_ BLEU) as our **contrastive 1** submission. We then ensembled the three runs as our **primary** submission. Finally, **contrastive 2** is the ensemble model used as our primary submission to the _Que-Es_ task, which covers both low-resource languages, and combines _XLS-R Large_ with _NLLB 3.3B_.
Results. Our primary submission significantly outperforms the previous state of the art on the IWSLT 2022 test set, 13.2 BLEU by Khurana et al. (2022), by +7.5 BLEU.11 It also ranks first in this year's edition, with +7.7 BLEU over the second best primary submission. Our contrastive submissions rank second and third (beating the second best primary submission by +5.4 and +2.8 BLEU).
Footnote 11: Here we are referencing the model pre-trained using the Niger-Mali dataset that was presented at JSALT 2022: [https://www.clsp.jhu.edu/jsalt-2022-closing-presentations/](https://www.clsp.jhu.edu/jsalt-2022-closing-presentations/)
#### 4.1.1 Ablation Study
In Appendix Table 18 we compare our **contrastive 1** model (the non-ensembled version of our primary submission) with other architectures trained on the same data to validate our choice of hyperparameters.
Speech features.The wav2vec 2.0 models trained with Tamasheq (_Niger-Mali_ and _Tamasheq_) largely outperform the well-known massively multilingual models (_XLSR-53_ and _XLS-R_) on _Taq-Fr_ (e.g. +2.5 BLEU _Tamasheq_ compared to _XLS-R L_). These models are larger and trained on considerably more data, but do not include any Tamasheq speech. Similar to previous works (Pasad et al., 2021; Boito et al., 2022), when extracting features from wav2vec 2.0 we find that the 8th layer gives better results than the 11th (penultimate) layer (+2.5 BLEU for _Niger-Mali_).
For HuBERT, on the contrary, features from the 11th layer give the best results (+0.2 BLEU compared to 8th layer). When using the _right layer_, we find that wav2vec 2.0 outperforms HuBERT (+2.7 BLEU _Niger-Mali_ compared to _mHuBERT-Taq_).
Finally, _Niger-Mali_ is as good on _Taq-Fr_ as the _Tamasheq_ wav2vec 2.0, but performs considerably better on _Fr-En_ (+4.1 BLEU), probably because it was trained with French audio. The best _Fr-En_ performance is achieved with _XLS-R L_. We find worse performance on _Fr-En_ with _XLS-R XL_ (-2.0 BLEU), but this may be due to layer selection.
**Pre-trained MT model.** The larger the model used for initialization, the better the performance (even more so for _Fr-En_). However, we find that the gain from using NLLB 3.3B over NLLB 1.3B is too small to justify the increase in model size and decoding latency (3 times slower). At the same model size, NLLB 600M performs considerably better than mBART (+1.7 BLEU on _Taq-Fr_, +3.6 BLEU on _Fr-En_).
**Trained parameters.** Fine-tuning too many encoder layers results in overfitting, which hurts _Taq-Fr_ and _Fr-En_ performance. On the other hand, fine-tuning just 1 or 2 layers instead of 3 does _not_ result in a large BLEU drop. Similarly, adapter modules are not always needed. Disabling decoder adapters does not degrade _Taq-Fr_ performance (+0.2 BLEU), but results in a slight drop in _Fr-En_ performance (-0.9 BLEU), which could be attributed to a domain adaptation effect (to the mTEDx domain). Disabling encoder adapters has more impact on performance for _Taq-Fr_ (-0.8 BLEU), with similar effect on performance for _Fr-En_ (-1.0 BLEU). Section 4.3 shows that these adapters are important for domain adaptation.
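To make the adapter discussion concrete, below is a minimal PyTorch sketch of a generic residual bottleneck adapter of the kind inserted in Transformer layers; the hidden dimensions, normalization, zero initialization, and placement are illustrative assumptions, not necessarily the exact configuration of our submissions.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter inserted after a (frozen) Transformer sub-layer."""
    def __init__(self, d_model=1024, bottleneck_dim=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck_dim)   # project down
        self.up = nn.Linear(bottleneck_dim, d_model)     # project back up
        nn.init.zeros_(self.up.weight)                   # start close to an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        # only the adapter parameters are trained; the host layer stays frozen
        return x + self.up(torch.relu(self.down(self.norm(x))))
```

With these dimensions, such a module adds roughly 0.13M parameters per insertion point, which is in line with the small per-language overheads reported later in Section 4.3.1.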
**Convolutions.** The number of convolutional layers does not impact performance much (range of 1.1 BLEU on _Taq-Fr_ and 3.2 BLEU on _Fr-En_ for 0 to 3 layers), but it can have a large impact on decoding speed: each layer divides the input length by a factor of 2, resulting in a roughly 3.5\(\times\) speed-up from 0 to 3 layers. Interestingly, even though it was trained on much shorter sequences, the MT model seems to adapt quite well to any input length, even without any convolutions - we achieve a better _Taq-Fr_ result without any convolutions, but a worse _Fr-En_ result.12 However, models with fewer convolutional layers seem to converge faster (as shown in Appendix Figure 2).
Footnote 12: Without any convolution, the speech feature to target token ratio is **12:1**.
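As a rough illustration of this length reduction, here is a small sketch assuming strided 1-D convolutions; the kernel size, activation, and dimensions are our own choices for illustration, not necessarily those of the submitted systems.

```python
import torch
import torch.nn as nn

def conv_frontend(in_dim=1024, d_model=1024, num_layers=2):
    """Stack of stride-2 convolutions: each layer halves the speech-feature sequence length."""
    layers, dim = [], in_dim
    for _ in range(num_layers):
        layers += [nn.Conv1d(dim, d_model, kernel_size=3, stride=2, padding=1), nn.GELU()]
        dim = d_model
    return nn.Sequential(*layers)

x = torch.randn(1, 1024, 600)                 # (batch, speech-feature dim, time steps)
print(conv_frontend(num_layers=2)(x).shape)   # time axis: 600 -> 300 -> 150
```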
**Stacked layers.** While our approach described in Section 2 fine-tunes some parameters of the pretrained MT model, we can instead plug new Transformer layers at the bottom of the encoder, without changing any existing parameter. These "stacked layers" result in slightly larger models but are conceptually simpler, as they try to map the speech features into the same representation space as the input text embeddings of the MT model. Appendix Table 17 compares this architecture with the one used in our submission to the _Taq-Fr_ task. We see
that it performs similarly well (sometimes better) and that it does not add any noticeable decoding latency. We can even reach the same _Taq-Fr_ performance as our contrastive submission by just adding a single Transformer layer plus one convolution layer and small adapters (28M trained parameters in total). Finally, disabling all adapters only results in a small BLEU drop, suggesting that it is indeed possible to map the speech features into the text input space, with only one Transformer layer. This is surprising, considering that the input to this layer is 6 times as long as the target sequence on average.
### Quechua-Spanish Results
The test and validation scores of our submissions to the _Que-Es_ task are reported in the second half of Table 4 and 11, respectively. Because these models are also trained on _Taq-Fr_ data, we additionally report their performance on that task.
**System description.** As we do not have a speech feature extractor specialized to Quechua speech, our **contrastive 1** submission uses a massively multilingual wav2vec 2.0 model: XLS-R Large (18th layer). Compared to our Tamasheq submission, it is also initialized with a larger MT model (NLLB 3.3B), which we found to perform better in this setting. The training settings are the same as for the Tamasheq models, except that we only fine-tune the bottom 2 encoder layers (instead of 3) and validate every 2 500 updates, since this larger model tends to converge faster. Another difference is that we train on both Tamasheq and Quechua data (in addition to the mTEDx and TEDLIUM data). Like in our Tamasheq submission, we train 3 models with different random seeds and ensemble them as our **primary** submission. Our **contrastive 2** submission uses a single model with the same training settings, but starts from a smaller pre-trained MT model (NLLB 1.3B).
**Results.** Our primary submission in the _Que-Es_ task also ranked first, with 17.7 BLEU on the official test set. The full ranking results were not communicated in time for this camera-ready version. They will be made available later through the conference findings paper (Agarwal et al., 2023).
**Data contamination.** We found shortly after our submission that all the audio files used in the official test and validation sets are also present in the ASR training data shared by the organizers for the unconstrained setting. This means that our _Que-Es_ ST models are evaluated in an unrealistic setting, where they are asked to translate Quechua utterances whose Quechua transcriptions they have already seen during training. For this reason, we filtered the ASR data to remove all audio files also present in the validation and test sets for _Que-Es_, and we re-trained models on this filtered data.13 While our official submission results presented in Table 4 use the "contaminated" dataset for comparison with the other submissions, we think any future comparison to our work should be done with the updated results in Appendix Table 11. Note that similar care should be taken with the results of other participants.
Footnote 13: In the updated version, we use NLLB 1.3B by default instead of NLLB 3.3B, like for _Taq-Fr_. Appendix Table 11 presents _uncontaminated_ results.
### Results and Analysis in a High-Resource Setting
The results of our ablation studies (Section 4.1.1) seem to indicate that our models are reasonably good on _Fr-En_ translation, even though we do early stopping and tune our hyper-parameters based on _Taq-Fr_ performance. Here, we further investigate the performance of our approach on high-resource ST by training models in the setting of the IWSLT 2021 Multilingual Task Anastasopoulos et al. (2021). This task evaluates the performance of multilingual ST models in 4 _training directions_, for which in-domain training data is provided, and 3 _zero-shot directions_, for which no training data is provided.
We use _XLS-R Large_ as the speech feature extractor, experiment with both _NLLB 1.3B_ and _NLLB 3.3B_ as the MT model, and perform early stopping based on the average validation BLEU across the 4 official training directions. We train our models on all the mTEDx language pairs that are not zero-shot, along with TED-LIUM (English ASR) and the Tamasheq and Quechua data (see Table 15). Note that the use of pre-trained models and English ASR means our models fall into the unconstrained setting.
Table 5 presents our results on this task, compared with the best unconstrained submission (FAIR; Tang et al., 2021).14 We find that both our models outperform FAIR's ensemble submission in the training directions, even though they require substantially less compute and data to train, and they are not ensembled. In the zero-shot directions, our NLLB 1.3B version performs worse than FAIR's ensemble, which is not surprising since they used training data for the zero-shot language directions (from other datasets), whilst we do not.15 We find that using the larger NLLB 3.3B model for initialization considerably improves our zero-shot results.
Footnote 15: NLLB has been pretrained on these language pairs for MT, but we do not train on ST data for them.
#### 4.3.1 Incremental Learning
A limitation of our approach for low-resource ST is that we need to know in advance (when training the multilingual ST model) the set of low-resource languages to cover. Here, we show that it is possible to add a new low-resource language into an existing model without re-training it, similar to what has been previously done by Berard (2021) for text-to-text MT. We train a model following the IWSLT 2021 setting presented above, but without any Tamasheq or Quechua data. Then, we attempt to adapt it to _Taq-Fr_ using four different approaches: **1)** adding adapters of dimension 64 in the bottom layers and training all adapters (including in the decoder layers and top encoder layers); **2)** adding adapters of dimension 256 in the bottom layers and fine-tuning all adapters; **3)** adding adapters of dimension 256 in the bottom layers and training only those; **4)** adding adapters of dimension 256 in the bottom layers and training both those and the convolutional layer.
We keep the same training settings as before, except that: we train on _Taq-Fr_ data only; we train only the parameters mentioned above; we validate more often (every 1 000 updates); and we disable checkpoint averaging. Table 6 shows the performance of these four incremental training methods, compared to training on the entire language set from scratch. Even though incremental training does not perform quite as well, it appears to be a viable option that can achieve decent results. Lastly, we highlight that our experiments were limited to these four incremental learning settings (without hyper-parameter search), and that better results may be obtained with other parameter-efficient adaptation methods, or with more regularization.
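In practice, the parameter selection behind these incremental settings amounts to freezing the whole pre-trained ST model and re-enabling gradients only for the new modules. A minimal PyTorch sketch is given below; the substring-based parameter naming is an assumption for illustration.

```python
def select_trainable(model, train_conv=False):
    """Freeze everything in a torch.nn.Module except the newly added adapters
    (and optionally the convolutional layer), as in incremental settings 3) and 4) above."""
    for name, param in model.named_parameters():
        is_adapter = "adapter" in name              # assumed naming convention
        is_conv = train_conv and "conv" in name     # assumed naming convention
        param.requires_grad = is_adapter or is_conv
    n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {n_trainable / 1e6:.1f}M")
```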
#### 4.3.2 Multimodality and Domain Transfer
Since our systems are initialized with an MT model, of which just a few encoder layers are modified, it is straightforward to use our ST models for text-to-text translation: we just need to store both the MT and ST bottom layers and route tokens through the MT ones (see Figure 1). However, one question that remains is whether the ST adapters can be used for text-to-text decoding.
As an investigation of this, Appendix Table 19 measures the MT performance (NLLB 1.3B) on the IWSLT 2021 test sets (same domain as the mTEDx training data) with and without the ST adapters. Surprisingly, we see that not only can we use these adapters for both text and speech modalities, but they actually improve the MT scores (+2.7 BLEU on average), even though they were only trained with ST and ASR data. This suggests that the fine-tuned bottom layers are able to fully map the speech representations into the text representation space and that the adapters further improve performance by allowing domain adaptation of the MT model (which is hard to do at the very bottom layers).
| Model | New params | Taq-Fr |
|---|---|---|
| Joint training | 0 | **21.06** |
| Adapters 64 (all) | 6.4M | 17.60 |
| Adapters 256 (all) | 15.9M | 18.18 |
| Adapters 256 (bottom) | 1.6M | 19.24 |
| Conv + Adapters 256 (bottom) | 2.5M | 19.13 |

Table 6: BLEU scores on the _Taq-Fr_ validation set, when training jointly with IWSLT 2021 and Tamasheq data; versus incremental (2-stage) training. The “New params” column gives the number of Tamasheq-specific parameters added.
| Model | Total params | Trained params | Es-En | Fr-En | Fr-Es | Pt-En | Pt-Es | It-En | It-Es |
|---|---|---|---|---|---|---|---|---|---|
| FAIR at IWSLT 2021 (Tang et al., 2021) | 700M | | 40.4 | 36.4 | 34.4 | 29.0 | 34.4 | 28.4 | 34.6 |
| FAIR at IWSLT 2021, ensemble | 3\(\times\)700M | | 42.2 | 38.7 | 36.5 | 31.0 | **38.2** | **29.4** | **37.3** |
| XLS-R + NLLB 1.3B | 317M + 1.38B | 70M | 43.7 | 39.4 | 38.0 | 31.5 | 35.9 | 28.9 | 35.0 |
| XLS-R + NLLB 3.3B | 317M + 3.36B | 115M | **44.0** | **39.9** | **38.3** | **33.1** | 38.1 | 29.3 | 36.9 |
| XLS-R + NLLB 1.3B, ASR + MT cascade | | | 41.8 | 35.6 | 34.4 | 29.7 | 35.8 | 29.3 | 35.2 |

Table 5: Results on the IWSLT 2021 Multilingual task (training directions: Es-En, Fr-En, Fr-Es, Pt-En; zero-shot directions: Pt-Es, It-En, It-Es). We report BLEU scores on the IWSLT 2021 test sets. Our NLLB 1.3B and 3.3B models took respectively 34 and 46 h to train on 4 V100 GPUs, while FAIR’s models each took 7 days to train on 8 V100 GPUs. Also note that FAIR’s models were trained on much larger amounts of data, **including data for the “zero-shot” directions** (which, in their case is only zero-shot w.r.t the in-domain TED data).
Note that the encoder adapters seem to be the most important ones, which is consistent with the findings of Cooper Stickland et al. (2021) that adapting the encoder is the most effective strategy for domain adaptation. Lastly, we highlight that adapting the MT model directly with MT data (mTEDx's transcriptions and translations) gives even better results (+4.6 BLEU on average), but this cross-modality domain transfer is an interesting by-product of our parameter-efficient approach.
## 5 Zero-Shot Capabilities
Throughout this paper we have argued that one advantage of the multilingual models we propose is their potential for zero-shot translation, a setting in which a system produces translation in an unseen language pair by leveraging its existing knowledge of both languages. In Section 4.3 we showed that our models are competitive with the best submission to IWSLT 2021 on the three zero-shot high-resource language pairs, despite the fact that these pairs were not truly zero-shot for that system. In this section, we further illustrate the zero-shot capabilities of our models by translating Tamasheq speech in two settings: **1)** target language seen during both MT pre-training and ST adaptation (English); **2)** target language only seen during MT pre-training (Korean).
**Evaluation settings.** To score BLEU and chrF in the chosen target languages, we use a commercial translation service to translate the French side of the IWSLT 2022 test set to English and Korean. Note that this is only a _silver-standard_ made of synthetic data, and thus the evaluation will inevitably be biased.17 Our goal is solely to assess whether our systems have _some_ zero-shot ST abilities. We evaluate our _Taq-Fr_ **contrastive 1** system, and variants of this system with fewer or larger adapters. We compare with a _cascade_ baseline, in which we first perform _Taq-Fr_ ST, followed by _Fr-En_ or _Fr-Ko_ MT using the text-to-text path from Figure 1. In this setting, the adapters are disabled during MT.
Footnote 17: For instance, we observe that these generated translations contain both the Korean transliteration in Hangul of named entities and the original version in the Latin script. This will likely penalize our produced translation during scoring.
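For completeness, scoring against such silver-standard references can be done with a few lines of sacreBLEU; the snippet below is only a sketch, and the exact tokenization and chrF settings used for the official metrics are not reproduced here.

```python
import sacrebleu

hyps = ["the cat sat on the mat"]   # detokenized system outputs
refs = ["the cat sat on the mat"]   # silver-standard references, same order

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}  chrF = {chrf.score:.1f}")
```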
**Results.** In Table 7, we measure the zero-shot translation capabilities of our approach on this silver-standard test set. We evaluate four models: our **contrastive 1** submission presented in Section 4.1, and variants of this model with increased adapter size, adapters only in the encoder, or no adapters. We compare against a cascade baseline that is not zero-shot, which consists of translating the Tamasheq speech into French text and then translating this text into English or Korean.
We observe that, in the case of English, which was seen during ST adaptation, adapters can be helpful (+2 BLEU over the cascade baseline). On the other hand, for Korean, unseen during ST adaptation, systems with adapters in the decoder (first two rows) perform worse, as they likely bring some degree of _language confusion_. Results are even worse with larger adapters, with over 40% of output sentences being in the wrong language. In this setting, the best results are achieved with only encoder adapters or no adapters at all (-1 BLEU compared to the baseline).
Appendix Table 13 measures the percentage of output sentences in the correct language and the percentage of Hangul versus Latin characters in each system's outputs. We find that models with adapters in the decoder (first two rows) generate more Latin characters.
| Adapter size | Encoder adapters | Decoder adapters | Taq-Fr BLEU | Taq-En BLEU | Taq-Ko BLEU | Taq-Fr chrF | Taq-En chrF | Taq-Ko chrF |
|---|---|---|---|---|---|---|---|---|
| 64 | ✓ | ✓ | 19.1 | **17.1** | 12.6 | 44.2 | 40.8 | 18.2 |
| 128 | ✓ | ✓ | 19.2 | 16.7 | 9.6 | **44.7** | 40.3 | 14.5 |
| 64 | ✓ | ✗ | **19.3** | 16.8 | 14.6 | 44.4 | **42.4** | 21.5 |
| ✗ | ✗ | ✗ | 17.5 | 16.2 | 14.4 | 43.0 | 40.8 | 21.5 |
| ST (contrastive 1) + MT (NLLB 1.3B) cascade | | | ✗ | 15.0 | **15.7** | ✗ | 38.6 | **22.2** |

Table 7: BLEU and chrF results for _Taq-[Fr, En, Ko]_ using **contrastive 1** and its variants (models trained without adapters or with larger adapters), on the IWSLT 2022 _Taq-Fr_ test set or silver-standard Korean and English references obtained with MT. The last row is a cascade of speech translation followed by text translation (Taq\(\rightarrow\)Fr\(\rightarrow\)X).
Note that the ideal translation is not necessarily 100% Hangul, as it might sometimes be best to keep the foreign named entities in the Latin alphabet. Table 8 illustrates this with a few examples of translations from our **contrastive 1** system.
## 6 Conclusion
In this paper, we presented our parameter-efficient multilingual systems as submissions to the IWSLT 2023 Low-Resource Task in the Tamasheq-French and Quechua-Spanish language pairs. The architecture we propose has several advantages: it is computationally and data efficient, it allows the same model to do both speech-to-text and text-to-text translation (or transcription), it maximizes knowledge transfer to improve low-resource performance, and it has good zero-shot translation capabilities. Our submissions reach a new state-of-the-art performance, winning both speech translation challenges, especially for Tamasheq-French, where we outperform the previous state of the art by more than 7 BLEU points.
Future work will include a comprehensive evaluation of the ASR capabilities of our architecture, and the investigation of adapters inside the speech representation model. Moreover, when the speech representation model is frozen, a more in-depth analysis of the optimal layer is needed.
## Acknowledgements
This work was partially funded by the European Horizon 2022 project UTTER (Unified Transcription and Translation for Extended Reality), under grant agreement No 101070631.
|
2302.04945 | Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples | Uncertainty analysis in the outcomes of model predictions is a key element in
decision-based material design to establish confidence in the models and
evaluate the fidelity of models. Uncertainty Propagation (UP) is a technique to
determine model output uncertainties based on the uncertainty in its input
variables. The most common and simplest approach to propagate the uncertainty
from a model inputs to its outputs is by feeding a large number of samples to
the model, known as Monte Carlo (MC) simulation which requires exhaustive
sampling from the input variable distributions. However, MC simulations are
impractical when models are computationally expensive. In this work, we
investigate the hypothesis that while all samples are useful on average, some
samples must be more useful than others. Thus, reordering MC samples and
propagating more useful samples can lead to enhanced convergence in statistics
of interest earlier and thus, reducing the computational burden of UP process.
Here, we introduce a methodology to adaptively reorder MC samples and show how
it results in reduction of computational expense of UP processes. | Danial Khatamsaz, Vahid Attari, Raymundo Arroyave, Douglas L. Allaire | 2023-02-09T21:28:15Z | http://arxiv.org/abs/2302.04945v1 | # Efficient Propagation of Uncertainty via
###### Abstract
Uncertainty analysis in the outcomes of model predictions is a key element in decision-based material design to establish confidence in the models and evaluate the fidelity of models. Uncertainty Propagation (UP) is a technique to determine model output uncertainties based on the uncertainty in its input variables. The most common and simplest approach to propagate the uncertainty from a model inputs to its outputs is by feeding a large number of samples to the model, known as Monte Carlo (MC) simulation which requires exhaustive sampling from the input variable distributions. However, MC simulations are impractical when models are computationally expensive. In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others. Thus, reordering MC samples and propagating more useful samples can lead to enhanced convergence in statistics of interest earlier and thus, reducing the computational burden of UP process. Here, we introduce a methodology to adaptively reorder MC samples and show how it results in reduction of computational expense of UP processes.
## 1 Introduction
In many engineering applications, decision-making processes rely on numerical simulation models. Most often, inputs to numerical models have some sort of uncertainty that induce uncertainty in model outputs. Thus, characterization, propagation, and analysis of uncertainty is a crucial step in any model development task. Understanding uncertainties enables providing a confidence measure to evaluate the applicability of different computational models for decision-making. Uncertainty quantification (UQ) and uncertainty propagation (UP) are recognized as essential components in many engineering applications where UQ refers to understanding uncertainty sources and UP refers to determining output uncertainty of a model due to uncertainties of input variables.
The most common and simplest approach to propagate the uncertainty from input to output is by feeding a large number of inputs to numerical models, known as Monte Carlo (MC) simulation. Based on the strong law of large
numbers and the central limit theorem, convergence in the distribution of a quantity of interest is expected. MC integration methods are known as the gold standard approach to carry out UP [1; 2]. However, the computational expense associated with MC simulations makes such methods prohibitive and impractical in many engineering applications. To mitigate the computational burden of MC simulations, other methods have been developed such as importance sampling [3] and adaptive sampling [4]. Other approaches to carry out UP are local expansion-based methods [5] that are weak against large variability of inputs, functional expansion-based methods [6], and numerical integration-based methods [7]. Change of probability measure from a desired input distribution is another technique to handle UP problems [8; 9; 10]. There are different ways to transfer a proposal measure to a target measure and one widely used method is the use of Radon-Nikodym (R-N) derivative [11; 12]. A change of measure based on R-N theory is performed by calculating importance weights using the density ratio of target to proposal densities. Note that although the density ratio cannot be calculated via a closed-form expression when underlying probability distributions are unknown, the R-N theory still applies. Accordingly, a sample-based approach has been proposed in Ref.[9]. The idea is to generate a large number of hypercubes different in size all over the input space. The density ratio of target and proposal samples is calculated by counting samples inside each hypercube. Next, a system of linear equations, one equation per hypercube, is solved to obtain the weights. Via sampling from weighted proposal samples, the empirical distribution of target samples is approximated. Another approach is proposed in Ref. [13] that works with determinable empirical distribution functions. They calculate importance weights by minimizing the L\({}_{2}\)-norm between a weighted proposal empirical distribution and a target distribution function. Although this approach claims to be effective in high-dimensional and large-scale problems, as many samples occupy the boundaries of high-dimensional spaces, numerical ill-conditioning eventually happens. Although implementing a change of measure method enables efficiency gains by skipping the propagation of target samples to computational models, it requires the availability of previously simulated data using the same model on identical input-output spaces. In scenarios where no such set of data or proposal samples exist, there is no choice but directly propagating target samples through computational models.
In this study, we propose an efficient approach to mitigate the computational burden of MC simulation methods for uncertainty propagation purposes. Assume that there exists a large set of samples yet to be propagated through a computational model to obtain the empirical distribution of the model's outputs. While all samples are important on average, the hypothesis here is that some samples can be more useful in representing the empirical distribution of all samples. In other words, the addition or elimination of a particular sample has an impact on the empirical distribution of all the samples, but this impact is not the same for every sample. Herein, the goal is to determine the importance of samples based on their role in defining the empirical distribution of all the samples. Therefore, it is possible to re-order samples based on their importance and propagate them through a model sequentially. Our approach suggests an efficient use of resources by picking the most informative samples when evaluation of all samples is not practical.
The rest of the paper proceeds as follows. In Sec. 2, we introduce the proposed framework to reorder samples of a given set based on their importance in representing the empirical distribution of all samples. Next, in Sec. 3, the application is demonstrated on an engineering problem that requires running a computationally expensive simulation model. Finally, in Sec. 4, we provide concluding remarks and discuss avenues of future works.
## 2 Methodology
In this section, we discuss our proposed method in detail and provide algorithms for easy implementation of the sequentially optimal sampling concept. The method can be applied to any set of samples regardless of the dimensionality and distribution of samples.
In Algorithm 1, the different steps of the method are stated. Assume that we have available a set of samples S yet to be propagated through a model. We are interested in determining the importance of each sample so as to re-order the samples accordingly. Thus, by sequentially propagating them through a model of interest, we ensure that once the computational resources are exhausted, we obtain the empirical distribution of a quantity of interest with the most similarity to the case where all samples had been propagated. The algorithm starts by initializing sets P and R to represent the set of sequentially picked samples and the set of samples yet to be picked, respectively. At every iteration, samples from the set R are temporarily augmented to the set P one by one. The goal is to find the sample that minimizes the dissimilarity between the empirical distributions of the temporarily updated set of picked samples and the set S. Here, we use the Wasserstein metric for this purpose, where **W** = \([w_{1},w_{2},...,w_{d}]\) is the vector whose entry \(w_{i}\) is the Wasserstein distance between the two empirical distributions in the \(i^{\text{th}}\) dimension of a \(d\)-dimensional space. At every iteration, the minimizer of \(||\textbf{W}||_{1}:=\sum\limits_{i=1}^{d}w_{i}\) is picked to be added to the set P and removed from the set R. In Algorithm 1, the function "Wass" takes samples from both sets and calculates the Wasserstein distances. We suggest using the Manhattan norm of the Wasserstein distance vector (L\({}_{1}\)-norm) to avoid the dominance of a large Wasserstein distance in a single dimension, which may cause diminishing reductions in the Wasserstein distances of the other dimensions. We call this technique the "Adaptive Sampling Method".
```
given: sample set S = {s_1, s_2, ..., s_n}
P = {}                      # set of sequentially picked samples
R ← S - P
while R ≠ ∅ do
    s_picked = argmin over s_i ∈ R of ||Wass(S, P + s_i)||_1
    P ← P + s_picked
    R ← S - P
end while
```
**Algorithm 1** Adaptive sampling to re-order planned Monte Carlo samples
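A minimal NumPy/SciPy sketch of Algorithm 1 is given below; it assumes the samples are the rows of a 2-D array and uses SciPy's one-dimensional Wasserstein distance per dimension. It is a direct, unoptimized transcription of the pseudocode, so the quadratic number of distance evaluations makes it slow for large sample sets.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wass_l1(S, P):
    """Manhattan (L1) norm of the per-dimension Wasserstein distances between
    the empirical distributions of the full set S and the picked set P."""
    return sum(wasserstein_distance(S[:, j], P[:, j]) for j in range(S.shape[1]))

def adaptive_reorder(S):
    """Algorithm 1: greedily re-order the rows of S (n samples x d dimensions)."""
    remaining = list(range(len(S)))
    picked = []
    while remaining:
        best = min(remaining, key=lambda i: wass_l1(S, S[picked + [i]]))
        picked.append(best)
        remaining.remove(best)
    return picked  # indices of S in the order they should be propagated
```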
By implementing Algorithm 1, assuming the goal is to re-order \(n\) samples, the algorithm has to complete \(\frac{n(n+1)}{2}-1\) iterations, a number that grows quadratically with the number of samples. In such cases, instead of identifying the best sample at each iteration, it is suggested to look for the best batch of samples to update the set P. Algorithm 2 shows the different steps in the batch setting. Considering \(b\) as the batch size, \(k\) different batches of samples are randomly generated by picking \(b\) random samples from the set R. Then, instead of augmenting a single sample, a batch of samples is temporarily augmented to the set P to calculate the Wasserstein distance between the sets S and P. The best batch of samples is determined to update the set P and to be removed from the set R accordingly.
```
given: sample set S = {s_1, s_2, ..., s_n}
P = {}                      # set of sequentially picked samples
batch size b, number of batches to generate k
R ← S - P
while R ≠ ∅ do
    b_picked = argmin over the k random batches b_i ⊆ R of ||Wass(S, P + b_i)||_1
    P ← P + b_picked
    R ← S - P
end while
```
**Algorithm 2** Adaptive sampling to re-order planned Monte Carlo samples in batch setting
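The batch variant can be sketched in the same style, reusing `wass_l1` from the previous listing; the batch size and number of candidate batches below are illustrative defaults only.

```python
import numpy as np
# uses wass_l1 from the previous sketch

def adaptive_reorder_batched(S, batch_size=200, n_batches=1000, seed=0):
    """Algorithm 2: at each iteration, draw random candidate batches from the
    remaining samples and append the batch whose addition best matches S."""
    rng = np.random.default_rng(seed)
    remaining = list(range(len(S)))
    picked = []
    while remaining:
        if len(remaining) <= batch_size:      # the last batch is forced
            picked.extend(remaining)
            break
        candidates = [rng.choice(remaining, size=batch_size, replace=False)
                      for _ in range(n_batches)]
        best = min(candidates, key=lambda b: wass_l1(S, S[picked + list(b)]))
        picked.extend(int(i) for i in best)
        chosen = set(int(i) for i in best)
        remaining = [i for i in remaining if i not in chosen]
    return picked
```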
## 3 Demonstration
A moving boundary problem for the study of interface evolution during spinodal decomposition in alloys is used to demonstrate the framework developed in this study. The model is based on a free energy model for heterogeneous medium accounting for bulk and interfacial free energies,
\[F^{tot}(c,\nabla c)=\int_{V}[f_{bulk}+\frac{\kappa}{2}(\nabla c)^{2}]dV \tag{1}\]
where \(c\) is alloy composition, \(\kappa\) is gradient energy coefficient, and \(f_{bulk}\) is the bulk free energy function given as,
\[f_{bulk}=W(c-c_{\alpha})^{2}(c-c_{\beta})^{2} \tag{2}\]
where \(W\) is the barrier height of phase transformation, and \(c_{\alpha}\) and \(c_{\beta}\) are the equilibrium composition of the product phases that are set to 0.3 and 0.7, respectively. Through high-throughput phase-field simulations, time series of synthetic microstructures will be generated for the investigation of parameter space on the microstructure landscape of a hypothetical alloy during isothermal thermal annealing. The boundary value problem follows:
\[\frac{\partial c}{\partial t}=\nabla\cdot\left\{M\nabla\left(\frac{\partial f_{bulk}}{\partial c}-\kappa\nabla^{2}c\right)\right\},\qquad 0<x,y<L_{x},L_{y},\quad 0<t<t^{*} \tag{3}\]
\[\text{BC: }\;c(0,y,t)=c(L_{x},y,t),\quad c(x,0,t)=c(x,L_{y},t);\qquad\text{IC: }\;c(x,y,0)=c^{*}+A\zeta\]
where \(M\) is the inherently positive effective atomic mobility of the species. The lengths of the simulation domain are set to \(L_{x}=L_{y}=200\) with the grid size of \(256\times 256\) and \(t^{*}\) is the final model run time. BC and IC denote the used boundary and initial conditions, respectively. \(c^{*}\) is the initial average value of the alloy composition perturbed by a constant noise magnitude \(A\), and \(\zeta\) is a Gaussian random number with the interval of \([-1,+1]\). The simulations were carried out using combinations of \([c^{*},W,\kappa,M]\) parameter sets. The material properties, such as barrier height of transformation, mobility, and gradient energy coefficient for a given alloy with composition (\(c^{*}\)) are often highly uncertain or not available. As a result, a prior distribution with a certain physical range is necessary to be taken into account. Our assumed distributions for these parameters are shown in Fig. 1.
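To make the boundary value problem concrete, below is a minimal NumPy sketch of one explicit Euler update of Eq. (3) on a periodic grid. The bulk term is taken as the standard quartic double well \(W(c-c_{\alpha})^{2}(c-c_{\beta})^{2}\) with minima at the two equilibrium compositions; the time step, parameter values, and the simple explicit scheme are illustrative only and are not the solver settings used in this study.

```python
import numpy as np

def laplacian(f, dx=1.0):
    """5-point Laplacian with periodic boundary conditions."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def cahn_hilliard_step(c, W, kappa, M, dt, dx=1.0, c_a=0.3, c_b=0.7):
    """One explicit update of dc/dt = div( M grad( df_bulk/dc - kappa lap(c) ) )."""
    dfdc = 2.0 * W * (c - c_a) * (c - c_b) * (2.0 * c - c_a - c_b)  # derivative of the double well
    mu = dfdc - kappa * laplacian(c, dx)                            # chemical potential
    return c + dt * M * laplacian(mu, dx)

rng = np.random.default_rng(0)
c = 0.5 + 0.01 * rng.uniform(-1.0, 1.0, size=(256, 256))   # c* perturbed by noise of amplitude A
for _ in range(500):
    # explicit scheme: the time step must stay small for numerical stability
    c = cahn_hilliard_step(c, W=1.0, kappa=0.5, M=1.0, dt=0.05)
```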
Figure 1: Proposed distributions for input parameters of the phase-field model (i.e., \([c^{*},W,\kappa,M]\))
In this phase-field model, the direct output is time-series images of microstructures, each with a dimension of \(256\times 256\). For further evaluation of the microstructures, these images are often condensed into a reduced set of physical and non-physical Quantities of Interest (QoI). The conventional reduction of image information is often a one-way transfer without the possibility of inverse transfer from QoI to microstructure image. Due to this condensation, materials' properties and performance are subject to significant uncertainty. Our study determines the area fraction of the phases, the composition of each phase, and the characteristic length scale of the microstructure from radially averaged Fast Fourier Transform (FFT) spectra [14]. The probability density functions for these QoIs are extracted for a constant heat treatment duration (i.e., \(t^{*}\)) and are shown in Fig. 2. These posterior distributions show a diverse range of values for quantities of interest. For instance, the sharp peaks in Fig. 2(c) and (d) show the equilibrium composition of 0.3 and 0.7 for the two product phases as dictated by the free energy. Some simulations, however, could also produce non-equilibrium composition values due to an uncertain set of kinetic and thermodynamic parameters.
Therefore, due to inherent parametric uncertainties, nonlinearity, and difficulties in the post-processing of microstructure images, phase-field simulations are computationally expensive to run. Moreover, advanced phase-field models often combine several order parameters and multiphysics interactions (e.g., thermal, electrical, mechanical, magnetic). This increased degree of complexity in the numerical and parametric calculation of these models often results in the uncertain evaluation of the material's property and performance space during modeling real-world processes (e.g., additive manufacturing [15], memristive materials for brain-like (neuromorphic) computing [16], electrodeposition reaction kinetics in battery materials [17], solder interconnect joint formation [18] and electromigration [19], microstructure evolution in thermoelectric materials for energy conversion [20], to name a few).
Fig. 3: Manhattan distance of Wasserstein metric between empirical distributions of sequentially picked samples (adaptive and random sampling) and all samples
Fig. 2: Probability density functions of Quantities of Interest (QoI) extracted from microstructure images generated by the phase-field model. (a) Area fraction of phases, (b) radially averaged FFT structure descriptor and composition of (c) phase \(\alpha\) and (d) phase \(\beta\) in the simulation domain
It is important to note that Eq. 3 (i.e., the Cahn-Hilliard equation) and some of its variants are also relevant to phenomena other than phase separation in materials. For instance, tumor growth [21], population dynamics [22], image processing [23] and even the irregular structure in Saturn's rings [24] are some noteworthy examples.
We seek to enhance the traditional Monte Carlo sampling methods to enable their use when faced with computationally expensive phase-field models. We therefore consider the problem of enhancing the convergence rate of Monte Carlo simulations by creating algorithms that ensure optimal convergence of sequentially sampled input vectors. Here, we have available a set of 5000 samples, and we implement the adaptive sampling method in the batch setting to re-order the samples. We then sequentially propagate them through the model. The batch size is set to 200 and, at every iteration, 20,000 different batches are generated (note that in the last iteration, only 200 samples remain in the set R, so it can be skipped by simply augmenting the last batch to the set P). The simulations are replicated 100 times. The Manhattan distance of the Wasserstein metric (L\({}_{1}\)-norm) is plotted in Fig. 3. For comparison purposes, the result of a random sampling policy is also illustrated. There are two key points in Fig. 3.
Fig. 4: Mean and confidence interval of Wasserstein distance between empirical distributions of sequentially picked samples (adaptive and random sampling) and all samples in the output space
First, using the adaptive sampling policy to pick the best samples, the Wasserstein distance between the set of picked samples and the set of all samples is significantly smaller than under the random sampling policy. This means the same measure of similarity between the two sets is achieved using far fewer samples, which emphasizes that some samples are more useful (more informative about the distribution). Thus, based on these results, the same inference about the distribution can be made with fewer samples if they are picked optimally. The second key point is the narrow confidence interval of the adaptive sampling method, which indicates that almost the same set of samples is consistently recognized across replications. Note that multiple replications only apply to the batch setting: at every replication, different sets of batches are generated, whereas if the samples are picked one by one, all replications return exactly the same order of samples.
In the next step, samples are propagated through the model to obtain the empirical distributions of all quantities of interest. Figure 4 illustrates Wasserstein distances comparing the adaptive sampling and random sampling policies. Here, the confidence intervals for both policies are wider since the distances between two samples in the input space and output space are different. However, significant efficiency gains are still observed when comparing the number of propagated samples required to achieve the same Wasserstein distance under the adaptive and random sampling policies.
As mentioned earlier, one can search for the most useful batch of samples to pick among different generated batches. To investigate the impact of batch size on the performance of the framework, we have performed adaptive sampling using different batch sizes.
The results are depicted in Fig. 5. Reducing the batch size (i.e., increasing the resolution) improves the similarity between the empirical distributions of sequentially picked samples and all samples. However, note that even with the largest batch size, after one iteration, the difference is minimized since the framework recognizes and picks a batch with the most useful samples anyway. Therefore, if the goal is to pick the smallest number of samples possible, a smaller batch size is beneficial, while for more relaxed conditions, a larger batch size can also do the job. The best result is achieved when samples are picked sequentially one by one. The trade-off here is that smaller batch sizes require more iterations to complete the process. In our problem, since the number of samples is not drastically large (5,000 samples), Wasserstein metric calculations take almost the same computational time at any iteration; thus the wall-time increases almost linearly with the number of iterations.
## 4 Conclusions and Future Work
Although MC simulations offer a simple approach to propagate uncertainty from a model's inputs to its outputs, running thousands of simulations is impractical in many engineering applications. In this work, we introduced the concept of re-ordering MC samples based on their usefulness in representing the empirical distribution of all samples. In this sense, while all samples are important on average, some samples are more informative.
Figure 5: Comparison of Wasserstein distances between sequentially picked and all samples at different batch sizes
We proposed to determine the importance of samples based on their impact on the Wasserstein distance between the sets of all and sequentially picked samples. The more informative a sample is, the more reduction in Wasserstein distance is observed. After re-ordering all samples, they are sequentially propagated through a computational model. The results show significant efficiency gains in comparison to random sample propagation. We also provided the method in the batch setting to decrease computational time by recognizing informative batches of samples instead of testing samples one by one. The results of simulations using different batch sizes suggest that using smaller batch sizes increases efficiency by effectively picking only highly informative samples. However, the batch size effect will be diminished after a few iterations as the framework will pick batches with important samples quickly.
In this work, the assumption is that samples are generated in the first place and then, we re-order samples before propagating them to computational models. The subject of future work is to propose methods to sample from a distribution consciously instead of generating random samples from input variable distributions. Therefore, efficiency gains are expected in scenarios where sampling from distributions can be computationally demanding.
## Acknowledgements
The authors acknowledge the support of the National Science Foundation through Grant No. CDSE-2001333 and DMR-1905325, as well as ARPA-E through contract DE-AR0001427. Calculations were carried out at the Texas A&M High-Performance Research Computing (HPRC) Facility.
|
2307.08779 | Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation | Low-light conditions not only hamper human visual experience but also degrade
the model's performance on downstream vision tasks. While existing works make
remarkable progress on day-night domain adaptation, they rely heavily on domain
knowledge derived from the task-specific nighttime dataset. This paper
challenges a more complicated scenario with border applicability, i.e.,
zero-shot day-night domain adaptation, which eliminates reliance on any
nighttime data. Unlike prior zero-shot adaptation approaches emphasizing either
image-level translation or model-level adaptation, we propose a similarity
min-max paradigm that considers them under a unified framework. On the image
level, we darken images towards minimum feature similarity to enlarge the
domain gap. Then on the model level, we maximize the feature similarity between
the darkened images and their normal-light counterparts for better model
adaptation. To the best of our knowledge, this work represents the pioneering
effort in jointly optimizing both aspects, resulting in a significant
improvement of model generalizability. Extensive experiments demonstrate our
method's effectiveness and broad applicability on various nighttime vision
tasks, including classification, semantic segmentation, visual place
recognition, and video action recognition. Code and pre-trained models are
available at https://red-fairy.github.io/ZeroShotDayNightDA-Webpage/. | Rundong Luo, Wenjing Wang, Wenhan Yang, Jiaying Liu | 2023-07-17T18:50:15Z | http://arxiv.org/abs/2307.08779v3 | # Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation
###### Abstract
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks. While existing works make remarkable progress on day-night domain adaptation, they rely heavily on domain knowledge derived from the task-specific nighttime dataset. This paper challenges a more complicated scenario with border applicability, i.e., **zero-shot** day-night domain adaptation, which eliminates reliance on any nighttime data. Unlike prior zero-shot adaptation approaches emphasizing either image-level translation or model-level adaptation, we propose a similarity min-max paradigm that considers them under a unified framework. On the image level, we darken images towards minimum feature similarity to enlarge the domain gap. Then on the model level, we maximize the feature similarity between the darkened images and their normal-light counterparts for better model adaptation. To the best of our knowledge, this work represents the pioneering effort in jointly optimizing both levels, resulting in a significant improvement of model generalizability. Extensive experiments demonstrate our method's effectiveness and broad applicability on various nighttime vision tasks, including classification, semantic segmentation, visual place recognition, and video action recognition. Our project page is available at [https://red-fairy.github.io/ZeroShotDayNightDA-Webpage/](https://red-fairy.github.io/ZeroShotDayNightDA-Webpage/).
## 1 Introduction
Deep neural networks are sensitive to insufficient illumination, and such deficiency has posed significant threats to safety-critical computer vision applications. Intuitively, insufficient illumination can be handled by low-light enhancement methods [23, 30, 34, 56, 63, 60], which aim at restoring low-light images to normal-light. However, enhancement models do not necessarily benefit downstream high-level vision tasks as they are optimized for human visual perception and neglect the need for machine vision.
Much existing literature has focused on improving machine vision performance at night through domain adaptation. By aligning the distribution statistics between the nighttime and daytime datasets through image translation [2, 12, 45], self-supervised learning [52, 53], or multi-stage algorithms [46, 47, 10], these methods have greatly improved models' performance in nighttime environments. The primary assumption of domain adaptation is that the target domain data is readily available. Nevertheless, obtaining data from the task-specific target domain may be challenging in extreme practical application scenerios such as deep-space exploration and deep-sea analysis.
To reduce the requirement on target domain data, **zero-shot** domain adaptation has emerged as a promising research direction, where adaptation is performed without accessing the target domain. Regarding day-night domain adaptation, the primary challenge is learning illumination-robust representations generalizable to both day and night modalities. To accomplish this goal under zero-shot constraints, Lengyel _et al_. [29] proposed a color invariant convolution for handling illumination changes. Cui _et al_. [8] designed a Reverse ISP pipeline and generated synthetic nighttime images with pseudo labels. However, image-level methods simply consider synthetic nighttime as pseudo-labeled data and overlook model-level feature extraction; model-level methods focus on adjusting model architecture but neglect image-level nighttime characteristics. Neither is
Figure 1: Left: Illustration of our similarity min-max framework for zero-shot day-night domain adaptation. Right: Our framework achieves state-of-the-art results on multiple downstream high-level vision tasks without seeing real nighttime images during training.
effective enough to capture the illumination-robust representations that could bridge the complex day-night domain gap.
From this point of view, we devise a similarity min-max framework that involves two levels, as illustrated in Figure 1. On the image level, we generate a synthetic nighttime domain that shares minimum feature similarity with the daytime domain to enlarge the domain gap. On the model level, we learn illumination-robust representations by maximizing the feature similarity of images from the two domains for better model adaptation.
Intuitive as it seems, solving this bi-level optimization problem is **non-trivial**. Directly solving it may yield unsatisfactory results, _e.g._, meaningless images filled with zero values or identical features given all inputs. Therefore, we develop a stable training pipeline that can be considered a sequential operation on both the image and the model. Regarding the image, we propose an exposure-guided module to perform reliable and controllable nighttime image synthesis. Regarding the model, we align the representation of images from day and night domains through multi-task contrastive learning. Finally, our model achieves day-night adaptation without seeing real nighttime images.
Our framework can serve as a plug-and-play remedy to existing daytime models. To verify its effectiveness, we conduct extensive experiments on multiple high-level nighttime vision tasks, including classification, semantic segmentation, visual place recognition, and video action recognition. Results on various benchmarks demonstrate our superiority over the state-of-the-art.
Our contributions are summarized as follows:
* We propose a similarity min-max framework for zero-shot day-night domain adaptation. Feature similarity between the original and darkened images is minimized by image-level translation and maximized by model-level adaptation. In this way, the model's performance at nighttime is improved without accessing real nighttime images.
* We develop a stable training pipeline to solve this bi-level optimization problem. On the image level, we propose an exposure-guided module to perform reliable and controllable nighttime image synthesis. On the model level, we align the representation of images from day and night domains through multi-task contrastive learning.
* Our framework universally applies to various nighttime high-level vision tasks. Experiments on classification, semantic segmentation, visual place recognition, and video action recognition demonstrate the superiority of our method.
## 2 Related Works
**Low-Light Enhancement.** A straightforward approach to improve the model's performance in low light is brightening the test low-light images. Early non-learning practices exploit image processing tools such as histogram equalization [40] or image formation theories such as Retinex Theory [44]. Recent literature mainly takes advantage of the advance in deep learning. Trained on paired day-night data, some methods [55, 33, 56] simulate the image decomposition process of Retinex Theory. Others introduce adversarial learning [23] to support unpaired training. Zero-DCE [16, 30] designs a curve-based low-light enhancement model and trains in a zero-reference way. Advanced techniques, including frequency decomposition [24], feature pyramids [60, 63], and flow models [54] are also adopted in recent papers.
**Day-Night Domain Adaptation.** Nighttime high-level vision has attracted increasing attention in recent years. Apart from pre-processing with enhancement models, day-night domain adaptation is also a viable solution. YOLO-in-the-dark [47] introduces the glue layer to mitigate the day-night domain gap. MAET [8] exploits image signal processing (ISP) for nighttime image generation and uses both synthetic and real nighttime images for training. HLA-face [52] proposes a joint high-low adaptation framework driven by self-supervised learning. Others [37, 45, 57, 2, 54] employ Generative Adversarial Network (GAN) to transfer labeled daytime data to nighttime.
**Zero-Shot Day-Night Domain Adaptation.** Beyond conventional adaptation, **zero-shot** approaches consider an even stricter condition where real nighttime images are inaccessible. For general tasks, existing methods either draw support from extra task-irrelevant source and target domain data pairs [39, 51] or require the underlying probability distribution of the target domain [20], which are inapplicable to our settings. For the day-night task, Lengyel _et al._ propose the Color Invariant Convolution (CIConv) [29] to capture illumination-robust features. MAET [8] can be viewed as zero-shot when real nighttime images are discarded during finetuning. Besides, domain generalization methods [1, 6, 19, 26, 31, 38, 62] also apply to our settings since they do not know target domains, but they are too general to handle the complex day-night domain gap.
Despite these advances, low-light enhancement concentrates on human vision and disregards downstream nighttime vision tasks. Conventional adaptation methods require task-specific nighttime datasets, which creates extra burdens on data collection and limits their generalizability to multiple tasks. Prior zero-shot adaptation methods fail to consider image-level and model-level jointly. In this paper, we propose a novel similarity min-max framework that could outperform existing methods by a large margin.
## 3 Similarity Min-Max Optimization
This section introduces our approach for zero-shot day-night domain adaptation. We first explain our motivation,
then introduce the overall framework and detailed designs.
### Motivation
Existing methods, generally categorized into Operator-based and Darkening-based as shown in Figure 2, run into trouble on the day-night domain adaptation problem. Operator-based methods [29] rely on manually designed operators _at the model level_ to handle illumination variations, which are not adaptive to real complex scenarios. Darkening-based methods transfer labeled daytime data to nighttime by ISP [8] or GAN [2, 28, 45, 46] only _at the image level_. However, the former is sensor-dependent and cannot generalize across devices and datasets, while the latter requires data from the task-specific nighttime domain and thus fails to generalize to our zero-shot setting.
Intrinsically, the most critical issue of existing methods is their ignorance of the mutual effect between **pixels** and **features**. In our work, we make the first systematic investigation on this issue and propose a similarity min-max framework that thoroughly exploits the information from two sides. In detail, _at the pixel (image) level_, we minimize the feature similarity between original and darkened images by day-to-night translation. While _at the feature (model) level_, we maximize the feature similarity by representation alignment. This joint optimization leads to representations more robust to illumination changes.
We formulate our framework as follows. Denote the feature extractor of the downstream model as \(F(\cdot)\). Being robust to illumination requires the extracted feature of a daytime image \(I\) and its nighttime version \(D(I)\) to be similar, where \(D(\cdot)\) represents a darkening process. The limitation of existing darkening-based methods is that their \(D\) does not consider the co-effect of \(F\). So we introduce additional constraints on \(D\): we require \(D\) to minimize the similarity between the day feature \(F(I)\) and the night feature \(F(D(I))\). This way, we guide the darkening process with high-level vision, forming a unified framework of \(D\) and \(F\). At this point, we can integrate \(D\) and \(F\) as a min-max optimization problem:
\[\max_{\theta_{F}}\min_{\theta_{D}}\quad\mathrm{Sim}(F(I),F(D(I))), \tag{1}\]
where \(\theta_{D}\) and \(\theta_{F}\) denote the parameters in \(D\) and \(F\), and \(\mathrm{Sim}(\cdot,\cdot)\) measures the similarity between features.
However, trivial solutions exist in Eq. (1), such as \(D\) generating entirely black images and \(F\) extracting identical features for all inputs. We add regularizations to \(D\) and \(F\) accordingly to address this problem:
\[\max_{\theta_{F}}\min_{\theta_{D}}\ \mathrm{Sim}(F(I),F(D(I)))+\mathcal{R}_{D} (\theta_{D})-\mathcal{R}_{F}(\theta_{F}), \tag{2}\]
where \(\mathcal{R}_{D}\) and \(\mathcal{R}_{F}\) are intended to prevent model collapse.
How to design \(\mathcal{R}_{D}\) and \(\mathcal{R}_{F}\) properly is the key to solving Eq. (2). The following will introduce how we design \(\mathcal{R}_{D}\) and \(\mathcal{R}_{F}\) and build up the whole learning framework.
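As an illustration of how the min-max objective of Eq. (2) can be optimized in practice, the following PyTorch-style sketch alternates the two updates with cosine similarity as \(\mathrm{Sim}(\cdot,\cdot)\). The regularizers \(\mathcal{R}_{D}\) and \(\mathcal{R}_{F}\) and all architectural details are omitted, so this is only a schematic of the min-max structure, not the full training pipeline described below.

```python
import torch
import torch.nn.functional as F

def feature_similarity(f_day, f_night):
    # cosine similarity between (batch, dim) feature vectors, averaged over the batch
    return F.cosine_similarity(f_day, f_night, dim=1).mean()

def minmax_step(extractor, darkener, images, opt_f, opt_d):
    """One alternating update of the similarity min-max objective (regularizers omitted)."""
    # image level: the darkening module D minimizes the day/night feature similarity
    opt_d.zero_grad()
    sim = feature_similarity(extractor(images).detach(), extractor(darkener(images)))
    sim.backward()
    opt_d.step()

    # model level: the feature extractor F maximizes the same similarity
    opt_f.zero_grad()
    sim = feature_similarity(extractor(images), extractor(darkener(images).detach()))
    (-sim).backward()
    opt_f.step()
```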
### Image-Level Similarity Minimization
This section describes our design for the darkening module \(D\). We want \(D\) to satisfy three properties:
* **Stability**. First and foremost, we need to prevent the similarity min-max optimization from collapsing, _i.e_., applying proper \(\mathcal{R}_{D}\) in Eq. (2).
* **Generalization**. \(D\) should represent a generalized darkening process so the downstream model can learn useful knowledge from \(D(I)\) to handle unseen nighttime scenes.
* **Flexibility**. We additionally expect flexible control over the degree of darkening, which could enable us to create diverse inputs beneficial for optimizing \(F\).
We design an exposure-guided pixel-wise mapping algorithm to satisfy the above properties. Unlike widely-used image-to-image darkening approaches [2, 46, 28] that rely heavily on real nighttime images, pixel-wise mapping adjusts images using a pre-selected function with learnable parameters. We empirically found that, by setting proper constraints on the mapping function, we can naturally avoid obtaining trivial solutions in the similarity min-max optimization (_stability_) and guarantee \(D\) follows a typical low-light process (_generalization_). Finally, we add an exposure guidance mechanism for better _flexibility_. The detailed design will be illustrated as follows.
**Darkening Process.** We first define a general function for tone mapping. Given an image \(I\in[0,1]^{C\cdot H\cdot W}\), we use a non-linear mapping \(f\): \([0,1]\rightarrow[0,1]\) and a pixel-wise adjustment map \(\mathcal{A}\in[0,1]^{C\cdot H\cdot W}\) to process the image:
\[D^{0}(I)=f(I,\mathcal{A}). \tag{3}\]
Typically, \(f\) should be monotonically increasing to preserve contrast and satisfy \(f(1,\alpha)=1\) for all \(\alpha\) to avoid information loss (_e.g_., gamma correction). However, the latter constraint \(f(1,\alpha)=1\) no longer holds for darkening.
Figure 2: Comparison between different learning paradigms. D and N denote the daytime and nighttime domains, respectively. (a) Operator-based. (b) Darkening-based. (c) Our method.
Therefore, we propose an auxiliary pixel-wise adjustment using a monotonic increasing function \(g\): \([0,1]\rightarrow[0,1]\) parameterized by another adjustment map \(\mathcal{B}\in[0,1]^{C\cdot H\cdot W}\). Note that \(g\) only serves as a complement and should be simple to avoid taking over the role of \(f\). The overall darkening process is formulated as:
\[D(I)=g^{-1}(f(g(I,\mathcal{B}),\mathcal{A}),\mathcal{B}). \tag{4}\]
Both \(\mathcal{A}\) and \(\mathcal{B}\) are estimated by a mapping estimator conditioned on the input image \(I\).
To guarantee \(D\) represents a darkening process (_i.e_., \(D(I)<I\)), \(f\) should additionally satisfy convexity. Specifically, we let \(f\) be the iterative quadratic curve [16]: \(f(x)=h^{(8)}(x)\), \(h(x,\alpha)=\alpha x^{2}+(1-\alpha)x\), and \(g\) be the dividing operation: \(g(x,\beta)=x/\beta\) in our implementation.
Other curve forms were also considered and tested; we empirically found that the iterative quadratic curve brings slightly better results (see Sec. 4.2).
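For concreteness, a minimal sketch of the darkening step in Eqs. (3)-(4) is given below, assuming PyTorch tensors in \([0,1]\). The mapping estimator that predicts \(\mathcal{A}\) and \(\mathcal{B}\) from the input is left abstract and its outputs are simply passed in, and the intermediate clamps are a numerical-safety assumption rather than part of the formulation above.

```python
import torch

def iterative_quadratic(x, a, n_iter=8):
    # f(x) = h^(8)(x) with h(x, a) = a * x^2 + (1 - a) * x, the curve of [16].
    for _ in range(n_iter):
        x = a * x ** 2 + (1 - a) * x
    return x

def darken(image, A, B, eps=1e-3):
    # D(I) = g^{-1}( f( g(I, B), A ), B ) with g(x, b) = x / b, Eq. (4).
    # image, A, B: tensors of shape (N, C, H, W) with entries in [0, 1].
    B = B.clamp(min=eps)
    x = (image / B).clamp(0.0, 1.0)      # g(I, B), clamped back into the domain of f
    x = iterative_quadratic(x, A)        # f(., A): monotone, convex, darkens the input
    return (x * B).clamp(0.0, 1.0)       # g^{-1}(., B)
```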
Besides, to enable flexible control over the exposure level, we feed an exposure map \(E\) to the mapping estimator with \(I\), yielding the corresponding darkened image \(D(I,E)\). During training, the darkening module is encouraged to align the pixel value of \(E\) and \(D(I,E)\). We use \(D(I)\) and \(D(I,E)\) interchangeably for simplicity.
**Similarity Minimization.** The training objective of module \(D\) involves two parts: similarity minimization and regularization. For the former, we directly reduce the distance between features:
\[\mathcal{L}_{D}^{sim}=\frac{\langle F(I),F(D(I))\rangle}{||F(I)||_{2}\cdot||F( D(I))||_{2}}, \tag{5}\]
where \(\langle\cdot,\cdot\rangle\) is the inner product between two vectors.
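A possible implementation of Eq. (5), assuming the extracted features are flattened to one vector per image and the loss is averaged over the batch (the batch reduction is an assumption):

```python
import torch.nn.functional as F_nn

def similarity_min_loss(feat_day, feat_dark):
    # L_D^{sim} of Eq. (5): cosine similarity between F(I) and F(D(I)).
    # Minimizing it while training D only pushes day and darkened features apart.
    return F_nn.cosine_similarity(feat_day.flatten(1), feat_dark.flatten(1), dim=1).mean()
```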
The regularization term consists of four losses. Besides a color consistency loss \(\mathcal{L}_{col}\)[16] that corrects color deviations, three additional losses are proposed to regularize \(D\):
Firstly, conditional exposure control is adopted to align the exposure map with the corresponding generated image:
\[\mathcal{L}_{c-exp}=\sum_{1\leq i\leq H,1\leq j\leq W}|\hat{D}_{i,j}(I,E)-E_{i,j}|, \tag{6}\]
where \(\hat{D}(I,E)\) is the channel-wise average of \(D(I,E)\). During training, each exposure map \(E\) has identical entries uniformly sampled between \([0,0.5]\).
Then we add constraints on \(\mathcal{A}\). Intuitively, \(\mathcal{A}\) represents the degree of illumination reduction. Illumination usually varies slowly across a scene but encounters rapid variations from object to object. Following this property, we apply a loose total variance loss:
\[\mathcal{L}_{ltv}(\mathcal{A}) =\sum_{c\in\{R,G,B\}}(h(|\nabla_{x}\mathcal{A}^{c}|)^{2}+h(|\nabla _{y}\mathcal{A}^{c}|)^{2}), \tag{7}\] \[h(x) =\max(\alpha-|x-\alpha|,0), \tag{8}\]
where \(\nabla_{x},\nabla_{y}\) are gradient operations along the horizontal and vertical axes, respectively, and \(\alpha\) is a hyperparameter. Compared with the original total variance loss, where \(h\) is the identity function, our loose version allows the network to predict larger differences for adjacent pixels, which are common at object boundaries.
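A sketch of the loose total-variation loss of Eqs. (7)-(8); the value of \(\alpha\) and the sum reduction over the batch are assumptions.

```python
import torch

def loose_tv_loss(A, alpha=0.1):
    # h(x) = max(alpha - |x - alpha|, 0) vanishes for gradients larger than 2 * alpha,
    # so sharp jumps (e.g. at object boundaries) are not penalized, unlike standard TV.
    grad_x = (A[..., :, 1:] - A[..., :, :-1]).abs()   # horizontal differences
    grad_y = (A[..., 1:, :] - A[..., :-1, :]).abs()   # vertical differences
    h = lambda g: torch.clamp(alpha - (g - alpha).abs(), min=0.0)
    return (h(grad_x) ** 2).sum() + (h(grad_y) ** 2).sum()
```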
Finally, we adopt \(\mathcal{L}_{flex}(\mathcal{B})=1-\mathcal{B}\) to avoid model fitting to the exposure solely by \(g\).
The overall training objective for \(D\) is:
\[\mathcal{L}_{D} =\lambda_{D}^{sim}\mathcal{L}_{D}^{sim}+\mathcal{R}_{D}, \tag{9}\] \[\mathcal{R}_{D} =\lambda_{c-exp}\mathcal{L}_{c-exp}+\lambda_{col}\mathcal{L}_{col }+\lambda_{ltv}\mathcal{L}_{ltv}+\lambda_{flex}\mathcal{L}_{flex}. \tag{10}\]
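Putting the pieces together, Eqs. (6), (9) and (10) can be assembled as below, reusing the helpers sketched above. The loss weights are placeholders (the actual values are not given here) and the color-consistency loss \(\mathcal{L}_{col}\) of [16] is omitted from this sketch.

```python
def exposure_control_loss(dark_img, E):
    # L_{c-exp} of Eq. (6): align the channel-averaged darkened image with the exposure map E.
    return (dark_img.mean(dim=1, keepdim=True) - E).abs().sum()

def darkening_objective(feat_day, feat_dark, dark_img, E, A, B,
                        w_sim=1.0, w_c_exp=1.0, w_ltv=10.0, w_flex=1.0):
    # L_D = w_sim * L_D^{sim} + R_D, with R_D as in Eq. (10) (L_col omitted in this sketch).
    loss_flex = (1.0 - B).mean()                      # L_{flex}(B) = 1 - B
    return (w_sim * similarity_min_loss(feat_day, feat_dark)
            + w_c_exp * exposure_control_loss(dark_img, E)
            + w_ltv * loose_tv_loss(A)
            + w_flex * loss_flex)
```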
### Model-Level Similarity Maximization
The darkening module \(D\) grants us access to a synthetic nighttime domain. In this section, we exploit \(D\) to learn illumination-robust representations.
Figure 3: Our proposed similarity min-max framework for zero-shot day-night domain adaptation. (a) We first train a darkening module \(D\) with a fixed feature extractor to generate _synthesized_ nighttime images that share minimum similarity with their daytime counterparts. (b) After obtaining \(D\), we freeze its weights and maximize the day-night feature similarity to adapt the model to nighttime.
Contrastive learning [5, 17] is a self-supervised learning paradigm that contrasts positive and negative image pairs. However, images of the same class in classification or adjacent scenes in segmentation will form false negative pairs, thus hurting the model's performance. To alleviate these burdens, BYOL [15] proposes a non-negative variant that only aligns the feature between positive image pairs \(\{v,v^{+}\}\):
\[\mathcal{L}_{\text{BYOL}}(v,v^{+})=2-\frac{2\cdot\langle z(q(F(v))),q^{\prime}( F^{\prime}(v^{+}))\rangle}{||z(q(F(v)))||_{2}\cdot||q^{\prime}(F^{\prime}(v^{+})) ||_{2}}, \tag{11}\]
where \(q,q^{\prime}\) are projection heads, and \(z\) is the prediction head. Both of them are MLPs with a single hidden layer. Note that \(F^{\prime}\) and \(q^{\prime}\) share the same architecture and weight initialization with \(F\) and \(q\) but receive no gradient and are updated by exponential moving average (EMA).
**Similarity Maximization.** Motivated by BYOL, we maximize the feature similarity between synthetic nighttime and daytime domains by non-negative contrastive learning. Given a daytime image \(I\) and an exposure map \(E\), we formulate the training objective as follows:
\[\mathcal{L}_{F}^{sim}=\mathcal{L}_{\text{BYOL}}(I,D(I,E))+\mathcal{L}_{\text{ BYOL}}(D(I,E),I). \tag{12}\]
Note that the measure of feature similarity is different between Eq. (5) and Eq. (12). Directly applying Eq. (5) to train \(F\) brings poorer results due to potential feature degeneration. In comparison, the asymmetric projection head and stop gradient policies prevent the feature extractor \(F\) from collapsing, _i.e_., working as the regularization \(\mathcal{R}_{F}\) in Eq. (2) together with the task loss (introduced below).
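A sketch of Eqs. (11)-(12), assuming `online(x)` returns the prediction-head output \(z(q(F(x)))\) and `ema(x)` returns the EMA-branch output \(q^{\prime}(F^{\prime}(x))\); the stop-gradient is made explicit with `detach()`, and the EMA update itself is not shown.

```python
import torch.nn.functional as F_nn

def byol_pair_loss(pred, target):
    # Eq. (11): 2 - 2 * <pred, target> / (||pred|| * ||target||).
    pred = F_nn.normalize(pred.flatten(1), dim=1)
    target = F_nn.normalize(target.flatten(1), dim=1)
    return (2.0 - 2.0 * (pred * target).sum(dim=1)).mean()

def similarity_max_loss(online, ema, day_img, dark_img):
    # Eq. (12): symmetric day-to-night and night-to-day alignment; only the online
    # branch receives gradients, the EMA branch is stopped with detach().
    return (byol_pair_loss(online(day_img), ema(dark_img).detach())
            + byol_pair_loss(online(dark_img), ema(day_img).detach()))
```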
Moreover, different from \(E\) in Eq. (6), we use a compound exposure map \(E^{\prime}\) instead. \(E^{\prime}\) is first initialized with identical entries uniformly sampled between \([0,0.2]\) for simulating nighttime illumination. This range is the same for all downstream tasks, which does not introduce task-relevant prior. Then, we add pixel-wise noise \(z_{1}\) and patch-wise noise \(z_{2}\) to \(E\) to simulate exposure discrepancy. Overall, \(E^{\prime}\) can be represented as:
\[E^{\prime}=\mathcal{U}(0,0.2)+z_{1}+z_{2}. \tag{13}\]
See the supplementary for details on noise injection.
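A possible construction of the compound exposure map of Eq. (13); the noise amplitudes and the patch size are assumptions, since the exact injection scheme is deferred to the supplementary.

```python
import torch
import torch.nn.functional as F_nn

def compound_exposure_map(n, h, w, patch=32, pixel_std=0.02, patch_std=0.05):
    # E' = U(0, 0.2) + z1 (pixel-wise) + z2 (patch-wise), Eq. (13).
    base = torch.rand(n, 1, 1, 1) * 0.2                      # one exposure level per image
    z1 = pixel_std * torch.randn(n, 1, h, w)                 # pixel-wise noise
    coarse = patch_std * torch.randn(n, 1, max(h // patch, 1), max(w // patch, 1))
    z2 = F_nn.interpolate(coarse, size=(h, w), mode='nearest')   # patch-wise noise
    return (base + z1 + z2).clamp(0.0, 1.0)
```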
Besides \(\mathcal{L}_{F}^{sim}\), we add task-specific supervision \(\mathcal{L}_{task}\) on both the original daytime and synthetic nighttime domain. The final training objective for \(F\) is:
\[\mathcal{L}_{F}=\lambda_{F}^{sim}\mathcal{L}_{F}^{sim}+\lambda_{task}\mathcal{ L}_{task}. \tag{14}\]
### Overall Training Pipeline
Having introduced the image-level similarity minimization (Section 3.2) and model-level similarity maximization (Section 3.3), this section discusses the overall pipeline, as shown in Figure 3.
An intuitive idea is training \(D\) and \(F\) alternately like GAN [13, 64]. Nevertheless, balancing \(D\) and \(F\) increases the difficulty of parameter tuning and makes the optimization process unstable. We adopt a simple but effective two-step strategy to solve this problem: we first train \(D\) and keep \(F\) frozen, then train \(F\) and keep \(D\) frozen. Compared with the alternate strategy, our step-wise approach improves the performance on nighttime image classification (elaborated in Section 4.2) from 63.84% to 65.87%.
We could also explain the merits of our min-max framework from the perspective of adversarial training [14, 35]. Module \(D\) first produces the worst-case examples regarding feature similarity. Then, our model could learn the illumination-robust features by learning on these cases through similarity maximization. This technical commonality further justifies our motivation to build the similarity min-max framework.
Across all downstream tasks, the feature extractor and task module are initialized by daytime pre-trained models. We first freeze the feature extractor and train the darkening module (image-level translation). Then, we keep the darkening module fixed and train the feature extractor and task module jointly (model-level adaptation).
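Schematically, the two-step pipeline of Figure 3 can be organized as below, reusing the loss sketches above. Optimizers, learning rates and the task criterion are placeholders, `darkener`, `extractor`, `online`, `ema` and `task_head` are assumed user-supplied modules (with `darkener` returning the darkened image and the maps \(\mathcal{A}\), \(\mathcal{B}\)), and the data loader is assumed to yield image/label pairs.

```python
import torch

def train_two_step(darkener, extractor, online, ema, task_head, task_criterion,
                   day_loader, lambda_sim=1.0, lambda_task=1.0, lr=1e-4):
    # Stage 1: image-level translation, training D with F frozen (Sec. 3.2).
    extractor.requires_grad_(False)
    opt_d = torch.optim.Adam(darkener.parameters(), lr=lr)
    for imgs, _ in day_loader:
        n, _, h, w = imgs.shape
        E = (torch.rand(n, 1, 1, 1) * 0.5).expand(n, 1, h, w)   # uniform exposure in [0, 0.5]
        dark, A, B = darkener(imgs, E)
        loss = darkening_objective(extractor(imgs), extractor(dark), dark, E, A, B)
        opt_d.zero_grad(); loss.backward(); opt_d.step()

    # Stage 2: model-level adaptation, training F and the task head with D frozen (Sec. 3.3).
    darkener.requires_grad_(False)
    extractor.requires_grad_(True)
    params = list(extractor.parameters()) + list(task_head.parameters())
    opt_f = torch.optim.Adam(params, lr=lr)
    for imgs, labels in day_loader:
        n, _, h, w = imgs.shape
        dark = darkener(imgs, compound_exposure_map(n, h, w))[0]
        loss_task = task_criterion(task_head(extractor(imgs)), labels) + \
                    task_criterion(task_head(extractor(dark)), labels)
        loss = lambda_sim * similarity_max_loss(online, ema, imgs, dark) + lambda_task * loss_task
        opt_f.zero_grad(); loss.backward(); opt_f.step()
```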
### Empirical Justifications on Darkening Module
Simulating nighttime conditions without accessing real nighttime images is the key to our framework. Particularly, nighttime conditions bring semantic changes in addition to illumination changes, _e.g_., the dark environment with artificial lights on the second real nighttime image in Figure 4. However, an accurate simulation is extremely difficult since our prior knowledge is limited to "low illumination".
Figure 4: t-SNE [50] visualization of images’ feature extracted by the original daytime model and our adapted model on CODaN [29]. Red, green, and blue dots represent the feature of daytime, _synthesized_ nighttime, and _real_ nighttime images, respectively. We only color the instances from the “Car” category for better visual quality. Additional visualization results are shown in the supplementary.
Fortunately, unlike typical day-to-night image synthesis processes [41], which target the human visual experience, ours only cares about the distribution of darkened images in the feature space. Leaving aside visual quality, we are pleased to find that the feature distribution of our synthesized nighttime domain is similar to that of the real nighttime domain, as visualized in Figure 4(a). This observation demonstrates that our darkening process can characterize the night domain from the model-level perspective.
Thanks to this property, the feature discrepancy between daytime and _real_ nighttime domain is significantly reduced after model-level adaptation (red and blue dots in Figure 4). This discovery is consistent with the Maximum Mean Discrepancy (MMD) between the feature distribution of day and night modalities, which is 0.020 and 0.014 for the original and adapted models, respectively. We provide implementation details and additional empirical analysis using saliency maps in the supplementary.
## 4 Experiments
This section provides the implementation details, benchmarking results, and ablation analysis of our method.
### Implementation Details
Our framework applies widely to various nighttime vision tasks. In the following, we evaluate our method on four representative tasks: image classification, semantic segmentation, visual place recognition, and video action recognition. Only daytime data are accessible for training and validation, while nighttime data are only used during evaluation. We benchmark our method against three categories of methods that require no dataset-specific target domain data: low-light enhancement, zero-shot day-night domain adaptation, and domain generalization. For low-light enhancement, enhancement models are trained on their original datasets; we then adopt them as a pre-processing step to assist the daytime baseline. The results of our method are the average of three independent trials. Additional details are provided in the supplementary.
### Nighttime Image Classification
We first consider one of the most fundamental vision tasks: image classification. CODaN [29] is a 10-class dataset containing a training set of 10000 daytime images and a test set with 2500 daytime and nighttime images, respectively. We validate models on the daytime test set and evaluate them on the nighttime test set. The backbone is ResNet-18 [18].
Benchmarking results are shown in Table 1. Enhancement methods restore input low-light images from the human visual perspective while keeping the model untouched, resulting in limited performance gains. Domain generalization methods are designed for general tasks and perform poorly in unseen nighttime environments. MAET [8] relies on a degrading transformation with sensor-specific parameters, which suffers from poor generalizability. CIConv [29] adopts learnable color-invariant edge detectors, which are not robust to the complex illumination variations in real scenarios. In contrast, our method outperforms state-of-the-art methods by a large margin (60.32% vs. 65.87%), demonstrating that our unified framework obtains features more robust to illumination shifts.
**Ablation Studies.** We conduct ablation studies to justify our framework design in Table 2. Firstly, we study how to design the darkening module \(D\) given \(\mathcal{L}_{D}^{sim}\). The model-level adaptation stage (Section 3.3) remains the same for fair comparisons.
\begin{table}
\begin{tabular}{l|c} \hline \hline Method & Top-1 (\%) \\ \hline ResNet-18 [18] & 53.32 \\ \hline
**Low-Light Enhancement** & \\ \hline EnlightenGAN [23] & 56.68 \\ LEDNet [63] & 57.40 \\ Zero-DCE++ [30] & 57.96 \\ RUAS [33] & 58.36 \\ SCI [34] & 58.68 \\ URetinexNet [56] & 58.72 \\ \hline
**Domain Generalization** & \\ \hline MixStyle [62] & 53.12 \\ IRM [1] & 54.52 \\ AdaBN [31] & 54.25 \\ \hline
**Zero-Shot Day-Night Domain Adaptation** & \\ \hline MAET\(\dagger\)[8] & 56.48 \\ CIConv [29] & 60.32 \\
**Ours** & **65.87** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 classification accuracy on the CODaN nighttime test set [29]. \(\dagger\) denotes our re-implementation with both the original and synthesized image fed into the task module.
\begin{table}
\begin{tabular}{l|l|c} \hline \hline Category & Method & Top-1 (\%) \\ \hline Baseline & Vanilla ResNet-18 & 53.32 \\ \hline Module \(D\) & Brightness adjustment & 57.96 \\ Heuristic & Gamma correction & 63.96 \\ \hline Module \(D\) & Reciprocal curve & 62.60 \\ Learnable & Gamma curve & 64.16 \\ \hline Similarity & w/o \(\mathcal{L}_{D}^{sim}\) and \(\mathcal{L}_{F}^{sim}\) & 64.08 \\ Loss & w/o \(\mathcal{L}_{D}^{sim}\) & 64.56 \\ & w/o \(\mathcal{L}_{F}^{sim}\) & 64.88 \\ \hline Full version & - & **65.87** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation studies for module \(D\) and similarity losses. We report the Top-1 accuracy on the CODaN [29] nighttime test set.
Firstly, we replace our darkening module with heuristic image adjustment approaches, such as brightness adjustment (Brightness in PIL1) and gamma correction (\(D(I)=I^{\gamma}\)). We implement these two approaches using a fixed darkening hyperparameter chosen after multiple trials and report the best score.
Footnote 1: [https://pillow.readthedocs.io/en/stable/reference/ImageEnhance.html](https://pillow.readthedocs.io/en/stable/reference/ImageEnhance.html)
Next, we test other possible curve forms for \(f\). Both gamma curve (\(f(x,\alpha)=x^{\frac{1}{n}},\alpha\in(0,1]\)) and reciprocal curve (\(f(x,\alpha)=\frac{(1-\alpha)\cdot x}{1-\alpha\cdot x},\alpha\in[0,1)\)) bring slightly worse results than the iterative quadratic curve. Please refer to the supplementary for implementation details of these ablations and additional results on the segmentation task.
Finally, we test our framework's performance when one or both of the similarity loss is absent. We find that either similarity loss alone can boost the model's nighttime performance while combining them achieves the best result.
### Nighttime Semantic Segmentation
Next, we explore a more challenging nighttime vision task: semantic segmentation. We adopt RefineNet [32] with ResNet-101 backbone as the baseline. The daytime training dataset is Cityscapes [7], containing 2975 images for training and 500 images for validation, all with dense annotations. The nighttime testing datasets are Nighttime Driving [9] and Dark-Zurich [46]. These two datasets contain 50 coarsely annotated and 151 densely annotated nighttime street view images.
We benchmark our method in Table 3. Low-light enhancement methods yield worse results than the baseline because they perform poorly on street scenes with complex light sources. Domain generalization methods fail to mitigate the huge day-night domain gap, leading to unsatisfactory results. Note that RobustNet [6] adopts DeepLab-v3 [4] architecture, which is superior to RefineNet [32] adopted in our implementation. Among zero-shot adaptation methods, MAET [8] injects too much noise into images, leading to severe performance degradation. CIConv yields better results, but the improvement is limited. In comparison, our approach improves the mIoU scores to 44.9% on Nighttime Driving and 40.2% on Dark-Zurich.
Figure 5 shows qualitative segmentation results on two nighttime datasets. Low-light enhancement methods perform poorly on nighttime street scenes. Our method better extracts information hidden by darkness and thus generates more accurate semantic maps.
### Visual Place Recognition at Night
Then we explore visual place recognition, which aims to retrieve images that depict the same scene as a query image from an image pool. Unlike classification and segmentation, place recognition methods are not end-to-end during inference. We extend our method based on GeM [43] with a ResNet-101 backbone. In GeM, the network receives a tuple of images \(\{p,q,n_{1},\cdots,n_{k}\}\) as input, in which the query \(q\) only matches \(p\). The network is trained with a contrastive loss, similar to the model-level stage in our framework. We retain the image-level stage and modify the model-level stage in our implementation. We first train the darkening module \(D\) as usual. Then, we consider \(D(p)\) as an additional match for \(p\), _i.e_., an input tuple contains two positive samples (instead of one) and \(k\) negative samples. We train our network on the Retrieval-SfM dataset [43] and evaluate it on the Tokyo 24/7 dataset [49], which contains city views in multiple illumination conditions and viewing directions.
Performance is reported as mean Average Precision (mAP) in Table 4.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & mAP (\%) \\ \hline \multicolumn{3}{l}{**Zero-Shot Day-Night Domain Adaptation**} \\ \hline EdgeMAC [42] & 75.9 \\ U-Net jointly [21] & 79.8 \\ GeM [43] & 85.0 \\ CIConv-GeM [29] & 88.3 \\
**Ours-**GeM** & **90.4** \\ \hline \multicolumn{3}{l}{**Day-Night Domain Adaptation**} \\ (night images are available for training) & \\ \hline U-Net jointly [21] & 86.5 \\ EdgeMAC + CLAHE [21] & 90.5 \\ EdgeMAC + U-Net jointly [21] & 90.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Visual place recognition results on Tokyo 24/7 [49].
\begin{table}
\begin{tabular}{l|c c} \hline \hline Method & Nighttime Driving & Dark-Zurich \\ \hline RefineNet [32] & 34.3 & 30.6 \\ \hline \multicolumn{3}{l}{**Low-Light Enhancement**} \\ \hline EnlightenGAN [23] & 25.2 & 24.9 \\ Zero-DCE++ [30] & 32.7 & 28.3 \\ RUAS [33] & 25.1 & 23.4 \\ SCI [34] & 28.6 & 25.7 \\ URetiinexNet [56] & 28.1 & 24.0 \\ LEDNet [63] & 27.6 & 26.6 \\ \hline \multicolumn{3}{l}{**Domain Generalization**} \\ \hline AdaBN [31] & 37.2 & 31.1 \\ RobustNet [6] & 33.0 & 34.5 \\ SAN-SAW [38] & 28.1 & 16.0 \\ \hline \multicolumn{3}{l}{**Zero-Shot Day-Night Domain Adaptation**} \\ \hline MAET [8] & 28.1 & 26.4 \\ CIConv [29] & 41.2 & 34.5 \\
**Ours** & **44.9** & **40.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Semantic segmentation results on Nighttime Driving [9] and Dark-Zurich [46], reported as percentage mIoU scores.
Results of comparison methods are borrowed from [21] and [29]. Our method outperforms all zero-shot methods and is comparable to conventional domain adaptation methods. As shown in Figure 6, the baseline method gets fooled by the night's appearance, while our model finds the correct daytime image.
### Low-Light Video Action Recognition
Although initially designed for images, our method also applies to video tasks. Here we consider an 11-class low-light video action recognition task. Normal light training data consists of 2.6k normal light video clips from HMDB51 [27], UCF101 [48], Kinetics-600 [25], and Moments in Time [36]. We evaluate our model on the official test split of the ARID dataset [58]. The action recognizer is I3D [3] based on 3D-ResNet [11].
We extend our method to video as follows. When training the darkening module, we input frames extracted from video clips. \(\mathcal{A}\) and \(\mathcal{B}\) in Eq. (4) are estimated for every individual frame. We calculate \(\mathcal{L}_{D}^{sim}\) between video clips and the other losses between frames. When generating low-light videos, frames are separately fed into the curve estimator while sharing the same exposure map \(E^{\prime}\).
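A minimal sketch of this video extension: each frame is darkened independently (so \(\mathcal{A}\) and \(\mathcal{B}\) are estimated per frame) while all frames share one exposure map \(E^{\prime}\). The `(N, C, T, H, W)` layout and the `darkener` interface follow the earlier sketches and are assumptions.

```python
import torch

def darken_clip(clip, darkener, E_shared):
    # clip: (N, C, T, H, W); E_shared: (N, 1, H, W), one exposure map per clip.
    frames = clip.unbind(dim=2)                         # T tensors of shape (N, C, H, W)
    dark = [darkener(f, E_shared)[0] for f in frames]   # per-frame curve estimation
    return torch.stack(dark, dim=2)                     # back to (N, C, T, H, W)
```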
We report the results as Top-1 accuracy. As shown in Table 5, video enhancement methods StableLIVE [59], SMOID [22], and SGZ [61] yield a limited performance gain. Meanwhile, our approach boosts models' performance by 4.38%, demonstrating our superiority on videos.
## 5 Conclusion
In this paper, we propose a novel approach for zero-shot day-night domain adaptation. Going beyond a simple focus on the image-level translation or model-level adaptation, we observe a complementary relationship between two aspects and build our framework upon the similarity min-max paradigm. Our proposed method can significantly boost the model's performance at nighttime without accessing the nighttime domain. Experiments on multiple datasets demonstrate the superiority and broad applicability of our approach.
Figure 5: Semantic segmentation results. For each group, the first row: Nighttime Driving [9], the second row: Dark-Zurich [46].
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Top-1 (\%) \\ \hline I3D [3] & 47.02 \\ \hline
**Low-Light Video Enhancement** & \\ \hline StableLIVE [59] & 45.08 \\ SMOID [22] & 47.27 \\ SGZ [61] & 46.42 \\ \hline
**Domain Generalization \&** \\
**Zero-Shot Day-Night Domain Adaptation** & \\ \hline AdaBN [31] & 46.17 \\
**Ours** & **51.52** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Video action recognition results on ARID [58].
Figure 6: Qualitative visual place recognition results. (a) A night query from the Tokyo 24/7 dataset [49]. (b) Image retrieved by GeM [43]. (c) Image retrieved by our method.
## Acknowledgements
This work is supported by the Fundamental Research Funds for the Central Universities and the National Natural Science Foundation of China under Contract No.61772043 and a research achievement of Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). This research work is also partially supported by the Basic and Frontier Research Project of PCL and the Major Key Project of PCL.
|
2304.09137 | Exact mapping from the $(3+1)$-dimensional Skyrme model to the
$(1+1)$-dimensional sine-Gordon theory and some applications | A remarkable exact mapping, valid for low-enough energy scales and close to a
sharp boundary distribution of hadronic matter, from the $(3+1)$-dimensional
Skyrme model to the sine-Gordon theory in $(1+1)$ dimensions in the attractive
regime is explicitly constructed. Besides the intrinsic theoretical interest to
be able to describe the prototype of nonintegrable theories (namely, quantum
chromodynamics in the infrared regime) in terms of the prototype of integrable
relativistic field theories (namely, sine-Gordon theory in $(1+1)$ dimensions),
we will show that this mapping can be extremely useful to analyze both
equilibrium and out-of-equilibrium features of baryonic distributions in a
cavity. | Fabrizio Canfora, Marcela Lagos, Pablo Pais, Aldo Vera | 2023-04-18T17:17:11Z | http://arxiv.org/abs/2304.09137v4 | # In and out-of-equilibrium features of hadronic
###### Abstract
The entanglement entropy dynamics of baryonic layers in \(3+1\) dimensions in the low-energy limit of quantum chromodynamics is computed. The evolution after quantum quenches can be described explicitly. In particular, it is shown analytically that the von Neumann and Renyi entropies may display undamped oscillations in time, whose frequencies can be calculated in terms of the baryonic distribution of the hadronic matter in the cavity. Moreover, the Loschmidt amplitude and the fidelity and work distribution can be derived as well. These results, which are entirely unexpected in non-integrable theories such as the Skyrme model, are achieved thanks to a remarkable mapping from any \((3+1)\)-dimensional hadronic distribution of matter in a cavity close to its boundary to the sine-Gordon theory in \(1+1\) dimensions. In the attractive regime, such results are valid for low-enough energy scales.
_Keywords--_ Skyrme model; Nuclear Physics; Non-equilibrium QFT
## 1 Introduction
In condensed matter physics, a deeper understanding of the phase diagram, as well as of entanglement and its evolution in many-body systems, is one of the most important topics, both theoretically and experimentally [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (detailed reviews are [16, 17, 18, 19, 20, 21, 22, 23]). This issue lies at the crossroad between statistical physics, quantum computation and quantum field theory. In low-dimensional systems, many powerful exact results have been derived. For instance, in the references mentioned above the asymptotic entanglement of a large subsystem has been related to the thermodynamic entropy in a stationary state. It has also been possible to connect the growth of entanglement with the capability of a classical computer to simulate non-equilibrium quantum systems with matrix product states. Integrable models in \(1+1\) dimensions offer a unique window on interacting systems, allowing a detailed understanding of the time evolution of entanglement [9, 10, 11, 12, 13, 14].
These concepts are essential in quantum field theory (QFT) as well, with implications from black hole physics to scattering amplitudes and quantum chromodynamics (QCD) [24, 25, 26, 27]. However, except for conformal field theory and integrable models [28, 29, 30, 31, 32, 33, 34, 35, 36], at first glance, it looks impossible to obtain exact results either on the phase diagram (such as the ones in [28, 29, 35, 36]) or on entanglement dynamics (such as the ones in [9, 10, 11]) of strongly interacting QFT. A theoretical dream is to derive those results in the low-energy limit of QCD, where perturbation theory is useless.
This work shows that one can derive results similar to those of [9, 10, 11, 28, 29, 35, 36] in \((3+1)\)-dimensional QCD, allowing a direct computation of the entanglement entropy, as well as of its evolution after a quench of a baryonic distribution in a three-dimensional cavity. Since only refined numerical techniques are commonly employed in analyzing the phase diagram of QCD at low temperatures and finite baryon densities [44, 45, 46, 47], the present analytic results are highly relevant and produce novel sharp predictions which can, in principle, be tested.
The starting point is the Skyrme theory, which (at leading order in the 't Hooft expansion [48, 49, 50]) represents the low-energy limit of QCD. The dynamical field of the Skyrme action [51] is a \(SU(N)\)-valued scalar field \(U\) (here, we will consider the two-flavors case). This action possesses both small excitations describing pions and topological solitons describing baryons [52, 53, 54, 55, 56], the baryonic charge being a topological invariant. Skyrme theory has always been considered the prototype of non-integrable models where the powerful non-perturbative results available in quantum many-body physics [57, 58, 59, 60, 61] cannot be applied. However, the techniques developed in [62, 63, 64, 65, 66, 67] allowed for the first time a successful analytic description of non-homogeneous baryonic condensates at finite baryon density, in good qualitative agreement with the available phenomenological results in [68] and references therein. This framework allows one to describe \((3+1)\)-dimensional baryonic layers confined in a cavity in terms of the sine-Gordon theory (SGT), which is the prototype of a relativistic integrable field theory in \(1+1\) dimensions. Hence, the results in [30, 31, 32, 33, 34, 35, 36, 37] developed for the SGT at finite temperatures and in [9, 10, 11, 12, 13, 14] for SGT out of equilibrium produce novel analytic results in the low-energy limit of QCD at low temperatures. In particular, this method allows one to compute the entanglement entropy, the time oscillations of the von Neumann and Renyi entropies, the Loschmidt amplitude, and related quantities for any hadronic distribution confined in a cavity whose energy and baryon densities are homogeneous in two spatial directions (but not necessarily in the third).
## 2 Summary of the results
The action for the \(SU(2)\)-Skyrme model is given by
\[I[U] = \frac{K}{4}\,\int_{\cal M}d^{4}x\,\sqrt{-g}\;{\rm Tr}\left(R_{\mu}R ^{\mu}+\frac{\lambda}{8}G_{\mu\nu}G^{\mu\nu}\right)\;, \tag{1}\] \[R_{\mu}=U^{-1}\nabla_{\mu}U=R_{\mu}^{a}t_{a}\;,\quad G_{\mu\nu} =\left[R_{\mu},R_{\nu}\right]\,,\]
where \(U(x)\in SU(2)\), \(g\) is the metric determinant, \(\nabla_{\mu}\) is the partial derivative, and \(t_{a}=i\sigma_{a}\) are the generators of the \(SU(2)\) Lie group, where \(\sigma_{a}\) are the Pauli matrices. The space-time manifold is split as \({\cal M}={\rm R}\times\Sigma\), and the Skyrme couplings \(K\) and \(\lambda\) are positive constants fixed experimentally.
The topological current \(J_{\mu}\) and the baryonic charge \(Q_{B}\) are defined, respectively, as
\[J^{\mu} = \epsilon^{\mu\nu\alpha\beta}{\rm Tr}(R_{\nu}R_{\alpha}R_{\beta})\;,\] \[Q_{B} = \frac{1}{24\pi^{2}}\int_{\Sigma}J^{0}\;. \tag{2}\]
where the last integral is performed on \(\Sigma\), a constant-time hypersurface of \({\cal M}\). Geometrically, a non-vanishing \(J^{\mu}\) measures the "genuine three-dimensional nature" of the configuration since (in order to have \(J^{\mu}\neq 0\), at least locally) \(U\) must encode three independent degrees of freedom. For instance, if two of the three degrees of freedom needed to describe \(U\) depend on the same coordinate, then \(J^{\mu}\) vanishes identically.
We want to analyze the intriguing phenomena which occur when a finite number of baryons lives within a cavity of finite spatial volume \(V=8\pi^{2}L^{2}L_{x}\); therefore, we consider the metric of a box
\[ds^{2}=-dt^{2}+dx^{2}+L^{2}(d\mathfrak{y}^{2}+d\mathfrak{z}^{2})\;, \tag{3}\]
where \(L_{x}\) and \(L\) are constants representing the size of the box in the directions longitudinal and orthogonal to \(x\), respectively. The coordinates have the following ranges (see [62, 63]):
\[0\leq x\leq L_{x}\,,\;\;\;\;\;0\leq\mathfrak{y}\leq 2\pi\,,\;\;\;\;\;0\leq \mathfrak{z}\leq 4\pi\;. \tag{4}\]
Note that the coordinate \(x\) has a length dimension, while the other two coordinates, \(\mathfrak{y}\) and \(\mathfrak{z}\), are dimensionless (since the length scale \(L\) has been explicitly shown in the metric). This helps to analyze the interplay between the two scales, \(L\) and \(L_{x}\), in the following computations. In order to apply the known results on SGT mentioned above, the limit \(L_{x}\gg L\) has to be considered. In this case, one would describe a sort of "hadronic wire" (a cavity much longer in one spatial direction than in the other two). We will choose \(\mathfrak{y}\) and \(\mathfrak{z}\) as the "homogeneous coordinates". When \(L_{x}\) is not large, one has to use the exact available results on SGT on a finite interval [38, 39].
The Skyrme action and the corresponding field equations can be written explicitly in terms of the \(SU(2)\)-valued field (as any element of \(SU(2)\) can be written in the Euler representation)
\[U=\exp\left(t_{3}\,F\right)\exp\left(t_{2}\,H\right)\exp\left(t_{3}\,G\right)\;, \tag{5}\]
where \(F=F\left(x^{\mu}\right)\), \(G=G\left(x^{\mu}\right)\) and \(H=H\left(x^{\mu}\right)\) are the three scalar degrees of freedom of the Skyrme field (traditionally, in this parametrization, the field \(H\) is called _profile_).
It is well known that many baryonic distributions, especially at low energies, possess a sharp boundary. Namely, the energy and baryon densities decay exponentially fast to zero. Thus, in practice, one can define a surface that separates the region where the energy and baryon densities are different from zero from the region where these densities vanish. _In a neighbourhood of any point close to such a surface_, the energy and baryon densities can only depend on the spatial coordinate orthogonal to the surface and on time (we will comment more on this point in the following sections). This is why it is so interesting to study hadronic distributions that are homogeneous in two spatial directions. The analysis
of the energy-momentum tensor shows that the only Ansatz able to describe energy and baryon densities homogeneous in two spatial directions [64] is
\[H(x^{\mu})=H(t,x)\,\quad F(x^{\mu})=\frac{q}{2}\,\mathfrak{y}\,\quad G(x^{\mu})=\frac{p}{2}\,\mathfrak{z}. \tag{6}\]
The above Ansatz has several remarkable properties. First, the three coupled non-linear Skyrme field equations reduce consistently to just one partial differential equation (PDE) for the profile; such PDE is the sine-Gordon equation in \(1+1\) dimensions. Second, the topological charge density is non-trivial, leading to arbitrarily high baryon number. Third, suppose the energy/temperature scale is less than \(1/L\). In that case, the only relevant degrees of freedom are fluctuations \(\delta H(t,x)\) of \(H(t,x)\), which only depend on \(t\) and \(x\), as all the other possible fluctuations of the field \(U\) have energies larger than \(1/L\). These features of hadronic layers will be the key to deriving novel properties and sharp predictions on their behavior at finite density using known results in the literature on SGT.
### Effective sine-Gordon theory
We will consider \(p=q\), with \(B\equiv p^{2}>0\), to reduce the complexity of the formulas. Nevertheless, all the present results generalize easily to cases where \(p\) and \(q\) are arbitrary integers. The on-shell Lagrangian and the baryon density, \(\rho_{B}\,\equiv\,J_{0}\), corresponding to the Ansatz in Equation (6), give rise to the following effective SGT (we dropped out a constant term in the action, which does not affect the theory)
\[I_{\rm SG}\ =\ \int L_{\rm SG}\,dtdx\,=\,\int\left(-\frac{1}{2}\partial^{ \mu}\varphi\partial_{\mu}\varphi+M_{0}\left(\cos(\beta\,\varphi)-1\right) \right)\,dtdx\,\qquad\varphi\ =\frac{4}{\beta}H\, \tag{7}\]
\[\rho_{B}=-\frac{3\beta}{4}B\,\sin\left(\frac{\beta}{2}\varphi\right)\partial_ {x}\varphi\, \tag{8}\]
with
\[M_{0}=\frac{\pi^{2}K}{8L^{2}}B^{2}\lambda\,\quad\beta\ =\ \frac{2}{\pi\left[K\left(2L^{2}+B\, \lambda\right)\right]^{1/2}}\, \tag{9}\]
where constant terms have been discarded. From the above, the complete set of Skyrme field equations are reduced to the sine-Gordon equation for the \(\varphi\) field,
\[\Box\varphi-M_{0}\beta\sin(\beta\varphi)\ =0\,\qquad\Box\equiv-\partial_{t}^{2 }+\partial_{x}^{2}\, \tag{10}\]
which, in the static case, can be reduced to a quadrature
\[dx=\frac{d\varphi}{\sigma(E_{0},\varphi)}\,\qquad\sigma(E_{0},\varphi)=\pm \bigg{(}\frac{E_{0}}{L^{2}}-2M_{0}\cos(\beta\varphi)\bigg{)}^{\frac{1}{2}}\, \tag{11}\]
where \(E_{0}>2L^{2}M_{0}\) is an integration constant.
Hence, one can deduce many intriguing analytic results on this \((3+1)\)-dimensional distribution of baryonic matter confined in a cavity by using the classic SGT results in [9, 10, 11, 12, 13, 14], provided the energy/temperature scale is less than \(1/L\). The boundary conditions for \(H\) (or \(\varphi\)) are fixed by requiring that the baryonic charge in Equation (2) is the integer \(B\),
\[\begin{array}{c}H(t,0)=n\frac{\pi}{2}\\ H(t,L_{x})=(m+\frac{n+1}{2})\,\pi\end{array}\qquad\Leftrightarrow\qquad \begin{array}{c}\varphi(t,0)=2n\frac{\pi}{\beta}\\ \varphi(t,L_{x})=\frac{2}{\beta}\ (n+(2m+1))\ \pi\end{array}\, \tag{12}\]
where \(n\) and \(m\) are integers.
If \(H\) satisfies either \(H(t,L_{x})=H(t,0)+n\,\pi\), or \(H(t,L_{x})+H(t,0)=m\,\pi\), for \(n,m\in\mathds{Z},\quad\forall\,t\), the total baryonic charge vanishes. Despite this, such configurations are still interesting, since the baryon density remains non-trivial; here, however, we focus on profiles satisfying
the boundary conditions in Equation (12). For static configurations, \(\varphi(t,x)=\varphi(x)\), the integration constant \(E_{0}\), according to Equations (4), (9), (11) and (12), is fixed by
\[\pm\int_{0}^{2\pi/\beta}\frac{d\varphi}{[E_{0}-2L^{2}\,M_{0}\,\cos(\beta\varphi )]^{\frac{1}{2}}}=\frac{L_{x}}{L}\ \, \tag{13}\]
where we have taken, for simplicity, \(n=m=0\) in Equation (12). It is easy to see that the above equation always has a solution if \(L_{x}\) is finite. Indeed, if \(L_{x}\) is small (compared to \(L\)) one can take a large \(E_{0}\) to make the left-hand side of Equation (13) small as well. If \(L_{x}\) is large (but not divergent) one can have the left-hand side of Equation (13) large by choosing
\[E_{0}=2L^{2}\,M_{0}+\varepsilon\,\qquad 0<\varepsilon\ll 1\, \tag{14}\]
so that the denominator comes close to having a zero at \(\varphi=0\) or \(\varphi=2\pi/\beta\). The \(\frac{L_{x}}{L}\to\infty\) case corresponds to the limit in which \(\varepsilon=0\).
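As an illustration, the integration constant \(E_{0}\) of Equation (13) can be obtained numerically as sketched below (assuming SciPy). The bracketing of the root and the small offset from \(2L^{2}M_{0}\) are numerical conveniences, and the input values of \(M_{0}\), \(\beta\), \(L\) and \(L_{x}\) are left to the user.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def lhs_eq13(E0, L, M0, beta):
    # Left-hand side of Eq. (13) for the single-kink boundary conditions (n = m = 0).
    integrand = lambda phi: 1.0 / np.sqrt(E0 - 2.0 * L**2 * M0 * np.cos(beta * phi))
    return quad(integrand, 0.0, 2.0 * np.pi / beta)[0]

def solve_E0(Lx, L, M0, beta):
    # Solve lhs_eq13(E0) = Lx / L; the integral decreases monotonically with E0 and
    # grows without bound as E0 -> 2 L^2 M0 (the epsilon -> 0 limit of Eq. (14)).
    f = lambda E0: lhs_eq13(E0, L, M0, beta) - Lx / L
    lo = 2.0 * L**2 * M0 * (1.0 + 1e-10)
    hi = 2.0 * L**2 * M0 + 1.0
    while f(hi) > 0.0:            # enlarge the bracket until the root is enclosed
        hi *= 10.0
    return brentq(f, lo, hi)
```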
The results in [9, 10, 11, 12, 13, 14, 14, 22, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] can now be used. The presence of the two directions \(\mathfrak{y}\) and \(\mathfrak{z}\) orthogonal to \(x\), as well as of the baryonic number, manifests itself in the effective sine-Gordon couplings \(M_{0}\) and \(\beta\). Thus, according to Equation (9), whether or not the effective SGT describing the hadronic distribution is in the attractive phase (\(AP\)), repulsive phase (\(RP\)) or critical free-Fermion phase (\(FFP\)) can be deduced from the factor \(\frac{\beta^{2}}{4\pi}\), explicitly
\[AP:\frac{1}{\pi^{3}K\,(2L^{2}+B\,\lambda)}<1\,\ \ RP:\frac{1}{\pi^{3}K\,(2L^{2}+B \,\lambda)}>1\,\ \ FFP:\frac{1}{\pi^{3}K\,(2L^{2}+B\,\lambda)}=1. \tag{15}\]
Since \(B\) is a positive integer, \(K\lambda\) is around \(1/6\) (see Ref. [54]), and \(\pi^{3}>27\), the effective theory is always in the attractive regime and, therefore, the results in Refs. [10, 11] can be applied here. In practice, the number of breathers
\[\frac{1}{\xi}=\frac{1-\beta^{2}}{\beta^{2}}\, \tag{16}\]
which is either \(\frac{1}{\xi}-1\) or \(\left[\frac{1}{\xi}\right]\) depending on whether \(\frac{1}{\xi}\in\mathbb{Z}\) or not, is always bigger than two, already for \(B\geq 4\).
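For reference, the effective couplings of Equation (9), the phase factor of Equation (15) and the breather count of Equation (16) can be evaluated as below; the numerical values of \(K\), \(\lambda\) and \(L\) used in any concrete call are illustrative assumptions (the text above only fixes \(K\lambda\approx 1/6\)).

```python
import numpy as np

def effective_sg_data(K, lam, L, B):
    # Eq. (9): effective sine-Gordon couplings; Eqs. (15)-(16): phase factor and breather count.
    M0 = np.pi**2 * K * B**2 * lam / (8.0 * L**2)
    beta = 2.0 / (np.pi * np.sqrt(K * (2.0 * L**2 + B * lam)))
    phase = beta**2 / (4.0 * np.pi)          # < 1: attractive, > 1: repulsive, = 1: free fermion
    inv_xi = (1.0 - beta**2) / beta**2       # Eq. (16)
    n_breathers = int(inv_xi) - 1 if float(inv_xi).is_integer() else int(np.floor(inv_xi))
    return M0, beta, phase, n_breathers
```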
Figure 1: Energy density of baryonic layers in a cavity.
### In and out-of-equilibrium implications
The description of \((3+1)\)-dimensional hadronic layers in a cavity presented above in terms of the SGT in \(1+1\) dimensions offers unprecedented possibilities. When the energy/temperature scale is low enough, the equilibrium and out-of-equilibrium properties of these configurations can be computed using the effective SGT with coupling constants in Equations (7) and (9). It is worth recalling that, if one is close enough to the boundary of a baryonic distribution of matter (such that, in one spatial direction, the energy and baryon densities drop very rapidly to a very small value while, in the two orthogonal directions, the energy and baryon densities are almost homogeneous), then the present description in terms of SGT is actually generic.
_The first type_ of exact results, which can be "imported" from SGT in the analysis of baryonic distributions in a cavity, has to do with the mass spectrum of the theory, with the equilibrium correlation functions at finite temperatures (but smaller than \(1/L\)) [28, 29] and with the phase diagram [4, 5, 18, 19, 20, 22, 23]. If \(L_{x}/L\) is large enough, all such analytic results can be applied directly to the effective SGT in Equations (7) and (9). In this way, one can get the exact excitations' spectrum of the hadronic distribution as well as the corresponding low temperatures' correlation functions.
_The second type_ is the computation of the entanglement entropy [12, 28, 29]. In particular, the above references together with the present mapping imply that the entanglement entropy \(S_{E}\) of hadronic layers confined in a cavity is \(S_{E}=\frac{c_{\rm eff}}{3}\ln\left(\frac{l}{a}\right)\); where \(a\) is the UV cut-off beyond which the Skyrme model is not valid anymore, and \(l\) is the size of the finite interval in the \(x\) direction, of which we are computing the corresponding entanglement entropy. The effective central charge \(c_{\rm eff}\) of the theory can be estimated following Refs. [28, 29].
_The third_ (and, perhaps, most surprising) _type_ has to do with out-of-equilibrium properties, such as the dynamics of entanglement entropy after a quantum quench and the Loschmidt Echo. These quantities are entirely out of reach of any standard perturbative approach based on QCD in \(3+1\) dimensions, especially at finite baryon density. Nevertheless, the present mapping allows using directly the results in Refs. [9, 10, 11, 12, 13, 14, 21, 22, 23]. These results, together with the present mapping, allow concluding that one can obtain exact analytical predictions for the time evolution of the entanglement in generic hadronic layers confined in a three-dimensional cavity. Even more, following Ref. [9] (as in the present case, the effective SGT is always in the attractive regime with more than two breathers), the von Neumann and Renyi entropies display undamped oscillations in time, whose frequencies can be taken exactly from Refs. [10, 11]. For instance, the quantum quench can be realized by either changing \(B\) or \(L\) (moreover, the number of oscillatory modes grows with \(B\)). Finally, following Ref. [22], the Loschmidt amplitude, the fidelity and work distribution can be computed explicitly.
## 3 Technical details
### Euler angles parametrization and energy-density
The field equations of the system, obtained by varying the action in Equation (1) with respect to the \(U\) field, are
\[\nabla_{\mu}\left(R^{\mu}+\frac{\lambda}{4}[R_{\nu},G^{\mu\nu}]\right)=0\;. \tag{17}\]
Equations (17) are generally a set of three coupled non-linear partial differential equations. The energy-momentum tensor of the theory is given by
\[T_{\mu\nu}=-\frac{K}{2}{\rm Tr}\left[R_{\mu}R_{\nu}-\frac{1}{2}g_{\mu\nu}R^{ \alpha}R_{\alpha}+\frac{\lambda}{4}\left(g^{\alpha\beta}G_{\mu\alpha}G_{\nu \beta}-\frac{1}{4}g_{\mu\nu}G_{\sigma\rho}G^{\sigma\rho}\right)\right]\;. \tag{18}\]
We are interested in finite density effects: in particular, we want to describe hadronic layers confined to a cavity, where the Euler angles parametrization for the Skyrme field is particularly convenient. The wording "hadronic layers" refers to distributions of energy density \(T_{00}\) and baryon density \(J^{0}\), which are homogeneous in two spatial coordinates. Still, they depend non-trivially on the third spatial coordinate and on time. The interest in such configurations lies (at the very least) in the following facts.
_First_, it is possible to define the "boundary" of the distribution of nuclear matter as the surface in space where the energy density and baryon density drop exponentially fast to zero (or to a very small constant value). In a small enough neighborhood of a point on such boundary, the energy density and baryon density will not depend on the two spatial coordinates tangent to the boundary. In contrast, they will depend very sensitively on the spatial coordinate orthogonal to the boundary (since both \(T_{00}\) and \(J^{0}\) drop rapidly to zero along this spatial direction). Hence, the configurations discussed here are quite generic and useful close to the boundary _of any distribution of hadronic matter_ (see Figure 2).
_Second_, such structures are known to appear in numerical simulations at finite baryon density, and there is robust phenomenological evidence supporting their presence in neutron stars (see [68] and references therein). Consequently, the starting point is the metric of a cavity in Equation (3) and Equation (4).
In the present setting, to apply the powerful results on sine-Gordon theory (SGT) in and out of equilibrium mentioned above, the limit \(L_{x}\gg L\) has to be considered. This choice represents a sort of "hadronic wire": a cavity which is much longer in one spatial direction than in the other two (see Figure 3). We will choose \(\mathfrak{\eta}\) and \(\mathfrak{z}\) as the "homogeneous coordinates" (namely, the coordinates which do not appear explicitly in \(T_{00}\) and \(J^{0}\)). When \(L_{x}\) is not large compared to \(L\), one must use the exact available results on SGT either on a finite interval or on \(S^{1}\) developed in [38, 39] and references therein (we will come back to this case in a future publication).
As previously mentioned, the Skyrme field can be written explicitly in the Euler angles parametrization as in Equation (5).
Figure 2: A schematic representation of the baryonic layer in the coordinates \((x,\mathfrak{y},\mathfrak{z})\) of (4). Taking a specific solution, if we make a zoom-in, we find a surface perpendicular to the direction where \(\rho_{B}\) drops very rapidly to zero, in this case, \(\hat{x}\), separating two distinct regions: \(\rho_{B}\neq 0\) (interior) and \(\rho_{B}=0\) (exterior). As the profile \(H(x,t)\) does not depend on the spatial dimensions \(\mathfrak{y}\) and \(\mathfrak{z}\), at point \(p\), the tangent space \(T_{p}\) (where the coordinate basis \(\{\frac{\partial}{\partial\mathfrak{y}},\frac{\partial}{\partial\mathfrak{z}}\}\) belongs to) is _approximately_ the baryonic layer.
A direct computation shows that, in terms of this parametrization, the Skyrme action reads
\[I(H,F,G)= -\frac{K}{2}\int d^{4}x\sqrt{-g}\bigg{\{}(\nabla H)^{2}+(\nabla F)^{ 2}+(\nabla G)^{2}+2\cos(2H)(\nabla F\cdot\nabla G)\] \[-\lambda\big{(}2\cos(2H)((\nabla H\cdot\nabla F)(\nabla H\cdot \nabla G)-(\nabla H)^{2}(\nabla F\cdot\nabla G)\big{)}\] \[+4\sin^{2}(H)\cos^{2}(H)((\nabla F\cdot\nabla G)^{2}-(\nabla F)^ {2}(\nabla G)^{2})\] \[+(\nabla H\cdot\nabla F)^{2}+(\nabla H\cdot\nabla G)^{2}-(\nabla H )^{2}(\nabla F)^{2}-(\nabla H)^{2}(\nabla G)^{2}\big{)}\bigg{\}}\.\]
The energy-momentum tensor in this parametrization reads
\[T_{\mu\nu}= -\frac{K}{2}\,\bigg{\{}-2[\nabla_{\mu}F\nabla_{\nu}F+\nabla_{\mu} H\nabla_{\nu}H+\nabla_{\mu}G\nabla_{\nu}G+\cos(2H)(\nabla_{\mu}F\,\nabla_{\nu}G+ \nabla_{\mu}G\,\nabla_{\nu}F)]\] \[+g_{\mu\nu}\left[(\nabla F)^{2}+(\nabla H)^{2}+(\nabla G)^{2}+2 \cos(2H)(\nabla F\cdot\nabla G)\right]\] \[+2\lambda\bigg{[}\nabla_{\mu}F(\nabla H\cdot\nabla F)\nabla_{\nu} H-\nabla_{\mu}F(\nabla H)^{2}\nabla_{\nu}F-\nabla_{\mu}H(\nabla F)^{2}\nabla_{ \nu}H+\nabla_{\mu}H(\nabla F\cdot\nabla H)\nabla_{\nu}F\] \[+\nabla_{\mu}G(\nabla H\cdot\nabla G)\nabla_{\nu}H-\nabla_{\mu}G( \nabla H)^{2}\nabla_{\nu}G-\nabla_{\mu}H(\nabla G)^{2}\nabla_{\nu}H+\nabla_{ \mu}H(\nabla G\cdot\nabla H)\nabla_{\nu}G\] \[+\cos(2H)\left(\nabla_{\mu}F(\nabla H\cdot\nabla G)\nabla_{\nu}H- \nabla_{\mu}F(\nabla H)^{2}\nabla_{\nu}G-\nabla_{\mu}H(\nabla F\cdot\nabla G) \nabla_{\nu}H\right.\] \[+\nabla_{\mu}H(\nabla F\cdot\nabla H)\nabla_{\nu}G+\nabla_{\mu}G( \nabla H\cdot\nabla F)\nabla_{\nu}H-\nabla_{\mu}G(\nabla H)^{2}\nabla_{\nu}F- \nabla_{\mu}H(\nabla G\cdot\nabla F)\nabla_{\nu}H\] \[+\nabla_{\mu}H(\nabla G\cdot\nabla H)\nabla_{\nu}F\big{)}+4\cos^ {2}(H)\,\sin^{2}(H)\,(\nabla_{\mu}F(\nabla G\cdot\nabla F)\nabla_{\nu}G-\nabla _{\mu}F(\nabla G)^{2}\nabla_{\nu}F\] \[-\nabla_{\mu}G(\nabla F)^{2}\nabla_{\nu}G+\nabla_{\mu}G(\nabla F \cdot\nabla G)\nabla_{\nu}F)\bigg{]}\] \[+\lambda\,g_{\mu\nu}\left[(\nabla F)^{2}(\nabla H)^{2}-(\nabla F \cdot\nabla H)^{2}+(\nabla G)^{2}(\nabla H)^{2}-(\nabla G\cdot\nabla H)^{2}\right.\] \[+2\cos(2H)\left((\nabla F\cdot\nabla G)(\nabla H)^{2}-(\nabla F \cdot\nabla H)(\nabla G\cdot\nabla H)\right)\] \[\left.+4\cos^{2}(H)\,\sin^{2}(H)\,((\nabla F)^{2}(\nabla G)^{2}-( \nabla F\cdot\nabla G)^{2})\right]\right\}\,.\]
Figure 3: The range of the coordinates \((x,\mathfrak{y},\mathfrak{z})\) of (4) represents a “hadronic wire”. The longer edge has a length of \(L_{x}\), while the other two have a size of \(2\pi\,L\) and \(4\pi\,L\). The origin \(O\) is on one of its corners, and the coordinate axes are represented in light green.
The field equations, obtained by varying the action with respect to the degrees of freedom \(F\), \(H\) and \(G\), are
\[0 =\nabla_{\mu}\left\{\,\cos(2G)\,\sin(2H)\,\nabla^{\mu}F-\sin(2G)\, \nabla^{\mu}H\right.\] \[-\lambda\,\sin(2G)\bigg{(}(\nabla F)^{2}\nabla^{\mu}H-(\nabla F \cdot\nabla H)\nabla^{\mu}F+(\nabla G)^{2}\nabla^{\mu}H-(\nabla G\cdot\nabla H )\nabla^{\mu}G\] \[+\cos(2H)\left(2(\nabla F\cdot\nabla G)\nabla^{\mu}H-(\nabla F \cdot\nabla H)\nabla^{\mu}G-(\nabla H\cdot\nabla G)\nabla^{\mu}F\right)\bigg{)}\] \[-\lambda\,\cos(2G)\,\sin(2H)\bigg{(}(\nabla F\cdot\nabla G)\nabla ^{\mu}G+(\nabla F\cdot\nabla H)\nabla^{\mu}H-(\nabla G)^{2}\nabla^{\mu}F\] \[\left.-(\nabla H)^{2}\nabla^{\mu}F-\cos(2H)\,\left((\nabla F \cdot\nabla G)\nabla^{\mu}F-(\nabla F)^{2}\nabla^{\mu}G\right)\bigg{)}\right\}\,, \tag{19}\] \[0 =\nabla_{\mu}\left\{\sin(2G)\,\sin(2H)\,\nabla^{\mu}F+\cos(2G)\, \nabla^{\mu}H\right.\] \[+\lambda\,\cos(2G)\left((\nabla G)^{2}\nabla^{\mu}H-(\nabla G \cdot\nabla H)\nabla^{\mu}G+(\nabla F)^{2}\nabla^{\mu}H-(\nabla F\cdot\nabla H )\nabla^{\mu}F\right.\] \[\left.+\cos(2H)\,\left(2(\nabla F\cdot\nabla G)\nabla^{\mu}H-( \nabla F\cdot\nabla H)\nabla^{\mu}G-(\nabla G\cdot\nabla H)\nabla^{\mu}F \right)\right)\] \[+\lambda\,\sin(2G)\,\sin(2H)((\nabla H)^{2}\nabla^{\mu}F-(\nabla H \cdot\nabla F)\nabla^{\mu}H+(\nabla G)^{2}\nabla^{\mu}F\] \[-\left.(\nabla G\cdot\nabla F)\nabla^{\mu}G+\cos(2H)((\nabla F \cdot\nabla G)\nabla^{\mu}F-(\nabla F)^{2}\nabla^{\mu}G)\right)\right\}\,,\] (20) \[0 =\nabla_{\mu}\left\{\,\cos(2H)\,\nabla^{\mu}F+\nabla^{\mu}G- \lambda\,\sin^{2}(2H)\,((\nabla F\cdot\nabla G)\nabla^{\mu}F-(\nabla F)^{2} \nabla^{\mu}G)\right.\] \[+\lambda\,\cos(2H)\,((\nabla H)^{2}\nabla^{\mu}F-(\nabla H\cdot \nabla F)\nabla^{\mu}H)+\lambda\,((\nabla H)^{2}\nabla^{\mu}G-(\nabla H\cdot \nabla G)\nabla^{\mu}H)\bigg{\}}\,\,. \tag{21}\]
The only way to have an energy density homogeneous in \(\mathfrak{y}\) and \(\mathfrak{z}\) is to require that \(F\) and \(G\) are linear functions of these coordinates, and that \(H\) depends on the coordinate \(x\) (transverse to the layer) and on time. The only Ansatz satisfying these properties is the one in Equation (6). Indeed, if \(H\) depended either on \(\mathfrak{y}\) or on \(\mathfrak{z}\), then the energy density would depend on these coordinates as well. Hence, the profile \(H\) carries the physical information on when and where \(T_{00}\) and \(J^{0}\) vanish and when they do not (that is why it makes sense to call \(H\) "profile", as it encodes information on the spacetime variations of \(T_{\mu\nu}\) and \(J^{\mu}\)). Furthermore, the above Ansatz is actually _generic_ if one is close enough to the boundary of any baryonic distribution (as has already been emphasized). Consequently, _the Ansatz above describes locally any baryonic configuration close to one of its boundaries._
The above choice has several remarkable properties (see [62, 63, 69, 70]). First, the three coupled non-linear Skyrme field equations reduce consistently to just one PDE for the profile \(H(t,x)\): the sine-Gordon equation in \(1+1\) dimensions (as one can check directly in Equations (19) to (21)). Second, this choice keeps the topological charge density non-trivial.
In fact, by using the Ansatz in Equation (6), the field equations in Equations (19) to (21), are reduced to
\[\partial_{t}^{2}H-\partial_{x}^{2}H+\frac{B^{2}\,\lambda}{8L^{2}\,(2L^{2}+B\, \lambda)}\sin(4H)=0\;, \tag{22}\]
where we have considered for simplicity \(p=q\), and \(B=p^{2}>0\). Also, the on-shell Lagrangian density \(\mathcal{L}_{on-shell}\) (apart from a constant term \(-K\frac{B}{4L^{2}}\)), the energy-density \(T_{00}\) (apart from a constant term \(K\frac{B}{4L^{2}}\)), and the baryon density \(\rho_{B}=J_{0}\) are, respectively,
\[\mathcal{L}_{on-shell}=\frac{K}{64L^{4}}\,\left(16L^{2}\,\left(2L^{2}+|B|\ \lambda\right)\,\left[(\partial_{t}H)^{2}-(\partial_{x}H)^{2}\right]-B^{2}\, \lambda\,\left(1-\cos{(4H)}\right)\right)\;, \tag{23}\]
\[T_{00}=\frac{K}{64L^{4}}\,\left(16L^{2}\left(2L^{2}+|B|\ \lambda\right)\,\left[( \partial_{t}H)^{2}+(\partial_{x}H)^{2}\right]+B^{2}\,\lambda\,(1-\cos(4H)) \right)\;, \tag{24}\]
\[\rho_{B}=J_{0}=-3\,B\,\left(\partial_{x}H\right)\,\sin(2H). \tag{25}\]
The boundary conditions for \(H\) are fixed by requiring that the baryonic charge is \(\pm B\), as we have mentioned. For \(Q_{B}\) to vanish, one should have \(\cos(2H(t,L_{x}))=\cos(2H(t,0))\), \(\forall\,t\). This implies
\[H(t,L_{x})=H(t,0)+n\,\pi,\quad or\quad H(t,L_{x})+H(t,0)=m\,\pi,\quad n,m\in \mathds{Z},\quad\forall\,t. \tag{26}\]
Such \(Q_{B}=0\) configurations are interesting anyway, since the baryonic density is non-trivial and one can have bound states of two layers (breather-like configurations).
For static configurations \(H(t,x)=H(x)\) one can reduce the field equation to a simple quadrature
\[(\partial_{x}H)=\pm\left[E_{0}-\frac{B^{2}\,\lambda}{16L^{2}\left(2L^{2}+B\, \lambda\right)}\cos 4H\right]^{1/2}\, \tag{27}\]
where the integration constant \(E_{0}\) is fixed by
\[4\int_{0}^{\pi/2}\frac{dH}{\left[16L^{2}E_{0}-\frac{B^{2}\lambda}{\left(2L^{2 }+B\,\lambda\right)}\cos 4H\right]^{1/2}}=\frac{L_{x}}{L}\, \tag{28}\]
where we have taken, for simplicity, \(n=m=0\) in Equation (26). It is easy to see that the above equation for \(E_{0}\) always has a solution if \(L_{x}\) is finite. Indeed, if \(L_{x}\) is small (compared to \(L\)) one can take a large \(E_{0}\) to make the left-hand side of Equation (28) small as well. If \(L_{x}\) is large (but not divergent) one can have the left-hand side of Equation (28) large by choosing
\[E_{0}=\frac{B^{2}\lambda}{16L^{2}\left(2L^{2}+B\lambda\right)}+\varepsilon\, \ \ 0<\varepsilon\ll 1\,\]
so that the denominator of the left-hand side of Equation (28) comes close to having a zero at \(H=0\) or \(H=\pi/2\). The \(L_{x}\to\infty\) case corresponds to the limit in which \(\varepsilon=0\).
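The static quadrature (27)-(28) and the densities (24)-(25) can be evaluated numerically as sketched below (SciPy assumed). The bracketing of \(E_{0}\) and the grid resolution are numerical choices, and the sign of the resulting charge reflects the orientation conventions of Equation (2).

```python
import numpy as np
from scipy.integrate import quad, solve_ivp, trapezoid
from scipy.optimize import brentq

def static_profile(K, lam, L, Lx, B, n_grid=2000):
    # c is the coefficient of cos(4H) in the quadrature (27); Eq. (28) then fixes E0.
    c = B**2 * lam / (16.0 * L**2 * (2.0 * L**2 + B * lam))
    box_length = lambda E0: quad(lambda H: 1.0 / np.sqrt(E0 - c * np.cos(4.0 * H)),
                                 0.0, np.pi / 2.0)[0]
    hi = c + 1.0
    while box_length(hi) > Lx:           # enlarge the bracket until the root is enclosed
        hi *= 10.0
    E0 = brentq(lambda e: box_length(e) - Lx, c * (1.0 + 1e-10), hi)

    xs = np.linspace(0.0, Lx, n_grid)
    sol = solve_ivp(lambda x, H: [np.sqrt(E0 - c * np.cos(4.0 * H[0]))], (0.0, Lx), [0.0],
                    t_eval=xs, rtol=1e-10, atol=1e-12)
    H = sol.y[0]
    dH = np.sqrt(E0 - c * np.cos(4.0 * H))
    rho_B = -3.0 * B * np.sin(2.0 * H) * dH                          # Eq. (25)
    Q_B = (8.0 * np.pi**2 / (24.0 * np.pi**2)) * trapezoid(rho_B, xs)  # equals -B on this branch
    return xs, H, rho_B, Q_B
```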
The proper normalization of the Skyrme profile \(H\) to define the effective sine-Gordon Lagrangian \(L_{SG}\) and the corresponding energy-density \(\widetilde{T}_{00}\) can be achieved by requiring that the integral along the coordinate \(x\) of \(\widetilde{T}_{00}\)_should give the actual total energy of the hadronic layers_ (with a similar condition for the effective action \(I_{SG}\)). Hence, \(\widetilde{T}_{00}\) is the integral in the transverse coordinates \(\mathfrak{y}\) and \(\mathfrak{z}\) of \(T_{00}\), so that the integral along \(x\) of \(\widetilde{T}_{00}\) will give the total energy of the Skyrmionic system
\[\widetilde{T}_{00}=\frac{\pi^{2}K}{8L^{2}}\left\{16L^{2}\left(2L^{2}+\left|B \right|\lambda\right)\,\left[\left(\partial_{t}H\right)^{2}+\left(\partial_{x }H\right)^{2}\right]+B^{2}\,\lambda\left(1-\cos(4H)\right)\right\}\, \tag{29}\]
where the factor \(8\pi^{2}L^{2}\) comes from the integral over \(\mathfrak{y}\) and \(\mathfrak{z}\). The same is true for the effective Lagrangian \(L_{SG}\):
\[L_{SG}=\frac{\pi^{2}K}{8L^{2}}\left\{16L^{2}\left(2L^{2}+B\,\lambda\right) \left[\left(\partial_{t}H\right)^{2}-\left(\partial_{x}H\right)^{2}\right]-B^ {2}\lambda\left(1-\cos\left(4H\right)\right)\right\}. \tag{30}\]
The proper normalization of the kinetic term can be achieved by normalizing \(H\) as follows:
\[\varphi=\frac{4}{\beta}H\,\quad\beta=\frac{2}{\pi\left[K\left(2L^{2}+B\, \lambda\right)\right]^{1/2}}\,\qquad M_{0}=\frac{\pi^{2}K}{8L^{2}}B^{2}\lambda\, \tag{31}\]
so that the effective sine-Gordon coupling \(\beta\) and the effective dimensionless sine-Gordon action become, respectively,
\[\beta=\frac{2}{\pi\left[K\left(2L^{2}+B\,\lambda\right)\right]^{1/2}}\, \tag{32}\]
\[I_{SG}=\int L_{SG}dtdx=\ \int\left(-\frac{1}{2}\partial^{\mu}\varphi\partial_{ \mu}\varphi+M_{0}\left(\cos(\beta\,\varphi)-1\right)\right)dt\,dx\, \tag{33}\]
where constant terms have been discarded.
### Perturbations on the solutions
An important technical part of the present work is to show that, with the Ansatz defined in Equation (6), not only the field equations and the energy-density reduce to the corresponding quantities in SGT in \(1+1\) dimensions in a sector with non-vanishing baryonic charge, but also that the lowest energy perturbations of these configurations are precisely perturbations of the sine-Gordon effective field which only depend on \(t\) and \(x\) (this is the reason why it is convenient to take \(L_{x}\gg L\): in this case, the energy needed to excite modes which depend non-trivially on the transverse coordinates is much higher than the energy needed to excite "sine-Gordon modes"). This issue is relevant since, if we want to use the available results on equilibrium and non-equilibrium SGT (in particular, [9, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 70, 71, 72, 73, 74, 75, 76]), then we must identify a regime in which also the low energy fluctuations are of "sine-Gordon type".
Hence, let us consider a general solution \(U_{0}\) of the form in Equation (5), where \(F\), \(H\) and \(G\) fulfill Equations (19) to (21). Let us now take a perturbation of \(U_{0}\) as
\[U\ =U_{0}\left(\mathbb{1}+\chi\right)\;, \tag{34}\]
where \(\chi\) is a \(2\times 2\) matrix with the conditions
\[\chi^{\dagger}=-\chi\;,\qquad\mathrm{Tr}\,\chi=0\,. \tag{35}\]
These conditions ensure that \(\chi\) is an arbitrary element of the \(\mathfrak{su}(2)\) algebra, i.e.,
\[\chi=\epsilon\,\chi_{1}\,t_{1}\,+\epsilon\,\chi_{2}\,t_{2}\,+\epsilon\,\chi_{3}\,t_{3}\;, \tag{36}\]
where \(|\epsilon|\ll 1\) is the perturbation parameter. Observe that \(\chi_{i}\) are real functions of the coordinates \(t\), \(x\), \(\mathfrak{y}\) and \(\mathfrak{z}\).1 Introducing this expansion in Equation (17), we get
Footnote 1: Notice that we could have equally defined \(U\ =U_{0}+\chi^{\prime}\), where \(\chi^{\prime}=U_{0}\chi\). Therefore, as \(U_{0}\) is invertible, all the results for \(\chi\) are equivalent to \(\chi^{\prime}\). In this way, we see that Equation (34) is directly related to the usual prescription to write a perturbation as the solution \(U_{0}\) plus _something_.
\[[R_{0\mu}+\frac{\lambda}{4}[R_{0}^{\nu},G_{0\mu\nu}],\nabla^{\mu}\chi]+\nabla^ {\mu}\bigg{(}\nabla_{\mu}\chi+\frac{\lambda}{4}[\nabla^{\nu}\chi,G_{0\mu\nu}] \bigg{)}\]
\[+\frac{\lambda}{4}\nabla^{\mu}\bigg{(}[R_{0}^{\nu},[\nabla_{\mu}\chi,R_{0\nu}] ]+[R_{0}^{\nu},[R_{0\mu},\nabla_{\nu}\chi]]\bigg{)}=0\;, \tag{37}\]
up to order \(O(\epsilon^{2})\). Now, let us consider a generic perturbation (in the present context, the wording "generic perturbation" means that we allow the perturbation to depend on all the four space-time coordinates). A natural Ansatz for the perturbation is
\[\chi^{1} =\zeta_{1}(x)\,\cos(H(x))\,\cos(p\,\mathfrak{z})\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;,\] \[\chi^{2} =\zeta_{2}(x)\,\cos(H(x))\,\sin(p\,\mathfrak{z})\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;,\] \[\chi^{3} =\zeta_{3}(x)\,\sin(H(x))\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;,\qquad k_{1}\neq 0\;,\ k_{2}\neq 0\,\]
where we are taking into account the fact that the energy and baryon densities do not depend on the transverse coordinates. The profiles \(\zeta_{j}(x)\) (\(j=1,2,3\)) of the perturbations only depend on \(x\). The first conclusion which arises from analyzing the linearized field equations is that, actually, only one of these three profiles is independent. Namely, one can choose2:
Footnote 2: The fact that only one radial function is necessary in the Ansatz can be easily verified for the non-linear sigma model case, that is, when \(\lambda=0\). Moreover, for small values of \(\lambda\) this could still be true by analytic continuation. It can be also checked that the linearized equations for perturbations of the form in Equation (38) are always consistent (namely, for any value of \(\lambda\) one always gets as many equations as unknown functions).
\[\chi^{1} =\zeta(x)\,\cos(H(x))\,\cos(p\,\mathfrak{z})\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;,\] \[\chi^{2} =\zeta(x)\,\cos(H(x))\,\sin(p\,\mathfrak{z})\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;, \tag{38}\] \[\chi^{3} =-\,\zeta(x)\,\sin(H(x))\,e^{i(\omega\,t+k_{1}\,\mathfrak{y}+k_{2}\,\mathfrak{z})}\;,\]
one can check that the complete set of linearized Skyrme equations (37) is satisfied if \(k_{2}=\frac{q}{p}\,k_{1}\) (i.e., \(k_{2}=k_{1}\equiv k\), because we have taken \(p=q\)), and if \(\zeta\) satisfies a linear ordinary differential equation (ODE) of the form \(\zeta^{\prime\prime}(x)+A(x)\,\zeta^{\prime}(x)+B(x)\,\zeta(x)=0\), where the functions \(A(x)\) and \(B(x)\) can be computed explicitly in terms of the background solution. In order to use Sturm-Liouville theory, it is convenient to make the change of variables \(\zeta(x)=\alpha(x)\,\xi(x)\), choosing \(\alpha\) in such a way as to eliminate the first-derivative term. In this way, we get
\[-\xi^{\prime\prime}(x)+\frac{Q(x)}{L^{2}}\,\xi(x)=\omega^{2}\,W(x)\,\xi(x)\;, \tag{39}\]
where the functions \(Q(x)\) and \(W(x)\) are
\[Q(x)= \frac{1}{32\,(B\lambda+2)\,(B\,\lambda\,\cos^{2}(H(x))+2)^{2}}\times\] \[\left\{(B\lambda\cos(2H(x))+B\lambda+4)\left(B^{3}\lambda^{2}\, \cos(6H(x))+B^{2}\lambda\,(3B\,\lambda+8)\,\cos(2H(x))\right.\right.\] \[+2(B\lambda+2)\,(B^{2}\,\lambda\,\cos(4H(x))+B^{2}\,\lambda+32k^ {2})\bigg{)}\] \[-\left(B\,\lambda\!\left(4\cos(2H(x))\,(\lambda(B-k^{2})+3)+B\, \lambda\,\cos(4H(x))\right)\right.\] \[+B\,\lambda^{2}\,(3B-4k^{2})+8\lambda\,(B-2k^{2})+8\left)\left(1 6(B\lambda+2)\,E_{0}+B^{2}\lambda\,\cos(4H(x))\right)\right\}\,,\] \[W(x)= \frac{4\lambda\,E_{0}+B\,\lambda\left(\frac{B\lambda\,\cos(4H(x) )}{4B\lambda+8}+\cos(2H(x))\right)+B\,\lambda+4}{(2B\,\lambda\,\cos^{2}(H(x)) +4)}\;.\]
Here we have defined \(\lambda\equiv\frac{\lambda}{L^{2}}\), and used Equations (22) and (27). Also,
\[\alpha(x)=\frac{C_{1}}{\sqrt{4+B\,\lambda(1+\cos(H(x)))}}\;, \tag{40}\]
where \(C_{1}\) is an arbitrary dimensionless constant. Equation (39) can be written in a slightly different manner by the change of variable \(\rho=\frac{x}{L}\),
\[-\xi^{\prime\prime}+Q\,\xi=\omega^{2}\,W\,\xi\;, \tag{41}\]
where the substitution of \(x\) as a function of \(\rho\) is carried out wherever necessary, the prime now denotes \(\frac{\partial}{\partial\rho}\), and we have defined \(\omega\equiv\omega\,L\).
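For completeness, the elimination of the first-derivative term used above is the standard Liouville-type substitution; a generic sketch (with \(A(x)\) and \(B(x)\) left unspecified, since their explicit forms depend on the background solution) reads

\[\zeta=\alpha\,\xi\,,\qquad 2\,\alpha^{\prime}+A\,\alpha=0\;\Longrightarrow\;\alpha(x)\propto\exp\left(-\frac{1}{2}\int^{x}A(s)\,ds\right)\,,\]

\[-\xi^{\prime\prime}+\left(\frac{A^{\prime}}{2}+\frac{A^{2}}{4}-B\right)\xi=0\,,\]

so that, once the \(\omega^{2}\)-dependent part of \(B(x)\) is separated out, one arrives at the Sturm-Liouville form of Equation (39).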
The ODE in Equation (41) is a particular case of the Sturm-Liouville problem3 (SLP); it can hardly be solved analytically, and even numerically it is not simple. For us it is enough to have some sufficient stability conditions: we require that the function \(\xi\) does not diverge inside the range where \(x\) is defined, i.e., the interval \([0,L_{x}]\). Therefore, we find the eigenvalues \(\omega^{2}\) using the method of _simple centred differences_ [77], with boundary conditions \(\xi(0)=0\) and \(\xi(L_{x})=0\), and the minimum \(\omega^{2}_{\min}\) should be positive for stability. The function \(H(x)\), as a solution of Equation (22) with the boundary conditions in Equation (12), is well-behaved. Also, the function in Equation (40) is well-behaved, strictly positive and without singularities inside the integration range if \(\lambda\geq 0\) and \(L\neq 0\). In Figure 4, we show the stability regions for different values of \(B\) and \(\frac{L_{x}}{L}\) (see details in the caption). By analyzing the energy scale of the fluctuations of \(F\), \(G\) and \(H\), it is observed that the minimum of the positive frequencies \(\omega_{\min}\) goes as \(1/L\), due to the scale normalization of \(\omega\) in Equation (41), at least for the set of parameters analyzed. This can be seen in Figure 5 for some values of \(B\) and \(\frac{L_{x}}{L}\). On the other hand, the lowest
Figure 4: Plots of the stability regions at the \((k\,\lambda/L^{2})\) plane for \(m=0,2,4,6,8\) in (12) for different values of \(B\) and \(\frac{L_{x}}{L}\). For example, plot (j) shows that above the separation line of \(m=8\) (in violet) the region is stable, whilst below such separation line is unstable; for \(m=6\), above the separation line (in red) is stable, whilst below it is unstable; same for the separation line \(m=4\) (in green) and \(m=2\) (in yellow). Plot (b) shows that, for those values of \(\frac{L_{x}}{L}\) and \(B\), the region is stable for all values of \(m\leq 8\). Notice that the \(k\) values are representative, as by boundary condition in the cavity, the allowed values of \(k\) are, in fact, integers.
Figure 5: Plots \(k\) vs. \(\lambda/L^{2}\) for \(m=0\) taking different values of \(B\) and \(\frac{L_{x}}{L}\). The dimensionless quantities \(\mathbf{\omega}_{\text{min}}^{2}=\omega_{\text{min}}^{2}\,L^{2}\) are shown in the color palette on the right of each plot. Since the values of \(\mathbf{\omega}^{2}\) are not very small, \(\omega\) goes as \(L^{-1}\), at least for the set of parameters analyzed. The \(k\) values are representative (see the caption of Figure 4).
energy perturbations are perturbations \(\delta H\) of \(H(t,x)\) which depend only on \(t\) and \(x\). In particular, these perturbations of static profile \(H(t,x)=H(x)\) are gapless, as in SGT (when \(L_{x}\gg L\)). One can readily see this as follows. The Skyrme field equations for a static profile \(H(t,x)=H(x)\) corresponding to the Ansatz in Equation (6) reduces to Equation (22), which, in its turn, reduces to Equation (27), where the condition in Equation (28) fixes the integration constant \(E_{0}\). Given a solution \(H_{0}(x)\) of Equation (27) one can always find a solution \(\delta H\) of the linearized field equation with zero energy as \(\delta H=\partial_{x}H_{0}(x)\). With the appropriate choice of \(E_{0}\), \(\partial_{x}H_{0}(x)\) never changes sign, so \(\partial_{x}H_{0}(x)\) is a nodeless zero mode. Moreover, as it has been shown in Ref. [78], SGT possesses gapless modes.
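As an illustration of the centred-difference computation of the eigenvalues \(\omega^{2}\) of Equation (41) used above, a minimal numerical sketch is given below; the functions `Q` and `W` are assumed to be supplied as (vectorized) callables already evaluated on the background profile \(H\), and Dirichlet conditions \(\xi(0)=\xi(L_{x}/L)=0\) are imposed.

```python
import numpy as np
from scipy.linalg import eigh

def min_eigenvalue(Q, W, rho_max, n=2000):
    """Smallest generalized eigenvalue w^2 of  -xi'' + Q xi = w^2 W xi  on
    [0, rho_max] with xi(0) = xi(rho_max) = 0, via simple centred differences.
    Q and W are (vectorized) callables of the dimensionless coordinate rho = x / L."""
    rho = np.linspace(0.0, rho_max, n + 2)        # grid including the two boundary points
    h = rho[1] - rho[0]
    inner = rho[1:-1]                             # interior points (Dirichlet boundaries)
    main = 2.0 / h**2 + Q(inner)                  # diagonal of -d^2/drho^2 + Q
    off = -np.ones(n - 1) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    B = np.diag(W(inner))                         # weight matrix on the right-hand side
    w2 = eigh(A, B, eigvals_only=True)            # generalized symmetric eigenproblem
    return w2[0]

# Stability of the configuration requires min_eigenvalue(Q, W, Lx / L) > 0.
```

A positive smallest eigenvalue then signals stability of the corresponding perturbation.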
Summarizing, the above arguments show that at energies and/or temperatures less than \(1/L\), the only modes which are energetically available in the full Skyrme theory in a cavity like the one in Figure 3 are the sine-Gordon modes associated with perturbations \(\delta H\) of \(H(t,x)\) which depend only on \(t\) and \(x\). Consequently, not only does the Skyrme model reduce to SGT for baryonic-layer configurations in such a cavity, but the perturbations in this regime are, in fact, the lowest-energy perturbations of SGT, because generic perturbations like the ones shown above always possess higher energy.
## 4 Conclusions
In this article, an explicit mapping has been constructed between hadronic layers in a three-dimensional cavity in the low-energy limit of QCD and an effective sine-Gordon theory in \(1+1\) dimensions. This mapping allows one to derive exact non-perturbative results on the low-energy/temperature behaviour of these baryonic layers using well-known results in SGT. These analytic results (especially the out-of-equilibrium ones) are entirely out of reach of the other theoretical methods available in the low-energy sector of QCD. The sharp predictions on the oscillations of the von Neumann and Renyi entropies can, at least in principle, be tested experimentally. The consequences of the existence of this mapping are far-reaching and will be further investigated in forthcoming papers.
Finally, the implications of our results are especially intriguing in the analysis of neutron stars. Indeed, configurations such as hadronic tubes and layers of baryons (known as nuclear pasta states) appear [79, 80]. Our framework allows us to determine, among other things, the transport properties of these inhomogeneous baryonic distributions with such beautiful shapes, which are challenging to compute using numerical simulations.
## Acknowledgements
F. C. has been funded by Fondecyt Grant No. 1200022. M. L. is funded by ANID, Convocatoria Nacional Subvencion a la Instalacion en la Academia Convocatoria Ano 2022, Folio SA85220027. P. P. is supported by Fondo Nacional de Desarrollo Cientifico y Tecnologico-Chile (Fondecyt Grant No. 3200725) and by Charles University Research Center (UNCE/SCI/013). A. V. is funded by FONDECYT post-doctoral Grant No. 3200884. The Centro de Estudios Cientificos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of ANID.
|
2310.14947 | System Combination via Quality Estimation for Grammatical Error
Correction | Quality estimation models have been developed to assess the corrections made
by grammatical error correction (GEC) models when the reference or
gold-standard corrections are not available. An ideal quality estimator can be
utilized to combine the outputs of multiple GEC systems by choosing the best
subset of edits from the union of all edits proposed by the GEC base systems.
However, we found that existing GEC quality estimation models are not good
enough in differentiating good corrections from bad ones, resulting in a low
F0.5 score when used for system combination. In this paper, we propose GRECO, a
new state-of-the-art quality estimation model that gives a better estimate of
the quality of a corrected sentence, as indicated by having a higher
correlation to the F0.5 score of a corrected sentence. It results in a combined
GEC system with a higher F0.5 score. We also propose three methods for
utilizing GEC quality estimation models for system combination with varying
generality: model-agnostic, model-agnostic with voting bias, and
model-dependent method. The combined GEC system outperforms the state of the
art on the CoNLL-2014 test set and the BEA-2019 test set, achieving the highest
F0.5 scores published to date. | Muhammad Reza Qorib, Hwee Tou Ng | 2023-10-23T13:46:49Z | http://arxiv.org/abs/2310.14947v1 | # System Combination via Quality Estimation for Grammatical Error Correction
###### Abstract
Quality estimation models have been developed to assess the corrections made by grammatical error correction (GEC) models when the reference or gold-standard corrections are not available. An ideal quality estimator can be utilized to combine the outputs of multiple GEC systems by choosing the best subset of edits from the union of all edits proposed by the GEC base systems. However, we found that existing GEC quality estimation models are not good enough in differentiating good corrections from bad ones, resulting in a low \(F_{0.5}\) score when used for system combination. In this paper, we propose GRECO1, a new state-of-the-art quality estimation model that gives a better estimate of the quality of a corrected sentence, as indicated by having a higher correlation to the \(F_{0.5}\) score of a corrected sentence. It results in a combined GEC system with a higher \(F_{0.5}\) score. We also propose three methods for utilizing GEC quality estimation models for system combination with varying generality: model-agnostic, model-agnostic with voting bias, and model-dependent method. The combined GEC system outperforms the state of the art on the CoNLL-2014 test set and the BEA-2019 test set, achieving the highest \(F_{0.5}\) scores published to date.
Footnote 1: Source code available at [https://github.com/nusnlp/greco](https://github.com/nusnlp/greco).
## 1 Introduction
Grammatical error correction (GEC) is the task of automatically detecting and correcting errors in text, including but not limited to grammatical errors, misspellings, orthographic errors, and semantic errors (Chollampatt and Ng, 2018; Qorib et al., 2022; Bryant et al., 2023). A GEC model is evaluated by calculating the \(F\)-score (van Rijsbergen, 1979) from comparing the edits proposed by the GEC model against gold (human-annotated) reference edits. GEC edits are a set of insertion, deletion, or substitution operations that are applied to the original (source) sentence to make it free from errors. An edit is represented by three values: start index, end index, and correction string (Table 1). Since CoNLL-2014 (Ng et al., 2014), \(F_{0.5}\) has become the standard metric for GEC. Grundkiewicz et al. (2015) and Chollampatt and Ng (2018) reported that the \(F_{0.5}\) score correlates better with human judgment than other GEC metrics.
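To make this edit representation concrete, a minimal sketch (not taken from any of the systems discussed here) of applying a list of (start index, end index, correction string) edits to a tokenized source sentence is shown below, using the example later given in Table 1:

```python
def apply_edits(source_tokens, edits):
    """Apply GEC edits to a tokenized sentence.  Each edit is (start, end,
    correction): tokens source_tokens[start:end] are replaced by the tokens of
    `correction` (an empty string means deletion, start == end means insertion).
    The edits are assumed to be non-overlapping."""
    out, prev = [], 0
    for start, end, correction in sorted(edits):
        out.extend(source_tokens[prev:start])   # copy the unchanged span
        if correction:
            out.extend(correction.split())      # insert or substitute
        prev = end
    out.extend(source_tokens[prev:])
    return out

src = ("To sum it up I still consider having their own car is way more safe "
       "and convinient .").split()
edits = [(2, 3, ""), (4, 4, ","), (8, 9, "your"), (11, 12, ""), (16, 17, "convenient")]
print(" ".join(apply_edits(src, edits)))
# -> "To sum up , I still consider having your own car way more safe and convenient ."
```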
Qorib and Ng (2022) reported that GEC models have outperformed humans when measured by the \(F_{0.5}\) metric, but still make occasional mistakes in simple cases. Thus, we need a way to evaluate the corrections proposed by a GEC model before accepting their corrections as a replacement for our original sentences. In real-world use cases where the gold reference is not available, we can use a GEC quality estimation model to assess the quality of the correction made by a GEC model.
A GEC quality estimation model accepts a source sentence and its correction and produces a quality score. The quality score characterizes the accuracy and appropriateness of a correction with respect to the source sentence. The higher the score, the more accurate and appropriate the correction is. A quality estimation model is typically used as a filtering method to accept or reject a correction made by a GEC model (Chollampatt and Ng, 2018). It can also be used for choosing the best correction from the top-\(k\) outputs of a GEC system (Liu et al., 2021).
In this paper, we propose to extend that use case further. Instead of choosing the best correction (hypothesis) from GEC models, we can use a GEC quality estimation model to produce a new and more accurate correction, based on the edits that appear in the hypotheses. We generate all possible hypotheses from the edit combinations and score them using a quality estimation model. The highest-scoring hypothesis is then deemed as the most appropriate correction of the source sentence.
We discuss this in more detail in Section 3.
The main contributions of this paper are:
* We present novel methods for utilizing GEC quality estimation models for system combination.
* We reveal and highlight the low performance of existing GEC quality estimation models when used for system combination.
* We present a new state-of-the-art GEC quality estimation model that has better correlation to the \(F_{0.5}\) score and produces higher \(F_{0.5}\) scores when used for system combination.
* We report new state-of-the-art scores on the CoNLL-2014 and BEA-2019 test sets.
## 2 Related Work
### GEC Quality Estimation Models
In this section, we briefly discuss existing neural GEC quality estimation models, including a neural reference-less GEC metric.
#### 2.1.1 NeuQE
NeuQE (Chollampatt and Ng, 2018) is the first neural quality estimation model for GEC. NeuQE uses the predictor-estimator framework (Kim et al., 2017) which trains a word prediction task on the predictor network and trains the quality score on the estimator network. The estimator is trained using knowledge from the predictor. NeuQE has two types of model, one for \(F_{0.5}\) score estimation and the other for post-editing effort estimation. NeuQE is trained on the NUCLE (Dahlmeier et al., 2013) and FCE (Yannakoudakis et al., 2011) corpora.
#### 2.1.2 VerNet
VERNet (Liu et al., 2021) estimates the quality of a GEC model from the top-\(k\) outputs of beam search decoding of the GEC model. VERNet uses BERT-like architecture to get the representation of each token. It then constructs a fully-connected graph between pairs of (source, hypothesis) for each beam search output to learn the interaction between hypotheses, then summarizes and aggregates the information of the hypotheses' interaction using two custom attention mechanisms. VERNet trains the model using the top-5 outputs of the Riken&Tohoku (Kiyono et al., 2019) model on the FCE, NUCLE, and W&I+LOCNESS (Bryant et al., 2019; Granger, 1998) datasets.
#### 2.1.3 Some
SOME (Yoshimura et al., 2020) is a reference-less GEC metric that scores a GEC correction based on three scoring aspects: grammaticality, fluency, and meaning preservation. SOME consists of three BERT models, one for each scoring aspect. Different from the aforementioned GEC quality estimation models, SOME does not aim to estimate the \(F_{0.5}\) score. Instead, it estimates the aspect scores directly. The authors created a new dataset to train the BERT models by annotating outputs of various GEC systems on the CoNLL-2013 test set with the three scoring aspects. The authors argue that reference-less metrics are better than \(F_{0.5}\) score because it is difficult to cover all possible corrections in the gold reference.
### GEC System Combination Methods
In this section, we briefly discuss state-of-the-art GEC system combination methods.
#### 2.2.1 Esc
ESC (Qorib et al., 2022) is a system combination method that takes the union of all edits from the base systems, scores each edit to decide whether the edit should be kept or discarded, and generates the final corrections using the selected edits. ESC uses logistic regression to score each edit based on the edit type and inclusion in the base systems, and filters the overlapping edit based on a threshold and a greedy selection method. ESC is trained on the BEA-2019 development set.
\begin{table}
\begin{tabular}{l|l} \hline Source & To sum it up I still consider having their own car is way more safe and convinient. \\ Correction & To sum up, I still consider having your own car way more safe and convenient. \\ Differences & To sum \{**it\}** up \{,**\} I still consider having \{their\(\rightarrow\)**your**\} own car \{**is\}** way more safe \\ & and \{\(\text{convinient}\rightarrow\)**convenient**\}. \\ Edits & (2, 3, "), (4, 4, ',’), (8, 9, ’your’), (11, 12, "), (16, 17, ‘convenient’) \\ \hline \end{tabular}
\end{table}
Table 1: Example GEC edits.
#### 2.2.2 Memt
MEMT [1] is a system combination method that combines models' outputs by generating candidate hypotheses through token alignments and scoring each candidate according to its textual features, which include n-gram language model score, n-gram similarity to each base model's output, and sentence length. MEMT was originally designed for machine translation system combination, but Susanto et al. (2014) successfully adapted it for use in GEC.
#### 2.2.3 EditScorer
EditScorer [1] is a model that scores each edit based on its textual features to generate a better correction. The model can be used to re-rank edits from a single model or combine edits from multiple models. It has a similar principle to ESC but uses the textual features of the edit and its surrounding context instead of the edit type. The textual feature is acquired from RoBERTa-large's [12] token representation of the candidate sentence. The model is trained with more than 2.4M sentence pairs from cLang8 [14] and the BEA-2019 training set.
## 3 GEC System Combination via Quality Estimation
To be able to use a GEC quality estimation model for system combination, we assume an ideal quality estimation model that can discern good hypotheses from bad ones and produce appropriate quality scores. Even though a perfect quality estimation model does not exist yet, a quality estimation model that behaves close to this assumption will be good enough to be useful for combining GEC systems.
### Problem Formulation
For a source sentence \(s=\{s_{1},s_{2},...,s_{l}\}\) with length \(l\) and a hypothesis \(h=\{h_{1},h_{2},...,h_{m}\}\) with length \(m\), a quality estimation model produces a quality score \(Q(s,h)\) to assess how good \(h\) is as a correction to \(s\). When combining GEC systems, we have multiple hypotheses from different base GEC systems. From these hypotheses, we can extract all the edits. Let \(\mathbb{E}\) denote the union of all edits.
A new hypothesis can be generated by applying an edit \(e_{i}\in\mathbb{E}\) to the source sentence \(s\). If it is a correct edit (\(e_{i}^{+}\)), the quality score of the resulting hypothesis should be higher than when the edit is not applied or when a wrong edit (\(e_{i}^{-}\)) is applied. Let \(h\oplus e\) denote the operation of applying edit \(e\) to sentence \(h\). For any hypothesis \(h\) (including the case of \(h\) = \(s\)), an ideal quality estimation model should have the following property:
\[Q(s,h\oplus e^{+})>Q(s,h)>Q(s,h\oplus e^{-}) \tag{1}\]
### Beam Search
From an edit union of size \(|\mathbb{E}|\), we can get \(2^{|\mathbb{E}|}\) possible hypotheses. However, scoring all possible hypotheses is too costly, so we use beam search with size \(b\) to generate the potential candidates in a reasonable time. We apply each edit in \(\mathbb{E}\) one by one to the hypotheses in the current beam to generate new candidates, with time complexity \(O(b\times|\mathbb{E}|)\).
Initially, the beam contains the source sentence and all edits in \(\mathbb{E}\) are sorted from left to right, i.e.,
Figure 1: Beam search with beam size \((b)=2\). The blue arrow denotes generation of a new hypothesis, and the orange circle denotes the hypotheses with the highest scores. At each step, new hypotheses are generated by applying edit \(e_{i}\) to the top-\(b\) hypotheses of step \(i-1\).
edits with smaller start and end indices are processed earlier. In each step, we generate new candidates by applying the current edit to all candidate sentences in the beam if it does not create a conflict with previously added edits. We use the edit conflict definition of Qorib et al. (2022). Next, we compute the quality scores for the new candidates and add them to the beam. At the end of each step, we trim the beam by only keeping the top-\(b\) candidates with the highest quality scores. After we finish processing all edits, the candidate with the highest quality score becomes the final correction. We illustrate this process in Figure 1.
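A minimal sketch of this beam search is given below; the helper functions `quality_score` and `conflicts` are hypothetical stand-ins for the quality estimation model and for the edit-conflict check of Qorib et al. (2022):

```python
def combine(source_tokens, edit_union, quality_score, conflicts, beam_size=16):
    """Beam search over the union of edits proposed by the base systems.
    Each beam entry is (score, applied_edits); edits are processed left to right.
    `quality_score(source_tokens, edits)` and `conflicts(e1, e2)` are supplied
    by the caller (the QE model and the edit-conflict check, respectively)."""
    beam = [(quality_score(source_tokens, []), [])]            # start from the source itself
    for edit in sorted(edit_union):                            # smaller start/end indices first
        candidates = list(beam)                                # keep the "edit not applied" options
        for score, applied in beam:
            if not any(conflicts(edit, e) for e in applied):   # skip conflicting combinations
                new_applied = applied + [edit]
                candidates.append((quality_score(source_tokens, new_applied), new_applied))
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    return max(beam, key=lambda b: b[0])[1]                    # edits of the best-scoring hypothesis
```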
## 4 Quality Estimation Method
A correction produced by a GEC model can be wrong in three aspects: keeping wrong words or phrases from the source sentence, changing the words or phrases into the wrong ones, or missing some words or phrases. In other words, a quality estimation model needs to know which words are correct and which are wrong, as well as determine whether the gaps between words are correct or wrong (in which case a word or phrase needs to be inserted).
A GEC quality estimation model should also produce the quality scores proportionately. A better correction of the same source sentence should get a higher score than a worse one. That is, a quality estimation model should be able to rank hypotheses by their quality scores correctly.
In this section, we describe our approach to build a quality estimation model with the aforementioned qualities, which we call **GRECO** (**g**rammaticality-score for **re**-ranking **co**rrections).
### Architecture
Our model uses a BERT-like pre-trained language model as its core architecture (Figure 2). We draw inspiration from quality estimation for machine translation models Kim et al. (2019); Lee (2020); Wang et al. (2020). The input to the model is the concatenation of the source sentence and a hypothesis, with the source sentence and the hypothesis prefixed with [CLS] (\(s_{0}\)) and [SEP] (\(h_{0}\)) pseudo-tokens respectively.
For every word in the hypothesis, the model learns to predict its word label \(w_{i}\) and gap label \(g_{i}\). The word label denotes whether the current word is correct. \(w_{i}\) is 1 when the word is correct and 0 otherwise. The gap label denotes whether there should be a word or phrase inserted right after the current word and before the next word. \(g_{i}\) is 1 when the gap is correct (i.e., there should be no words inserted) and 0 otherwise. The gold word labels and gap labels are computed by extracting the differences between the hypothesis and the gold reference sentence using ERRANT Bryant et al. (2017). The word label \(w_{i}\) and gap label \(g_{i}\) are computed from the projection of the embeddings learned by the pre-trained language model to a value in [0,1] using a two-layered neural network with tanh activation (\(\phi\)). We formally describe them in Equations (2) and (3), where LM denotes the pre-trained language model, \(\sigma\) denotes the sigmoid function, \(\mathbf{A}_{w}\), \(\mathbf{a}_{w}\), \(\mathbf{b}_{w}\), and \(b_{w}\) denote the weights and biases for the word label projector, and \(\mathbf{A}_{g}\), \(\mathbf{a}_{g}\), \(\mathbf{b}_{g}\), and \(b_{g}\) denote the weights and biases for the gap label projector. The size of \(\mathbf{A}_{w}\) and \(\mathbf{A}_{g}\) is \(d_{LM}\times d_{LM}\), while the size of \(\mathbf{a}_{w}\), \(\mathbf{a}_{g}\), \(\mathbf{b}_{w}\), and \(\mathbf{b}_{g}\) is \(d_{LM}\times 1\), with \(d_{LM}\) being the language model dimension.
\[\mathbf{V} =\text{LM}(s;h)\] \[=\text{LM}(s_{0},s_{1},\ \dots,\ s_{l},h_{0},h_{1},\dots,h_{m})\] \[=\{\mathbf{v}_{0}^{s},\mathbf{v}_{1}^{s},\dots,\mathbf{v}_{l}^{s},\mathbf{v}_{0}^{h},\mathbf{v}_{1}^{h},\dots,\mathbf{v}_{m}^{h}\}\] \[w_{i} =\sigma(\mathbf{a}_{w}^{T}\phi(\mathbf{A}_{w}\mathbf{v}_{i}^{h}+\mathbf{b}_{w})+b_{w}) \tag{2}\] \[g_{i} =\sigma(\mathbf{a}_{g}^{T}\phi(\mathbf{A}_{g}\mathbf{v}_{i}^{h}+\mathbf{b}_{g})+b_{g}) \tag{3}\]
The length of the word label vector \(\mathbf{w}\) is the same as the hypothesis (\(m\)), while the length of the gap label vector \(\mathbf{g}\) is \(m+1\). The gap vector length is one more than the hypothesis' length to account for the potentially missing words at the start of the hypothesis. If the pre-trained language model uses sub-word tokenization, tokens that are not the beginning of a word are masked. The quality score \(Q\left(s,h\right)\) is calculated from the normalized product of the word label and gap label probabilities from all words in the hypothesis.
\[Q\left(s,h\right)=\sqrt[2m+1]{\prod_{i=1}^{m}w_{i}\cdot\prod_{i=0}^{m}g_{i}} \tag{4}\]
Figure 2: Model architecture
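A minimal numerical sketch of Equations (2)-(4) is given below; the pre-trained language model is abstracted away, so `V_h` (the matrix of hypothesis-token embeddings, with the [SEP] embedding in row 0) and the parameter dictionary are assumed inputs rather than part of the actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def greco_score(V_h, params):
    """Word labels w_i, gap labels g_i, and quality score Q, following Eq. (2)-(4).
    V_h:    (m+1, d) array of hypothesis-token embeddings from the LM,
            with the [SEP] embedding in row 0.
    params: dict with A_w (d, d), a_w (d,), b_w (d,), bw (scalar) and the
            analogous A_g, a_g, b_g, bg for the gap-label head."""
    def head(V, A, a, b_vec, b_scalar):
        hidden = np.tanh(V @ A.T + b_vec)      # phi(A v + b)
        return sigmoid(hidden @ a + b_scalar)  # sigma(a^T phi(...) + b)
    w = head(V_h[1:], params["A_w"], params["a_w"], params["b_w"], params["bw"])  # m word labels
    g = head(V_h, params["A_g"], params["a_g"], params["b_g"], params["bg"])      # m + 1 gap labels
    m = len(w)
    log_q = (np.log(w).sum() + np.log(g).sum()) / (2 * m + 1)  # normalized product, Eq. (4)
    return w, g, np.exp(log_q)
```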
### Loss Function
The model is trained on two objectives: predicting the word label and gap label, and ranking the hypotheses correctly with the quality score, i.e., hypotheses with higher \(F_{0.5}\) scores should have higher quality scores than hypotheses with lower \(F_{0.5}\) scores. This translates into three loss functions: word label loss (\(\mathcal{L}_{w}\)), gap label loss (\(\mathcal{L}_{g}\)), and rank loss (\(\mathcal{L}_{r}\)). The first two losses are based on binary cross-entropy loss and the rank loss is based on RankNet (Burges et al., 2005; Burges, 2010) with a slight modification to amplify the power term with a multiplier \(\mu\).
\[\mathcal{L} =\frac{1}{n}\sum_{j=1}^{n}\mathcal{L}_{w(j)}+\frac{1}{n}\sum_{j=1}^{n}\mathcal{L}_{g(j)}+\gamma\cdot\mathcal{L}_{r} \tag{5}\] \[\mathcal{L}_{w} =-\frac{1}{m}\sum_{i=1}^{m}\left(y_{i}^{w}\cdot\log w_{i}+(1-y_{i}^{w})\cdot\log(1-w_{i})\right) \tag{6}\] \[\mathcal{L}_{g} =-\frac{1}{m+1}\sum_{i=0}^{m}\left(y_{i}^{g}\cdot\log g_{i}+(1-y_{i}^{g})\cdot\log(1-g_{i})\right) \tag{7}\] \[\mathcal{L}_{r} =\sum_{y_{v}^{r}>y_{u}^{r}}\log\left(1+e^{-\sigma(Q_{v}-Q_{u})\cdot\mu}\right) \tag{8}\]
We formalize the loss functions in Equations (5) to (8), where \(n\) is the number of training instances, \(y^{w}\) and \(y^{g}\) are the correct labels for the word label and gap label respectively, \(y_{v}^{r}\) and \(Q_{v}\) are the \(F_{0.5}\) score and quality score of hypothesis \(v\) respectively, and \(\gamma\) is a hyper-parameter.
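The three losses can be sketched as follows (reading \(\sigma\) in Eq. (8) as the RankNet shape parameter, with \(\mu\) as the additional multiplier; the hyper-parameter values below are illustrative only):

```python
import numpy as np

def bce(labels, probs, eps=1e-12):
    """Binary cross-entropy over word or gap labels, as in Eq. (6)/(7)."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

def rank_loss(f05_scores, q_scores, sigma=1.0, mu=1.0):
    """Modified RankNet loss of Eq. (8): sum over hypothesis pairs (v, u)
    where F0.5(v) > F0.5(u); sigma is the RankNet shape parameter and mu
    amplifies the exponent (both values here are illustrative)."""
    loss = 0.0
    for v, yv in enumerate(f05_scores):
        for u, yu in enumerate(f05_scores):
            if yv > yu:
                loss += np.log1p(np.exp(-sigma * (q_scores[v] - q_scores[u]) * mu))
    return loss

def total_loss(word_labels, word_probs, gap_labels, gap_probs, f05, q, gamma=1.0):
    """Eq. (5) for a single training group: word BCE + gap BCE + gamma * rank loss."""
    return (bce(word_labels, word_probs) + bce(gap_labels, gap_probs)
            + gamma * rank_loss(f05, q))
```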
### System Combination Biases
The \(Q\) score from quality estimation models is model-agnostic as it fully depends on the source sentence and the hypothesis sentence, independent of the system that proposes the hypothesis. With a perfect quality estimation model, it should be enough to get the best hypothesis. If we use an imperfect quality estimation model, some valuable information in the system combination task can be useful to get a better hypothesis, such as how many systems propose an edit and which systems propose it. We incorporate the former through a voting bias and the latter through edit scores from an edit-based system combination method. In this section, we discuss how we replace the quality score \(Q\) in the beam search with a biased hypothesis score \(Q^{\prime}\).
#### 4.3.1 Voting Bias
Model voting is a common ensemble method that chooses a prediction label based on how many base systems predict that label. The rationale behind it is straightforward: the more systems propose a label, the more likely for it to be correct. In GEC, voting ensemble has also been used to combine edit labels from multiple GEC sequence-tagging models (Tarnavskyi et al., 2022).
\[h =s\oplus\mathbb{E}_{h} \tag{9}\] \[Q^{\prime}(s,h) =Q(s,h)\cdot V(\mathbb{E}_{h})^{\alpha} \tag{10}\] \[V(\mathbb{E}_{h}) =\frac{1}{|\mathbb{E}_{h}|}\sum_{e\in\mathbb{E}_{h}}\frac{count(e)}{c} \tag{11}\]
We incorporate voting bias into beam search by multiplying the quality score with a voting score \(V\) (Eq. 10). The voting score is calculated by the average number of base systems that propose an edit (\(count(e)\)) for all edits in the hypothesis (\(\mathbb{E}_{h}\)), normalized by the number of base systems \(c\). The effect of voting bias is governed by a hyper-parameter \(\alpha\), \(0\leq\alpha\leq 1\). If \(\alpha=0\), voting bias is not used.
#### 4.3.2 Edit Score
Qorib et al. (2022) reported that the best hypothesis is the one that maximizes the strengths of the base systems. Edit-based GEC system combination methods approximate the strengths of GEC models through their performance on each edit type and learn the best combination from the edit type feature. If we have edit scores that reflect the base GEC models' strength, we can incorporate them into the hypothesis scoring function.
One way to incorporate the edit scores is by multiplying the hypothesis score with the edit scores. However, if we only multiply the scores of the edits that are applied to the hypothesis, we will reward hypotheses with fewer edits, even if we normalize the edit score2. Instead, we want to reward hypotheses that contain good edits and penalize hypotheses that miss good edits. Thus, we design the edit score to be the product of all edits in the edit union \(\mathbb{E}\).
Footnote 2: For example, if edit \(e_{1}\) has an edit score of 0.95 and edit \(e_{2}\) has an edit score of 0.9, the score of applying both edits is lower than just applying \(e_{1}\) if we only multiply the scores of edits that appear in the hypothesis, even though \(e_{2}\) is also a good edit.
\[Q^{\prime}(s,h)=Q(s,h)^{1-\beta}\cdot V(\mathbb{E}_{h})^{\alpha} \cdot ES(\mathbb{E}_{h},\mathbb{E})^{\beta} \tag{12}\]
\[p_{edit}(e,\mathbb{E}_{h}) =\begin{cases}p_{ES}(e)&\text{if }e\in\mathbb{E}_{h}\\ 1-p_{ES}(e)&\text{otherwise}\end{cases} \tag{13}\] \[ES(\mathbb{E}_{h},\mathbb{E}) =\sqrt[|\mathbb{E}|]{\prod_{e\in\mathbb{E}}p_{edit}(e,\mathbb{E}_ {h})} \tag{14}\]
We formulate the hypothesis score with voting bias and edit score \(ES\) in Equations (12) to (14), where \(p_{ES}\) denotes the probability of each edit. The effect of the edit score is governed by the hyper-parameter \(\beta\), \(0\leq\beta<1\). If \(\beta=0\), the edit score is not used.
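Putting Equations (10)-(14) together, the biased hypothesis score used during beam search can be sketched as follows; `counts` (how many base systems propose each edit) and `edit_probs` (the per-edit probabilities \(p_{ES}\) from the edit-based combiner) are assumed to be given:

```python
import math

def biased_score(q, applied_edits, edit_union, counts, edit_probs,
                 num_systems, alpha=0.0, beta=0.0):
    """Q'(s, h) = Q^(1 - beta) * V^alpha * ES^beta, as in Eq. (12).
    counts[e] is the number of base systems proposing edit e, edit_probs[e]
    is its probability p_ES from the edit-based combiner."""
    applied = set(applied_edits)
    # Voting score V (Eq. 11): average fraction of base systems proposing each applied edit.
    # (Eq. 11 is undefined for an empty hypothesis; 1.0 is used here as a neutral choice.)
    v = (sum(counts[e] for e in applied) / (len(applied) * num_systems)) if applied else 1.0
    # Edit score ES (Eq. 13-14): |E|-th root of the product over *all* edits in the union,
    # rewarding applied good edits and penalizing missed ones.
    log_es = sum(math.log(edit_probs[e] if e in applied else 1.0 - edit_probs[e])
                 for e in edit_union) / len(edit_union)
    return (q ** (1 - beta)) * (v ** alpha) * (math.exp(log_es) ** beta)
```

Setting alpha = beta = 0 recovers the plain, model-agnostic quality score Q.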
## 5 Experiments
### Model Training
We use DeBERTA-V3-Large He et al. (2023) as the pre-trained language model. We train the quality estimation model using the unique corrections of nine GEC systems, which are BARTGEC Katsumata and Komachi (2020), GECToR RoBERTa Omelianchuk et al. (2020), GECToR XLNet, GECToR BERT, Kakao&Brain ensemble Choe et al. (2019), Kakao&Brain Transformerbase, Riken&Tohoku ensemble Kiyono et al. (2019), T5-Large Rothe et al. (2021), and UEDIN-MS ensemble Grundkiewicz et al. (2019), on the W&I+LOCNESS training set.
The training data are grouped into small groups of size \(n\), with corrections of the same source sentence grouped together as much as possible, and each group must contain at least \(\frac{n}{2}\) corrections from the same source sentence. This way, the rank loss (Eq. 8) computes more comparisons of hypotheses from the same source. Corrections with no edits are filtered out so that the model can focus more on predicting the labels on edit words. Perfect corrections are also filtered out to maintain the label distribution balance. W&I+LOCNESS has 34,308 sentences and the resulting training data, obtained after filtering the unique corrections of the nine GEC systems above, has 65,824 hypotheses. During training, word labels and gap labels associated with tokens not present in the source sentence are given a higher weight \(z\) than other word and gap labels in the calculation of \(\mathcal{L}_{w}\) and \(\mathcal{L}_{g}\). We choose the hyper-parameters based on the model's performance on the BEA-2019 development set Bryant et al. (2019) and the CoNLL-2013 test set Ng et al. (2013). We list the hyper-parameters and explain our hyper-parameter search in Appendix B.
### Evaluation
We evaluate our model on quality estimation, re-ranking, and system combination tasks. We compare our models to other GEC quality estimation models (NeuQE, VERNet, SOME) and a language model baseline, GPT-2 Large Radford et al. (2019), which has been reported to perform relatively well on unsupervised GEC task Alikaniotis and Raheja (2019). We use the RC variant of NeuQE and the ELECTRA variant of VERNet which produce the highest scores on the CoNLL-2014 test set Ng et al. (2014). For the system combination task, we compare our model to state-of-the-art GEC system combination methods in Section 2.2.
For the quality estimation and re-ranking tasks, we perform experiments in two scenarios: single system and multi-system evaluation. For single system evaluation, we follow Liu et al. (2021) on evaluating the models on the top-5 outputs of Riken&Tohoku Kiyono et al. (2019) on the CoNLL-2014 test set. For multi-system evaluation, we evaluate the models on the outputs of 12 participating teams of the CoNLL-2014 shared task.
For the quality estimation task, we follow Chollampatt and Ng (2018) in comparing the correlation coefficient of the quality score to the sentence-level \(F_{0.5}\) score. We use Spearman's rank correlation coefficient Spearman (1904), which is the primary metric of the WMT-2022 shared task on quality estimation Zerva et al. (2022).
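For reference, the correlation computation amounts to a single call to SciPy's implementation of Spearman's rank correlation (the score lists below are placeholders):

```python
from scipy.stats import spearmanr

# Placeholder values; in practice these are the model's quality scores and the
# sentence-level F0.5 scores of the same hypotheses (e.g., computed with the M2 scorer).
quality_scores = [0.91, 0.42, 0.77, 0.30]
f05_scores = [0.80, 0.35, 0.61, 0.40]
rho, p_value = spearmanr(quality_scores, f05_scores)
print(rho)
```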
For the re-ranking task, we pick one hypothesis with the highest quality score for each source
\begin{table}
\begin{tabular}{l|c|c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{**Single system evaluation**} & \multicolumn{4}{c}{**Multi-system evaluation**} \\ \cline{2-9}
**Model** & \(\rho\) & **P** & **R** & **F0.5** & \(\rho\) & **P** & **R** & **F0.5** \\ \hline NeuQE & -0.003 & 52.53 & 12.83 & 32.45 & 0.212 & 38.30 & 10.84 & 25.43 \\ VERNet & 0.199 & 72.13 & 35.93 & 60.04 & 0.354 & 65.06 & 22.41 & 47.12 \\ SOME & 0.002 & 53.02 & 51.06 & 52.62 & 0.392 & 47.30 & 34.62 & 44.07 \\ GPT-2 & 0.088 & 54.09 & 50.98 & 53.43 & 0.116 & 46.67 & 33.32 & 43.21 \\ \hline
**GRECO** & **0.445** & 71.23 & 47.72 & **64.84** & **0.415** & 67.39 & 30.71 & **54.40** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quality estimation and re-ranking results on the CoNLL-2014 test set. \(\rho\) denotes Spearman’s rank correlation coefficient.
sentence, and then compute the corpus-level \(F_{0.5}\) score for the chosen hypotheses of all source sentences. We also add the source sentence as one of the hypotheses so that a model has the option to not make any corrections if the hypotheses are bad. We use the M2 Scorer Dahlmeier and Ng (2012) to compute the \(F_{0.5}\) score.
For the system combination task, we combine the base systems given in Table 3 using ESC, MEMT, and EditScorer. We also combine the same base systems using beam search via quality estimation for the different quality estimation methods NeuQE, VERNet, SOME, GPT-2, GRECO, \(\text{GRECO}_{\textit{voting}}\), and \(\text{GRECO}_{\textit{voting+ESC}}\). We report the \(F_{0.5}\) scores on the BEA-2019 test set and CoNLL-2014 test set.
We use Williams' test Williams (1959) to measure the statistical significance of the correlation coefficients. We use bootstrap resampling on 100 samples for the statistical significance of the \(F_{0.5}\) scores in the re-ranking and system combination tasks.
## 6 Results
### Quality Estimation and Re-Ranking
We report the results of quality estimation and re-ranking evaluation in Table 2. Our model significantly outperforms all other quality estimation models on the correlation score and \(F_{0.5}\) score in both experimental settings (\(p<0.001\)). Our re-ranking result has higher precision, recall, and \(F_{0.5}\) score compared to the top-1 output of Riken&Tohoku, which has a precision, recall, and \(F_{0.5}\) score of 68.59, 44.87, and 62.03 respectively3.
Footnote 3: The top-1 performance for Riken&Tohoku in this experiment is different from Table 3 because the data for this experiment was from reproduction by Liu et al. (2021), which does not include right-to-left re-ranking due to that component not being publicly available according to them.
### System Combination
We report the results of the system combination experiments in Table 3. Existing GEC quality estimation models fail to produce better corrections. More surprisingly, all of them produce combination scores that are lower than the fourth-best base system on the BEA-2019 experiment and the second-best base system on the CoNLL-2014 experiment. Our model without any additional biases (GRECO) successfully produces better corrections, scoring 4.17 and 3.89 points higher than the best base system on the BEA-2019 (T5-Large) and CoNLL-2014 (GECToR XLNet) test sets respectively.
By adding the voting bias, our model outperforms MEMT on both datasets, and ESC and EditScorer on the CoNLL-2014 test set.
\begin{table}
\begin{tabular}{l||c|c c c||c c c} \hline & \multicolumn{2}{c|}{**BEA-2019**} & \multicolumn{2}{c||}{**BEA-2019 Test**} & \multicolumn{2}{c}{**CoNLL-2014 Test**} \\
**Model** & **Dev (F\({}_{0.5}\))** & **P** & **R** & **F\({}_{0.5}\)** & **P** & **R** & **F\({}_{0.5}\)** \\ \hline
1. T5-Large & 56.21 & 74.30 & 66.75 & 72.66 & 69.66 & 51.50 & 65.07 \\
2. GECToR XLNet & 55.62 & 79.20 & 53.90 & 72.40 & 77.49 & 40.15 & 65.34 \\
3. GECToR RoBERTa & 54.18 & 77.20 & 55.10 & 71.50 & 73.91 & 41.66 & 64.00 \\
4. Riken&Tohoku & 53.95 & 74.7 & 56.7 & 70.2 & 73.26 & 44.17 & 64.74 \\
5. UEDIN-MS & 53.00 & 72.28 & 60.12 & 69.47 & 75.15 & 41.21 & 64.52 \\
6. Kakao\&Brain & 53.27 & 75.19 & 51.91 & 69.00 & - & - & - \\ \hline NeuQE & 29.30 & 68.48 & 20.19 & 46.32 & 66.48 & 15.87 & 40.59 \\ VERNet & 54.80 & 73.19 & 58.42 & 69.67 & 74.08 & 39.12 & 62.85 \\ SOME & 52.23 & 66.40 & 67.83 & 66.68 & 68.39 & 54.23 & 65.00 \\ GPT-2 & 52.00 & 67.20 & 68.08 & 67.38 & 68.30 & 52.65 & 64.47 \\ \hline ESC & 63.09 & 86.65 & 60.91 & 79.90 & 81.48 & 43.78 & 69.51 \\ MEMT & 60.72 & 82.20 & 63.00 & 77.48 & 76.44 & 48.06 & 68.37 \\ EditScorer & 61.66 & 88.05 & 58.71 & 80.05 & 74.32 & 51.44 & 68.25 \\ \hline GRECO & 60.74 & 80.03 & 66.22 & 76.83 & 76.39 & 50.35 & 69.23 \\ \(\text{GRECO}_{\textit{voting}}\) & 62.22 & 82.86 & 65.10 & 78.58 & 79.36 & 48.69 & **70.48** \\ \(\text{GRECO}_{\textit{voting+ESC}}\) & **63.40** & 86.45 & 63.13 & **80.50** & 79.36 & 48.69 & **70.48** \\ \hline \end{tabular}
\end{table}
Table 3: System combination results. The first group of rows shows the base GEC systems that are combined, while the second and the third group of rows show the combination results of existing quality estimation models and system combination methods respectively. We do not show Kakao&Brain’s score on CoNLL-2014 as it is not used in the CoNLL-2014 combination. ESC and MEMT results are taken from (Qorib et al., 2022). \(\text{GRECO}_{\textit{voting}}\) refers to our model with voting bias, and \(\text{GRECO}_{\textit{voting+ESC}}\) refers to our model with voting bias and edit scores from ESC.
Our model, when augmented with voting bias and edit scores from ESC, outperforms ESC and EditScorer on the BEA-2019 test set by 0.6 and 0.45 points respectively. Note that EditScorer is trained with 70 times more data than GRECO. Using edit scores from ESC on the CoNLL-2014 test set does not change the result since the optimal edit weight (\(\beta\)) is zero. In other experiments, we found that combining with ESC can also improve the \(F_{0.5}\) score on the CoNLL-2014 test set (Appendix Table 11). Our final model has significantly higher scores than all other methods (\(p<0.005\)).
We also evaluate our model on the combination of stronger GEC models. We replace T5-Large by T5-XL and GECToR RoBERTa by GECToR-Large RoBERTa (Tarnavskyi et al., 2022) from the base systems and achieve the highest BEA-2019 test score, 80.84, reported to date. (Table 4).
## 7 Discussions
This section discusses important characteristics of our model. From this section onward, we sometimes refer to our model \(\text{GRECO}_{voting}\) as \(\text{G}_{v}\), and \(\text{GRECO}_{voting+\text{ESC}}\) as \(\text{G}_{v+\text{E}}\).
### Model-Agnostic
GRECO is model-agnostic, which means we can add or change the base systems during inference. We run an experiment of adding the C4-200M GEC system (Stahlberg and Kumar, 2021) to the base systems of the CoNLL-2014 system combination while using the model weights and hyper-parameters from Table 3. This experiment setting is not possible with ESC and MEMT which require the same set of base systems during training and testing. In this experiment, we also replace T5-Large with T5-XL. From this experiment, our model achieves the highest CoNLL-2014 test score reported to date, 71.12 (Table 5). We also evaluate it on the CoNLL-2014 test set with 10 annotations, using the same approach as Bryant and Ng (2015)4 and report the highest score to date, 85.21.
Footnote 4: Evaluate the output on 10 sets of 9-annotation references and average the \(F_{0.5}\) scores from all 10 sets.
### Fluency
We analyze the output fluency of our methods by measuring the perplexity of the generated corrections using GPT-2, since prior work (Kann et al., 2018) has found that perplexity correlates with human fluency score. We found that our method generates more fluent corrections than all other methods, and adding more biases to our model makes the model less fluent (Figure 3). Based on the generated sentences, edit-based methods like ESC and EditScorer are too optimized toward picking the correct edits which can make the sentence unnatural. Our model with three modes of generality offers a flexible trade-off between \(F_{0.5}\) score and fluency. Our \(\text{GRECO}_{voting+\text{ESC}}\) achieves a higher \(F_{0.5}\) score and better fluency at the same time compared to the previous best system combination methods ESC and EditScorer.
\begin{table}
\begin{tabular}{l|c c c} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**BEA-2019 Test**} \\ & **P** & **R** & \(\textbf{F}_{0.5}\) \\ \hline
1. T5-XL & 76.85 & 67.26 & 74.72 \\
3. GECToR-Large & 80.70 & 53.39 & 73.21 \\ \hline + Model [2, 4, 5], [6] & from Table 3 \\ \hline \hline ESC & 86.64 & 61.54 & 80.10 \\ \hline GRECO & 80.46 & 66.59 & 77.24 \\ \(\text{GRECO}_{voting}\) & 83.24 & 65.12 & 78.85 \\ \(\text{GRECO}_{voting+\text{ESC}}\) & 86.66 & 63.72 & **80.84** \\ \hline \end{tabular}
\end{table}
Table 4: Combination of the same base systems in Table 3, but with model [1] replaced by T5-XL and model [3] replaced by GECToR-Large RoBERTa.
\begin{table}
\begin{tabular}{l|c c c} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**CoNLL-2014 Test**} \\ & **P** & **R** & \(\textbf{F}_{0.5}\) \\ \hline
1. T5-XL & 74.40 & 52.02 & 68.50 \\
7. C4-200M & 75.47 & 49.06 & 68.13 \\ \hline \multicolumn{3}{c}{+ Model [2] – [5]} & from Table 3 \\ \hline \hline GRECO & 76.68 & 51.54 & 69.86 \\ \(\text{GRECO}_{voting}\) & 79.60 & 49.86 & **71.12** \\ \hline \end{tabular}
\end{table}
Table 5: Combination of the same base systems in Table 3, but with T5-Large replaced with T5-XL and the C4-200M model added to the base systems.
Figure 3: Median perplexity of the generated corrections on the BEA-2019 development set (lower is better).
### Ablation
We run an ablation study to evaluate the contribution of each component of our method and the effect of beam size on model performance (Table 6). We found that both the training (rank loss) and inference (voting bias, edit scoring, beam search) techniques contribute to the final model performance. We also found that the performance does not change much with beam size \(\geq 8\). However, the difference between greedy (beam size = 1) and beam search (beam size = 16) decoding is quite substantial, especially on the CoNLL-2014 test set. Beam search has more effect on the CoNLL-2014 test set because its number of edits per sentence is higher than that of the BEA-2019 development set.
### Number of Base Systems
We investigate the performance of our model when the number of base systems is reduced. We use the base systems in Table 3 and randomly sample 5 combinations of base systems for each experiment (except when combining 5 systems where there is only one combination). We evaluate the combination on the CoNLL-2014 test set and calculate the average \(F_{0.5}\) score for each number of base systems (Figure 4). We found that ESC's performance deteriorates rapidly when the number of base systems is reduced, while GRECO\({}_{voting}\) and EditScorer can maintain their performance.
## 8 Conclusion
In this paper, we present novel methods to utilize GEC quality estimation models for system combination with varying generality: model-agnostic, model-agnostic with voting bias, and model-dependent method. We report that existing GEC quality estimation models are not able to differentiate good corrections from bad ones, which is shown by their ineffectiveness on the re-ranking and system combination tasks. Hence, there is a need for a new quality estimation model for GEC.
We present a new state-of-the-art quality estimation model, GRECO. Our model outperforms existing models on quality estimation and re-ranking evaluation. Our re-ranking of the top-5 hypotheses of Riken&Tohoku beats the performance of the top-1 hypothesis. Our source code and model weights are publicly available, making our model directly usable as a post-processing tool for re-ranking GEC systems' outputs.
Our model, combined with a voting bias and an edit-based system combination method, successfully improves the \(F_{0.5}\) scores on the GEC system combination task and produces the highest \(F_{0.5}\) scores on the CoNLL-2014 test set and BEA-2019 test set to date, which are 71.12 and 80.84 respectively.
## Limitations
Our model is trained on English GEC data with a single reference, and we only report experimental results on English GEC. Future work can apply our
\begin{table}
\begin{tabular}{l|l l l l} \hline \hline
**Model** & \(b\) & \begin{tabular}{l} **BEA** \\ **-Dev** \\ \end{tabular} & \begin{tabular}{l} **BEA** \\ **-Test** \\ \end{tabular} & \begin{tabular}{l} **CoNLL** \\ **-2014** \\ \end{tabular} \\ \hline GRECO\({}_{voting+\text{ESC}}\) & 16 & 63.40 & 80.50 & 70.48 \\ GRECO\({}_{voting+\text{ESC}}\) & 1 & 62.96 & 80.18 & 67.48 \\ GRECO\({}_{voting}\) & 16 & 62.22 & 78.58 & 70.48 \\ GRECO\({}_{voting}\) & 1 & 60.66 & 77.04 & 67.48 \\ GRECO & 16 & 60.74 & 76.83 & 69.23 \\ GRECO & 1 & 60.18 & 76.77 & 66.84 \\ GRECO without rank loss & 16 & 59.78 & 75.62 & 68.51 \\ GRECO without rank loss & 1 & 57.50 & 73.05 & 65.64 \\ \hline \hline \end{tabular} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{\(b\)} & \begin{tabular}{l} **BEA** \\ **-Dev** \\ \end{tabular} & \begin{tabular}{l} **BEA** \\ **-Test** \\ \end{tabular} &
\begin{tabular}{l} **CoNLL** \\ **-2014** \\ \end{tabular} \\ \hline
1 & 62.96 & 80.18 & 67.48 \\
2 & 63.13 & 80.17 & 69.38 \\
4 & 63.31 & 80.31 & 70.25 \\
8 & 63.34 & 80.47 & 70.36 \\
12 & 63.34 & 80.49 & 70.34 \\
16 & 63.40 & 80.50 & 70.48 \\
20 & 63.38 & 80.53 & 70.44 \\
24 & 63.33 & 80.53 & 70.36 \\
32 & 63.37 & 80.55 & 70.56 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study for each component of the model (left) and different beam sizes \(b\) (right).
Figure 4: Average \(F_{0.5}\) score on the CoNLL-2014 test set for varying number of base systems.
model to other languages. We believe our work does not pose any risks to society.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. This research is supported by a research grant from TikTok (WBS No. A-8000972-00-00). The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)). Cloud resources involved in this research work were also partially supported by NUS IT's Cloud Credits for Research Programme and Amazon Web Services.
|
2310.05692 | Based on What We Can Control Artificial Neural Networks | How can the stability and efficiency of Artificial Neural Networks (ANNs) be
ensured through a systematic analysis method? This paper seeks to address that
query. While numerous factors can influence the learning process of ANNs,
utilizing knowledge from control systems allows us to analyze its system
function and simulate system responses. Although the complexity of most ANNs is
extremely high, we still can analyze each factor (e.g., optimiser,
hyperparameters) by simulating their system response. This new method also can
potentially benefit the development of new optimiser and learning system,
especially when discerning which components adversely affect ANNs. Controlling
ANNs can benefit from the design of optimiser and learning system, as (1) all
optimisers act as controllers, (2) all learning systems operate as control
systems with inputs and outputs, and (3) the optimiser should match the
learning system. Please find codes:
\url{https://github.com/RandomUserName2023/Control-ANNs}. | Cheng Kang, Xujing Yao | 2023-10-09T13:09:38Z | http://arxiv.org/abs/2310.05692v1 | # Based on What We Can Control Artificial Neural Networks
###### Abstract
How can the stability and efficiency of Artificial Neural Networks (ANNs) be ensured through a systematic analysis method? This paper seeks to address that query. While numerous factors can influence the learning process of ANNs, utilizing knowledge from control systems allows us to analyze its system function and simulate system responses. Although the complexity of most ANNs is extremely high, we still can analyze each factor (e.g., optimiser, hyperparameters) by simulating their system response. This new method also can potentially benefit the development of new optimiser and learning system, especially when discerning which components adversely affect ANNs. Controlling ANNs can benefit from the design of optimiser and learning system, as (1) all optimisers act as controllers, (2) all learning systems operate as control systems with inputs and outputs, and (3) the optimiser should match the learning system. Please find codes: [https://github.com/RandomUserName2023/Control-ANNs](https://github.com/RandomUserName2023/Control-ANNs).
Optimizer Controller Learning System Control System Fuzzy Logic Filter
## 1 Introduction
Controlling artificial neural networks (ANNs) has become an urgent issue in such a dramatically growing domain. ANN models, such as vision models (e.g., CNN [21], VGG19 [33], ResNet50 [11], EfficientNet [34], ViT [7]), language models (e.g., BERT [6], GPT [29], PaLM [4]), and generative models (e.g., GAN [9], VAE [18], Stable Diffusion Models [15, 31]), all require input and output, as they aim to close the gap between their output and the desired output. However, CNN-based vision models generally prefer the SGDM [28] optimiser, while generative models tend to rely on the AdaM optimiser. Using various architectures for CNN-based vision models (e.g., from VGG19 to ResNet50, from GAN to CycleGAN [40], and from CNN to FFNN [13]) yields significantly varied results for classification and generation tasks. Two critical questions arise: **(1)** why some of these models suit their corresponding optimiser, and **(2)** on what basis to propose an advanced ANN architecture and a proper optimiser.
Compared to existing era-crossing optimisers, such as SGD [30, 5, 38], SGDM [28, 25], AdaM [17, 3], PID [36], and Gaussian LPF-SGD [2], we propose a FuzzyPID optimiser modified by fuzzy logic to avoid vibration during the PID optimiser learning process. Referring to Gaussian LPF-SGD (GLPF-SGD), we also propose two filter-processed SGD methods based on the low- and high-frequency parts of the SGD learning process: low-pass-filter SGD (LPF-SGD) and high-pass-filter SGD (HPF-SGD). To achieve stable and convergent performance, we simulate the above optimisers through their system responses to analyze their attributes. When using a simple and straightforward architecture (without advanced techniques such as BN [16], ReLU [26], pooling [37], and exponential or cosine decay [24]), we found that their one-step system responses are always consistent with their training processes. Therefore, we conclude that every optimiser can actually be considered as a controller that optimises the training process. Results using HPF-SGD indicate that the high-frequency part of the SGD optimiser significantly benefits the learning process and the classification performance.
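As a rough illustration of the filtered-gradient idea (the exact filter design is not given in this excerpt, so a first-order exponential filter is used as a stand-in: the low-pass part is a running average of the gradient and the high-pass part is the residual), one update step might look like:

```python
import numpy as np

def filtered_sgd_step(theta, grad, state, lr=0.01, cutoff=0.9, mode="hpf"):
    """One SGD step using a low-pass or high-pass filtered gradient.
    `state["lpf"]` holds a running exponential average of the gradient."""
    state["lpf"] = cutoff * state["lpf"] + (1.0 - cutoff) * grad  # low-frequency part
    hpf = grad - state["lpf"]                                     # high-frequency residual
    direction = state["lpf"] if mode == "lpf" else hpf
    return theta - lr * direction

theta = np.zeros(3)
state = {"lpf": np.zeros(3)}
grad = np.array([0.5, -0.2, 0.1])                                 # gradient from one mini-batch
theta = filtered_sgd_step(theta, grad, state, mode="lpf")
```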
To analyze the learning progress of most ANNs, we consider, for example, a CNN trained with the backpropagation algorithm, an FFNN trained with the forward-forward algorithm, and a GAN, a generative model that uses random noise to generate samples. We assume that these three models can essentially be represented by corresponding control systems. The difficulty is that when using different optimisers, especially AdaM, we cannot easily analyze stability and convergence, as the complexity is extremely high. Thus, we use MATLAB Simulink to analyze their system responses, as well as their generating responses. Experiment results indicate that advanced architectures and designs of these three ANNs can improve learning, such as residual connections (RSs) in ResNets, a higher Threshold in FFNN, and a cycle loss function in CycleGAN.
Based on knowledge of control systems [27], designing proper optimisers (or controllers) and advanced learning systems can benefit the learning process and help complete relevant tasks (e.g., classification and generation). In this paper, we design two advanced optimisers and analyze three learning systems based on control-system knowledge. The contributions are as follows:
**Optimisers are controllers. (1)** The PID and SGDM (PI controller) optimisers perform more stably than the SGD (P controller), AdaM and fuzzyPID optimisers on most CNN models that use residual connections. **(2)** HPF-SGD outperforms SGD and LPF-SGD, which indicates that the high-frequency part is significant during the SGD learning process. **(3)** AdaM acts as the combination of an adaptive filter and an accumulative (momentum) part.
**Learning systems of most ANNs are control systems. (1)** Most ANNs show performance that is consistent with their system responses. **(2)** We can use proper optimisers to control and improve the learning process of most ANNs.
**The optimiser should match the learning system. (1)** RS-based vision models prefer the SGDM, PID and fuzzyPID optimisers. **(2)** The RS mechanism is similar to AdaM: SGDM optimises model weights along the time dimension, while RSs optimise the model along the space dimension. **(3)** AdaM significantly benefits FFNN and GAN, whereas PID and FuzzyPID benefit CycleGAN the most.
## 2 Problem Statement and Preliminaries
To make ANNs more effective and adaptive to specific tasks, controlling ANNs has become necessary. We initialize a parameter of one node in the ANN model as a scalar \(\theta_{0}\). After sufficiently many updates, the optimal value \(\theta^{*}\) can be obtained. We simplify the parameter update in ANN optimisation as a one-step response (from \(\theta_{0}\) to \(\theta^{*}\)) of a control system. The Laplace transform of \(\theta^{*}\) is \(\theta^{*}/s\). We denote the weight at iteration \(t\) by \(\theta(t)\), its Laplace transform by \(\theta(s)\), and that of the error \(e(t)=\theta^{*}-\theta(t)\) by \(E(s)\):
\[E(s)=\frac{\theta^{*}}{s}-\theta(s) \tag{1}\]
Considering the collaboration of backward and forward algorithms, the Laplace transform of the training process is
\[U(s)=(\textit{Controller1}+\textit{Controller2}\cdot F(s))\cdot E(s) \tag{2}\]
\(F(s)\) is the forward system which has the capability to affect \(U(s)\) beforehand. In our case, \(u(t)\) corresponds to the update of \(\theta(t)\). _Controller1_ is the parameter update algorithm for the backward process, and _Controller2_ is the parameter update algorithm for the forward process. Therefore, we replace \(U(s)\) with \(\theta(s)\) and \(E(s)\) with \((\theta^{*}/s)-\theta(s)\). Equation 2 can be rewritten as
\[\theta(s)=(\textit{Controller1}+\textit{Controller2}\cdot F(s))\cdot\left( \frac{\theta^{*}}{s}-\theta(s)\right) \tag{3}\]
Finally, we simplify the formula of training a model as:
\[\theta(s)=\frac{\textit{Controller}}{\textit{Controller}+1}\cdot\frac{ \theta^{*}}{s} \tag{4}\]
where \(\textit{Controller}=\textit{Controller1}+\textit{Controller2}\cdot F(s)\), and \(\theta^{*}\) denotes the optimal model that we should obtain at the end. We simplify \(\theta(s)\) further as:
\[\theta(s)=\textbf{Controller}(\mathbf{s})\cdot\mathbf{C}(\mathbf{s}) \tag{5}\]
Figure 1: The schematic structure of training ANN models. C(s) is the controller to train the target ANN model.
where \(\mathbf{Controller}(\mathbf{s})=\mathit{Controller}/(\mathit{Controller}+1)\) and \(\mathbf{C}(\mathbf{s})=\theta^{*}/s\). Based on the above analysis, and as shown in Figure 1, there are two ways to obtain an optimal \(\theta(s)\) and to improve the training process: **(1)** using a better **Controller**, and **(2)** constructing a better training or control system \(\mathbf{C}(\mathbf{s})\).
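To make this viewpoint concrete, the following minimal sketch forms the closed loop \(\mathit{Controller}/(\mathit{Controller}+1)\) from Eq. (4) for a PI controller (the SGDM analogue) and inspects its unit-step response. It uses SciPy as a lightweight stand-in for the Simulink simulations used later; the gains are illustrative, not tuned values.

```python
# Minimal sketch (not the Simulink models used in our experiments): form the
# closed-loop system Controller/(Controller + 1) from Eq. (4) and inspect its
# unit-step response, here for a PI controller C(s) = Kp + Ki/s (SGDM analogue).
from scipy import signal

Kp, Ki = 1.0, 5.0                       # illustrative gains
# C(s) = (Kp*s + Ki)/s  =>  C/(1 + C) = (Kp*s + Ki) / ((1 + Kp)*s + Ki)
closed_loop = signal.TransferFunction([Kp, Ki], [1.0 + Kp, Ki])

t, theta = signal.step(closed_loop)     # response to a unit step, i.e. theta* = 1
print(f"final value ~ {theta[-1]:.3f}  (approaches 1: no steady-state error)")
```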
## 3 Optimisers are Controllers
In this section, we review several widely used optimisers, such as SGD [30; 5; 38], SGDM [28; 25], AdaM [17; 3], PID-optimiser [36] and Gaussian LPF-SGD [2]. In the training process of most ANNs, there are diverse architectures used to satisfy various tasks. We analyze the performance of optimisers in terms of one node of backpropagation based ANN models. Please see the proof in Appendix A.
### AdaM Optimiser
AdaM [17] has been used to optimise the learning process of most ANNs, such as GAN, VAE, Transformer-based models, and their variants. We simplify the learning system of using AdaM on ANNs as below:
\[\theta(s)=\frac{K_{p}s+K_{i}}{Ms^{2}+(K_{p}-Mln\beta_{1})s+K_{i}}\cdot\frac{ \theta^{*}}{s} \tag{6}\]
where \(M\) is an adaptation factor that dynamically adjusts the update during training, and it can be derived as:
\[M=\frac{1}{\sqrt{\frac{\sum_{i=0}^{t}\beta_{2}^{t-i}(\partial L_{t}/\partial\theta_{i})^{2}}{\sum_{i=0}^{t}\beta_{2}^{t-i}}}+\epsilon}\cdot\frac{1}{\sum_{i=0}^{t}\beta_{1}^{t-i}} \tag{7}\]
Apart from the adaptive part \(M\), AdaM can be thought of as the combination of SGDM and an adaptive filter with cutoff frequency \(\omega_{c}=\ln(\beta_{1})\).
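For concreteness, the following is a minimal sketch of one standard bias-corrected AdaM step on a scalar parameter; the adaptive factor \(M\) of Eq. (7) corresponds roughly to the combined normalisation applied to the accumulated gradient below. The values are illustrative, not those used in our experiments.

```python
# Minimal sketch of one scalar AdaM step (standard bias-corrected form), to make
# the "momentum + adaptive scaling" reading concrete. Roughly, the factor M of
# Eq. (7) plays the role of the 1/(sqrt(v_hat) + eps) scaling combined with the
# first-moment normalisation.
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first moment (momentum / PI part)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (adaptive part)
    m_hat = m / (1 - beta1 ** t)                 # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):                            # a few toy steps on grad = 2*theta
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
print(theta)
```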
### Filter Processed SGD optimiser
The SGD learning process can be filtered by carefully designed filters. GLPF-SGD [2] uses a low-pass Gaussian filter to smooth the training process while actively searching for flat regions of the Deep Learning (DL) optimisation landscape. We simplify the learning system of SGD with filters on ANNs as below:
\[\theta(s)=\frac{Gain\cdot\prod_{i=0}^{m}{(s+h_{i})}}{Gain\cdot\prod_{i=0}^{m }{(s+h_{i})+\prod_{j=0}^{n}{(s+l_{j})}}}\cdot\frac{\theta^{*}}{s} \tag{8}\]
where the designed \(Filter\) has order \(n\) for the low-pass part and \(m\) for the high-pass part (\(h_{i}\) are the coefficients of the high-pass part and \(l_{j}\) are the coefficients of the low-pass part), and \(Gain\) is the gain factor:
\[Filter=Gain\cdot\frac{(s+h_{0})(s+h_{1})...(s+h_{m})}{(s+l_{0})(s+l_{1})...(s+l _{n})} \tag{9}\]
### PID and FuzzyPID optimiser
Based on the PID optimiser [36], we design a PID controller tuned by fuzzy logic to make the training process more stable while keeping the dominant attributes of the model, for instance, the ability to resist the disturbance of poisoned samples, fast convergence, and competitive performance.
There are two key factors which affect the performance of the Fuzzy PID optimiser: (1) the selection of Fuzzy Universe Range \([-\varphi,\varphi]\) and (2) Membership Function Type \(f_{m}\).
\[\widehat{K}_{\mathrm{P,I,D}}=K_{\mathrm{P,I,D}}+\Delta K_{\mathrm{P,I,D}} \tag{10}\]
\[\Delta K_{\mathrm{P,I,D}}=Defuzzy(E(s),Ec(s))\cdot K_{\mathrm{P,I,D}} \tag{11}\]
\(Defuzzy(s)=f_{m}(round(-\varphi,\varphi,s))\)
where \(K_{\rm P,I,D}\) are the default gain coefficients \(K_{\rm P}\), \(K_{\rm I}\) and \(K_{\rm D}\) before modification, and \(\Delta K_{\rm P,I,D}\) are their fuzzy adjustments. \(E(s)\) is the feedback error, and \(Ec(s)\) is the Laplace transform of the difference between \(e(t)\) and \(e(t-1)\). The Laplace function of this model, \(\theta(s)\), eventually becomes:
\[\theta(s)=\frac{\widehat{K}_{d}s^{2}+\widehat{K}_{p}s+\widehat{K}_{i}}{ \widehat{K}_{d}s^{2}+(\widehat{K}_{p}+1)s+\widehat{K}_{i}}\cdot\frac{\theta^{ *}}{s} \tag{12}\]
where \(\widehat{K}_{p}\), \(\widehat{K}_{i}\) and \(\widehat{K}_{d}\) are the gains obtained after the fuzzy-logic adjustment. By carefully selecting the learning rate \(r\), \(\theta(s)\) becomes a stable system.
PID [1] and fuzzy PID [35] controllers have long been used to control feedback systems by exploiting present, past, and predicted future information about the error. One advantage of a fuzzy PID controller is that it can provide different response levels to non-linear variations in a system, while still functioning as well as a standard PID controller when the variation is predictable.
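The sketch below illustrates one plausible reading of the gain adjustment in Eqs. (10)-(11): the error signals are clipped to the fuzzy universe, passed through a Gaussian membership function, and used to scale the default gains. The way \(E\) and \(Ec\) are combined and the membership width are assumptions made for illustration, not the exact fuzzy rule base.

```python
# Sketch of the gain adjustment in Eqs. (10)-(11) under one plausible reading:
# clip the feedback error into the fuzzy universe [-phi, phi], pass it through a
# Gaussian membership function f_m, and scale the default PID gains. The way e
# and ec are combined, and sigma, are illustrative assumptions.
import numpy as np

def fuzzy_adjusted_gains(e, ec, K, phi=0.02, sigma=0.01):
    x = np.clip(0.5 * (e + ec), -phi, phi)              # combine error and error change
    membership = np.exp(-(x ** 2) / (2 * sigma ** 2))   # Gaussian membership f_m
    delta_K = membership * K                            # Delta K_{P,I,D} of Eq. (11)
    return K + delta_K                                  # K_hat of Eq. (10)

K_default = np.array([1.0, 5.0, 100.0])                 # [K_P, K_I, K_D] from Sec. 5.2.1
print(fuzzy_adjusted_gains(e=0.005, ec=0.001, K=K_default))
```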
## 4 Control Systems of ANNs
In this section, to systematically analyze the learning process of ANNs, we introduce three commonly used control systems that we believe correspond, respectively, to backpropagation-based CNNs, forward-forward-based FFNNs, and GANs: **(1)** a backward control system, **(2)** a forward control system with different hyperparameters, and **(3)** a backward-forward control system with different optimisers and hyperparameters. Please see the proof in Appendix B.
### Backward Control System
Traditional CNNs use the backpropagation algorithm to update initialized weights; based on the errors (or minibatched errors) between real labels and predicted results, optimisers control how the weights are updated. Following the derivation of the PID optimiser [36], the training process of Deep Neural Networks (DNNs) can be treated as the step response of a control system. However, most commonly used optimisers have limitations: **(1)** SGD takes a very long time to converge, **(2)** SGDM also converges slowly even with momentum accelerating training, **(3)** AdaM vibrates frequently during training because it merges momentum with root mean squared propagation (RMSprop), and **(4)** the PID optimiser has better stability and convergence speed, but its training process still vibrates. The proposed fuzzyPID optimiser keeps the learning process more stable because it can be weighted towards particular types of responses, which acts like an adaptive gain setting on a standard PID optimiser. Finally, taking the FuzzyPID optimiser as an example, the system function \(\theta(s)\) of the ANN is:
\[\theta(s)=\frac{\textit{FuzzyPID}}{\textit{FuzzyPID}+1}\cdot\frac{\theta^{ *}}{s} \tag{13}\]
### Forward-Forward Control System
The forward-forward computing algorithm was systematically analyzed for forward-forward neural networks [13], which aim to track features and reveal how ANNs extract them from the training data. The Forward-Forward algorithm is a greedy multilayer learning procedure inspired by Boltzmann machines [14] and noise contrastive estimation [10]. The idea is to replace the forward-backward passes of backpropagation with two forward passes that operate in exactly the same way, but on different data and with opposite goals. In this system, the positive pass operates on real data and adjusts the weights to increase the goodness in each hidden layer; the negative pass operates on negative data and adjusts the weights to reduce the goodness in each hidden layer. According to the training process of FFNN, we obtain its system function \(\theta(s)\) as below:
\[\theta(s)=\left\{\left(-(1-\lambda)\frac{\theta^{*}}{s}+\lambda\frac{\theta^{ *}}{s}-\left[\theta(s)-\frac{Th}{s}\right]\right)\right\}\cdot Controller \tag{14}\]
where \(\lambda\in[0,1]\) is the proportion of positive samples, and \(Th\) is the given Threshold from the original design [13]. The input should contain both negative and positive samples, and by adjusting the Threshold \(Th\), the embedding space can be optimised. In each layer, the weights are updated using only the corresponding errors, which are computed by subtracting the Threshold \(Th\). We finally simplify \(\theta(s)\) as:
\[\theta(s)=\frac{1}{Controller+1}\cdot\left(\frac{(2\lambda-1)\theta^{*}+Th}{s}\right) \tag{15}\]
Because \((2\lambda-1)\theta^{*}+Th\geq 0\), the FFNN system is stable. Additionally, when \(\lambda=0.5\) and \(Th=1.0\), the learning system of the FFNN (the second factor of Equation 15) reduces to that of a backpropagation-based CNN, assuming \(\theta^{*}\approx 1.0\). When \(\lambda=0.5\), the optimal result \(\theta^{*}\) has no influence on the learning system.
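To make the positive/negative passes concrete, the following is a minimal per-layer objective in the spirit of [13], where goodness is the sum of squared activations compared against the Threshold \(Th\); it is an illustrative reimplementation rather than the exact training code used in our experiments.

```python
# Minimal sketch of a single forward-forward layer objective in the spirit of
# [13]: "goodness" is the sum of squared activations, pushed above the
# Threshold Th for positive samples and below it for negative samples.
import numpy as np

def goodness(h):
    return np.sum(h ** 2, axis=-1)

def ff_layer_loss(h_pos, h_neg, Th=2.0):
    # Logistic losses on (goodness - Th): positive data should exceed Th,
    # negative data should fall below it.
    loss_pos = np.log1p(np.exp(-(goodness(h_pos) - Th)))
    loss_neg = np.log1p(np.exp(goodness(h_neg) - Th))
    return np.mean(loss_pos + loss_neg)

rng = np.random.default_rng(0)
h_pos = np.abs(rng.normal(1.0, 0.2, size=(8, 16)))   # toy ReLU activations
h_neg = np.abs(rng.normal(0.1, 0.2, size=(8, 16)))
print(ff_layer_loss(h_pos, h_neg))
```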
### Backward-Forward Control System
A GAN is designed to generate samples from Gaussian noise, and its performance depends on its architecture [39]. The generative network uses random inputs to generate samples, and the discriminative network aims to classify whether a sample is generated or real [9]. We obtain its \(\theta(s)\) as below:
\[\theta_{D}(s)=controller\cdot\theta_{G}(s)\cdot E(s) \tag{16}\]
\[\theta_{G}(s)=controller\cdot E(s) \tag{17}\]
\[E(s)=\frac{\theta_{D}^{*}}{s}-\theta_{D}(s) \tag{18}\]
where \(\theta_{D}(s)\) is the desired Discriminator, \(\theta_{G}(s)\) is the desired Generator. \(E(s)\) is the feed-back error. \(\theta_{G}^{*}\) is the optimal solution of the generator, and \(\theta_{D}^{*}\) is the optimal solution of the discriminator.
Eventually, we simplify \(\theta_{G}(s)\) and \(\theta_{D}(s)\) as below:
\[\theta_{G}(s)=\frac{1}{2}\cdot\left(\frac{\theta_{D}^{*}}{Controller}\pm \sqrt{(\frac{\theta_{D}^{*}}{Controller})^{2}-\frac{4}{s}}\right) \tag{19}\]
\[\theta_{D}(s)=\theta_{G}^{2}(s) \tag{20}\]
where, if we set \(\theta_{G}(s)=0\), we obtain one pole at \(s=0\). When using SGD as the \(controller\), \(\theta_{G}(s)\) is a marginally stable system.
## 5 Experiments
### Simulation
As we believe that the training process of most ANNs can be modeled as the source response of control systems, we use Simulink (MATLAB R2022a) to simulate their response to different sources. For the classification task, because all models aim to classify different categories, we set a step source as illustrated in [36]. For the sample generation task, to get a clear generating result, we use a sinusoidal source.
### Experiment Settings
We train our models on the MNIST [23], CIFAR10 [20], CIFAR100 [20] and TinyImageNet [22] datasets. For an apples-to-apples comparison, our training strategy is mostly adopted from the PID optimiser [36] and FFNN [13]. To optimise the learning process, we **(1)** first use seven optimisers for the classification task on backpropagation-based ANNs, **(2)** then choose some important hyperparameters and simulate the learning process of FFNN, and **(3)** lastly, to improve stability and convergence during GAN training, analyze its system response under various optimisers. All models are trained on a single Tesla V100 GPU. All hyper-parameters are presented in Table 3 of Appendix E.
#### 5.2.1 Backward Control System
We design one neural network using backpropagation algorithm with \(2\) hidden layers, setting the learning rate \(r\) at \(0.02\) and the fuzzy universe range \(\varphi\) at \([-0.02,0.02]\). We initialize \(K_{P}\) as \(1\), \(K_{I}\) as \(5\), and \(K_{D}\) as \(100\). Thus, we compare seven different optimisers: SGD (P controller), SGDM (PI controller), AdaM (PI controller with an Adaptive Filter), PID (PID controller), LPF-SGD, HPF-SGD and FuzzyPID (fuzzy PID controller) on the above ANN model. We set Gaussian membership function as the default membership function. See filter coefficients in Table 4 of Appendix E. In Table 5 of Appendix E, there is a set of hyperparameters that we have used to train CIFAR10, CIFAR100 and TinyImageNet.
#### 5.2.2 Forward-Forward Control System
Following the forward-forward algorithm [13], we design a forward-forward neural network (FFNN) with \(4\) hidden layers, each containing \(2000\) ReLUs and full connectivity between layers, simultaneously feeding positive and negative samples into the model to teach it to distinguish handwritten digits (MNIST). We also carefully select the proportion of positive and negative samples. The length of every block is \(60\).
#### 5.2.3 Backward-Forward Control System
To demonstrate the relationship between the control system and the learning process of more complex ANNs, we choose the classical GAN [9]. Both the generator and the discriminator comprise \(4\) hidden layers. To verify the influence of different optimisers on the GAN, we employ SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and fuzzyPID to generate handwritten digits (MNIST). We set the learning rate to \(0.0002\) and the total number of epochs to \(200\).
## 6 Results and Analysis
In this section, we present simulation performance, classification accuracy, error rate and generation result, using different optimisers and advanced control systems.
### Backward Control System on CNN
Before the classification task, we first simulate the step response of backpropagation-based ANNs under each controller (optimiser). As observed in Figure 2(b) and Figure 2(c), the AdaM optimiser converges rapidly to the optimum but with an obvious vibration. Although FuzzyPID does not converge as rapidly, there is no obvious vibration during training. Other optimisers, such as HPF-SGD, SGDM and PID, perform below AdaM and FuzzyPID in terms of the training process. In Figure 2(a), the response of the AdaM controller is faster than the others, with FuzzyPID following it. However, due to the overshoot of AdaM, the stability of the ANN system when using the AdaM controller tends to be lower. This overshoot phenomenon is reflected in the training process with AdaM in Figure 2(b) and Figure 2(c).
We summarize the results of classifying MNIST in Table 1. Under the same conditions, the SGD optimiser reaches a testing accuracy of \(91.98\%\), while most of the other optimisers reach above \(97\%\). FuzzyPID achieves the highest training and testing accuracy using the Gaussian membership function. In Figure 2, considering the rise time, settling time and overshoot, the fuzzy optimiser outperforms the other optimisers. A better optimiser (or controller), one that inherits advanced knowledge and is effectively designed, is beneficial for classification performance.
### Forward Forward Control System on FFNN
We also simulate the control system of the proposed FFNN and compare its system response under different hyperparameters. In Figure 3, the SGD controller still cannot reach the target, and the AdaM controller approaches the target fastest. However, the SGDM controller lags behind PID in terms of the step response. Because of its low-frequency part, LPF-SGD climbs more slowly than HPF-SGD. Although the differential coefficient D of the PID optimiser helps reduce overshoot, overcome oscillation and shorten the adjustment time, its performance cannot catch up with AdaM. As shown in Table 2, AdaM outperforms the other optimisers in terms of error rate, and the performance of these seven optimisers echoes Figure 2(a). A higher portion of positive samples can contribute to the classification, and a higher
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**optimiser** & SGD & SGDM & Adam & PID & LPF-SGD & HPF-SGD & FuzzyPID \\ \hline
**Training**\(Accuracy\) & \(91.48_{\pm 0.03}\) & \(97.78_{\pm 0.00}\) & \(99.46_{\pm 0.02}\) & \(99.45_{\pm 0.01}\) & \(11.03_{\pm 0.01}\) & \(93.35_{\pm 0.02}\) & \(99.73_{\pm 0.00}\) \\
**Testing**\(Accuracy\) & \(91.98_{\pm 0.05}\) & \(97.11_{\pm 0.02}\) & \(97.81_{\pm 0.10}\) & \(98.18_{\pm 0.02}\) & \(10.51_{\pm 0.03}\) & \(93.45_{\pm 0.09}\) & \(98.24_{\pm 0.10}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The results of ANN based on the backpropogation algorithm on MNIST data. Using the 10-fold cross-validation, the average and standard variance results are shown below.
Figure 2: The step response, training curve and loss curve using different controllers, such as SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and FuzzyPID optimisers.
\(Threshold\) can benefit it more. For the step response in Figure 3(c), although AdaM (\(Threshold=0.5\), \(70\%\) positive and \(30\%\) negative samples) and AdaM (\(Threshold=0.5\), \(50\%\) positive and \(50\%\) negative samples) rise fastest, the final results in Table 2 show that AdaM (\(Threshold=5.0\), \(50\%\) positive and \(50\%\) negative samples) obtains a lower error rate.
### Backward-Forward Control System on GAN
For the sample generation task, we also simulate the system response of GANs under each controller (optimiser) and summarize the results in Figure 5. Apart from AdaM, LPF-SGD and HPF-SGD, all controllers show obvious noise, and interestingly, the same phenomenon can be seen in Figure 4. The MNIST digits generated with the AdaM optimiser have no noise and can be easily recognized, and, not surprisingly, the source response of AdaM in Figure 5 eventually converges. Figure 4 and Figure 5 mutually echo each other. Hence, when using the classical GAN to generate samples, AdaM should be the best optimiser for the weight updates. With the other optimisers, the generated MNIST samples sometimes cannot be recognized, and the GAN produces only identical samples. One reason can be observed in Figure 5, where the sinusoidal signals generated by four of the controllers, PID, LPF-SGD, HPF-SGD and FuzzyPID, move up and down, potentially leading to unstable and repetitive generation output.
## 7 Discussion
### Why do various optimisers act as controllers during the learning process?
Under the same training conditions (e.g., the same architecture and hyperparameters), the appropriate optimiser can tackle a specific task. Vision models that use residual connections prefer the SGDM, HPF-SGD and PID optimisers (see Figure 14 of Appendix F). There is an obvious overshoot in the step response of the AdaM controller (see Figure 10), and a similar vibration can be found in the testing curves of Figure 14 of Appendix F. A classification task often needs a rapid response to save learning resources, but if stability and robustness are the priorities, we should choose a different optimizer, such as PID or FuzzyPID, which, under fuzzy-logic adjustment, demonstrates a superior step response (see Figure 2a). Moreover, for the generation task, the GAN is best matched with the AdaM optimiser; we found that the adaptive part of AdaM can rapidly adjust the learning process. In contrast, other optimisers, such as SGD, SGDM and PID, generate samples with obvious noise and repeatedly output the same samples, so the generated samples cannot be recognized easily (see Figure 4 and Figure 5). For particular needs (e.g., Image-to-Image Translation), CycleGAN, an advanced generation system, was proposed to generate samples from one data pool and improve domain adaptation to the target data pool. Coincidentally, we found that CycleGAN has a preference for the PID optimiser. Therefore, it is necessary to design a stable and task-suited optimiser for a specifically designed learning system. However, given that the system functions of most learning systems are extremely complex, simulating their system responses has become a viable way to analyze them. We conclude that, to achieve the best performance, every ANN should use the proper optimiser according to its learning system.
### How can various learning systems be analyzed?
Numerous advanced components have enhanced ANNs, and a quantitative analysis of each of them can pave the way for the development of new optimisers and learning systems. For the classification task with a backward control system, analyzing a single node of the learning system, the rise time, peak time, overshoot (vibration), and settling time [36, 27] can serve as metrics to evaluate a component's effect on the learning system. To visualize the learning process, the FFNN was proposed by [13], and this forward-forward-based training system can achieve competitive performance compared to backpropagation-based models. The \(Threshold\), a single hyperparameter, can significantly improve the convergence speed, as it has the effect of a proportional adjustment (like a stronger P term in a PID controller). The portion of positive samples only slightly affects the classification result, because its proportional adjustment is too weak in the FFNN learning system (see Equation 15). Additionally, the system response to various sources can also serve as a metric to evaluate a learning system. We conclude that there are two main routes to improving ANNs: **(1)** developing a proper optimiser and **(2)** designing a better learning system. On the one hand, for example, the system response of the GAN has high-frequency noise and cannot converge with the SGD, SGDM and PID optimisers (see Figure 5); one possible solution is adding an adaptive filter, which is why AdaM outperforms the other optimisers at generating samples (see Figure 4). The overshoot of AdaM and SGDM during classification training can accelerate convergence, but its side effect of vibration points us toward PID and FuzzyPID. Therefore, developing a task-matched optimizer according to the system response determines the final performance of ANNs. On the other hand, to satisfy various task requirements, learning systems should also be made stable and fast. For example, \(\theta_{G}(s)\) has two system functions, as derived in Eq. (19), and one possible way to offset this side effect is to use an extra generator. That can explain why advanced GANs with multiple generators (e.g., CycleGAN) can generate higher-quality samples than the classical GAN.
## 8 Limitations
Although we systematically showed that **(1)** the optimiser acts as a controller and **(2)** the learning system functions as a control system, this preliminary work has three obvious limitations: **a.** we cannot analyze larger models due to the complexity introduced by advanced techniques; **b.** the system response of some ANNs (e.g., FFNN) may not perfectly align with their real performance; **c.** we cannot always derive closed-form solutions for complex learning systems.
## 9 Conclusion
In this study, we presented a comprehensive empirical study investigating the connection between control systems and the learning systems of various ANNs. We provided a systematic analysis method for several ANNs, including CNN, FFNN, GAN, CycleGAN, and ResNet, under several optimisers: SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and FuzzyPID. By analyzing the system responses of ANNs, we explained the rationale behind choosing appropriate optimisers for different ANNs. Moreover, designing better learning systems, combined with the use of a proper optimiser, can satisfy task requirements. In future work, we intend to delve into the control systems of other ANNs, such as Variational Autoencoders (VAEs), diffusion models, and Transformer-based models, as well as the development of optimisers, as we believe the principles of control systems can guide improvements in all ANNs and optimisers.
|
2305.01642 | Construct sparse portfolio with mutual fund's favourite stocks in China
A share market | Unlike developed market, some emerging markets are dominated by retail and
unprofessional trading. China A share market is a good and fitting example in
last 20 years. Meanwhile, lots of research show professional investor in China
A share market continuously generate excess return compare with total market
index. Specifically, this excess return mostly come from stock selectivity
ability instead of market timing. However for some reason such as fund capacity
limit, fund manager change or market regional switch, it is very hard to find a
fund could continuously beat market. Therefore, in order to get excess return
from mutual fund industry, we use quantitative way to build the sparse
portfolio that take advantage of favorite stocks by mutual fund in China A
market. Firstly we do the analysis about favourite stocks by mutual fund and
compare the different method to construct our portfolio. Then we build a sparse
stock portfolio with constraint on both individual stock and industry exposure
using portfolio optimizer to closely track the partial equity funds index
930950.CSI with median 0.985 correlation. This problem is much more difficult
than tracking full information index or traditional ETF as higher turnover of
mutual fund, just first 10 holding of mutual fund available and fund report
updated quarterly with 15 days delay. Finally we build another low risk and
balanced sparse portfolio that consistently outperform benchmark 930950.CSI. | Ke Zhang | 2023-04-24T09:40:44Z | http://arxiv.org/abs/2305.01642v1 | # Construct sparse portfolio with mutual fund's favourite stocks in China A share market
###### Abstract
Unlike developed markets, some emerging markets are dominated by retail and unprofessional trading; the China A share market over the last 20 years is a fitting example. Meanwhile, much research shows that professional investors in the China A share market continuously generate excess returns compared with the total market index, and that this excess return mostly comes from stock-selection ability rather than market timing. However, for reasons such as fund capacity limits, fund manager changes or market regime switches, it is very hard to find a fund that can continuously beat the market. Therefore, in order to capture the excess return of the mutual fund industry, we use a quantitative approach to build a sparse portfolio that takes advantage of the stocks favoured by mutual funds in the China A market. We first analyze the stocks favoured by mutual funds and compare different methods for constructing our portfolio. We then build a sparse stock portfolio, with constraints on both individual stocks and industry exposure, using a portfolio optimizer to closely track the partial equity funds index 930950.CSI, achieving a median correlation of 0.985. This problem is much more difficult than tracking a full-information index or a traditional ETF because of the higher turnover of mutual funds, the fact that only the top 10 holdings of each fund are available, and the quarterly fund reports that arrive with a 15-day delay. Finally, we build another low-risk, balanced sparse portfolio that consistently outperforms the benchmark 930950.CSI.
**Keywords**: Portfolio construction, Sparsity, Index tracking, Mutual fund, China A share market
## 1 Introduction
Active investing versus passive investing has been a popular topic in recent years. In developed countries, much research shows that, on average, active investing has not beaten passive investing over decades [3][7][2]. The main reasons given are: active management fees are much higher than passive management fees; active investing is less tax efficient; top active funds do not stay in the top performance category for long; higher turnover leads to higher costs; and the AUM of top funds grows quickly while their Alpha decreases correspondingly. Most of this research focuses on developed markets such as the United States or Europe, which are dominated by institutional investors. Because of this, it is very hard to obtain excess return and Alpha there, especially after total fees. On the other hand, in some emerging stock markets such as China, active funds appear to perform better consistently, even after fees and taxes. The main cause is market inefficiency, due to the short history of the stock market and the high percentage of retail trading.
Unlike developed stock markets, the China A share market has a history of only about 30 years. The Shanghai Stock Exchange was established on 1990/11/26 and the Shenzhen Stock Exchange on 1990/12/01. In 1998, the first 6 mutual fund companies were founded and 5 mutual funds had their IPOs. The number of mutual funds in this market increased from 0 to more than 10 thousand between 1998 and 2023Q1. The market capitalization held by mutual funds first reached one trillion CNY in 2007Q1 and hit twenty-five trillion CNY at the end of 2022, the 4th largest in the world.[12]
However, institutional investors still account for a very small portion of this market. As of 2022Q3, institutional investors held only about 17% of stocks measured by market capitalization, and mutual funds held about 8% of the total stock market capitalization.[11]
Since institutional investors hold only a small part of the China A share market, we expect institutional investors such as mutual funds to be able to generate excess returns. Indeed, many researchers have shown that mutual funds in the China A share market do obtain excess returns or Alpha compared with the total market return [10][9][8][1][4][13].
If mutual funds in China can beat the market on average, how can we take advantage of this information? Research has shown that top mutual funds cannot stay in the top category all the time; sometimes last year's worst funds perform better later due to mean reversion. China's largest index provider, China Securities Index Co., Ltd. (CSI), constructs a partial equity funds index, 930950.CSI, which reflects the average performance of partial equity funds in the China A share market. However, in 2023Q1, 930950.CSI held more than 5000 funds, which makes it impossible to invest in directly.
In this paper, our first goal is to mimic 930950.CSI with a small stock universe using a constrained sparse convex optimizer. As 930950.CSI is a long-only index, our portfolio is also long-only, which automatically inherits the L1-norm penalty, as we explain in detail later. The traditional index tracking method minimizes the MSE of the difference between portfolio returns and index returns [14][19]. Our index holding information is delayed and only partially public, so it is much more difficult to track than
Figure 1: Size of mutual fund in history and investor distribution in China A share market
a full-information index or a traditional ETF. Therefore, we also add industry exposure constraints to our portfolio. Finally, we construct another, more balanced and lower-volatility portfolio that consistently beats 930950.CSI.
## 2 Choose benchmark
In this section, we choose a benchmark that represents the average performance of mutual funds in the China A share market.
China Securities Index Co., Ltd. (CSI) is a financial market index provider jointly funded by the Shanghai Stock Exchange and the Shenzhen Stock Exchange in August 2005. It is China's largest index company and provides the underlying indices for more than 95% of ETFs in the China A share market.
The CSI mutual fund indices provided by CSI reflect the overall performance of all CSRC-regulated mutual funds and their sub-classes. The indices provide benchmarks and underlyings for fund investors.
The CSI partial equity funds index (930950.CSI) is one of the CSI mutual fund indices and provides a benchmark for partial equity funds. It includes all funds that have been publicly offered for more than 3 months and whose lower limit of equity investment scope is 60%, weighted by fund shares, which gives larger capacity than an equal-weight method. As of 2023Q1, more than 30 funds or funds of funds use 930950.CSI as their benchmark.
In Tab. 1, we compare the average return of partial equity funds with 930950.CSI; they are highly correlated, while the performance of 930950.CSI is slightly worse than the average return of partial equity funds. We think the main reasons are that 930950.CSI is weighted by fund shares, which overweights large-AUM funds, and that it excludes small-AUM funds publicly offered for less than 3 months. As is well known, equity mutual fund performance is negatively correlated with AUM, and small-AUM funds enjoy higher IPO returns.
Having chosen our benchmark, we compare this partial equity funds index with other popular passive indices in the China A share market. The 000300.CSI index consists of the 300 largest and most liquid A-share stocks and aims to reflect the overall performance of large-cap
\begin{table}
\begin{tabular}{||l c c||} \hline Year & Average of partial equity funds & 930950.CSI \\ \hline \hline
2022 & -20.68 & -21.80 \\ \hline
2021 & 7.43 & 4.05 \\ \hline
2020 & 5.61 & 5.15 \\ \hline
2019 & 43.07 & 43.74 \\ \hline
2018 & -20.45 & -24.58 \\ \hline
2017 & 12.43 & 12.63 \\ \hline
2016 & -12.88 & -17.00 \\ \hline
2015 & 47.91 & 37.52 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average return of partial equity funds vs return of 930950.CSI, Data from fund.eastmoney.com and uqer.datayes.com
stocks in the China A-share market. The 000905.CSI index consists of the 500 largest and most liquid A-share stocks outside the 000300.CSI universe and aims to reflect the overall performance of mid- and small-cap stocks in the China A-share market. 930903.CSI is the total market index of the China A-share market.
In Tab. 2, we compare the performance of different indices in terms of total return, annualized volatility (using log returns) and Sharpe ratio (using log returns). Not surprisingly, the partial equity funds index outperforms the other indices, with higher total return and lower volatility. In other words, we pick the best-performing index in the China A share market as the benchmark to track and beat.
## 3 Trade setting and assumption
Firstly, as the fund reports we use are quarterly, the trade time of our model is set at 16 days after the end of each quarter, when the latest quarterly report first becomes available. We assume we can only trade a stock 3 months after its IPO. We cannot trade suspended stocks, and we cannot buy a stock at its limit-up price or sell it at its limit-down price.
We assume the commission cost is 5 bps for buys and 1.5 bps for sells. We assume orders on tradeable stocks are filled fully with a 1-cent slippage cost on each side. Our initial cash is 100 million CNY, the average AUM of a small mutual fund. Finally, we assume we can only trade share quantities divisible by 100.
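The helper below mirrors these trading assumptions (lots of 100 shares, 5 bps / 1.5 bps commission, 1-cent per-side slippage); the function and variable names are illustrative placeholders rather than part of our backtest code.

```python
# Small helper mirroring the trading assumptions above: orders are rounded down
# to lots of 100 shares, with 5 bps buy / 1.5 bps sell commission and a
# 1-cent-per-share slippage on each side. Names are illustrative placeholders.
def execute_order(target_value_cny, price, side):
    slip = 0.01 if side == "buy" else -0.01            # 1 cent slippage per side
    fill_price = price + slip
    shares = int(target_value_cny // (fill_price * 100)) * 100  # lots of 100
    notional = shares * fill_price
    fee_rate = 0.0005 if side == "buy" else 0.00015    # 5 bps buy, 1.5 bps sell
    return shares, notional, notional * fee_rate

print(execute_order(target_value_cny=1_000_000, price=12.34, side="buy"))
```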
## 4 Fund and stock screener
In order to construct the portfolio of mutual funds' favourite stocks, we first need to generate our stock universe. As our data source does not include information identifying partial equity mutual funds, we need to screen the funds manually. Our target funds are partial equity mutual funds listed for more than 3 months.
* Only include open-ended funds (no closed-end funds).
* Only include funds in the Equity or Hybrid categories.
* As we only care about active funds, exclude ETFs, enhanced ETFs and ETF feeder funds.
\begin{table}
\begin{tabular}{||c c c c||} \hline Index & Total Return(\%) & Vol & Sharpe \\ \hline
930950.CSI & 60.49 & 23.52 & 0.139 \\ \hline
000300.CSI & -23.36 & 25.97 & -0.068 \\ \hline
000905.CSI & 24.95 & 29.46 & 0.06 \\ \hline
930903.CSI & 3.21 & 26.42 & 0.012 \\ \hline \end{tabular}
\end{table}
Table 2: Compare different indices from 2007/12/31 to 2023/03/01, data from uqer.datayes.com
* Exclude QDII, FoF, structured funds and guaranteed funds.
* Remove duplicates: if several funds share the same name but end with a different share-class letter such as A, B, C or D, include only one of them.
* Exclude funds listed for less than 3 months.
As the universe of our portfolio consists of stocks, we then filter stocks from our candidate funds.
* At 16 days after the end of each quarter (the earliest time the quarterly report is available), we take the top 10 holdings of the candidate funds, which are the only holding information available from quarterly fund reports.
* Only include stocks in A-share market ( No B-share, H-share or ADR and so on).
* Only include stocks listed more than 3 months.
* Only include stocks is trade-able.
Here we only use quarterly report information, as semi-annual and annual fund reports are delayed too long. We do not exclude ST or ST* stocks, as we believe these stocks, when selected by mutual funds, still create Alpha and are liquid enough.
After obtaining our stock universe, we need a way to weight the stocks. 930950.CSI uses fund shares as weights for its constituents. Here we construct three weighting methods and compare their performance. The first weights stocks by the market capitalization of the holdings, which is closest to the weighting method of 930950.CSI. The second weights stocks by how many funds hold them. The third uses the sum of each stock's weights within the funds.
\[w_{i}=\frac{\sum_{j}{mcap}_{i,j}}{\sum_{i}{\sum_{j}{mcap}_{i,j}}} \tag{1}\]
\(mcap_{i,j}\) is the market capitalization of stock \(i\) held in fund \(j\).
Eq. (1) weights each stock by the market capitalization of its holdings across mutual funds.
\[w_{i}=\frac{\sum_{j}{I_{i,j}}}{\sum_{i}{\sum_{j}{I_{i,j}}}} \tag{2}\]
\(I_{i,j}=1\) if stock i in fund j else 0.
Eq. (2) weights each stock by the number of mutual funds holding it.
\[w_{i}=\frac{\sum_{j}{p_{i,j}}}{\sum_{i}{\sum_{j}{p_{i,j}}}} \tag{3}\]
\(p_{i,j}\) is the weight of stock \(i\) in fund \(j\).
Eq. (3) weights each stock by the sum of its weights across mutual funds.
Eq. (1) considers the effect of both market capitalization and the number of holdings. Eq. (2) considers only the number of holdings. Eq. (3) considers both the weight of a stock within mutual funds and the number of funds holding it. We now look at the performance of each method.
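A compact pandas sketch of Eqs. (1)-(3), given a quarterly table of top-10 fund holdings, is shown below; the column names are assumed for illustration.

```python
# Compact pandas sketch of the three weighting schemes in Eqs. (1)-(3), given a
# quarterly table of top-10 fund holdings. Column names (fund, stock, mcap,
# weight_in_fund) are illustrative assumptions.
import pandas as pd

def stock_weights(holdings: pd.DataFrame) -> pd.DataFrame:
    w = pd.DataFrame({
        "mcap_weight":  holdings.groupby("stock")["mcap"].sum(),            # Eq. (1)
        "count_weight": holdings.groupby("stock")["fund"].nunique(),        # Eq. (2)
        "sum_weight":   holdings.groupby("stock")["weight_in_fund"].sum(),  # Eq. (3)
    })
    return w / w.sum()   # normalise each column so weights sum to 1

holdings = pd.DataFrame({
    "fund":  ["A", "A", "B", "B", "C"],
    "stock": ["600519", "000858", "600519", "300750", "600519"],
    "mcap":  [5.0, 2.0, 4.0, 3.0, 6.0],              # holding value, arbitrary units
    "weight_in_fund": [0.09, 0.05, 0.08, 0.07, 0.10],
})
print(stock_weights(holdings))
```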
In Tab. 3, we weight stocks by each method mentioned above and construct a simple long-only portfolio; volatility and Sharpe ratio are calculated using log returns. The market capitalization weighting has the worst return and volatility. The weighted-sum-of-holdings method has the highest return, but its volatility is slightly worse than weighting by the number of holdings. In the sections below, when we want to track the index closely, we choose the market capitalization weighting, which is closest to the index's fund-share weighting; when we want to build a portfolio that beats the index, we select the weighted sum of holdings, which has the best performance.
## 5 Sparse convex optimization with constraint
_"Diversification is the only free lunch in investing."_
- Harry Markowitz
As the original partial equity funds index 930950.CSI holds thousands of funds each quarter and is impossible to invest in directly, we want to build a portfolio over a much smaller stock universe to track or beat it. Convex optimization with an L1-norm penalty fits this requirement.
Portfolio optimization is a very important part of investing. Harry Markowitz introduced mean-variance optimization in the 1950s [5], and Stephen Boyd published the standard convex optimization textbook in 2004 [6]. It is also well known that some non-convex problems can be transformed into convex problems [6]; for example, an L1-norm optimization problem can be transformed into an equivalent quadratic optimization problem.
\[\min_{w} ||A^{T}w-B||_{2}^{2}+\lambda||w||_{1}\] s.t. \[w\in\mathbb{C} \tag{4}\]
Here \(||w||_{1}=\sum_{i}|w_{i}|\).
Eq. (4) is convex optimization problem with L1-norm.
This problem is equivalent to solving:
\begin{table}
\begin{tabular}{||c c c c||} \hline Method & Total Return(\%) & Vol & Sharpe \\ \hline Market capitalization of holding & 108.47 & 25.53 & 0.326 \\ \hline Numbers of holding & 110.37 & 24.56 & 0.343 \\ \hline Weighted sum of holding & 112.01 & 25.10 & 0.338 \\ \hline \end{tabular}
\end{table}
Table 3: Performance of different weight method from 2013/12/31 to 2023/03/01, data from uqer.datayes.com
\[\min_{w} ||A^{T}w-B||_{2}^{2}+\lambda 1^{T}v\] s.t. \[w\in\mathbb{C}\] \[-v\leq w\leq v\]
Eq. (5) is a convex quadratic optimization problem and is an equivalent form of Eq. (4).
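Problem (4) can be written directly in CVXPY, which internally performs the kind of reformulation shown in Eq. (5); the shapes and the simple long-only constraint set below are illustrative.

```python
# Problem (4) written directly in CVXPY (which internally performs the kind of
# reformulation shown in Eq. (5)). The data shapes and the constraint set C
# (here a simple long-only budget) are illustrative.
import cvxpy as cp
import numpy as np

n_assets, n_days = 50, 250
rng = np.random.default_rng(0)
A = rng.normal(size=(n_assets, n_days))      # asset return matrix (rows = assets)
B = rng.normal(size=n_days)                  # benchmark return series
lam = 0.1

w = cp.Variable(n_assets)
objective = cp.Minimize(cp.sum_squares(A.T @ w - B) + lam * cp.norm1(w))
constraints = [w >= 0, cp.sum(w) == 1]       # long-only budget; ||w||_1 is then constant
cp.Problem(objective, constraints).solve()
print("non-zero weights:", int(np.sum(w.value > 1e-6)))
```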
Optimization with an L1-norm penalty has been shown to have several useful properties:
* It brings in sparsity. This is one of the most important reasons why people prefer the L1-norm penalty: sparsity adds interpretability to model parameters. In portfolio selection, the L1-norm is used to reduce the number of holdings or the number of trades.
* Long-only portfolio optimization naturally includes an L1-norm penalty. When we impose the two constraints \(w_{i}\geq 0\) and \(\sum_{i}w_{i}=1\), we naturally have \(||w||_{1}=\sum_{i}|w_{i}|=\sum_{i}w_{i}=1\), a fixed number independent of \(w\). Thus no extra work is needed to add the sparsity term to a long-only portfolio optimization.
* Similar to the L2-norm penalty, the L1-norm penalty stabilizes the estimator, which is usually used to avoid singularity or ill-conditioning of the problem.[14]
## 6 Construct sparse stock portfolio to track 930950.CSI
Building a portfolio to track a full-information index or ETF is a well-studied problem with many applications. However, mutual funds only publish their top 10 holdings every quarter, with a 15-day delay, which makes our problem much more difficult than the traditional tracking goal.
As mentioned before, 930950.CSI has better returns and lower volatility than other passive benchmark indices, but it holds thousands of funds each period and is impossible to invest in directly. Thus, in this section, we construct a sparse stock portfolio to track 930950.CSI closely.
We construct our portfolio as follows:
\[\min_{w} -\alpha^{T}w+\beta w^{T}\Sigma w+\kappa||R^{T}_{stock}w-r_{bench} ||_{2}^{2}+||w||_{1}\] s.t. \[0\leq w\leq\gamma \tag{6}\] \[\phi_{1}\leq Aw\leq\phi_{2}\] \[\sum_{i}w_{i}=1\]
First of all, as the weighting method of 930950.CSI is close to market capitalization weighting, we rank stocks by the market capitalization of their holdings and take the top half as our portfolio universe, representing the stocks favoured by mutual funds.
Here \(\alpha\) is the Alpha of each stock. We use the market-capitalization weight of the holdings as the Alpha in order to replicate 930950.CSI. We then standardize the Alpha and winsorize the standardized values at 3 standard deviations on each side.
\(\beta\) is a risk parameter that controls how important the risk model is. \(\Sigma\) is our covariance model, used to control portfolio risk; we use the sample covariance of rolling two-year log stock returns. To avoid singularity or ill-conditioning of the covariance, we shrink it using the method in [15].
\(||R_{stock}^{T}w-r_{bench}||_{2}^{2}\) is a common tracking measure called the empirical tracking error.[19] We minimize it to track our benchmark closely. \(R_{stock}\) contains the returns of the stocks in our universe and \(r_{bench}\) the returns of the index. Here we use rolling 3-month data for this mean-squared-error minimization because fund reports are updated quarterly.
\(||w||_{1}\) brings sparsity into our portfolio. Since we also have the constraints \(w_{i}\geq 0\) and \(\sum_{i}w_{i}=1\), we get \(||w||_{1}=\sum_{i}|w_{i}|=\sum_{i}w_{i}=1\), so this term is a constant independent of \(w\).
By regulation, mutual funds in China are long-only with a 10% single-stock limit, so we introduce \(0\leq w\leq\gamma\). This inequality ensures a long-only portfolio with each stock's weight at most \(\gamma\); to track the index closely, we choose \(\gamma=0.01\), so the maximum position in each stock is less than 1%.
Tracking the index with individual stocks alone is not enough, especially since our portfolio is sparse and our benchmark is not an ETF. Our idea is to also track the benchmark's industry exposure quarterly. As we do not have the full holding history of 930950.CSI, we use \(\phi\) as the industry exposure of the market-capitalization-weighted fund stocks, which is similar to the weighting method of 930950.CSI. To keep the portfolio feasible, we use inequality constraints instead of equality constraints. \(A\) is an \(M\times N\) matrix indicating whether stock \(i\) is in industry \(j\), where \(i\leq M,j\leq N\).
Finally, \(\sum_{i}w_{i}=1\) ensures that \(w\) represents the percentage weights of the portfolio.
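A condensed CVXPY sketch of problem (6) is given below; the inputs (Alpha, shrunk covariance, rolling returns, industry indicator matrix) are assumed to be prepared as described above, and the coefficients are illustrative rather than tuned values.

```python
# Condensed CVXPY sketch of the tracking problem (6). alpha, the shrunk (PSD)
# covariance Sigma, the stock return matrix R_stock (stocks x days), benchmark
# returns r_bench, and the industry indicator matrix A_ind are assumed to be
# prepared beforehand; beta, kappa, gamma and the industry bands are illustrative.
import cvxpy as cp

def tracking_portfolio(alpha, Sigma, R_stock, r_bench, A_ind, phi_lo, phi_hi,
                       beta=1.0, kappa=10.0, gamma=0.01):
    n = len(alpha)
    w = cp.Variable(n)
    objective = cp.Minimize(
        -alpha @ w
        + beta * cp.quad_form(w, Sigma)
        + kappa * cp.sum_squares(R_stock.T @ w - r_bench)
        + cp.norm1(w)                       # constant under the constraints below
    )
    constraints = [w >= 0, w <= gamma,
                   A_ind @ w >= phi_lo, A_ind @ w <= phi_hi,
                   cp.sum(w) == 1]
    cp.Problem(objective, constraints).solve()
    return w.value
```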
Figure 2: Return and correlation between Portfolio and 930950.CSI
As shown in Fig. 2, we successfully track 930950.CSI closely: the median 3-month rolling correlation between our portfolio and the index is as high as 0.985. The correlation is very high most of the time, apart from some short periods. We believe the main reason is that during those periods many mutual funds changed their positions substantially, while we only see the top 10 holdings of each fund quarterly, reported with a 15-day delay. This makes the index much more difficult to track than a full-information index or an ETF. Prior research has documented such herding behavior among institutional investors [1].
In the graph above, we show the industry exposure of our portfolio, which is optimized to stay close to 930950.CSI. We use the CSI Industry Classification Standard Level I to obtain stock industry exposures. The Communication Services sector has almost 0% exposure all the time, so it is not shown. The Industrials, Information Technology, Health Care and Consumer Staples sectors are overweighted, each with more than 10% exposure.
Meanwhile, we show the number of holdings in our portfolio. As we have a 1% maximum holding limit, we must hold more than 100 stocks each period. The graph shows that our holding count is slightly above 100 each period (about 120 on average, i.e., 25% of the universe), thanks to the sparsity induced by the L1-norm penalty in the portfolio optimizer.
Figure 3: Industry exposure of portfolio and number of holdings
## 7 Construct sparse portfolio to beat 930950.CSI
Here we construct another portfolio with the optimizer, but with a different setting:
\[\min_{w} -\alpha^{T}w+\beta w^{T}\Sigma w+||w||_{1}\] s.t. \[0\leq w\leq\gamma \tag{7}\] \[Aw\leq\phi\] \[\sum_{i}w_{i}=1\]
Our goal now is a portfolio that beats the partial equity funds index 930950.CSI.
First of all, we choose the weighted-sum-of-holdings method, which showed a higher return than the other two weighting methods. We then again select the top 50% of stocks as the favourites of the mutual fund industry.
Also, the China A share market has very high volatility, and many papers have shown that selecting low-volatility stocks in this market can improve not only annual return but also the Sharpe ratio. Therefore we sub-select our universe to the 100 stocks with the lowest max-flat weighted volatility [16][17][18].
As our goal is to beat 930950.CSI rather than track it closely, and we have a much smaller universe after the sub-selection, we relax the single-stock constraint \(\gamma\) from 1% to 10%, the same as the single-holding limit of a mutual fund.
Finally, we restrict each industry exposure \(\phi\) of our portfolio to within 10%, to reduce sector risk and build a more balanced portfolio.
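The sub-universe construction can be sketched as below; a plain rolling standard deviation of log returns is used as a stand-in for the max-flat weighted volatility of [16][17][18], and the parameters are illustrative.

```python
# Sketch of the sub-universe construction above: keep the top 50% of stocks by
# the chosen popularity weight, then the 100 with the lowest volatility. A plain
# rolling standard deviation of log returns stands in for the max-flat weighted
# volatility; window and fractions are illustrative.
import numpy as np
import pandas as pd

def low_vol_universe(weights: pd.Series, log_returns: pd.DataFrame,
                     top_frac=0.5, n_low_vol=100, window=252):
    favourites = weights.nlargest(int(len(weights) * top_frac)).index
    vol = log_returns[favourites].tail(window).std() * np.sqrt(252)   # annualised
    return vol.nsmallest(n_low_vol).index.tolist()
```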
Figure 4: Performance of portfolio compare with benchmark
Here volatility and Sharpe ratio are calculated from log returns. From the table above, our portfolio clearly has a higher total return, lower volatility and higher Sharpe ratio than either the partial equity funds index or the total market index.
Meanwhile, comparing the maximum drawdown of the three strategies, our portfolio has a lower maximum drawdown than both 930950.CSI and 930903.CSI.
Then consider the portfolio return minus the 930950.CSI return, and the portfolio return minus the 930903.CSI return. Both resulting equity curves rise gradually, which means our portfolio can beat these two indices in the long term.
\begin{table}
\begin{tabular}{||c c c c||} \hline Methods & Total Return(\%) & Vol & Sharpe \\ \hline Portfolio & 302.27 & 22.22 & 0.71 \\ \hline
930950.CSI & 127.11 & 23.46 & 0.40 \\ \hline
930903.CSI & 82.45 & 23.45 & 0.30 \\ \hline \end{tabular}
\end{table}
Table 4: Performance of portfolio compare with benchmark from 2013/12/31 to 2023/03/01, data from uqer.datayes.com
Figure 5: Under water graph and excess return graph
From the graph above, all industry exposures are restricted to within 10%. Industrials, Finance and Health Care stay around 10% the whole time but remain lower than their weights in 930950.CSI, while Energy and Utilities are clearly overweighted compared with 930950.CSI.
Meanwhile, our portfolio holds about 25 stocks on average, which is very sparse compared with the 100-stock universe.
## 8 Conclusion
In this paper, we first analyze mutual funds and their holdings in the China A share market. Consistent with the conclusions of other researchers, over the last 20 years mutual funds in the China A share market have outperformed the total market index, reflecting the inefficiency of an emerging market. We then build a sparse portfolio with individual-stock and industry-exposure constraints to track the partial equity funds index, achieving a highly correlated (0.985) sparse portfolio. Finally, we build a low-risk, balanced sparse portfolio that consistently beats the partial equity funds index in terms of both return and Sharpe ratio.
|
2306.09996 | Investigating Prompting Techniques for Zero- and Few-Shot Visual
Question Answering | In this paper, we explore effective prompting techniques to enhance zero- and
few-shot Visual Question Answering (VQA) performance in contemporary
Vision-Language Models (VLMs). Central to our investigation is the role of
question templates in guiding VLMs to generate accurate answers. We identify
that specific templates significantly influence VQA outcomes, underscoring the
need for strategic template selection. Another pivotal aspect of our study is
augmenting VLMs with image captions, providing them with additional visual cues
alongside direct image features in VQA tasks. Surprisingly, this augmentation
significantly improves the VLMs' performance in many cases, even though VLMs
"see" the image directly! We explore chain-of-thought (CoT) reasoning and find
that while standard CoT reasoning causes drops in performance, advanced methods
like self-consistency can help recover it. Furthermore, we find that text-only
few-shot examples enhance VLMs' alignment with the task format, particularly
benefiting models prone to verbose zero-shot answers. Lastly, to mitigate the
challenges associated with evaluating free-form open-ended VQA responses using
string-matching based VQA metrics, we introduce a straightforward LLM-guided
pre-processing technique to adapt the model responses to the expected
ground-truth answer distribution. In summary, our research sheds light on the
intricacies of prompting strategies in VLMs for VQA, emphasizing the
synergistic use of captions, templates, and pre-processing to enhance model
efficacy. | Rabiul Awal, Le Zhang, Aishwarya Agrawal | 2023-06-16T17:47:57Z | http://arxiv.org/abs/2306.09996v2 | # Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering
###### Abstract
Visual question answering (VQA) is a challenging task that requires the ability to comprehend and reason with visual information. While recent vision-language models have made strides, they continue to struggle with zero-shot VQA, particularly in handling complex compositional questions and adapting to new domains i.e. knowledge-based reasoning. This paper explores the use of various prompting strategies, focusing on the BLIP2 model, to enhance zero-shot VQA performance. We conduct a comprehensive investigation across several VQA datasets, examining the effectiveness of different question templates, the role of few-shot exemplars, the impact of chain-of-thought (CoT) reasoning, and the benefits of incorporating image captions as additional visual cues. Despite the varied outcomes, our findings demonstrate that carefully designed question templates and the integration of additional visual cues, like image captions, can contribute to improved VQA performance, especially when used in conjunction with few-shot examples. However, we also identify a limitation in the use of chain-of-thought rationalization, which negatively affects VQA accuracy. Our study thus provides critical insights into the potential of prompting for improving zero-shot VQA performance.
## 1 Introduction
Visual Question Answering (VQA) is a complex task that requires models to comprehend both visual and textual inputs to deliver accurate responses (Antol et al., 2015). Recent vision-language models (VLMs) pre-trained on webscale image-text data have made significant advancements towards tackling VQA tasks, including surpassing human performance on the popular VQAv2 dataset when fine-tuned on it (Chen et al., 2022; Wang et al., 2022; Alayrac et al., 2022). Some of these models (Eichenberg et al., 2021; Alayrac et al., 2022; Manas et al., 2022; Li et al., 2023) are also capable of zero- (or few-) shot transfer to various VQA tasks via prompting, where a text prompt is employed to steer the model towards generating the correct response. In zero-shot VQA, the prompt only describes the task (e.g., "Question: <question> Answer."). In few-shot VQA, the prompt also includes a few examples of image, question, correct answer triplets (for instance, Flamingo (Alayrac et al., 2022)) to further guide the model.
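As an illustration of how such prompts can be assembled (with an optional caption prefix, text-only exemplars, and a CoT suffix), consider the following sketch; the template strings are examples of the formats discussed later, not the exact templates evaluated in this paper.

```python
# Illustrative prompt construction for the zero- and few-shot settings above.
# The template strings are examples of the formats discussed in Sec. 3, not the
# exact templates evaluated in the paper.
def build_vqa_prompt(question, caption=None, few_shot=(), use_cot=False):
    parts = []
    for q, a in few_shot:                       # text-only in-context exemplars
        parts.append(f"Question: {q} Short answer: {a}")
    if caption is not None:                     # caption as an extra visual cue
        parts.append(f"Context: {caption}")
    suffix = "Let's think step by step." if use_cot else "Short answer:"
    parts.append(f"Question: {question} {suffix}")
    return "\n".join(parts)

print(build_vqa_prompt(
    "What is the man holding?",
    caption="a man riding a wave on a surfboard",
    few_shot=[("What color is the bus?", "red")],
))
```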
Inspired by the exploration of various prompting techniques for large language models (LLMs) (Brown et al., 2020; Lester et al., 2021; Srivastava et al., 2022; Wei et al., 2022; Kojima et al., 2022), in this paper, we investigate what prompting techniques are effective for this recent paradigm of zero- and few-shot VQA. In particular, we study the effectiveness of each of the following factors towards improving VQA performance:
1. **Choice of the question template:** We explore different question templates to guide the model's answer generation. These templates include options such as using the question as-is, wrapping the
question in a question-answer format, and providing task-specific instructions. Further details are in §3.3.
2. **Incorporating (text-only) few-shot examples:** We incorporate text-only few-shot Q&A examples to enhance the model's understanding of the downstream task; for instance, understanding that OKVQA requires a knowledge-based answer (§3.4).
3. **Incorporating image captions as additional visual cues:** We incorporate image captions as a prefix to questions. We expect the captions to serve as additional visual cues and help improve the model's comprehension of visual data (§3.2). We explore several methods for automatic caption generation via prompting (§3.3).
4. **Incorporating chain-of-thought reasoning:** Inspired by the success of chain-of-thought (CoT) reasoning in language models, we investigate its application in VQA. This approach prompts the model to provide step-by-step rationale alongside answers (§3.2).
We conduct our investigation of prompting techniques on the BLIP2 model Li et al. (2023), which is the state-of-the-art model for zero-shot VQA. BLIP2 has two variants, one with the OPT Zhang et al. (2022) language model (LM), and the other with the instruction fine-tuned FLAN-T5 Chung et al. (2022) LM, offering an opportunity to explore the impact of having an instruction-tuned LM on the effectiveness of prompting. To measure the effectiveness of the prompting techniques, we focus our evaluation on VQA tasks that prove to be challenging for current VLMs (including BLIP2), such as tasks involving compositional reasoning (GQA Hudson and Manning (2019)) and knowledge-based reasoning (OKVQA Marino et al. (2019) and AOKVQA Schwenk et al. (2022)). We also repurpose the recently released compositional understanding benchmark Winoground Thrush et al. (2022) into VQA format and evaluate our models on it as well.
To summarize, our findings are: (1) We observed that the BLIP2 variant that uses the instruction tuned FLAN-T5 LM showed moderate sensitivity to the choice of the question template, while the OPT variant exhibited considerable sensitivity. (2) Through experimentation with five in-context Q&A exemplars, we found that few-shot exemplars hurt task performance, unless used in conjunction with few-shot examples of captions or CoT rationales. (3) The use of captions as a prefix to questions, combined with few-shot Q&A exemplars, consistently improves performance, indicating that in-context QA samples and image captions can guide the model to better utilize available information. (4) Chain-of-Thought reasoning for rationalization and answer prediction leads to a drop in performance.
With the insights we provide regarding the effectiveness of various prompting techniques, we hope our findings will: 1) enhance our understanding of how we can better utilize large pre-trained VLMs, such as BLIP2, without task-specific fine-tuning, and 2) serve as a point of reference for future work towards advancing zero- and few-shot VQA.
## 2 Related Work
**VQA tasks and datasets** Advances in Visual Question Answering (VQA) have been largely driven by a variety of benchmark datasets (Goyal et al., 2017; Antol et al., 2015; Zhu et al., 2016; Johnson et al., 2017; Hudson and Manning, 2019; Marino et al., 2019). One influential example is the VQA v2 dataset (Antol et al., 2015; Goyal et al., 2017), which includes diverse questions about images, requiring a wide range of visual understanding capabilities from models. Further specialized datasets such as GQA (Hudson and Manning, 2019) and CLEVR (Johnson et al., 2017) focus on unique aspects of visual reasoning: the GQA dataset evaluates compositional reasoning, and CLEVR is designed for synthetic visual reasoning. Other significant benchmarks are the OK-VQA (Marino et al., 2019) and AOKVQA (Schwenk et al., 2022) datasets, which uniquely require models to integrate external knowledge with visual understanding to provide accurate responses.
While VLMs have made tremendous progress in tackling VQA datasets such as VQAv2 and Visual7W, even surpassing human performance on VQAv2 when fine-tuned (Alayrac et al., 2022; Li et al., 2023; Chen et al., 2022), their ability to tackle more complex datasets, such as GQA, which requires compositional reasoning, and OK-VQA and AOKVQA, which require knowledge-based reasoning, remains limited. In this work, we focus our evaluation on these complex datasets in zero- and few-shot settings. Additionally, to test compositional reasoning in a more controlled and strict setting, we repurpose the recently released Winoground (Thrush et al., 2022) benchmark into VQA format and evaluate on Winoground-QA as well.
**Prompting in LLMs** Prompting techniques have been extensively investigated to adapt LLMs to a variety of unseen NLP tasks (Brown et al., 2020; Lester et al., 2021). A successful prompting technique guides the model towards the correct response either through the presentation of _in-context_ labeled examples (Brown et al., 2020) or via a well-designed set of task instructions (Liu et al., 2023; Zhao et al., 2021; Liu et al., 2021; Lu et al., 2021; Lu et al., 2021; Ouyang et al., 2022). Recently, certain prompt templates, such as "Let's think step-by-step" used in (Kojima et al., 2022), have been found to enhance reasoning and facilitate complex task solving. This paradigm of eliciting reasoning in LLMs via prompting is known as Chain-of-Thought (CoT) prompting (Wei et al., 2022). However, CoT prompting does not work very well for language models (LMs) with fewer than 100 billion parameters (Wei et al., 2022). To enable smaller LMs to exhibit CoT reasoning, FLAN-T5 (Chung et al., 2022) fine-tuned an 11B LM on a mixture of natural instructions and CoT data. The BLIP2 model we study in this paper uses such a fine-tuned FLAN-T5 as its LM, making it well-suited for CoT prompting. Overall, our work draws inspiration from recent prompting research in NLP and presents new findings on the effectiveness of prompting techniques for multimodal VQA tasks.
**Multimodal Prompting** Prompting is not well explored in multimodal models, as large generative VLMs are relatively new. There are a few different lines of work that apply prompting in different ways. For instance, models such as Flamingo, MAPL, and others (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Manas et al., 2022) employ few-shot in-context learning to facilitate knowledge transfer to unseen tasks. However, Flamingo (Alayrac et al., 2022) requires interleaved image-text data for pre-training, which can be difficult to curate. Conversely, MAPL (Manas et al., 2022) exhibits lower absolute performance in VQA compared to other state-of-the-art (SOTA) methods due to its training with less data and fewer parameters. In another line of work, ViperGPT (Suris et al., 2023) and VisualProg (Gupta and Kembhavi, 2022) prompt GPT-3 (Brown et al., 2020) or Codex (Chen et al., 2021) to convert complex natural language questions into programs whose subroutines are made of off-the-shelf vision-language models. The resulting programs are then executed to generate answers. Similarly, approaches such as PICa (Jin et al., 2021) and PromptCap (Hu et al., 2022) convert images into text captions, which are then processed by GPT-3 to answer questions. Furthermore, (Zhang et al., 2023) apply multimodal CoT reasoning and demonstrate improved answering accuracy for ScienceQA tasks. However, their approach requires fine-tuning the model on multimodal CoT data.
In contrast, we focus on exploring the potential of text-only prompting techniques to maximize the performance of zero-shot multimodal models like BLIP2. Unlike Flamingo, BLIP2 is not pre-trained with interleaved image-text data. Moreover, BLIP2 has a relatively smaller model size compared to GPT-3 and is publicly available, making it easily accessible.
## 3 Prompting BLIP2 for VQA
### BLIP2 Model
The Bootstrapping Language-Image Pre-training (BLIP2) model (Li et al., 2023) is a recent VLM that combines pre-trained image encoders with large language models. Despite having significantly fewer trainable parameters, BLIP2 outperforms existing models on various vision-language tasks. It surpasses the Flamingo-80B (Alayrac et al., 2022) model by 8.7% on the zero-shot VQAv2 task and exhibits promising capabilities in zero-shot image-to-text generation, which makes it an ideal choice for our study. We selected BLIP2 for our investigation for the following reasons: a) it is the SOTA model on zero-shot VQA, b) its pre-trained checkpoint and inference code are publicly available, and c) its language model, with over 1 billion parameters and instruction-tuned capabilities, is well-suited for prompting (Chung et al., 2022). Because no other model with performance close to BLIP2 has publicly released pre-trained checkpoints, we could not include any other models in our study. However, we experiment with two variants of BLIP2, one with the OPT (Zhang et al., 2022) language model (LM) and the other with the instruction fine-tuned FLAN-T5 (Chung et al., 2022) LM. For each variant, we evaluate two model sizes; in total, we evaluate four models: BLIP2 FLAN-T5XL, BLIP2 FLAN-T5XXL, BLIP2 OPT2.7B, and BLIP2 OPT6.7B.
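For concreteness, the sketch below shows how one such checkpoint can be loaded and queried for zero-shot VQA. It is a minimal illustration that assumes the HuggingFace Transformers port of BLIP2; the checkpoint identifier, the processor/generation calls, and the example image path are assumptions of this sketch rather than details taken from the BLIP2 paper.

```python
# Minimal zero-shot VQA query with a BLIP-2 checkpoint (sketch; assumes the
# HuggingFace Transformers BLIP-2 port and the Salesforce/blip2-flan-t5-xl id).
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
prompt = "Question: What is the man holding? Short answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
generated = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(generated, skip_special_tokens=True)[0].strip())
```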
### Prompt Settings
We examine different prompt settings. These settings differ in the amount of input information provided to the model (in addition to the task instruction) and the level of guidance given by the task instruction, as illustrated in Figure 1. We denote the predictor (BLIP2) as \(f\); a minimal prompt-construction sketch follows the list below.
* **Standard VQA**: In addition to the task instruction, this setting supplies only the image and the question as inputs to the model to guide the model's answer generation. The input-output format is \(f(\text{image, question})\rightarrow\text{answer}\).
* **Caption VQA**: Expanding on the Standard VQA setting, this setting includes a caption as a prefix to the question to serve as additional visual cues and help facilitate the model's comprehension of the visual data. The caption input to the model is generated automatically via prompting (see §3.3). The input-output format is \(f(\text{image, caption, question})\rightarrow\text{answer}\).
* **Chain-of-Thought (CoT) VQA**: Going a step further, this setting prompts the model to generate not only the answer but also a rationale for the generated answer. The input-output format is \(f(\text{image, question})\rightarrow\text{rationale}\), answer.
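The short sketch below makes the three input-output formats above concrete by assembling the corresponding text prompts from a question and, where applicable, a generated caption. It is an illustrative sketch only: the template strings follow Table 1, and the helper function names are ours, not the authors' code.

```python
# Illustrative prompt construction for the three settings (sketch, not the
# authors' code). Template strings follow Table 1; helper names are ours.

def standard_vqa_prompt(question: str) -> str:
    # f(image, question) -> answer
    return f"Question: {question} Short answer:"

def caption_vqa_prompt(question: str, caption: str) -> str:
    # f(image, caption, question) -> answer: the caption is prefixed to the question.
    return f"{caption}. Question: {question} Short answer:"

def cot_vqa_prompt(question: str) -> str:
    # f(image, question) -> rationale, answer: the suffix elicits step-by-step reasoning.
    return f"Q: {question} A: Let's think step by step"

if __name__ == "__main__":
    q = "What best describes the pool of water?"
    cap = "A photo of two giraffes standing next to a pond in a zoo"
    print(standard_vqa_prompt(q))
    print(caption_vqa_prompt(q, cap))
    print(cot_vqa_prompt(q))
```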
| Template Name | Template |
| --- | --- |
| **Question Templates** | |
| Null | <question> |
| short-qa | Question: <question> <choices> Short answer: |
| qa | Question: <question> <choices> Answer: |
| following-qa | Answer the following question. <question> <choices> |
| instruct-{task}-qa | Your task is to answer a knowledge-based question. Question: <question> <choices> Short answer: |
| instruct-{task}-qa | Your task is to answer a question that may require compositional reasoning. Question: <question> <choices> Short answer: |
| following-qa-yn | Answer the following yes/no question. <question> |
| is-true-yn | Is it true about this image "<text>"? Answer in yes/no. |
| does-describe-qa-yn | Does this describe the image "<text>"? |
| **Chain-of-Thought Templates** | |
| chain-of-thought-prefix | Please answer the following question by reasoning step by step. Q: <question> A: |
| chain-of-thought-suffix | Q: <question> A: Let's think step by step |
| **Caption Generation Templates** | |
| a-photo-of | A photo of |
| q-guided-cap | Describe the image according to the following question <question> |
| describe-scene | Please describe the scene. |
| describe-image | Please describe the image. |

Table 1: A list of task instruction templates used for each of the following – asking the questions, generating captions, and prompting chain-of-thought reasoning. These templates transform a given question, or caption (if provided), into a specific format to elicit the most suitable response. <choices> are optional and only provided for multiple-choice VQA tasks.
Figure 1: An overview of different prompt settings explored in the study. The Standard VQA setting uses a task-instruction followed by a given question and prompts the model to generate a response. The caption VQA setting enhances the prompt with a caption as a prefix to the question. The chain-of-thought VQA setting additionally prompts the model to provide a rationale for the generated answer. All VQA formats can also utilize text-only few-shot exemplars to further guide the model towards the correct response.
### Task Instruction Templates
Table 1 contains a list of templates we use for formatting the instructions corresponding to each of the following - asking the questions, generating captions, and prompting chain-of-thought reasoning. These templates are designed to elicit the most suitable responses from VLMs.
**Question Templates** We use a variety of templates for asking questions in the Standard VQA setting. We expect BLIP2's LMs to make use of these templates: the FLAN-T5 model in BLIP2 has been exposed to a vast array of QA tasks during its pre-training, while the OPT model is pre-trained on a web text corpus that presumably contains numerous question-answer snippets. Thus, each template is expected to guide the model in a different direction. For instance, the is-true-yn prompt template encourages the VLM to provide a yes/no answer (which is needed for the Winoground VQA task), whereas the short-qa template nudges the model towards a concise response to a given question (which might be desirable for tasks where the ground truth answers are short). The instruct-{task}-qa template provides further specific instructions on the VQA task; for instance, for the knowledge-based questions we use the template "Your task is to answer a knowledge-based question."
**Caption Generation Templates** We have two caption generation templates: a-photo-of and q-guided-cap. The a-photo-of template prompts the model to start the caption with "A photo of," providing a straightforward description of the image. The q-guided-cap template is inspired by PromptCAP Hu et al. (2022) and incorporates the question as a guide for generating captions relevant to the subject of the question. We also tried some additional templates (describe-scene, describe-image) for Winoground.
**Chain-of-Thought Templates** We use two chain-of-thought templates (chain-of-thought-prefix, chain-of-thought-suffix) that guide BLIP2 to first generate rationale and then infer the answer.
Overall, by using a range of different task instruction templates, we aim to explore the capabilities of BLIP2 in answering various types of questions.
### (Text-only) Few-Shot Exemplars
BLIP2, unlike Flamingo, lacks the ability to learn in-context from few-shot demonstrations of (image, question, answer) triplets. However, the LMs in BLIP2 can still learn in-context from text-only few-shot examples! So, to improve the model's grasp of the task at hand, we provide _text-only_ few-shot exemplars (Q&A pairs) as additional context. We expect these exemplars to more precisely guide the model towards the desired answer format. Consider the question: "Where are these animals found?" The model's prediction could be: "The animals are found in the wild.", which would be a valid answer to a human judge but, due to the rigidness of the automatic VQA evaluation metrics Agrawal et al. (2023), would be deemed incorrect for datasets such as OK-VQA where the ground truth is: "Africa". For each test question, we retrieve the 5 nearest few-shot exemplars (Q&A pairs) from the corresponding task's training set. However, we had to discard nearest neighbors that were too similar to the test question to prevent the model from directly copying exemplar answers instead of formulating appropriate responses (see appendix A.2 for details). Each prompt setting described in §3.2 can be applied with or without exemplars, as shown in Figure 1. For the Caption VQA setting, the few-shot examples also include the generated caption for each few-shot sample, and for the CoT VQA setting, the few-shot examples include the model-generated rationale for each few-shot sample.
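A minimal sketch of this retrieval step is shown below. It assumes questions are embedded with an off-the-shelf sentence encoder (the sentence-transformers library, the encoder choice, and the 0.95 similarity cut-off are illustrative assumptions, not details from this paper) and that exemplars that are near-duplicates of the test question are filtered out.

```python
# Sketch: retrieve k text-only Q&A exemplars for a test question, dropping
# near-duplicates of the test question. Encoder choice and 0.95 cut-off are
# illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_exemplars(test_question, train_questions, train_answers, k=5, max_sim=0.95):
    q_emb = encoder.encode([test_question], normalize_embeddings=True)[0]
    t_emb = encoder.encode(train_questions, normalize_embeddings=True)
    sims = t_emb @ q_emb                 # cosine similarity (embeddings are normalized)
    exemplars = []
    for i in np.argsort(-sims):
        if sims[i] >= max_sim:           # too similar: risk of copying the exemplar answer
            continue
        exemplars.append((train_questions[i], train_answers[i]))
        if len(exemplars) == k:
            break
    return exemplars

def format_exemplars(exemplars):
    # Rendered as text-only context, one Q&A pair per line.
    return "\n".join(f"Question: {q} Short answer: {a}" for q, a in exemplars)
```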
### Experimental Setup
**Datasets** We evaluate the effectiveness of prompting techniques on three complex VQA datasets and a visio-linguistic probing dataset called Winoground, each with distinct characteristics: (1) **OKVQA** (Marino et al., 2019): A benchmark dataset with real-world images that focuses on knowledge-based visual question answering, requiring external knowledge resources. (2) **AOKVQA** (Schwenk et al., 2022): A crowdsourced dataset that necessitates commonsense reasoning and world knowledge for answering diverse questions in multiple-choice format. (3) **GQA** (Hudson and Manning, 2019): A large-scale dataset that evaluates the model's performance on various visual reasoning abilities, including compositional reasoning. (4) **Winoground-QA** (Thrush et al., 2022): Winoground is a recently released image-text matching benchmark designed to probe visio-linguistic compositional reasoning in VLMs. Each sample consists of two images and two captions and the
task is to determine the correct image-caption matching (each caption matches with only one image and each image matches with only one caption). We repurpose this dataset into a yes/no visual question-answering task called Winoground-QA. We rephrase the captions as yes/no questions using ChatGPT (see appendix A.2 for more details). Thus, each sample of Winoground-QA requires answering two yes/no questions for each of the two images.
**Evaluation** We explore two evaluation settings: open-ended and multiple-choice. In the open-ended setting, BLIP2 is conditioned only on the question and the image (in Standard VQA), whereas in the multiple-choice setting, BLIP2 is additionally conditioned on the multiple choices provided in the test benchmark, as shown in Table 1; however, the model can generate a response outside the multiple choices. To evaluate the model-generated response, we use the VQA accuracy metric Antol et al. (2015) for datasets containing multiple ground-truth answers per question (OKVQA, AOKVQA); otherwise, we use 1/0 accuracy depending on whether the candidate answer exactly matches the (only) ground truth answer (multi-choice AOKVQA and GQA). Before conducting string matching, we process the answers using lemmatization and removal of prepositions, articles, and punctuation. For the Winoground-QA dataset, we use 1/0 accuracy: 1 if all four yes/no questions are answered correctly, 0 otherwise.
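For reference, the sketch below outlines the scoring step: answers are lightly normalized before string matching, and the soft VQA accuracy of a prediction is min(#matching ground-truth answers / 3, 1), following Antol et al. (2015). The stop-word list is an illustrative simplification, and lemmatization (e.g., via spaCy) is omitted for brevity.

```python
# Sketch of answer normalization and the VQA accuracy metric (Antol et al., 2015).
# Lemmatization is omitted for brevity; the article/preposition list is illustrative.
import string

_DROP = {"a", "an", "the", "of", "in", "on", "at", "to", "for", "with"}

def normalize(ans: str) -> str:
    ans = ans.lower().strip()
    ans = ans.translate(str.maketrans("", "", string.punctuation))
    return " ".join(t for t in ans.split() if t not in _DROP)

def vqa_accuracy(prediction: str, ground_truths: list) -> float:
    # Soft VQA accuracy: min(#annotators agreeing with the prediction / 3, 1).
    pred = normalize(prediction)
    matches = sum(normalize(gt) == pred for gt in ground_truths)
    return min(matches / 3.0, 1.0)

def exact_match(prediction: str, ground_truth: str) -> float:
    # 1/0 accuracy used for GQA and multiple-choice AOKVQA.
    return float(normalize(prediction) == normalize(ground_truth))
```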
## 4 Results
### Is VQA performance sensitive to the choice of the question template?
Our study reveals that using a question template yields better results than using a null template (Table 2). Notably, we observed significant variations in the effectiveness of different question templates, with the **instruction-tuned BLIP2 FLAN-T5 model showing moderate sensitivity to template variations, while the OPT variant exhibited considerable performance discrepancies depending on the chosen template.** Specifically, the instruct-{task}-qa template, which incorporates task-specific instructions, proved to be the best performing template for the BLIP2 FLAN-T5XL model, improving the average accuracy across tasks from 49.64% (with a null prompt) to 52.97%. On the other hand, the BLIP2 FLAN-T5XXL model performed best with the following-qa template, which prompts for an answer without requiring a structured "Question:", "Answer:" format. The choice of the question template matters for the challenging Winoground-QA task as well (Table 4), with some templates performing below random chance, while others achieve significantly better performance. It is worth mentioning that for the OPT variants, we had to remove newlines from all templates to prevent the OPT model from outputting an empty string. Moreover, our results show that OPT requires a template with a structured "Question:", "Answer:" format (such as short-qa and following-qa); the Null and following-qa templates resulted in zero accuracy. We also observed that the FLAN-T5 variants demonstrated remarkably high accuracy when dealing with multiple-choice answers, in contrast to the OPT variants. This difference could be attributed to FLAN-T5's exposure to similar multiple-choice tasks during language model fine-tuning.
| Question Template | OKVQA (val) | AOKVQA (val mc) | AOKVQA (val) | GQA (testdev-all) | AVG |
| --- | --- | --- | --- | --- | --- |
| **BLIP2 FLAN-T5XL** | | | | | |
| Null | 40.06 | 65.0 | 43.86 | 55.89 | 49.64 |
| short-qa* | 43.48 | 67.94 | 47.15 | 58.48 | 52.26 |
| following-qa | 43.45 | 68.38 | 46.40 | 57.42 | 51.85 |
| instruct-{task}-qa | **45.12** | **63.38** | **48.61** | 57.91 | **52.97** |
| **BLIP2 FLAN-T5XXL** | | | | | |
| Null | 43.93 | 62.85 | 47.46 | 56.10 | 50.69 |
| short-qa | 48.11 | 69.17 | 50.99 | **59.17** | **54.51** |
| following-qa | **48.53** | **69.09** | **51.20** | 57.75 | 54.33 |
| instruct-{task}-qa | 47.74 | 68.12 | 48.42 | 57.41 | 53.25 |

Table 2: Zero-shot performance on VQA datasets using various question templates in the Standard VQA setting. * denotes the templates from the BLIP2 paper for each model variant. The evaluation includes open-ended VQA unless specified otherwise. 'val', 'testdev-bal', and 'testdev-all' represent different evaluation data splits, while 'mc' stands for multiple-choice QA.
### Does incorporating (text-only) few-shot exemplars help improve VQA performance?
Although the question templates can guide the model generation, they struggle to precisely guide the model towards the desired answer format. Our hypothesis was that using text-only exemplars would provide more accurate guidance for the model, leading to improved performance. To investigate this, we conduct further prompting experiments with five few-shot Q&A exemplars using the best performing question template from §4.1. **To our surprise, we found that few-shot exemplars hurt task performance!** (compare [question] and [question (n=5)] rows in Table 3). We also tried introducing more examples but that did not help either. We suspect introducing more examples without paired images introduces noise. It is important to note that **while few-shot exemplars are not effective in the Standard QA setting, they demonstrate effectiveness in the Caption QA and CoT QA settings**, as discussed in the following sections.
### Does incorporating image captions as additional visual cues help improve VQA performance?
**Yes, when used in conjunction with few-shot exemplars.** We find that BLIP2's performance improved by incorporating model generated image-captions. The results of our experiments are presented in Table 3. When captions are used as a prefix to a question along with few-shot exemplars (rows corresponding to [a-photo-of,question (n=5)], and [q-guided-cap,question (n=5)]), the performance consistently surpasses the baseline Standard VQA setting which also uses few-shot exemplars, but no captions (rows corresponding to [question (n=5)]), with an average improvement of 4.94% (for FLAN-T5XXL variant) across all tasks using the a-photo-of caption generation template. Different captioning strategies, such as a-photo-of and q-guided-cap show varying
degrees of performance improvements. a-photo-of captions perform better than question-guided captioning. This suggests that generating good question-guided captions in a zero-shot setting is challenging for BLIP2.
Interestingly, incorporating captions does not help when _not_ used in conjunction with few-shot exemplars (compare rows [a-photo-of,question] and [q-guided-cap,question] against [question]). This indicates that in-context Q&A exemplars aid the model to better utilize the information available in the captions. However, with better captioning methods such as PromptCap (which uses the captions generated by the PromptCAP [Hu et al., 2022] method) we see improvements both in zero- and (few-) shot settings. The PromptCAP model is explicitly fine-tuned for question-guided caption generation. We use PromptCap to show an upper bound on the extent to which incorporating captions could help. The high performance achieved with PromptCap captions suggests that leveraging captions as additional visual cues can augment the model's performance. These results underscore the importance of designing prompts that encourage models to effectively utilize all available sources of information.
For the Winoground-QA task (Table 4), surprisingly we observe that incorporating captions helps even in the absence of few-shot examples! This suggests that for Winoground-QA, extraction of visual information from images is more challenging than other datasets.
We also note that when comparing Caption VQA with and without few-shot exemplars against each other (e.g., comparing row [a-photo-of,question(n=5)] against [a-photo-of,question] in Table 3), we observe that few-shot exemplars help in most of the cases suggesting the effectiveness of using few-shot Q&A exemplars in conjunction with examples of model generated captions.
Furthermore, Caption VQA applied to BLIP-2 (FLAN-T5 variants) shows comparable performance to PromptCAP [Hu et al., 2022], without making use of larger models such as GPT3 (unlike PromptCAP). Moreover, unlike PromptCAP, Caption VQA has access to both the image modality and the caption modality, thus its performance is not bottlenecked by the amount of visual information captions can capture about images.
Lastly, for completeness, we also report the performance of the respective state-of-the-art (SOTA) in-context learning (ICL) methods for each evaluation benchmark (ICL SOTA row in Table 3). Compared to our best performing Caption VQA approach (row [a-photo-of,question(n=5)] using BLIP2 FLAN-T5XXL), the average SOTA ICL method performance is \(\sim\)6% higher. However, compared to our set-up, these SOTA methods are much more computationally heavy and also more complex in nature. For instance, Driess et al. [2023] uses a 540B parameter language model, which is significantly larger than our FLAN-T5 variants with 3B and 11B parameters. Hu et al. [2022] uses GPT-3 with 175B parameters and 32 few-shot examples, compared to merely 5 examples used in our Caption VQA method. Suris et al. [2023] uses a complex set-up requiring multiple models such as BLIP-2 and CLIP [Radford et al., 2021].
### Does incorporating Chain-of-Thought (CoT) reasoning help improve VQA performance?
Considering that BLIP2 utilizes an instruction-finetuned LLM, a question naturally arises: can BLIP2 provide a rationale for its answers? In this section, we investigate the effectiveness of using the widely-used CoT templates (chain-of-thought-prefix, chain-of-thought-suffix) to prompt BLIP2 and report our findings. For instance, we applied the prompt template "Q: <question> A: Let's think step by step" to a specific question, "What best describes the pool of water?" The generated rationale from BLIP2 using this prompt stated, "The pool of water is a pond. The giraffes are standing next to the pond in the zoo. Therefore, the final answer is pond." Surprisingly, incorporating CoT
| Model Name | Input (question template, caption generation template) | Standard VQA | Caption VQA |
| --- | --- | --- | --- |
| BLIP2 FLAN-T5XL | does-describe-qa-yn, a-photo-of; is-true-yn, describe-scene; does-describe-qa-yn, a-photo-of; is-true-yn, describe-image | 5.25 | 8.25 |
| BLIP2 FLAN-T5XXL | does-describe-qa-yn, a-photo-of; is-true-yn, describe-image | 5.75 | 8.75 |
| BLIP2 FLAN-T5XXL | is-true-yn, describe-image | 6.0 | 9.5 |
| BLIP2 FLAN-T5XXL | following-qa-yn, describe-scene | **7.25** | **10.25** |
| – | – | 6.25 | 6.25 |

Table 4: Performance on the Winoground-QA task for both the Standard VQA and Caption VQA settings.
prompting for rationalization resulted in a significant drop in performance (Table 5), which contradicts previous findings in NLP where CoT has shown improvements in complex reasoning tasks (Kojima et al., 2022; Chung et al., 2022; Zhang et al., 2022b). A similar observation is reported in (Zhang et al., 2023) for multimodal-CoT in ScienceQA tasks, where hallucinated rationales were identified as a major challenge. Our experiments suggest that BLIP2 does not have the bootstrapped CoT abilities of FLAN T5. However, incorporating few-shot exemplars (similar to Auto-CoT (Zhang et al., 2022b)) with rationalizations from the training set did result in performance improvements, as seen in Table 5, **suggesting the usefulness of few-shot exemplars when used in conjunction with few-shot examples of CoT rationales**. However, this setting still underperforms the Standard VQA setting, demonstrating that CoT prompting is not helpful.
To improve rationalization and final predictions, we conducted several experiments, as outlined in Table 6:

a) **CoT-iterative**: Manual inspection revealed that the model often generated correct but lengthy (2-3 step) chains that included the correct answer, but the prediction itself was inaccurate. To address this, we trimmed the final answer sentence from the CoT and limited the CoT length to a maximum of two sentences. The final answer was conditioned on the generated chain, following a two-step process, resulting in a slight performance boost.

b) **CoT-context**: Rearranging the input order by placing the generated rationale as a prefix to the question ("Context: <generated rationale> Q: <question> A:") yielded the best performance.

c) **CoT-consistency**: We explored consistency over reasoning paths as proposed in (Wang et al., 2022b), sampling 40 reasoning paths and taking a majority vote for the final answer. However, this approach did not improve performance.

**In summary, despite multiple attempts and incorporating approaches that have shown success in NLP, the effectiveness of chain-of-thought prompting in BLIP2 is limited.**
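The CoT-consistency variant can be summarized by the sketch below: several rationale-answer pairs are sampled at a non-zero temperature and the most frequent normalized answer is returned. The `generate_answer` callable is a stand-in for one sampled BLIP2 decoding and is an assumption of this sketch, not part of the original implementation.

```python
# Sketch of self-consistency voting over sampled reasoning paths (Wang et al., 2022b).
# `generate_answer(image, prompt, temperature)` stands in for one sampled BLIP2
# decoding that returns the final answer string.
from collections import Counter

def self_consistent_answer(image, prompt, generate_answer, n_paths=40, temperature=0.7):
    votes = Counter()
    for _ in range(n_paths):
        answer = generate_answer(image, prompt, temperature=temperature)
        votes[answer.lower().strip()] += 1
    # Majority vote over the sampled answers.
    return votes.most_common(1)[0][0]
```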
## 5 Discussion
### Limitations
Our research has the following limitations. Firstly, the generalizability of our findings may be limited to the BLIP2 model, as our study primarily focuses on its application. Further research is needed to explore other VLMs and their effectiveness with different prompting strategies. Additionally, while we demonstrate the effectiveness of captioning as a prefix and in-context exemplars, there may be unexplored prompting techniques that could yield better results. Furthermore, our research primarily focuses on task performance analysis and lacks extensive examination of model interpretability. Future research should address these limitations by investigating broader model applicability, exploring alternative prompting techniques, and improving interpretability and explainability in VQA models.
### Conclusion
In conclusion, our research demonstrates the effectiveness of text-only prompting strategies for improving task performance in VQA for BLIP2 models. By incorporating task-specific instructions, in-context exemplars, and strategic captioning, we achieved substantial improvements on diverse datasets.
Our experiments revealed several key points: the choice of question templates affected different models differently, requiring careful selection; few-shot examples were not effective in standard QA
| Method | Format | Accuracy |
| --- | --- | --- |
| **BLIP2 FLAN-T5XL** | | |
| CoT | Q → RA | 39.23 |
| CoT-iterative | QR → A | 41.33 |
| CoT-context | RQ → A | **46.48** |
| CoT-consistency (t=0.7) | VOTE(QR → A_i) | 39.84 |

Table 6: Answer prediction evaluated on the AOKVQA validation set using iterative CoT, CoT as context, and self-consistency over reasoning paths with chain-of-thought-prefix templates.
| Method | OKVQA | AOKVQA | GQA |
| --- | --- | --- | --- |
| **BLIP2 FLAN-T5XL** | | | |
| Standard (instruct-{task}-qa) | 45.12 | 48.61 | 44.84 |
| CoT | 34.54 | 39.23 | 40.36 |
| CoT (n=5) | **38.38** | **43.75** | **40.96** |
| **BLIP2 FLAN-T5XXL** | | | |
| Standard (following-qa) | 48.53 | 51.20 | 44.46 |
| CoT | 37.97 | **44.58** | **39.79** |
| CoT (n=5) | **39.56** | 42.23 | 39.19 |

Table 5: Results of Chain-of-Thought VQA (Q → RA) for self-rationalization on open-ended VQA answers. We report the best results across the two CoT templates.
but worked well in Caption QA and CoT QA settings; incorporating captions as prefixes to questions and few-shot Q&A examples consistently improved performance. One limitation we found was that chain-of-thought reasoning led to decreased performance, necessitating further research into rationalizing VQA answers.
Overall, our study reveals some simple but effective techniques to better utilize large pre-trained VLMs, such as BLIP2, for VQA without task-specific fine-tuning. We hope our study will serve as a point of reference for future work towards advancing zero- and few-shot VQA.
|
2307.00325 | A System for Differentiation of Schizophrenia and Bipolar Disorder based
on rsfMRI | Schizophrenia and bipolar disorder are debilitating psychiatric illnesses
that can be challenging to diagnose accurately. The similarities between the
diseases make it difficult to differentiate between them using traditional
diagnostic tools. Recently, resting-state functional magnetic resonance imaging
(rsfMRI) has emerged as a promising tool for the diagnosis of psychiatric
disorders. This paper presents several methods for differentiating
schizophrenia and bipolar disorder based on features extracted from rsfMRI
data. The system that achieved the best results, uses 1D Convolutional Neural
Networks to analyze patterns of Intrinsic Connectivity time courses obtained
from rsfMRI and potentially identify biomarkers that distinguish between the
two disorders. We evaluate the system's performance on a large dataset of
patients with schizophrenia and bipolar disorder and demonstrate that the
system achieves a 0.7078 Area Under Curve (AUC) score in differentiating
patients with these disorders. Our results suggest that rsfMRI-based
classification systems have great potential for improving the accuracy of
psychiatric diagnoses and may ultimately lead to more effective treatments for
patients with this disorder. | Daniela Janeva, Stefan Krsteski, Matea Tashkovska, Nikola Jovanovski, Tomislav Kartalov, Dimitar Taskovski, Zoran Ivanovski, Branislav Gerazov | 2023-07-01T12:36:04Z | http://arxiv.org/abs/2307.00325v1 | # A System for Differentiation of Schizophrenia and Bipolar Disorder based on rsfMRI
###### Abstract
Schizophrenia and bipolar disorder are debilitating psychiatric illnesses that can be challenging to diagnose accurately. The similarities between the diseases make it difficult to differentiate between them using traditional diagnostic tools. Recently, resting-state functional magnetic resonance imaging (rsfMRI) has emerged as a promising tool for the diagnosis of psychiatric disorders. This paper presents several methods for differentiating schizophrenia and bipolar disorder based on features extracted from rsfMRI data. The system that achieved the best results, uses 1D Convolutional Neural Networks to analyze patterns of Intrinsic Connectivity time courses obtained from rsfMRI and potentially identify biomarkers that distinguish between the two disorders. We evaluate the system's performance on a large dataset of patients with schizophrenia and bipolar disorder and demonstrate that the system achieves a 0.7078 Area Under Curve (AUC) score in differentiating patients with these disorders. Our results suggest that rsfMRI-based classification systems have great potential for improving the accuracy of psychiatric diagnoses and may ultimately lead to more effective treatments for patients with this disorder.
Schizophrenia, Bipolar disorder, resting-state Functional Magnetic Resonance Imaging (rsfMRI), 1D Convolutional Neural Networks, biomedical engineering, AUC;
## I Introduction
Schizophrenia and bipolar disorder are two of the most challenging psychiatric illnesses, affecting millions of people worldwide. Schizophrenia is a severe mental disorder characterized by a wide range of symptoms, including delusions, hallucinations, disorganized thinking, and abnormal behaviors [1]. On the other hand, bipolar disorder is a mood disorder characterized by recurrent episodes of mania and depression [2]. Schizophrenia and bipolar disorder are chronic illnesses that can severely impact an individual's daily life and functioning [3]. The symptoms of these disorders can be distressing and debilitating, making it difficult for patients to maintain relationships, work, or engage in everyday activities. Unfortunately, accurate diagnosis of these disorders is often delayed or missed, resulting in inappropriate or ineffective treatment.
While the two disorders have distinct clinical features, they also share some similarities in terms of symptoms and genetic risk factors. Both disorders are associated with cognitive impairments, with deficits in visuospatial performance reported as a precursor of both. This overlap has led some researchers to suggest that the two disorders may be part of a broader spectrum of mental illnesses that share underlying genetic and environmental risk factors [4].
It is essential to distinguish between schizophrenia and bipolar disorder because although they share some common symptoms, they require different treatments. Misdiagnosis or delayed diagnosis can lead to inappropriate or ineffective treatments, resulting in poor outcomes for patients. For example, antipsychotic medications, which are typically used to treat schizophrenia, may exacerbate symptoms of mania in bipolar disorder [5]. Conversely, mood stabilizers and antidepressants, typically used to treat bipolar disorder, may not be effective for treating symptoms of schizophrenia [6].
In recent years, the development of new techniques for brain imaging has led to significant advances in the diagnosis and treatment of schizophrenia and bipolar disorder. Resting-state functional magnetic resonance imaging (rsfMRI) has emerged as a promising tool for understanding the underlying neural mechanisms of these disorders. RsfMRI measures brain activity by detecting changes in blood flow to different regions of the brain during periods of rest. Studies have shown that there are distinct patterns of brain activity associated with schizophrenia and bipolar disorder, and these patterns can be used to differentiate between the two disorders [7, 8]. In this paper, we present methods for differentiating schizophrenia and bipolar disorder based on rsfMRI data. We apply machine learning algorithms to analyze patterns of resting-state brain activity and evaluate the models' performances on a large dataset of patients with schizophrenia and bipolar disorder. The proposed system was submitted to the IEEE Signal Processing Cup (SPC) 2023 [9].
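As context for the modeling choice highlighted in the abstract, the sketch below outlines a small 1D convolutional network that treats the 105 ICN time courses of a scan as input channels. It is a minimal PyTorch illustration of this class of architecture; the layer sizes, the number of timepoints, and all other hyperparameters are arbitrary assumptions and not the network used in this work.

```python
# Minimal 1D CNN over ICN time courses (channels = 105 ICNs, length = timepoints).
# Layer sizes are arbitrary illustrative choices, not the paper's architecture.
import torch
import torch.nn as nn

class ICNConvNet(nn.Module):
    def __init__(self, n_icns: int = 105, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_icns, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> one feature vector per scan
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):              # x: (batch, 105, timepoints)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # logits for schizophrenia vs. bipolar disorder

if __name__ == "__main__":
    model = ICNConvNet()
    dummy = torch.randn(4, 105, 150)   # 4 subjects, 150 timepoints (illustrative)
    print(model(dummy).shape)          # torch.Size([4, 2])
```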
## II Dataset
The dataset used in this work was provided by the Brain Space Initiative for the IEEE SPC [9]. It consists of features extracted from the rsfMRI data of individuals with Schizophrenia and Bipolar disorder. The dataset was obtained by using 105 intrinsic connectivity network (ICN) time courses derived from a multi-spatial-scale spatially constrained ICA approach and their functional network connectivity (FNC).
The provided features were extracted using the following steps [9]: |
2305.18313 | Fire and Smoke Digital Twin -- A computational framework for modeling
fire incident outcomes | Fires and burning are the chief causes of particulate matter (PM2.5), a key
measurement of air quality in communities and cities worldwide. This work
develops a live fire tracking platform to show active reported fires from over
twenty cities in the U.S., as well as predict their smoke paths and impacts on
the air quality of regions within their range. Specifically, our close to
real-time tracking and predictions culminates in a digital twin to protect
public health and inform the public of fire and air quality risk. This tool
tracks fire incidents in real-time, utilizes the 3D building footprints of
Austin to simulate smoke outputs, and predicts fire incident smoke falloffs
within the complex city environment. Results from this study include a complete
fire and smoke digital twin model for Austin. We work in cooperation with the
City of Austin Fire Department to ensure the accuracy of our forecast and also
show that air quality sensor density within our cities cannot validate urban
fire presence. We additionally release code and methodology to replicate these
results for any city in the world. This work paves the path for similar digital
twin models to be developed and deployed to better protect the health and
safety of citizens. | Junfeng Jiao, Ryan Hardesty Lewis, Kijin Seong, Arya Farahi, Paul Navratil, Nate Casebeer, Dev Niyogi | 2023-05-19T00:43:06Z | http://arxiv.org/abs/2305.18313v1 | # Fire and Smoke Digital Twin - A computational framework for modeling fire incident outcomes
###### Abstract.
Fires and burning are the chief causes of particulate matter (PM2.5), a key measurement of air quality in communities and cities worldwide. This work develops a live fire tracking platform to show active reported fires from over twenty cities in the U.S., as well as predict their smoke paths and impacts on the air quality of regions within their range. Specifically, our close to real-time tracking and predictions culminates in a digital twin to protect public health and inform the public of fire and air quality risk. This tool tracks fire incidents in real-time, utilizes the 3D building footprints of Austin to simulate smoke outputs, and predicts fire incident smoke falloffs within the complex city environment. Results from this study include a complete fire and smoke digital twin model for Austin. We work in cooperation with the City of Austin Fire Department to ensure the accuracy of our forecast and also show that air quality sensor density within our cities cannot validate urban fire presence. We additionally release code and methodology to replicate these results for any city in the world. This work paves the path for similar digital twin models to be developed and deployed to better protect the health and safety of citizens.
smoke prediction, physical simulation, digital twin, urban fire +
Footnote †: Both authors contributed equally to this research.
## 1. Introduction
Fire detection and tracking is usually performed using satellite data. Fire detection models often rely on abnormal heat signatures and bright spots on the Earth's surface (Fire, 1998; Grin et al., 2000). This type of tracking is helpful for wildfires, which are so large that they appear in satellite data, and whose distinct, high signal-to-noise heat signature makes satellite data reliable. The signals from fire incidents in an urban environment, however, have low signal-to-noise ratios and are often indistinguishable from white noise, which makes detection difficult and leads to many false positives, due to other sources of noise such as sun-glint off metallic structures, or false negatives (Krause et al., 2015). False negatives are more common: often there are no detections at all, as not all urban fires are large enough for the spatial resolution of satellites, and many remain obscured by trees, buildings, and other obstructions. An alternative route would be to use 911 calls, but little is done with this data. Many U.S. cities provide live public pages of emergency incidents as they are reported. These reports can be accurately pinpointed to a street address, and the time lags are insignificant. FireCOM utilizes this information to track urban fires.
Tracking fires in real time is not the only challenge. It is more important to model and predict the impact of urban fire incidents, specifically particulate matter exposure, so that decision-makers can react rapidly as the situation unfolds. Existing fire and smoke studies mainly focus on wildfires (Bahdan et al., 2016; Grin et al., 2017; Grin et al., 2017). Wildfire impact is often larger, spanning hundreds to thousands of acres, making these models less practical for vulnerable urban residents trying to avoid smoke exposure. While urban fire studies are not nonexistent (Grin et al., 2017; Grin et al., 2017), little work has been done in this area, despite the fact that fire incidents are more widespread and common in urban communities. FireCOM fills this gap.
FireCOM gathers real-time fire and building footprint data, and combines this information with weather data to accurately predict smoke dispersion within one, two, and three hours following a fire incident. This model not only helps to warn individuals about deteriorating air quality in their vicinity, but also assists fire departments and decision-makers in coordinating their efforts to mitigate the fire and its smoke-related aftermath.
The primary users of FireCOM include urban residents, emergency responders, and decision-makers involved in fire management and public health. By serving as an early warning system, FireCOM aims to improve health conditions and facilitate a more effective response to urban fires. The tool's accuracy and effectiveness are validated through data from thousands of air quality sensors and through collaborative case studies with the Austin Fire Department, ensuring its reliability and practical applicability in predicting urban fire impacts on air quality.
## 2. Related Work
The toxicity of wildfire smoke and urban smoke is a significant health hazard (Bahdan et al., 2016; Grin et al., 2017; Grin et al., 2017). Carbon monoxide is often the leading cause of death in fires, but burning synthetic materials, such as those found in urban environments, produces cyanide, which causes up to 50% of fire deaths (Bahdan et al., 2016). Building structures are often made of inorganic materials, implying that fumes from burning structures could be more harmful than those from forest fires. Fire incidents in urban environments, where the majority of fires are structure fires (Grin et al., 2017), compounded with high population density compared to rural areas, present an acute and immediate danger to vulnerable communities. Hence, tracking smoke falloff in real time and mitigating its harms can save lives. Active tracking, prediction, and management of smoke is usually focused on wildfires, with examples including NOAA's AirNow forecast (Zhao et al., 2017) and the US Forest Service's BlueSky (Fire, 2017). However, little work has been done on predicting smoke from urban fires, with existing models focusing on evaluating risk and fire spread (Grin et al., 2017; Grin et al., 2017; Grin et al., 2017). To the best of our knowledge, our model provides the first city-wide tracking and prediction of urban smoke.
Prediction of the fire smoke diffusion process requires running computationally expensive fluid dynamics numerical simulations, but results in near-realistic outcomes. For instance, Zhao et al. (Zhao et al., 2017) used their smoke simulation to recommend changes to typical subway carriages and proposed guidelines for evacuation protocols. Their results achieved the same performance as an empirical miniature tunnel study done by Li et al. (Li et al., 2017) almost a decade earlier. Fluid dynamic simulations have become a tool of engineering, design, and prediction with verifiable accuracy (Li et al., 2017). However, for these simulations to be useful outside of a scientific audience, visualization is key. We combine domain-specific visualizations of both fluid-simulated smoke and a 3D city superimposed in order to make our forecasts publicly available, through a web dashboard, to urban residents. Alongside the help of real-time air sensors, the combination of these various tools allows us to model smoke dynamics on a city-wide scale. Figure 1 shows the early ability of our approximate VSmoke forecast in determining the land uses an urban fire affects and the degree to which they will be affected.
Figure 1. A VSmoke smoke simulation in-browser, with residences affected also highlighted according to smoke impact.
## 3. Firecom a Digital Twin
Architecture for Active Fires
FireCOM is a digital twin architecture designed for real-time monitoring and prediction of active fires and their impact on urban environments. Our data comprises three main components: (1) fire incidents, (2) weather data, and (3) air quality data. We obtain real-time fire incident locations from the Austin Fire Department, weather and wind conditions surrounding each fire from the National Weather Service, and air quality data from nearby outdoor sensors provided by PurpleAir1.
Footnote 1: [https://api.purpleair.com/](https://api.purpleair.com/)
Blender, a powerful open-source 3D creation suite, is chosen as the research platform due to its easy integration with Python, extensibility, and compatibility with GIS tools and accurate fluid simulations. Although Blender may not be primarily designed for real-time work, it offers precise models and simulations, essential for our research. We integrate OpenTopography's topographic map of Austin and OpenStreetMap's 3D city structures to create the digital twin's base map.
To improve the accuracy of our simulations, we incorporate real-world infrastructure, including building and street classifications, such as schools, hospitals, and speed limits. This enables our model to identify areas at higher risk from reduced air quality.
Our fire model leverages live information from local fire departments, weather services, and air quality sensors to predict smoke paths and monitor fire incidents. We map reported fire incidents using geographical coordinates, and then query the National Weather Service for recent weather and wind information at each location. Subsequently, we generate predicted smoke paths for each fire and validate our predictions using data from air quality sensors to ensure accuracy in the upcoming hours. A general workflow of FireCOM can be seen in Figure 2.
By integrating real-time data, GIS tools, and fluid simulations within the Blender platform, FireCOM serves as a reliable digital twin architecture for active fire monitoring and prediction, ultimately enhancing urban safety and public health.
## 4. Data
Real-time fire incident data are procured from the live active-incident pages of 911 calls, which we monitor for over twenty cities. We note that these cities use vastly different schemes for emergency information, mixing date formats, omitting fields such as coordinates, and naming the same category differently. To deal with this, we created a unified schema containing only the needed information: incident name, date, longitude and latitude coordinates, street address, and reporting department. Although coordinates and street address are translatable to one another, we included both to reduce geocoding work and confusion on the part of the end-user, whose interest is in the street address; we only need the exact coordinates to display incidents on a map. For each city, we manually annotated what its live incident feed returns and created scripts to convert the different formats we received, including HTML tables, RSS feeds, plain text files, and JSON documents, into our format. An example of such a script is provided in Algorithm 1.
Figure 2. A diagram of the urban smoke prediction workflow.
```
1:for\(city=1,2,\ldots\)do
2: Fetch city data from permanent live URL
3: Based on city, convert to custom format (JSON)
4: Clean data with timestamps and geolocations
5: Store data in date-based structure
6:endfor
```
**Algorithm 1** Urban Fire Retrieval
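A minimal Python sketch of such a conversion script is shown below; the feed URL and field names are hypothetical placeholders, since every real city's feed differs and needs its own small parser.

```
import json
import datetime
import urllib.request

# Hypothetical feed URL and field names; every real city needs its own small parser.
FEED_URL = "https://example-city.gov/active-incidents.json"

def fetch_incidents(url=FEED_URL):
    """Fetch one city's live incident feed and normalize it to the unified schema."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        raw = json.load(resp)
    records = []
    for item in raw.get("incidents", []):
        records.append({
            "name": item.get("type", "unknown"),
            "date": item.get("reported", datetime.datetime.utcnow().isoformat()),
            "latitude": float(item["lat"]),
            "longitude": float(item["lon"]),
            "address": item.get("address", ""),
            "department": item.get("agency", ""),
        })
    return records
```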
After variants of this script run on each annotated city's live data, we compare the latest data with the information retrieved on the previous request, actively updating which fires are still burning and which have been reported as resolved. With this information, we track active fires in all covered cities. We additionally determine where most fires in each city are likely to occur by looking at the average number of fires per region, creating a "fire risk" map of Austin.
In 2022, over 20,000 fires occurred in Austin alone. Analyzing this data revealed several trends. By intersecting each fire event with a tract-level map of Austin, we created a "fire risk" analysis map where orange represents normal fire risk, while yellow and red represent below-average and above-average risk, respectively. This map exhibits a correlation with the City of Austin's FireCAT Wildfire Risk Assessment (Wedther, 2018), as seen in Figure 3, indicating that wildfires remain a significant source of fire incidents. However, it also highlights the elevated probability of fires in downtown Austin.
Our results suggest that most fire incidents occur in the afternoon, typically between 5 p.m. and 10 p.m. (see Figure 4). However, the transition between different time periods is not as smooth as one might expect, warranting further investigation into the underlying factors influencing these trends.
Weather service data is critically important when assessing smoke impacts, as wind direction and strength, among other weather conditions, ultimately determine the region of impact. As demonstrated by the HRRR-Smoke model for wildfires, weather is perhaps the single most important factor determining the spatial progress of smoke (Bender and OpenStreetMaps, 2022). City weather is well researched and is provided mainly by Weather.gov, a government service that publishes hourly updates of weather conditions across the entire continental United States. We query Weather.gov and selectively retrieve the important information, namely wind speed, wind direction, and conditions.
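For illustration, the sketch below queries the National Weather Service for wind conditions near a fire, assuming the standard point-lookup and hourly-forecast endpoints of api.weather.gov; the User-Agent string is a placeholder, and error handling is omitted.

```
import requests

HEADERS = {"User-Agent": "firecom-demo (contact@example.org)"}  # placeholder contact info

def wind_conditions(lat, lon):
    """Look up the hourly forecast grid for a point and return current wind conditions."""
    point = requests.get(f"https://api.weather.gov/points/{lat},{lon}",
                         headers=HEADERS, timeout=10).json()
    hourly = requests.get(point["properties"]["forecastHourly"],
                          headers=HEADERS, timeout=10).json()
    period = hourly["properties"]["periods"][0]  # nearest hourly period
    return period["windSpeed"], period["windDirection"], period["shortForecast"]
```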
Finally, we create a large data set of time-independent data. This data set includes elevation maps, building footprint data sets, and locations of essential points of interest like hospitals and fire departments. The elevation data is similarly available to the public from OpenTopography, and the rest are publicly available from Open Data Portals for each respective city.
## 5. Smoke Prediction and Validation
### Methods
To predict smoke output from various tracked fires, we initially employed a Gaussian smoke prediction model based on the VSmoke particulate matter concentration program developed by the Georgia Forestry Commission (GFC) (Grecher et al., 2017; Grecher et al., 2018). This model calculates smoke dispersion using a normal probability distribution, which is common for ground-level fires (Bender and OpenStreet, 2022).
We gathered real-time weather information, such as wind speed, wind direction, and humidity, from Weather.gov and NOAA's HRRR model to input into the VSmoke model. From the live fire information, we make reasonable assumptions based on the reported type of each fire, such as an "appliance fire" versus a "brush fire," about its size and the type of fuel it burns, and we tweak other qualities of the active fire accordingly.
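The sketch below is a rough illustration of the underlying idea only (a textbook ground-level Gaussian plume), not the VSmoke code itself; the dispersion coefficients are placeholder values rather than the stability-class curves a real model would use.

```
import math

def plume_concentration(q, u, x, y, a=0.08, b=0.06):
    """Ground-level concentration (g/m^3) downwind of a ground-level point source.

    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m). a, b are placeholder growth rates for the plume
    widths sigma_y and sigma_z (real models use stability-class curves).
    """
    if x <= 0:
        return 0.0
    sigma_y, sigma_z = a * x, b * x
    return (q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-y**2 / (2 * sigma_y**2))
```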
However, it is important to note that VSmoke does not account for the geometry of the surrounding buildings. To address this, we moved to a 3D city model using Blender and OpenStreetMaps 3D (Blender and OpenStreetMaps, 2022), which allowed us to implement a more accurate smoke prediction model using MantaFlow (MantaFlow, 2017; 2017).
Figure 3. A comparison of FireCAT Wildfire Risk Assessment (left) and our Fire Averages per tract (right).
Figure 4. A graph of the number of reported fires by the hour.
To do this, we used Blender alongside OpenStreetMaps 3D, a generated mesh of an entire city built from building footprints and available height information to create approximate models of each structure (Wammer et al., 2017). We also overlay it on an accurate topographic height map of Austin from OpenTopography, down to 30 m precision. With the surrounding environment in place, we created a new BlenderPy script to spawn a smoke fluid simulation at the exact longitude and latitude within the "3D City", with parameters for weather and fire matching those input into VSmoke. In this way, we can run a smoke simulation in a near-realistic digital twin of any city. An example of this visualization, converted from Blender to a browser view, can be seen in Figure 5.
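A minimal sketch of such a BlenderPy script is shown below, assuming Blender's Python API (bpy) and its built-in Quick Smoke helper; the coordinates, names, and parameters are placeholders rather than FireCOM's production settings.

```
import bpy

def spawn_fire_simulation(x_m, y_m, z_m=0.0):
    """Drop a smoke emitter into the 3D city scene at projected map coordinates (metres)."""
    # Add a small cube to act as the fire's smoke emitter at the given location.
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(x_m, y_m, z_m))
    emitter = bpy.context.active_object
    emitter.name = "fire_emitter"
    # Quick Smoke wraps the emitter in a fluid domain and sets up a Mantaflow smoke simulation.
    bpy.ops.object.quick_smoke()
    return emitter
```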
### Results
Using the VSmoke model as a reference, we tuned the MantaFlow fluid simulation parameters to match the VSmoke prediction area. Figure 6 shows a comparison between the VSmoke and MantaFlow predicted smoke outputs.
Typically, smoke systems are validated against air sensors, satellite imagery, and meteorology stations (MantaFlow, 2017). Since satellite imagery does not work well for fires in an urban context, and meteorology stations are often sparse and located far from cities, we assessed the accuracy of our smoke predictions by comparing them with air sensor data. We collaborated with researchers from the Texas Advanced Computing Center (TACC) to evaluate the overlap between our generated smoke trails and PurpleAir sensor data and found that the sensors were too sparse to detect urban smoke impacts. Instead, we validated the model through case studies with the City of Austin, where we were able to use prescribed burns and set up local air sensors ahead of time to test the accuracy of our model.
In terms of pipeline performance, our system proved to be efficient in generating smoke predictions and displaying them in the browser. Upon receiving a notification from the active fire query API, the system takes less than a second to generate an approximate VSmoke prediction for up to three hours in advance (not considering city geometry) and communicate that result in-browser. For the more accurate MantaFlow 3D smoke predictions, which take into account the city geometry, the pipeline takes approximately thirty seconds to generate predictions for up to three hours in advance. This timely generation and display of smoke predictions can be critical in facilitating rapid decision-making and response efforts in urban fire scenarios.
### Discussion
Our study resulted in two models: the VSmoke-based model, which is computationally efficient but less accurate, and the MantaFlow-based model, which is computationally intensive but provides more accurate results by considering complex city geometry.
As for validation, our partners at TACC demonstrated that air sensors within our cities were too sparse, rarely even within a mile of a documented fire. In the few cases of overlap, they suggested that our findings were accurate, as air sensors reported lower-than-average values within the predicted smoke area. It is important to note, however, that air quality sensors in Austin and other major cities are too sparse to detect acute events like fires, even a decade after a similar issue was reported by NASA (Natashi et al., 2017). NASA's solution to this problem of "sparse data" is generally to smooth data points over large regions, which is still not suitable for the smoke from urban fires, which appears as a spike in a small area.
Instead, we used experimental data from a series of case studies conducted with the City of Austin. The Austin Fire Department, like many fire departments, schedules prescribed fires weeks in advance for firefighter training, brush removal, or forest clearing. Knowing the location of a future fire, we could set up our air sensors downwind of the location and check whether our model was accurate. We had predicted that air sensors that did not change under downwind conditions were often behind buildings and other obstructive geometry, which could be accounted for with our 3D smoke model. The MantaFlow-based model demonstrated promising accuracy in our case study with the City of Austin, finding that the air quality of downwind regions correlates highly with our predicted smoke impact.
Figure 5. A supervised smoke fluid simulation in-browser.
Figure 6. A comparison of the VSmoke and MantaFlow predicted smoke outputs.
Despite the positive results, our study faced limitations due to the sparsity of air quality sensors in Austin and other major cities. We recommend future research in this area to consider deploying low-cost air sensors across cities for improved detection and reporting of immediate threats to air quality.
However, we also recognize that some of our tracked cities, like Los Angeles, had thousands of air sensors, yet exhibited the same behaviour. For this reason, we also recommend the development of novel air quality tracking techniques, like geostationary satellite detection, currently being worked on by NASA ARSET (Marcus et al., 2017). With tools like these, validation of fire and smoke digital twins like FireCOM will become much easier.
## 6. Key Challenges, Solutions, and Innovations
A key challenge in conducting this research was the relative scarcity of smoke prediction models. While models do exist to an extent, they are often very specific or meant for non-scientific usage. For example, the EPA's smoke model predicts long continuous regions of smoke across North America without offering much useful information about actual air quality impacts on communities. Another example is the fluid simulations of game engines, such as Unreal Engine, which are optimized for visualization rather than for accurately portraying how smoke and fire behave in reality. For these reasons, we chose the VSmoke model from the Georgia Forestry Commission and MantaFlow from the Technical University of Munich.
Another challenge to overcome was the difficulty of presenting these models directly to viewers. Both fluid simulations and real-time active maps are costly to compute and maintain on a running basis, and generating these smoke predictions on demand for immediate display within a live map required some technical innovations. Displaying fluid simulations interactively to users is also impractical, as they require a powerful computer to compute, let alone visualize. We needed to make the technology accessible from a smartphone's web browser. To this end, we created APIs for both our 2D and 3D smoke pipelines and used several techniques to display the output of each respective simulation.
Firstly, we rewrote much of the GFC's codebase to optimize it for newer computers and to run parallel smoke simulations at the request of an API. The program outputs Keyhole Markup Language (KML) files to visualize each 2D smoke simulation, which we spaced at one-, two-, and three-hour predicted simulations. Secondly, we created an API to run fluid simulations for fires on demand in Blender. While the model ran for a full three hours in Blender, we timestamped and exported the one-, two-, and three-hour marks to Filmbox (.FBX) object files, which can be rendered in-browser.
With the smoke generated in six different iterations, we only needed to display our results in each browser. To reduce confusion between the two separate models, we displayed VSmoke's simulations on a standard map labelled "2D", while presenting the MantaFlow simulations on a vector-tiled 3D map, courtesy of MapBox GL, which quickly renders the same OpenStreetMap 3D tiles from Blender in-browser. From this, we used extension libraries, such as Threebox.js, to display our smoke simulations within the 3D space at the fire's origin, all without requiring anything more than a device capable of viewing a 3D model, which every modern computer and smartphone is. Our computationally efficient VSmoke-enabled 2D map can be seen in Figure 7, while our fluid-simulated MantaFlow 3D map can be seen in Figure 8. We also overlay a dynamically updating "fire risk" layer, which changes as the average number of fires per tract in Austin updates day by day.
The novelty of our work lies in three areas: (1) data integration, (2) shifting from 2D to 3D, and (3) use of public and generalized data. We integrated information from a multitude of resources in creating this real-time model, including building footprints, air sensors, wind and weather updates, as well as census data and points of interest around the city. With all of this data, we can readily generate smoke models across the city, though only with approximate accuracy, given the complexity of an urban environment. Shifting to a 3D perspective of a city, complete with topography and building geometry, allows a more realistic and detailed understanding of how each fire's smoke falloff will develop and exactly which areas it will affect, allowing both citizens and firefighters to make informed decisions when dealing with fire in their communities. Finally, our work was done entirely with public data, which, alongside our released open-source code, makes this model fully generalizable and replicable to any city.
Figure 8. Our 3D real-time fire and smoke map in-browser.
Figure 7. Our 2D real-time fire and smoke map in-browser.
## 7. Conclusions
On-demand smoke simulations for urban fires had not previously been attempted because of the complex geometry and insight needed to perform such a task, but with our model, one can easily produce both approximate smoke forecasts and hyper-realistic smoke fluid simulations for any fire. With our APIs, researchers can begin to predict future fires, as well as analyze past fires, for any city. With our map, citizens can become well informed of the air quality risks posed by urban fires around their city and better understand how smoke might impact their nearby communities. FireCOM could potentially serve as an early warning system for lowered air quality and smoke in areas of high risk, where exposure to elevated PM2.5 could endanger lives and hinder child development.
In the near future, we expect our model to be used for air quality research into respiratory diseases. We can examine whether there is a correlation between areas of high urban fire risk and areas with high rates of COPD, asthma, and other respiratory issues. Beyond this, we hope to see our model used in growing digital twins of Smart City environments, in which every aspect of a city can be visualized, predicted, and optimized to save lives and serve communities better.
Within a full digital twin model, our current data would be only a small fraction of the information necessary. A full twin might take advantage of traffic conditions in a right or left lane to indicate which side of the road a burning car might be on, or even estimate, from the aggregation of phone signals in an area, how many people might be affected by a smoke falloff. For future research in this area, we encourage the use of _Big Data_ to retrieve all relevant local conditions and create a more full-scale model.
## Funding
This research is supported by the Bridging Barriers Initiative Good Systems Grand Challenge at The University of Texas at Austin, the City of Austin (UTA19-000382), National Science Foundation (2043060, 2133302, 1952193, 2125858, 2236305), NSF-GOLD (2228205), and CSE-OCE (1835739).
## Acknowledgments
The authors extend their sincere gratitude to Dan Chen (Georgia Forestry Association), Marc Coudert (Office of Resilience, City of Austin), Branifif Davis (Austin Fire Department), and Chief Joel Baker (Austin Fire Department) for supporting our project. The authors also acknowledge an interagency agreement between UT and the City of Austin through the Bridging Barriers Initiative.
|
2309.00851 | Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms | The fitness level method is a popular tool for analyzing the hitting time of
elitist evolutionary algorithms. Its idea is to divide the search space into
multiple fitness levels and estimate lower and upper bounds on the hitting time
using transition probabilities between fitness levels. However, the lower bound
generated by this method is often loose. An open question regarding the fitness
level method is what are the tightest lower and upper time bounds that can be
constructed based on transition probabilities between fitness levels. To answer
this question, we combine drift analysis with fitness levels and
define the tightest bound problem as a constrained multi-objective optimization
problem subject to fitness levels. The tightest metric bounds from fitness
levels are constructed and proven for the first time. Then linear bounds are
derived from metric bounds and a framework is established that can be used to
develop different fitness level methods for different types of linear bounds.
The framework is generic and promising, as it can be used to draw tight time
bounds on both fitness landscapes without and with shortcuts. This is
demonstrated in the example of the (1+1) EA maximizing the TwoMax1 function | Jun He, Yuren Zhou | 2023-09-02T07:42:57Z | http://arxiv.org/abs/2309.00851v3 | # Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms
###### Abstract
The fitness level method is a popular tool for analyzing the hitting time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and estimate lower and upper bounds on the hitting time using transition probabilities between fitness levels. However, the lower bound generated by this method is often loose. An open question regarding the fitness level method is what are the tightest lower and upper time bounds that can be constructed based on fitness levels. To answer this question, drift analysis with fitness levels is developed, and the tightest bound problem is formulated as a constrained multi-objective optimization problem subject to a fitness level constraint. The tightest metric bounds from fitness levels are constructed and proven for the first time. Then general linear bounds are derived from metric bounds and a framework is established that can be used to develop different fitness level methods for different types of linear bounds. The framework is generic and promising, as it can be used to draw tight time bounds on both fitness landscapes without and with shortcuts. This is demonstrated in the example of the (1+1) EA maximizing the TwoMax1 function.
Evolutionary algorithm; algorithm analysis; computation time; fitness levels; drift analysis; Markov chain
## 1 Introduction
### Background
The time complexity of evolutionary algorithms (EAs) is an important topic in the EA theory (Oliveto et al., 2007; Yu and Zhou, 2008; Doerr et al., 2017; Huang et al., 2019). The computation time of EAs can be measured by either the number of generations to find an optimum for the first time, called hitting time (He and Yao, 2001), or the number of fitness evaluations, called running time (He and Yao, 2017). The analysis of running time is more complicated as it is related to the population size (He and Yao, 2002; Chen et al., 2009; He and Yao, 2017), and the population size often varies from generation to generation. Therefore, in theoretical analysis, the computation time often refers to hitting time. Several methods have been proposed for analyzing hitting time of EAs, such as drift analysis (He and Yao, 2001), Markov chains (He and Yao, 2002, 2003) and fitness level partition (Wegener, 2001). Each method has its own advantages and disadvantages. Drift analysis is a powerful tool in which an appropriate distance
is constructed as the bound on hitting time (He and Yao, 2001; Oliveto and Witt, 2011; Doerr et al., 2012). According to the theory of absorbing Markov chains, the exact hitting time of EAs can be calculated from the fundamental matrix of absorbing Markov chains (He and Yao, 2003). But the calculation of the fundamental matrix is too complex for most EAs. Therefore, hitting time is estimated by replacing the original chain with a slower or faster chain (He and Yao, 2003; Zhou et al., 2009).
The fitness level method (Wegener, 2001, 2003) is a popular tool used to estimate hitting time of elitist EAs (Antipov et al., 2018; Corus et al., 2020; Rajabi and Witt, 2020; Quinzan et al., 2021; Aboutaib and Sutton, 2022; Malalanirainy and Moraglio, 2022; Oliveto et al., 2022). The basic concept of this method is to divide the search space into multiple ranks \((S_{0},\cdots,S_{K})\), called fitness levels, based on the fitness value from high to low, where the highest rank \(S_{0}\) is the optimal set; then calculate transition probabilities between fitness levels, that is, \(p(X_{k},S_{\ell})\) from \(X_{k}\in S_{k}\) to \(S_{\ell}\) (where \(1\leq\ell<k\leq K\)); finally, estimate a bound \(d_{k}\) on the hitting time of the EA starting from a level \(S_{k}\). The method was combined with other techniques such as tail bounds (Witt, 2014) and stochastic domination (Doerr, 2019). The fitness level method is available for elitist EAs. Although the level partition is also used to analyze non-elitist EAs (Corus et al., 2017; Case and Lehre, 2020), they should be considered as a different method.
In this paper, we express time bounds from fitness levels in linear forms as follows:
\[\text{lower bound }d_{k} =\sum_{\ell=1}^{k}\frac{c_{k,\ell}}{\max_{X_{\ell}\in S_{\ell}}p( X_{\ell},S_{0}\cup\cdots\cup S_{\ell-1})}, \tag{1}\] \[\text{upper bound }d_{k} =\sum_{\ell=1}^{k}\frac{c_{k,\ell}}{\min_{X_{\ell}\in S_{\ell}}p( X_{\ell},S_{0}\cup\cdots\cup S_{\ell-1})}, \tag{2}\]
where \(c_{k,\ell}\) are coefficients and \(p(X_{\ell},S_{0}\cup\cdots\cup S_{\ell-1})\) represents the transition probability from a state \(X_{\ell}\in S_{\ell}\) to the union of levels \(S_{0}\cup\cdots\cup S_{\ell-1}\). The development route of coefficients is from \(c_{k,\ell}=0,1,\,c\) to \(c_{\ell}\). Wegener (2003) assigned a constant coefficient \(c_{k,\ell}=0\) for the lower bound (except \(c_{k,k}=1\)) and \(c_{k,\ell}=1\) for the upper bound where \(k>\ell\). This assignment is good at obtaining a tight upper bound, but not good at obtaining a tight lower bound. Several efforts have been made to improve the lower bound since then. Sudholt (2012) made an improvement using a constant coefficient \(c_{k,\ell}=c\) (called viscosity) where \(k>\ell\) and \(c_{k,k}=1\), and gave tight lower time bounds of the (1+1) EA on several unimodal functions such as LeadingOnes, OneMax, and long \(k\)-paths. Recently, Doerr and Kotzing (2022) made another improvement using coefficients \(c_{k,\ell}=c_{\ell}\) (called visit probability) and provided tight lower bounds of the (1+1) EA on LeadingOnes, OneMax, long \(k\)-paths, and the jump function.
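For intuition, the following sketch (ours, not taken from the cited works) evaluates linear bounds of the form (1) and (2) for a given coefficient table and per-level transition probabilities; all numbers are invented for illustration.

```
def linear_bound(coeffs, p_levels):
    """Evaluate sum_{l <= k} c_{k,l} / p_levels[l] for each level k = 1..K."""
    K = len(p_levels) - 1
    return {k: sum(coeffs.get((k, l), 0.0) / p_levels[l] for l in range(1, k + 1))
            for k in range(1, K + 1)}

# Example: Type-1 upper-bound coefficients (c_{k,l} = 1 for all l <= k) with toy probabilities.
p = [None, 0.5, 0.25, 0.125]              # p[l]: min transition probability out of S_l
c_upper = {(k, l): 1.0 for k in range(1, 4) for l in range(1, k + 1)}
print(linear_bound(c_upper, p))           # {1: 2.0, 2: 6.0, 3: 14.0}
```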
The fitness landscape corresponding to an EA can be divided into two categories: with shortcuts or without shortcuts. When the EA follows a shortcut, it skips some intermediate fitness levels. In this paper, we find that the lower bounds based on the coefficient \(c\) or \(c_{\ell}\) are loose on fitness landscapes with shortcuts (Case Studies 1 and 2, Theorem 7). Therefore, it is necessary to improve lower bounds for EAs on fitness landscapes with shortcuts.
### New research and main results in this paper
The aim of this paper is to explore a fundamental question that has not been addressed before: What are the tightest lower and upper bounds based on transition probabilities between fitness levels? This paper also answers the question: is it possible to use the fitness level method to draw tight lower bounds on fitness landscapes with shortcuts?
To answer the questions, drift analysis with fitness levels is developed for constructing lower and upper bounds on hitting time of elitist EAs. The fitness level method is viewed as a combination of drift analysis and fitness levels. Given a fitness level partition \((S_{0},\cdots,S_{K})\), a distance \(d_{k}\) between \(S_{k}\) and \(S_{0}\) is assigned to each fitness level \(S_{k}\), where \(d_{k}\) is constructed using transition probabilities between fitness levels. Since \(d_{k}\) is related to distance, it is called a metric bound. Then by drift analysis, it is proved that \(d_{k}\) is a lower or upper bound on the hitting time of the EA starting from the level \(S_{k}\) and the best \(d_{k}\) is the tightest metric bound. The new contributions and results are summarized in three parts.
1. First, we construct metric bounds from fitness levels and prove that the best metric bounds are the tightest (Theorems 1 to 4). The metric lower bound from fitness levels is expressed recursively as \[d_{k}\leq\min_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{0}\cup\cdots\cup S_{ k-1})}+\sum_{\ell=1}^{k-1}\frac{p(X_{k},S_{\ell})}{p(X_{\ell},S_{0}\cup \cdots\cup S_{\ell-1})}d_{\ell}\right\}.\] (3) Similarly, the metric upper bound from fitness levels is expressed recursively as \[d_{k}\geq\max_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{0}\cup\cdots\cup S_{ k-1})}+\sum_{\ell=1}^{k-1}\frac{p(X_{k},S_{\ell})}{p(X_{\ell},S_{0}\cup \cdots\cup S_{\ell-1})}d_{\ell}\right\}.\] (4) The tightest lower or upper bound is reached when Inequality (3) or (4) becomes an equality.
2. Secondly, we construct general linear bounds from metric bounds (3) and (4) (Theorems 5 and 6). Coefficients in the linear lower bound (1) satisfy \(c_{k,k}=1\) and the following linear inequalities: \[c_{k,\ell}\leq\min_{X_{k}\in S_{k}}\frac{p(X_{k},S_{\ell})+\sum_{j=\ell+1}^{k- 1}p(X_{k},S_{j})c_{j,\ell}}{p(X_{k},S_{0}\cup\cdots\cup S_{k-1})},\qquad 0< \ell<k.\] (5) Coefficients in the linear upper bound (2) satisfy \(c_{k,k}=1\) and the following linear inequalities: \[c_{k,\ell}\geq\max_{X_{k}\in S_{k}}\frac{p(X_{k},S_{\ell})+\sum_{j=\ell+1}^{k- 1}p(X_{k},S_{j})c_{j,\ell}}{p(X_{k},S_{0}\cup\cdots\cup S_{k-1})},\qquad 0< \ell<k.\] (6) Previous bounds (Wegener, 2003; Sudholt, 2012; Doerr and Kotzing, 2022) can be regarded as special cases of \(c_{k,\ell}=0,1,c,c_{\ell}\) (Corollaries 1 to 6). For the sake of discussion, the family of linear bounds (5) and (6) are named the Type-\(c_{k,\ell}\) bound. Similarly, Type-\(0,1\), \(c\) and \(c_{\ell}\) bounds stand for linear bounds with coefficients \(c_{k,\ell}=0,1\), \(c\) and \(c_{\ell}\), respectively.
3. Finally, we demonstrate the advantage of the Type-\(c_{k,\ell}\) lower bound over Type-\(c\) and \(c_{\ell}\) lower bounds. For the (1+1) EA maximizing the TwoMax1 function, we prove that our Type-\(c_{k,\ell}\) lower bound is \(\Omega(n\ln n)\), but Type-\(c\) and \(c_{\ell}\) lower bounds are only \(O(1)\) (Case Studies 1, 2 and 4).
The paper is organized as follows: Section 2 provides the foundation of theoretical analysis. Section 3 reviews existing fitness level methods and explains the necessity of improving previous lower linear bounds. Section 4 proposes drift analysis with fitness levels, constructs new metric bounds and proves they are the tightest. Section 5 constructs general linear bounds and presents different explicit expressions of coefficients. Section 6 shows the application of general linear bounds. Section 7 concludes the work.
## 2 Foundations and notation
This section introduces some basic concepts and notation used in theoretical analysis.
### Elitist EAs and Markov chains
A maximization problem is considered in the paper: \(f_{\max}=\max f(x)\) where \(f(x)\) is defined on a finite set. In EAs, an individual \(x\) represents a solution. A population consists of several individuals, denoted by \(X\). The fitness of a population \(f(X)=\max\{f(x);x\in X\}\). Let \(S\) denote the set of all populations and \(S_{\mathrm{opt}}\) the set of optimal populations \(X_{\mathrm{opt}}\) such that \(f(X_{\mathrm{opt}})=f_{\max}\). This paper studies elitist EAs that maximize \(f(x)\). Let \(X^{[t]}\) denote the \(t\)-th generation population.
**Definition 1**.: _An EA is called elitist if \(f(X^{[t]})\geq f(X^{[t-1]})\)._
A simple elitist EA is the (1+1) EA using bitwise mutation and elitist selection for maximizing a pseudo-Boolean function \(f(x)\), where \(x=(x_{1},\cdots,x_{n})\in\{0,1\}^{n}\). The (1+1) EA does not use a population, but only a single individual.
```
1:initialize a solution \(x\) and let \(x^{[0]}=x\);
2:for\(t=1,2,\cdots\)do
3: flip each bit of \(x\) independently with probability \(\frac{1}{n}\) and generate a solution \(y\);
4: if \(f(y)\geq f(x)\), then let \(x^{[t]}=y\), otherwise \(x^{[t]}=x\).
5:endfor
```
**Algorithm 1** The (1+1) EA
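For reference, a minimal Python sketch of Algorithm 1 is given below; the OneMax fitness function and the Monte Carlo estimate of the mean hitting time are illustrative additions, not part of the formal analysis.

```
import random

def one_max(x):
    return sum(x)

def one_plus_one_ea(f, n, max_gens=10**6):
    """(1+1) EA with bitwise mutation rate 1/n and elitist selection on {0,1}^n.

    Returns the hitting time: the first generation at which the maximum fitness
    n is reached (here we assume, as for OneMax, that the optimal value is n).
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(max_gens):
        if fx == n:                       # optimum found
            return t
        y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
        fy = f(y)
        if fy >= fx:                      # elitist selection
            x, fx = y, fy
    return max_gens

# Crude Monte Carlo estimate of the mean hitting time on OneMax (Theta(n log n) in theory).
n = 50
print(sum(one_plus_one_ea(one_max, n) for _ in range(20)) / 20)
```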
We assume that EAs are modeled by homogeneous Markov chains (He and Yao, 2003; He and Lin, 2016). The set \(S\) is the state space of a Markov chain and a population \(X\) is a state. The Markov chain property means that the next state \(X^{[t+1]}\) depends only on the current state \(X^{[t]}\), that is, \(\Pr(X^{[t+1]}\mid X^{[t]},\ldots,X^{[0]})=\Pr(X^{[t+1]}\mid X^{[t]})\). The homogeneous property means that the transition probability from a state \(X\) to another state \(Y\) does not change over the generation \(t\), that is for any \(t\), \(\Pr(X^{[t+1]}=Y\mid X^{[t]}=X)=p(X,Y)\).
### Hitting time and drift analysis
Hitting time is the first time when an EA finds an optimal solution.
**Definition 2**.: _Given an elitist EA for maximizing \(f(x)\), assume that the initial population \(X^{[0]}=X\). Hitting time \(\tau(X)\) is the number of generations until an optimum is found for the first time, that is, \(\tau(X)=\min\{t\geq 0:X^{[t]}\in S_{\mathrm{opt}}\mid X^{[0]}=X\}\). The mean hitting time \(m(X)\) is the expected value of \(\tau(X)\), that is, \(m(X)=\mathrm{E}[\tau(X)]\). If \(X^{[0]}\) is chosen at random, the mean hitting time is the expected value \(m(X^{[0]})=\mathrm{E}[\mathrm{E}[\tau(X^{[0]})]]\)._
Drift analysis was introduced by He and Yao (2001) to the analysis of hitting time of EAs. It is based on the intuitive idea: time = distance/drift. A non-negative function \(d(X)\) measures the distance between \(X\) and the optimal set. By default, let \(d(X)=0\) if
\(X\in S_{\rm opt}\). A distance function \(d(X)\) is called a lower time bound if for all \(X\), \(d(X)\leq m(X)\), while \(d(X)\) is called an upper time bound if for all \(X\), \(d(X)\geq m(X)\).
There are several variants of drift analysis (He and Yao, 2001; Oliveto and Witt, 2011; Doerr et al., 2012; Doerr and Goldberg, 2013). For a complete review of drift analysis, see (Kotzing and Krejca, 2019; Lengler, 2020). This paper follows a Markov chain version of drift analysis (He and Yao, 2003). If an elitist EA cannot be modeled by a Markov chain, the super-martingale version of drift analysis is available.
Definition 3: The _drift_\(\Delta d(X)\) is the distance change per generation at the state \(X\),
\[\Delta d(X)=d(X)-\sum_{Y\in S}p(X,Y)d(Y). \tag{7}\]
Lemma 1: _(He and Yao, 2003, Theorem 2) If for any \(X\notin S_{\rm opt}\), the drift \(\Delta d(X)\geq 1\), then the mean hitting time \(m(X)\leq d(X)\)._
Lemma 2: _(He and Yao, 2003, Theorem 3) If for any \(X\notin S_{\rm opt}\), the drift \(\Delta d(X)\leq 1\), then the mean hitting time \(m(X)\geq d(X)\)._
Asymptotic notation (Knuth, 1976) is used to discuss the time complexity of EAs. The worst-case time complexity of an EA is measured by the maximum value \(\max_{X\in S}m(X)\). A tight lower bound refers to \(\max_{X\in S}d(X)=\Omega(\max_{X\in S}m(X))\), while a tight upper bound refers to \(\max_{X\in S}d(X)=O(\max_{X\in S}m(X))\).
### Fitness level partition and transition probabilities between fitness levels
The fitness level method depends on a fitness level partition.
Definition 4: A state space \(S\) is divided into \(K+1\) disjoint subsets (ranks) \((S_{0},\cdots,S_{K})\) according to the fitness from high to low such that (i) the highest rank \(S_{0}=S_{\rm opt}\), (ii) for all \(X_{k}\in S_{k}\) and \(X_{k+1}\in S_{k+1}\), the rank order holds: \(f(X_{k})>f(X_{k+1})\). Each rank is called a fitness level. The partition \((S_{0},\cdots,S_{K})\) is called a _fitness level partition_.
Transition probabilities between fitness levels are defined as follows. The notation \(p(X_{k},S_{\ell})\) denotes the transition probability from a state \(X_{k}\in S_{k}\) to the level \(S_{\ell}\).
\[p(X_{k},S_{\ell})=\Pr(X^{[t+1]}\in S_{\ell}\mid X^{[t]}=X_{k}). \tag{8}\]
Its minimal and maximal values are denoted as follows:
\[p_{\mbox{\tiny min}}(X_{k},S_{[0,k-1]})=\min_{X_{k}\in S_{k}}p(X_{k},S_{[0,k-1 ]})\mbox{ and }p_{\mbox{\tiny max}}(X_{k},S_{[0,k-1]})=\max_{X_{k}\in S_{k}}p(X_{k},S_{[0,k -1]}).\]
Other transition probabilities between levels are derived from \(p(X_{k},S_{\ell})\). Let \([i,j]\) denote the integer set \(\{i,i+1,\cdots,j-1,j\}\) and \(S_{[i,j]}\) denote the union of levels \(S_{i}\cup S_{i+1}\cup\cdots\cup S_{j-1}\cup S_{j}\). The transition probability from a state \(X_{k}\in S_{k}\) to the union \(S_{[i,j]}\) is denoted by \(p(X_{k},S_{[i,j]})\), that is, \(p(X_{k},S_{[i,j]})=\sum_{\ell=i}^{j}p(X_{k},S_{\ell})\).
The notation \(r(X_{k},S_{\ell})\) denotes the conditional probability
\[r(X_{k},S_{\ell})=\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,k-1]})}. \tag{9}\]
Its minimal and maximal values are denoted as follows:
\[r_{\mbox{\tiny min}}(X_{k},S_{\ell})=\min_{X_{k}\in S_{k}}r(X_{k},S_{\ell})\mbox{ and }r_{\mbox{\tiny max}}(X_{k},S_{\ell})=\max_{X_{k}\in S_{k}}r(X_{k},S_{\ell}).\]
Table 1 lists main symbols used in this paper.
### Shortcuts
The combination of a fitness function and an EA constitutes a fitness landscape. A fitness landscape is associated with an EA because one function may be easy for one EA but difficult for another (He et al., 2015). Fitness landscapes can be divided into two categories: with shortcuts and without shortcuts. In most fitness landscapes, an EA can take different paths from a lower fitness level to a higher fitness level, some of which are shorter than others. A shortcut implies that an intermediate fitness level is skipped. In this paper, we formally define a shortcut as follows.
**Definition 5**.: _Given an elitist EA for maximizing a function \(f(x)\) and a fitness level partition \((S_{0},\cdots,S_{K})\), there exists a shortcut on the fitness landscape if for some \(1\leq\ell,k\leq K\) and \(X_{k}\in S_{k}\), the conditional probability_
\[\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}=o(1). \tag{10}\]
According to (10), the conditional probability of the EA starting from \(X_{k}\) to visit \(S_{\ell}\) is \(o(1)\), so the conditional probability of the EA starting from \(X_{k}\) to visit \(S_{[0,\ell-1]}\) is \(1-o(1)\). Thus, the EA skips the level \(S_{\ell}\) with a large conditional probability \(1-o(1)\).
Let us demonstrate two examples: one fitness landscape without a shortcut and the other with shortcuts. The first example is the (1+1) EA that maximizes the OneMax function:
\[\mathrm{OM}(x)=|x|,\quad x=(x_{1},\cdots,x_{n})\in\{0,1\}^{n},\]
where \(|x|=x_{1}+\cdots+x_{n}\). The state space is divided into \(n+1\) levels \((S_{0},\cdots,S_{n})\), where \(S_{k}=\{x\in\{0,1\}^{n};\mathrm{OM}(x)=n-k\}\). Figure 1 shows that no shortcut exists.
The second example is the (1+1) EA for maximizing the TwoMax1 function:
\[\mathrm{TM1}(x)=\left\{\begin{array}{ll}n&\text{if }|x|=0\text{ or }|x|=n,\\ |x|&\text{if }|x|\geq\frac{n}{2},\\ \frac{n}{2}-|x|&\text{else},\end{array}\right.\quad x=(x_{1},\cdots,x_{n})\in \{0,1\}^{n},\]
where \(n\) is a large even integer. There are two maxima, at \(|x|=0\) and \(|x|=n\). The search space is split into \(n\) fitness levels \((S_{0},\cdots,S_{n-1})\) from high to low: \(S_{k}=\{x\in\{0,1\}^{n}:\mathrm{TM1}(x)=n-k\}\). One shortcut is from \(x_{n/2+1}\in S_{n/2+1}\) to \(S_{0}\), skipping fitness levels \(S_{1},\cdots,S_{n/2}\). Another shortcut is from \(x_{n-1}\in S_{n-1}\) to \(S_{n/2}\), skipping \(S_{n/2+1}\). Figure 1 displays the instance \(n=10\) with two shortcuts, \(x_{6}\in S_{6}\to S_{0}\) and \(x_{9}\in S_{9}\to S_{5}\). We omit the rigorous proof of these shortcuts. Similar to OneMax, it is expected that the hitting time of the (1+1) EA on TwoMax1 is \(\Omega(n\ln n)\).
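As a quick sanity check of this partition, the following sketch implements TwoMax1 and the map from a bit string to its fitness-level index; it merely restates the definitions above.

```
def two_max1(x):
    """TwoMax1 fitness of a bit string x (n even)."""
    n, ones = len(x), sum(x)
    if ones == 0 or ones == n:
        return n
    if ones >= n // 2:
        return ones
    return n // 2 - ones

def level_index(x):
    """Index k of the fitness level S_k = {x : TM1(x) = n - k}."""
    return len(x) - two_max1(x)

# The two shortcuts discussed above, for n = 10:
x_a = [1] + [0] * 9        # one 1-bit: level n/2 + 1 = 6; flipping that bit jumps to S_0
x_b = [1] * 4 + [0] * 6    # n/2 - 1 one-bits: level n - 1 = 9; one extra 1-bit jumps to S_5
print(level_index(x_a), level_index(x_b))   # 6 9
```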
\begin{table}
\begin{tabular}{|l|l|} \hline \(S_{k}\) & a fitness level \\ \hline \(S_{[i,j]}\) & the union of fitness levels \(S_{i}\cup S_{i+1}\cdots\cup S_{j-1}\cup S_{j}\) where \(i<j\) \\ \hline \(X_{k}\) & a state in \(S_{k}\) \\ \hline \(m(X_{k})\) & the mean hitting time when the EA starts from \(X_{k}\) \\ \hline \(p(X_{k},S_{\ell})\) & the transition probability from \(X_{k}\) to \(S_{\ell}\) \\ \hline \(p(X_{k},S_{[i,j]})\) & the transition probability from \(X_{k}\) to \(S_{[i,j]}\) \\ \hline \(r(X_{k},S_{\ell})\) & the conditional probability \(\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,k-1]})}\) \\ \hline \(c_{k,\ell},c_{\ell},c\) & coefficients in linear bounds \\ \hline \end{tabular}
\end{table}
Table 1: Notation used in the paper.
## 3 Review and discussion of existing fitness level methods
This section reviews existing fitness level methods and discusses their shortcomings.
### Existing fitness level methods
In a fitness level method, the hitting time of elitist EAs is estimated using transition probabilities between fitness levels. Given a fitness level partition \((S_{0},\cdots S_{K})\), previous results are summarized as follows.
Simple Type-\(0\) lower bound and Type-\(1\) upper time bounds were given by Wegener (2003) as shown in Propositions 1 to 2.
**Proposition 1**.: _(_Wegener_,_ 2003_, Lemma 1)_ _For all \(k\geq 1\) and \(X_{k}\in S_{k}\), the hitting time \(m(X_{k})\geq\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}\)._
**Proposition 2**.: _(_Wegener_,_ 2003_, Lemma 2)_ _For all \(k\geq 1\) and \(X_{k}\in S_{k}\), the hitting time \(m(X_{k})\leq\sum_{\ell=1}^{k}\frac{1}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}\)._
The above bounds were improved to Type-\(c\) bounds by Sudholt (2012) using a constant coefficient \(c\) (called viscosity) as shown in Propositions 3 to 4.
**Proposition 3**.: _(_Sudholt_,_ 2012_, Theorem 3)_ _For any \(0\leq\ell<k\leq K\), let \(p(X_{k},S_{\ell})\leq p_{\max}(X_{k},S_{[0,k-1]})\,r_{k,\ell}\) and \(\sum_{\ell=0}^{k-1}r_{k,\ell}=1\). Assume that there is some \(0\leq c\leq 1\) such that for any \(1\leq\ell<k\leq K\), it holds \(r_{k,\ell}\geq c\sum_{j=0}^{\ell}r_{k,j}\). Then the mean hitting time_
\[m(X^{[0]})\geq\sum_{k=1}^{K}\Pr(X^{[0]}\in S_{k})\left(\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c}{p_{\max}(X_{\ell},S_{[0,\ell-1]})} \right).\]
An issue with Proposition 3 is that the above Type-\(c\) lower bound is loose on fitness landscapes with shortcuts. This is shown in Case Study 1.
**Proposition 4**.: _(_Sudholt_,_ 2012_, Theorem 4)_ _For any \(0\leq\ell<k\leq K\), let \(p(X_{k},S_{\ell})\geq p_{\min}(X_{k},S_{[0,k-1]})\,r_{k,\ell}\) and \(\sum_{\ell=0}^{k-1}r_{k,\ell}=1\). Assume that there is some \(0\leq c\leq 1\) such that for all \(1\leq\ell<k\leq K\), it holds \(r_{k,\ell}\leq c\sum_{j=0}^{\ell}r_{k,j}\). Further, assume that for all \(1\leq\ell\leq K-2\), it holds \((1-c)p_{\min}(X_{\ell+1},S_{[0,\ell]})\leq p_{\min}(X_{\ell},S_{[0,\ell-1]})\). Then the mean hitting time_
\[m(X^{[0]})\leq\sum_{k=1}^{K}\Pr(x^{[0]}\in S_{k})\left(\frac{1}{p_{\min}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}\right).\]
Figure 1: Left: the (1+1) EA on OneMax\((x)\) where \(n=10\). Right: The (1+1) EA on TwoMax\(1(x)\) where \(n=10\). Dotted lines represent transitions. Solid lines are shortcuts: \(x_{6}\in S_{6}\to S_{0}\) and \(x_{9}\in S_{9}\to S_{5}\).
Different from the condition of Proposition 3, Proposition 4 adds an extra condition \((1-c)p_{\min}(X_{\ell+1},S_{[0,\ell]})\leq p_{\min}(X_{\ell},S_{[0,\ell-1]})\).
The time bounds were further improved to Type-\(c_{\ell}\) bounds by Doerr and Kotzing (2022) using a coefficient \(c_{\ell}\) (called visit probability) as shown in Propositions 5 to 6.
**Proposition 5**.: _(Doerr and Kotzing, 2022, Theorem 8) For all \(\ell=1,\cdots,K\), let \(c_{\ell}\) be a lower bound on the probability of there being a \(t\) such that \(X^{[t]}\in S_{\ell}\). Then the mean hitting time \(m(X^{[0]})\geq\sum_{\ell=1}^{K}\frac{c_{\ell}}{p_{\max}(X_{\ell},S_{[0,\ell-1 ]})}\)._
Proposition 5 does not provide a formula to calculate the visit probability \(c_{\ell}\) from transition probability between fitness levels. Therefore, a method for estimating \(c_{\ell}\) is given in the following lemma.
**Lemma 3**.: _(Doerr and Kotzing, 2022, Lemma 10) For \(1\leq\ell\leq K\), suppose there is \(c_{\ell}\) such that, for all \(X\in S_{[\ell+1,K]}\) with \(p(X,S_{[0,\ell]})>0\), it holds_
\[c_{\ell} \leq\Pr(X^{[t+1]}\in S_{\ell}\mid X^{[t+1]}\in S_{[0,\ell]},X^{[0 ]}=X)\text{ and } \tag{11}\] \[c_{\ell} \leq\Pr(X^{[0]}\in S_{\ell}\mid X^{[0]}\in S_{[0,\ell]}). \tag{12}\]
_Then \(c_{\ell}\) is a lower bound for visiting \(S_{\ell}\) as required by Proposition 5._
An issue with Lemma 3 is that the lower bound with the coefficient \(c_{\ell}\) (11) is loose on some fitness landscapes with shortcuts. An example is given in Case Study 2. In addition, the second condition (12) is too strong. Consider the common initialization that an EA starts at one level \(S_{k}\). Under this initialization, for any \(\ell\neq k\), \(\Pr(X^{[0]}\in S_{\ell}\mid X^{[0]}\in S_{[0,\ell]})=0\). According to (12), the coefficient \(c_{\ell}=0\). Then the Type-\(c_{\ell}\) lower bound by Lemma 3 degenerates to the Type-\(0\) bound.
Proposition 6 gives a Type-\(c_{\ell}\) upper bound but does not provide a formula to calculate the visit probability \(c_{\ell}\) from transition probabilities between fitness levels.
**Proposition 6**.: _(Doerr and Kotzing, 2022, Theorem 9) For all \(\ell=1,\cdots,K\), let \(c_{\ell}\) be an upper bound on the probability of there being a \(t\) such that \(X^{[t]}\in S_{\ell}\). Then the mean hitting time \(m(X^{[0]})\leq\sum_{\ell=1}^{K}\frac{c_{\ell}}{p_{\min}(X_{\ell},S_{[0,\ell-1 ]})}\)._
### Case study 1: A loose Type-\(c\) lower bound of the (1+1) EA on TwoMax1
This case study shows that the Type-\(c\) lower bound by Proposition 3 is loose on fitness landscapes with shortcuts. Consider the (1+1) EA for maximizing TwoMax1 function. In Proposition 3, we let \(r_{k,\ell}=r(x_{k},S_{\ell})\). Assume that the EA starts from \(S_{n-1}\). We aim to prove the lower bound by Proposition 3 is only \(O(1)\), that is
\[d_{n-1}=\frac{1}{p(x_{n-1},S_{[0,n-2]})}+\sum_{\ell=1}^{n-2}\frac{c}{p(x_{\ell},S_{[0,\ell-1]})}=O(1).\]
The coefficient \(c\) is calculated using the shortcut \(x_{n/2+1}\to S_{0}\) as follows. Since \(x_{n/2+1}\in S_{n/2+1}\) has \(n-1\) zero-valued bits and \(1\) one-valued bit, the transition from \(x_{n/2+1}\) to \(S_{0}\) happens if and only if the one-valued bit is flipped and other bits are unchanged.
\[p(x_{n/2+1},S_{0})=\frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1}. \tag{13}\]
Since a state in \(S_{1}\) has 1 zero-valued bit and \(n-1\) one-valued bits, the transition from \(x_{n/2+1}\) to \(S_{1}\) happens if and only if \(n-2\) zero-valued bits are flipped and other bits are unchanged.
\[p(x_{n/2+1},S_{1})=\left(\frac{1}{n}\right)^{n-2}\left(1-\frac{1}{n}\right)^{2}. \tag{14}\]
From (13) and (14), according to Proposition 3, we have
\[c\leq\frac{r(x_{n/2+1},S_{1})}{r(x_{n/2+1},S_{0})+r(x_{n/2+1},S_{1})}=\frac{p(x _{n/2+1},S_{1})}{p(x_{n/2+1},S_{0})+p(x_{n/2+1},S_{1})}=n^{-n+3}(1-o(1)).\]
Thus, according to Proposition 3, we get a lower bound as
\[d_{n-1}\leq\frac{1}{p(x_{n-1},S_{[0,n-2]})}+n^{-n+3}(1-o(1))\sum_{\ell=1}^{n-2 }\frac{1}{p(x_{\ell},S_{[0,\ell-1]})}. \tag{15}\]
The transition probability \(p(x_{\ell},S_{[0,\ell-1]})\) (where \(\ell=1,\cdots,n/2\)) is calculated as follows. Since a state \(x_{\ell}\in S_{\ell}\) has \(\ell\) zero-valued bits. The transition from \(x_{\ell}\) to \(S_{\ell-1}\subset S_{[0,\ell-1]}\) happens if 1 zero-valued bit is flipped and other bits are unchanged. Thus
\[p(x_{\ell},S_{[0,\ell-1]})\geq\binom{\ell}{1}\frac{1}{n}\left(1-\frac{1}{n} \right)^{n-1}\geq\frac{\ell}{n}e^{-1}. \tag{16}\]
The transition probability \(p(x_{\ell},S_{[0,\ell-1]})\) (where \(\ell=n/2+1,\cdots,n-1\)) is calculated as follows. Since a state \(x_{\ell}\in S_{\ell}\) has \(\ell-n/2\) one-valued bits, the transition from \(x_{\ell}\) to \(S_{[0,\ell-1]}\) happens if 1 one-valued bit is flipped and other bits are unchanged. Thus
\[p(x_{\ell},S_{[0,\ell-1]})\geq\binom{\ell-n/2}{1}\frac{1}{n}\left(1-\frac{1}{ n}\right)^{n-1}\geq\frac{\ell-n/2}{n}e^{-1}. \tag{17}\]
Using the lower bounds of \(p(x_{\ell},S_{[0,\ell-1]})\) in (16) and (17), we have
\[d_{n-1}\leq 2e+n^{-n+3}(1-o(1))\left(\sum_{\ell=1}^{n/2}\frac{en}{\ell}+ \sum_{\ell=n/2+1}^{n-2}\frac{en}{\ell-n/2}\right)=2e+O(n^{-n+4}\ln n).\]
The lower bound is \(O(1)\), much looser than the expected lower bound \(\Omega(n\ln n)\).
### Case study 2: A loose Type-\(c_{\ell}\) lower bound of the (1+1) EA on TwoMax1
This case study shows that the Type-\(c_{\ell}\) lower bound by Lemma 3 and Proposition 5 is loose on fitness landscapes with shortcuts. Still consider the (1+1) EA for maximizing TwoMax1 function. We prove that the lower bound by Lemma 3 and Proposition 5 is only \(O(1)\), that is
\[d=\sum_{\ell=1}^{K}\frac{c_{\ell}}{p_{\max}(x_{\ell},S_{[0,\ell-1]})}=O(1).\]
First, coefficients are calculated by Condition (11) but without Condition (12). For \(\ell=1,\cdots,n/2\), the coefficient \(c_{\ell}\) is calculated using the shortcut \(x_{n/2+1}\to S_{0}\) as follows. According to Condition (11),
\[c_{\ell}\leq\frac{p(x_{n/2+1},S_{\ell})}{p(x_{n/2+1},S_{[0,\ell]})},\quad\ell=1,\cdots,n/2.\]
Since \(x_{n/2+1}\) has \(1\) one-valued bit and a state in \(S_{\ell}\) has \(n-\ell\) one-valued bits, the transition from \(x_{n/2+1}\) to \(S_{\ell}\) happens only if \(n-1-\ell\) zero-valued bits are flipped. Thus,
\[p(x_{n/2+1},S_{\ell})\leq\binom{n-1}{n-1-\ell}\left(\frac{1}{n}\right)^{n-1- \ell}. \tag{18}\]
The transition from \(x_{n/2+1}\) to \(S_{0}\subset S_{[0,\ell]}\) happens if the one-valued bit is flipped and other bits are unchanged. Thus,
\[p(x_{n/2+1},S_{[0,\ell]})\geq\frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1}\geq \frac{1}{en}. \tag{19}\]
Combining (18) and (19), we have
\[c_{\ell}\leq e\binom{n-1}{n-1-\ell}\left(\frac{1}{n}\right)^{n-\ell},\quad \ell=1,\cdots,n/2. \tag{20}\]
For \(\ell=n/2+1,\cdots,n-2\), the coefficient \(c_{\ell}\) is calculated by the shortcut \(x_{n-1}\to S_{n/2}\) as follows. Consider \(x_{n-1}\in S_{n-1}\), then according to Condition(11),
\[c_{\ell}\leq\frac{p(x_{n-1},S_{\ell})}{p(x_{n-1},S_{[0,\ell]})},\quad\ell=n/2 +1,\cdots,n-2.\]
Since \(x_{n-1}\) has \(n/2-1\) one-valued bits and a state in \(S_{\ell}\) has \(\ell-n/2\) one-valued bits, the transition from \(x_{n-1}\) to \(S_{\ell}\) happens only if \(n-1-\ell\) zero-valued bits are flipped. Thus,
\[p(x_{n-1},S_{\ell})\leq\binom{n/2-1}{n-1-\ell}\left(\frac{1}{n}\right)^{n-1- \ell}. \tag{21}\]
Since \(x_{n/2}\) has \(n/2\) one-valued bits, the transition from \(x_{n-1}\) to \(S_{n/2}\) happens if \(1\) zero-valued bit is flipped and other \(n/2-1\) one-valued bits are unchanged. Thus,
\[p(x_{n-1},S_{[0,\ell]})\geq p(x_{n-1},S_{n/2})\geq\binom{n/2+1}{1}\frac{1}{n} \left(1-\frac{1}{n}\right)^{n/2-1}\geq\frac{1}{2e}. \tag{22}\]
Combining (21) and (22), we have
\[c_{\ell}\leq 2e\binom{n/2-1}{n-1-\ell}\left(\frac{1}{n}\right)^{n-1-\ell}, \quad\ell=n/2+1,\cdots,n-2. \tag{23}\]
Next we consider the coefficient \(c_{\ell}\) (where \(\ell=1,\cdots,n-2\)) satisfying Condition (12) besides (11). We assign the probability distribution of the initial state \(x^{[0]}\) as follows so that Condition (12) is met.
\[\Pr(x^{[0]}\in S_{\ell})=\left\{\begin{array}{ll}0&\ell=0,\\ c_{\ell}&0<\ell<n-1,\\ 1-\sum_{j=1}^{n-2}c_{j}&\ell=n-1.\end{array}\right.\]
Using the lower bounds of \(p(x_{\ell},S_{[0,\ell-1]})\) in (16) and (17), according to Proposition 5, we get a lower time bound as
\[d=\sum_{\ell=1}^{K}\frac{c_{\ell}}{p(x_{\ell},S_{[0,\ell-1]})}\leq 2e+\sum_{\ell=1}^{n/2}c_{\ell}\frac{en}{\ell}+\sum_{\ell=n/2+1}^{n-2}c_{\ell}\frac{n}{(\ell-n/2)}.\]
Finally, using coefficients \(c_{\ell}\) in (20) and (23), we have
\[d\leq 2e+\sum_{\ell=1}^{n/2}e{n-1\choose n-1-\ell}\left(\frac{1}{n}\right) ^{n-\ell}\frac{en}{\ell}+\sum_{\ell=n/2+1}^{n-2}2e{n/2-1\choose n-1-\ell} \left(\frac{1}{n}\right)^{n-1-\ell}\frac{n}{(\ell-n/2)}\] \[\leq 2e+2e^{2}\sum_{\ell=1}^{n/2}\frac{(n-1)\cdots(l+1)}{(n-1-\ell) \cdots 1}\left(\frac{1}{n}\right)^{n-1-\ell}\frac{n}{\ell}\] \[+2e\sum_{\ell=n/2+1}^{n-2}\frac{(n/2-1)\cdots(\ell-n/2+1)}{(n-1- \ell)\cdots 1}\left(\frac{1}{n}\right)^{n-1-\ell}\frac{n}{(\ell-n/2)}\] \[\leq 2e+2e^{2}\sum_{\ell=1}^{n/2}\frac{2}{(n-1-\ell)!}+2e\sum_{\ell=n /2+1}^{n-2}\frac{2}{(n-1-\ell)!}\leq 2e+\frac{4e^{2}}{(n/2-2)!}+4e(e-1).\]
The lower bound is \(O(1)\), much looser than the expected lower bound \(\Omega(n\ln n)\).
## 4 Metric bounds from fitness levels
This section presents drift analysis with fitness levels, constructs metric bounds and proves that the bounds are the tightest.
### Drift analysis with fitness levels
Drift analysis with fitness levels is a combination of drift analysis and fitness levels. Its workflow is outlined below. For the sake of illustration, only the lower time bound is presented.
First, the search space \(S\) is split into multiple fitness levels \((S_{0},\cdots,S_{K})\) according to the fitness value from high to low, where \(S_{0}=S_{\mathrm{opt}}\).
Secondly, states at the same level are assigned to the same distance from the optimal set, that is, for any \(X\in S_{0}\), \(d(X)=0\) and for any \(k\geq 1\) and \(X\in S_{k}\), \(d(X)=d_{k}\). The distance \(d_{k}\) is constructed using transition probabilities between fitness levels.
Next we need to prove that for any \(k\) and \(X_{k}\in S_{k}\), \(d_{k}\) is a lower bound on the mean hitting time \(m(X_{k})\). Since the EA is elitist, it will never move from \(X_{k}\in S_{k}\) to a fitness level lower than \(S_{k}\). Therefore, the drift satisfies
\[\Delta d(X_{k})=d_{k}-\sum_{\ell=0}^{k}d_{\ell}p(X_{k},S_{\ell})=d_{k}p(X_{k},S_{[0,k-1]})-\sum_{\ell=1}^{k-1}d_{\ell}p(X_{k},S_{\ell}). \tag{24}\]
According to Lemma 2, if for any \(k\geq 1\) and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\leq 1\), then the hitting time \(m(X_{k})\geq d(X_{k})\).
Finally, the tightest lower bound problem is regarded as a constrained multi-objective optimization problem subject to the constraint that \(d_{k}\) is constructed using transition probabilities between fitness levels.
The above drift analysis with fitness levels treats the fitness level method as a special kind of drift analysis. It is completely different from existing fitness level methods (Wegener, 2003; Sudholt, 2012; Doerr and Kotzing, 2022).
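As a concrete illustration of the workflow above, the following sketch (toy numbers only, and assuming for simplicity that all states within a level share the same transition probabilities) computes the drift (24) for a candidate distance assignment and checks the lower-bound condition \(\Delta d(X_{k})\leq 1\) of Lemma 2.

```python
# Minimal sketch (toy numbers, not from any particular EA) of the drift
# condition check. Assumes all states in a level share the same transition
# probabilities: p[k][l] is the probability of moving from level S_k to
# level S_l (elitist, so only l <= k has positive probability).

def drift(d, p, k):
    """Delta d(X_k) in (24): d_k * p(X_k, S_[0,k-1]) - sum_{l=1}^{k-1} d_l * p(X_k, S_l)."""
    p_improve = sum(p[k][l] for l in range(k))              # p(X_k, S_[0,k-1])
    return d[k] * p_improve - sum(d[l] * p[k][l] for l in range(1, k))

def is_lower_bound(d, p):
    """If Delta d(X_k) <= 1 for all k >= 1, then d_k <= m(X_k) by Lemma 2."""
    return all(drift(d, p, k) <= 1 for k in range(1, len(d)))

# Toy chain with optimum S_0 and three non-optimal levels.
p = [
    [1.00, 0.00, 0.00, 0.00],   # S_0 is absorbing
    [0.50, 0.50, 0.00, 0.00],
    [0.10, 0.30, 0.60, 0.00],
    [0.05, 0.10, 0.25, 0.60],
]
d = [0.0, 2.0, 2.5, 2.5]        # candidate distances, d_0 = 0
print(is_lower_bound(d, p))     # True
```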
### Metric bounds
Using drift analysis with fitness levels, new bounds are constructed using transition probabilities between fitness levels. Thanks to the sufficient and necessary conditions, Theorem 1 covers all lower metric bounds \(d_{k}\) from fitness levels such that for any \(k\geq 1\)
and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\leq 1\). The condition \(\Delta d(X_{k})\leq 1\) ensures that \(d_{k}\) is a lower bound. This lower bound is called a Type-\(r_{k,\ell}\) lower bound. The best lower bound \(d_{k}^{*}\) is reached when Inequality (25) becomes an equality.
**Theorem 1**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p(X_{k},S_{[0,k-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), consider the family of distances \((d_{1},\cdots,d_{k})\) such that for any \(X_{k}\in S_{k}\), \(d(X_{k})=d_{k}\). Then for any \(k>0\) and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\leq 1\) if and only if_
\[d_{k}\leq\min_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{[0,k-1]})}+\sum_{\ell =1}^{k-1}r(X_{k},S_{\ell})d_{\ell}\right\}. \tag{25}\]
Proof.: First we prove the sufficient condition. Suppose that (25) is true. Since the EA is elitist, for any \(k\geq 1\) and \(X_{k}\in S_{k}\), from (24), we have
\[\Delta d(X_{k})=p(X_{k},S_{[0,k-1]})d_{k}-\sum_{\ell=1}^{k-1}p(X_{k},S_{\ell}) d_{\ell}.\]
We replace \(d_{k}\) (but not \(d_{\ell}\)) with (25) and get
\[\Delta d(X_{k})\leq p(X_{k},S_{[0,k-1]})\min_{Y_{k}\in S_{k}}\left\{\frac{1}{p (Y_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}r(Y_{k},S_{\ell})d_{\ell}\right\}- \sum_{\ell=1}^{k-1}p(X_{k},S_{\ell})d_{\ell}. \tag{26}\]
Since
\[\frac{p(X_{k},S_{[0,k-1]})}{\max_{Y_{k}}p(Y_{k},S_{[0,k-1]})}\leq 1, \text{ and }\] \[\min_{Y_{k}}\sum_{\ell=1}^{k-1}r(Y_{k},S_{\ell})\leq\sum_{\ell=1 }^{k-1}\min_{Y_{k}}\frac{p(Y_{k},S_{\ell})}{p(Y_{k},S_{[0,k-1]})}\leq\sum_{ \ell=1}^{k-1}\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,k-1]})},\]
we insert the above two inequalities into (26), and get \(\Delta d(X_{k})\leq 1\).
Secondly, we prove the necessary condition. Suppose that \(\Delta d(X_{k})\leq 1\). Since the EA is elitist, for any \(k\geq 1\) and \(X_{k}\in S_{k}\), using (24), we have
\[\Delta d(X_{k})=d_{k}p(X_{k},S_{[0,k-1]})-\sum_{\ell=1}^{k-1}d_{\ell}p(X_{k},S _{\ell})\leq 1.\]
Then we have
\[d_{k}\leq\min_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{[0,k-1]})}+\sum_{ \ell=1}^{k-1}r(X_{k},S_{\ell})d_{\ell}\right\}. \tag{27}\]
Thus we prove that (25) holds.
Thanks to the sufficient and necessary conditions, Theorem 2 covers all upper metric bounds \(d_{k}\) from fitness levels such that for all \(k\geq 1\) and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\geq 1\). The condition \(\Delta d(X_{k})\geq 1\) ensures that \(d_{k}\) is an upper bound. Its proof is similar to Theorem 1. This upper bound is named a Type-\(r_{k,\ell}\) upper bound. The best upper bound \(d_{k}^{*}\) is reached when Inequality (28) becomes an equality.
**Theorem 2**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p(X_{k},S_{[0,k-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), consider the family of distances \((d_{1},\cdots,d_{k})\) such that for any \(X_{k}\in S_{k}\), \(d(X_{k})=d_{k}\). Then for any \(k\geq 1\) and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\geq 1\) if and only if_
\[d_{k}\geq\max_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}r(X_{k},S_{\ell})d_{\ell}\right\}. \tag{28}\]
How to calculate \(d_{k}\)? In this paper, we will not calculate \(d_{k}\) recursively via the metric bound (25) or (28). Instead, they are converted to the linear bound (1) or (2). For example, the upper bound (28) is converted to
\[d_{k}=\frac{1}{p_{\min}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}r_{\max}(X_{k}, S_{\ell})d_{\ell},\qquad k=1,\cdots,K.\]
Then by induction, we represent \(d_{k}\) in a linear form as follows:
\[d_{k}=\frac{1}{p_{\min}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{k,\ell }}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}.\]
The problem of calculating a metric bound becomes the problem of calculating a linear bound or coefficients \(c_{k,\ell}\). This will be discussed in detail in the next section.
### The tightest metric bounds
Consider the tightest lower bound first. The problem can be formulated as follows: given a family of lower bounds constructed using transition probabilities between fitness levels, which bound is the tightest? A family of lower bounds can be represented by a family of distances in drift analysis. Given a fitness level partition \((S_{0},\cdots,S_{K})\), consider the family of distances \((d_{0},\cdots,d_{K})\) such that \(d_{0}=0\) and for any \(k\geq 1\), the drift \(\Delta d(X_{k})\leq 1\). The condition \(\Delta d(X_{k})\leq 1\) ensures that \(d_{k}\) is a lower bound on the mean hitting time \(m(X_{k})\). The distance \(d_{k}\) must be constructed using the two transition probabilities between fitness levels (8) and (9).
The tightest lower bound problem is a constrained multi-objective optimization problem:
\[\max\{d_{k};\Delta d(X_{k})\leq 1\},\quad k=1,\cdots,K, \tag{29}\]
subject to the constraint that \((d_{1},\cdots,d_{K})\) are constructed using the two transition probabilities between fitness levels (8) and (9). Without this constraint, the optimal solution is \(d(X)=m(X)\), because according to Lemmas 1 and 2, if \(\Delta d(X)=1\), then \(d(X)=m(X)\). According to Theorem 1, the best lower bound \(d_{k}^{*}\) in (25) is the tightest.
**Theorem 3**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p(X_{k},S_{[0,k-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), consider the family of distances \((d_{0},d_{1},\cdots,d_{k})\) such that \(d_{0}=0\) and for all \(k\geq 1\) and \(X_{k}\in S_{k}\), \(d(X_{k})=d_{k}\) and the drift \(\Delta d(X_{k})\leq 1\). The tightest lower bound within this distance family is_
\[d_{k}^{*}=\min_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{[0,k-1]})}+\sum_{\ell =1}^{k-1}r(X_{k},S_{\ell})d_{\ell}^{*}\right\}. \tag{30}\]
Similarly, the tightest upper bound problem is another constrained multi-objective optimization problem:
\[\min\{d_{k};\Delta d(X_{k})\geq 1\},\quad k=1,\cdots,K, \tag{31}\]
subject to the constraint that \((d_{1},\cdots,d_{K})\) are constructed from the two transition probabilities between fitness levels (8) and (9). According to Theorem 2, the best upper bound \(d_{k}^{*}\) in (28) is the tightest.
**Theorem 4**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p(X_{k},S_{[0,k-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), consider the family of distances \((d_{0},\cdots,d_{k})\) such that \(d_{0}=0\) and for all \(k\geq 1\) and \(X_{k}\in S_{k}\), \(d(X_{k})=d_{k}\) and the drift \(\Delta d(X_{k})\geq 1\). The tightest upper bound within the distance family is_
\[d_{k}^{*}=\max_{X_{k}\in S_{k}}\left\{\frac{1}{p(X_{k},S_{[0,k-1]})}+\sum_{ \ell=1}^{k-1}r(X_{k},S_{\ell})d_{\ell}^{*}\right\}. \tag{32}\]
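Both recursions can be evaluated level by level once the transition probabilities between levels are known. The sketch below is an illustration with toy numbers; it assumes that all states within a level share the same transition probabilities, so the min in (30) and the max in (32) coincide and the common value is the exact mean hitting time from that level.

```python
# Minimal sketch of the recursions (30)/(32). Toy numbers; assumes all
# states in a level share the same transition probabilities p[k][l], so
# min and max over X_k coincide and the recursion returns the exact
# mean hitting time from level S_k.

def tightest_bound(p):
    K = len(p) - 1
    d = [0.0] * (K + 1)                                   # d_0 = 0
    for k in range(1, K + 1):
        p_improve = sum(p[k][l] for l in range(k))        # p(X_k, S_[0,k-1])
        r = [p[k][l] / p_improve for l in range(k)]       # r(X_k, S_l)
        d[k] = 1.0 / p_improve + sum(r[l] * d[l] for l in range(1, k))
    return d

p = [
    [1.0, 0.0, 0.0],
    [0.4, 0.6, 0.0],
    [0.1, 0.2, 0.7],
]
print(tightest_bound(p))   # [0.0, 2.5, 5.0], matching the exact hitting times
```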
## 5 Linear bounds from fitness levels
This section constructs general linear bounds and derives different explicit expressions of coefficients.
### Linear bounds
Based on the metric lower bound (25), the theorem below provides coefficients in the general linear lower bound (1).
**Theorem 5**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p_{\max}(X_{\ell},S_{[0,\ell-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), construct \(d_{k}\) as_
\[d_{k}=\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{k,\ell }}{p_{\max}(X_{\ell},S_{[0,\ell-1]})}, \tag{33}\]
_where coefficients \(c_{k,\ell}\in[0,1]\) satisfy_
\[c_{k,\ell}\leq\min_{X_{k}\in S_{k}}\left\{r(X_{k},S_{\ell})+\sum_{j=\ell+1}^{ k-1}r(X_{k},S_{j})c_{j,\ell}\right\}. \tag{34}\]
_Then for any \(k>0\) and \(X_{k}\in S_{k}\), the mean hitting time \(m(X_{k})\geq d_{k}\)._
Proof: According to Lemma 2, it is sufficient to prove that for any \(k\geq 1\) and \(X_{k}\in S_{k}\), the drift \(\Delta d(X_{k})\leq 1\). Since the EA is elitist, from (24), we know
\[\Delta d(X_{k})= p(X_{k},S_{[0,k-1]})d_{k}-\sum_{\ell=1}^{k-1}p(X_{k},S_{\ell})d _{\ell}=p(X_{k},S_{[0,k-1]})\left(d_{k}-\sum_{\ell=1}^{k-1}r(X_{k},S_{\ell})d _{\ell}\right).\]
We replace \(d_{k}\) and \(d_{\ell}\) with (33) and get
\[\begin{split}\Delta d(X_{k})=& p(X_{k},S_{[0,k-1]}) \left[\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{k, \ell}}{p_{\max}(X_{\ell},S_{[0,\ell-1]})}\right.\\ &\left.-\sum_{\ell=1}^{k-1}r(X_{k},S_{\ell})\left(\frac{1}{p_{ \max}(X_{\ell},S_{[0,\ell-1]})}+\sum_{j=1}^{\ell-1}\frac{c_{\ell,j}}{p_{\max}( X_{j},S_{[0,j-1]})}\right)\right].\end{split} \tag{35}\]
In the double summation \(\sum_{\ell=1}^{k-1}\sum_{j=1}^{\ell-1}\), the first term \(\sum_{\ell=1}\sum_{j=1}^{\ell-1}\) is empty but is kept for the sake of notation. We expand this double summation and then merge the same term \(1/p_{\mbox{\tiny max}}(X_{\ell},S_{[0,\ell-1]})\) (where \(\ell=1,\cdots,k\)) as follows.
\[\begin{split}&\sum_{\ell=1}^{k-1}r(X_{k},S_{\ell})\sum_{j=1}^{l-1 }\frac{c_{l,j}}{p_{\mbox{\tiny max}}(X_{j},S_{[0,j-1]})}=\sum_{\ell=2}^{k-1} \sum_{j=1}^{l-1}\frac{r(X_{k},S_{\ell})c_{l,j}}{p_{\mbox{\tiny max}}(X_{j},S_{ [0,j-1]})}\\ =&\frac{r(X_{k},S_{2})c_{2,1}}{p_{\mbox{\tiny max}} (X_{1},S_{0})}+\cdots+\left(\frac{r(X_{k},S_{k-1})c_{k-1,1}}{p_{\mbox{\tiny max }}(X_{1},S_{0})}+\frac{r(X_{k},S_{k-1})c_{k-1,k-2}}{p_{\mbox{\tiny max}}(X_{k -2},S_{[0,k-3]})}\right)\\ =&\frac{\sum_{j=2}^{k-1}r(X_{k},S_{j})c_{j,1}}{p_{ \mbox{\tiny max}}(X_{1},S_{0})}+\cdots+\frac{\sum_{j=k-1}^{k-1}r(X_{k},S_{j})c _{j,k-2}}{p_{\mbox{\tiny max}}(X_{k-2},S_{[0,k-3]})}\\ =&\sum_{\ell=1}^{k-2}\frac{\sum_{j=\ell+1}^{k-1}r(X _{k},S_{j})c_{j,\ell}}{p_{\mbox{\tiny max}}(X_{\ell},S_{[0,\ell-1]})}=\sum_{ \ell=1}^{k-1}\frac{\sum_{j=\ell+1}^{k-1}r(X_{k},S_{j})c_{j,\ell}}{p_{\mbox{ \tiny max}}(X_{\ell},S_{[0,\ell-1]})}.\end{split} \tag{36}\]
In the double summation \(\sum_{\ell=1}^{k-1}\sum_{j=\ell+1}^{k-1}\), the last term \(\sum_{\ell=k-1}\sum_{j=\ell+1}^{k-1}\) is empty but is added for the sake of notation. Inserting (36) into (35), we have
\[\begin{split}\Delta d(X_{k})\leq& p(X_{k},S_{[0,k-1]})\left[\frac{1}{p_{\mbox{\tiny max }}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{k,\ell}}{p_{\mbox{\tiny max }}(X_{\ell},S_{[0,\ell-1]})}\right.\\ &\left.-\sum_{\ell=1}^{k-1}\left(\frac{r(X_{k},S_{\ell})+\sum_{j= \ell+1}^{k-1}r(X_{k},S_{j})c_{j,\ell}}{p_{\mbox{\tiny max}}(X_{\ell},S_{[0, \ell-1]})}\right)\right].\end{split}\]
Using Condition (34), we get \(\Delta d(X_{k})\leq 1\) and complete the proof.
The best linear lower bound \(d_{k}^{*}\) and the best coefficients \(c_{k,\ell}^{*}\) are reached when Inequality (34) becomes an equality. In practical calculation, the transition probability \(p_{\mbox{\tiny max}}(X_{k},S_{[0,k-1]})\) can be replaced by an upper bound on it and \(r(X_{k},S_{\ell})\) by a lower bound on it. Theorem 5 is still true.
Similarly, the theorem below provides coefficients in the general linear bound (2). Its proof is similar to that of Theorem 5.
**Theorem 6**.: _Given an elitist EA for maximizing \(f(x)\), a fitness level partition \((S_{0},\cdots,S_{K})\), probabilities \(p_{\mbox{\tiny min}}(X_{\ell},S_{[0,\ell-1]})\) and \(r(X_{k},S_{\ell})\) (where \(1\leq\ell<k\leq K\)), construct \(d_{k}\) as_
\[d_{k}=\frac{1}{p_{\mbox{\tiny min}}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{k,\ell}}{p_{\mbox{\tiny min}}(X_{\ell},S_{[0,\ell-1]})}, \tag{37}\]
_where coefficients \(c_{k,\ell}\in[0,1]\) satisfy_
\[c_{k,\ell}\geq\max_{X_{k}\in S_{k}}\left\{r(X_{k},S_{\ell})+\sum_{j=\ell+1}^{k- 1}r(X_{k},S_{j})c_{j,\ell}\right\}. \tag{38}\]
_Then for any \(k>0\) and \(X_{k}\in S_{k}\), the mean hitting time \(m(X_{k})\leq d_{k}\)._
The best linear upper bound \(d_{k}^{*}\) and the best coefficients \(c_{k,\ell}^{*}\) are reached when Inequality (38) becomes an equality. In practical calculation, the transition probability \(p_{\mbox{\tiny min}}(X_{k},S_{[0,k-1]})\) can be replaced by a lower bound on it or \(r(X_{k},S_{\ell})\) by an upper bound on it. Theorem 6 is still true.
How to calculate the coefficient \(c_{k,\ell}\) in linear bounds? It can be obtained by solving Inequality (34) or (38) in three different ways:
1. find an explicit expression of \(c_{k,\ell}\) from (34) or (38);
2. recursively calculate \(c_{\ell+1,\ell}\), \(c_{\ell+2,\ell}\), \(\cdots,c_{k,\ell}\) by (34) or (38);
3. combine the above two ways together, that is, for some \(c_{k,\ell}\), use recursive calculation; but for other \(c_{k,\ell}\), use an explicit expression.
Each way is a special fitness level method. Drift analysis with fitness levels is a framework that can be used to develop different fitness level methods.
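As an illustration of the second (recursive) way, the following sketch computes lower-bound coefficients \(c_{k,\ell}\) with equality in (34), level by level; the conditional probabilities `r[k][l]` are toy values and are assumed to be identical for all states within a level.

```python
# Minimal sketch: recursive calculation of lower-bound coefficients
# c_{k,l} with equality in (34), assuming level-homogeneous conditional
# probabilities r[k][l] = r(X_k, S_l). The values below are illustrative.

def lower_coefficients(r, K):
    c = {}
    for l in range(1, K):
        for k in range(l + 1, K + 1):
            c[(k, l)] = r[k][l] + sum(r[k][j] * c[(j, l)] for j in range(l + 1, k))
    return c

r = {2: {1: 0.5}, 3: {1: 0.2, 2: 0.5}}
print(lower_coefficients(r, K=3))
# {(2, 1): 0.5, (3, 1): 0.45, (3, 2): 0.5}
```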
### Explicit expressions for linear bound coefficients
An explicit expression of the coefficient \(c_{k,\ell}\) is more convenient in application. From (34), by induction, it is straightforward to obtain an explicit expression for \(c_{k,\ell}\) as follows.
\[\begin{split} c_{k,\ell}\leq& r_{\text{\tiny min}}(X _{k},S_{\ell})+\sum_{\ell<j_{1}<k}r_{\text{\tiny min}}(X_{k},S_{j_{1}})\,r_{ \text{\tiny min}}(X_{j_{1}},S_{\ell})+\cdots\\ &+\sum_{\ell<j_{k-\ell-1}<\cdots<j_{1}<k}r_{\text{\tiny min}}(X _{k},S_{j_{1}})\,r_{\text{\tiny min}}(X_{j_{1}},S_{j_{2}})\cdots r_{\text{ \tiny min}}(X_{j_{k-\ell-1}},S_{\ell}).\end{split} \tag{39}\]
Inequality (39) provides an explanation of the coefficient \(c_{k,\ell}\). Each product in (39) can be interpreted as a conditional probability of the EA starting from \(X_{k}\) to visit \(S_{\ell}\). The total sum can be regarded as the hitting probability (He and Yao, 2002) of the EA starting from \(X_{k}\) to hit \(S_{\ell}\). Its rigorous analysis is left for future work.
Similarly, from (38), by induction, it is straightforward to obtain an explicit expression for \(c_{k,\ell}\) as follows.
\[\begin{split} c_{k,\ell}\geq& r_{\text{\tiny max}}(X _{k},S_{\ell})+\sum_{\ell<j_{1}<k}r_{\text{\tiny max}}(X_{k},S_{j_{1}})\,r_{ \text{\tiny max}}(X_{j_{1}},S_{\ell})+\cdots\\ &+\sum_{\ell<j_{k-\ell-1}<\cdots<j_{1}<k}r_{\text{\tiny max}}(X _{k},S_{j_{1}})\,r_{\text{\tiny max}}(X_{j_{1}},S_{j_{2}})\cdots r_{\text{ \tiny max}}(X_{j_{k-\ell-1}},S_{\ell}).\end{split} \tag{40}\]
The number of summation terms in (39) and (40) is up to \((k-\ell-1)!\). Therefore, the calculation of (39) and (40) is intractable. But there are many ways to construct an explicit expression for \(c_{k,\ell}\) that can be calculated in polynomial time. For example, for the lower bound, a simple expression from (39) is \(c_{k,\ell}\leq r_{\text{\tiny min}}(X_{k},S_{\ell})\). He et al. (2023)1 proposes a simplified version of (39) as follows.
Footnote 1: He, J., Chong, S. Y., and Yao, X. (2023). Fast estimations of linear time bounds of elitist evolutionary algorithms. [https://doi.org/10.48550/arXiv.2311.10502](https://doi.org/10.48550/arXiv.2311.10502)
\[c_{k,\ell}\leq\prod_{i\in[\ell+1,k]}r_{\text{\tiny min}}(X_{i},S_{[\ell,i-1]}). \tag{41}\]
Another simple way is to assign \(c_{k,\ell}=0,1\), \(c\), or \(c_{\ell}\). Although Type-\(0,1\), \(c\) and \(c_{\ell}\) bounds have been studied in (Wegener, 2003; Sudholt, 2012; Doerr and Kotzing, 2022), our proof is completely different. Furthermore, our Type-\(c\) upper and Type-\(c_{\ell}\) lower bounds require weaker conditions. Hence, they are not exactly the same as those in (Sudholt, 2012; Doerr and Kotzing, 2022).
Let \(c_{k,\ell}=0\), then the linear lower bound (33) becomes the same Type-\(0\) lower bound as Proposition 1.
**Corollary 1**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=0\), then \(m(X_{k})\geq\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}\)._
Let \(c_{k,\ell}=c\), then the linear lower bound (33) becomes a Type-\(c\) lower bound.
**Corollary 2**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=c\) to satisfy the inequality_
\[c\leq\min_{1<k\leq K}\min_{\ell:1\leq\ell<k}\min_{X_{k}:p(X_{k},S_{[0,\ell]})> 0}\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}. \tag{42}\]
_Then \(m(X_{k})\geq\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c}{p _{\max}(X_{\ell},S_{[0,\ell-1]})}\)._
Proof.: Condition (42) is equivalent to that for any \(1\leq\ell<k\leq K\) and \(X_{k}\in S_{k}\) such that \(p(X_{k},S_{[0,\ell]})>0\), it holds
\[c \leq\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}=\frac{r(X_{k},S_{\ell})}{r(X_{k},S_{[0,\ell]})}.\] \[c \,r(X_{k},S_{[0,\ell]})=c\,(1-r(X_{k},S_{[\ell+1,k-1]}))\leq r(X_ {k},S_{\ell}). \tag{43}\] \[c \leq r(X_{k},S_{\ell})+c\,r(X_{k},S_{[\ell+1,k-1]}). \tag{44}\]
For any \(1\leq\ell<k\leq K\) and \(X_{k}\in S_{k}\) such that \(p(X_{k},S_{[0,\ell]})=0\), we have \(r(X_{k},S_{[0,\ell]})=0\) and \(r(X_{k},S_{[\ell+1,k-1]})=1\). Thus the following identity holds:
\[c=r(X_{k},S_{\ell})+c\,r(X_{k},S_{[\ell+1,k-1]})=0+c. \tag{45}\]
Combining (44) and (45), we get Condition (34), that is for all \(X_{k}\in S_{k}\),
\[c\leq r(X_{k},S_{\ell})+c\sum_{j=\ell+1}^{k-1}r(X_{k},S_{j}).\]
Then the corollary is derived from Theorem 5.
Corollary 2 provides an interpretation of the coefficient \(c\) which is a lower bound on the conditional probability (42). Corollary 2 is more convenient than Proposition 3 because the coefficient \(c\) is calculated directly from probabilities \(p(X_{k},S_{\ell})\) and \(p(X_{k},S_{[0,\ell]})\). Since Inequality (43) is equivalent to the inequality \(r_{k,\ell}\geq c\sum_{j=0}^{\ell}r_{k,j}\) in Proposition 3 under different notation, Corollary 2 is equivalent to Proposition 3.
Let \(c_{k,\ell}=c_{\ell}\), then the linear lower bound (33) becomes a Type-\(c_{\ell}\) lower bound. The proof of Corollary 3 is similar to Corollary 2.
**Corollary 3**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=c_{\ell}\) to satisfy the inequality_
\[c_{\ell}\leq\min_{\ell<k\leq K}\min_{X_{k}:p(X_{k},S_{[0,\ell]})> 0}\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}. \tag{46}\]
_Then \(m(X_{k})\geq\frac{1}{p_{\max}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_ {\ell}}{p_{\max}(X_{\ell},S_{[0,\ell-1]})}.\)_
The corollary above can be regarded as a combination of Proposition 5 and Lemma 3. However, unlike Lemma 3, Corollary 3 does not require the condition: for any \(\ell\geq 1\), the probability \(\Pr(X^{[0]}\in S_{\ell}\mid X^{[0]}\in S_{[0,\ell]})\geq c_{\ell}\). Therefore, the combination of Proposition 5 and Lemma 3 is a special case of Corollary 3.
Similarly, let \(c_{k,\ell}=1\); then the linear upper bound (37) becomes the same Type-\(1\) bound as in Proposition 2.
**Corollary 4**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=1\), then \(m(X_{k})\leq\sum_{\ell=1}^{k}\frac{1}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}\)._
Let \(c_{k,\ell}=c\), then the linear upper bound (37) becomes a Type-\(c\) upper bound. The proof of Corollary 5 is similar to that of Corollary 2.
**Corollary 5**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=c\) to satisfy_
\[c\geq\max_{1<k\leq K}\max_{1\leq\ell<k}\max_{X_{k}:p(X_{k},S_{[0,\ell]})>0} \frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}. \tag{47}\]
_Then \(m(X_{k})\leq\frac{1}{p_{\min}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}.\)_
Corollary 5 is more convenient than Proposition 4 because the coefficient \(c\) is calculated directly from transition probabilities \(p(X_{k},S_{\ell})\) and \(p(X_{k},S_{[0,\ell]})\). Furthermore, unlike Proposition 4, Corollary 5 does not require the condition: for all \(1\leq\ell\leq K-2\), it holds \((1-c)p_{\min}(X_{\ell+1},S_{[0,\ell]})\leq p_{\min}(X_{\ell},S_{[0,\ell-1]})\). Therefore, Proposition 4 is a special case of Corollary 5.
Let \(c_{k,\ell}=c_{\ell}\), then the linear upper bound (37) becomes a Type-\(c_{\ell}\) upper bound. The proof of Corollary 6 is similar to Corollary 2.
**Corollary 6**.: _For \(1\leq\ell<k\leq K\), choose \(c_{k,\ell}=c_{\ell}\) to satisfy_
\[c_{\ell}\geq\max_{\ell<k\leq K}\max_{X_{k}:p(X_{k},S_{[0,\ell]})>0}\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}. \tag{48}\]
_Then \(m(X_{k})\leq\frac{1}{p_{\min}(X_{k},S_{[0,k-1]})}+\sum_{\ell=1}^{k-1}\frac{c_{\ell}}{p_{\min}(X_{\ell},S_{[0,\ell-1]})}.\)_
The above bound is different from the Type-\(c_{\ell}\) upper bound by Proposition 6, because the coefficient \(c_{\ell}\) is directly calculated from transition probabilities \(p(X_{k},S_{\ell})\) and \(p(X_{k},S_{[0,\ell]})\).
### Discussion of different linear bounds
Corollaries 1 to 6 have shown that Types-\(0,1,c,c_{\ell}\) bounds are special cases of the Type-\(c_{k,\ell}\) bound. The Type-\(c_{k,\ell}\) bound is the tightest among them.
**Corollary 7**.: _Given an elitist EA and a fitness level partition \((S_{0},\cdots,S_{K})\), the best Type-\(c_{k,\ell}\) lower bound \(\geq\) Type-\(c_{\ell}\geq\) Type-\(c\). The best Type-\(c_{k,\ell}\) upper bound \(\leq\) Type-\(c_{\ell}\leq\) Type-\(c\)._
Proof.: For the lower bound, a solution to Inequality (42) is a solution to Inequality (46) and then to Inequality (34), thus the best coefficients \(c^{*}\leq c_{\ell}^{*}\leq c_{k,\ell}^{*}\). Similarly for the upper bound, the best coefficients \(c^{*}\geq c_{\ell}^{*}\geq c_{k,\ell}^{*}\). Then we get the conclusions.
Corollary 8 shows that the Type-\(c_{k,\ell}\) bound is the exact hitting time of EAs on level-based fitness landscapes. However, we cannot derive the same conclusion for Type-\(c\) and \(c_{\ell}\) bounds.
**Definition 6**.: _Given an elitist EA for maximizing a function \(f(x)\) and a fitness level partition \((S_{0},\cdots,S_{K})\), the associated fitness landscape is called a level-based fitness landscape if for all \(1\leq\ell<k\leq K\) and \(X_{k}\in S_{k}\), \(p_{\min}(X_{k},S_{\ell})=p_{\max}(X_{k},S_{\ell})\)._
Both OneMax and TwoMax1 are level-based fitness landscapes to the (1+1) EA.
**Corollary 8**.: _Given an elitist EA for maximizing a function \(f(x)\) and a fitness level partition \((S_{0},\cdots,S_{K})\), if the associated fitness landscape is level-based, then the tightest Type-\(r_{k,\ell}\) lower bound \(=\) the tightest Type-\(r_{k,\ell}\) upper bound._
Proof.: It is a direct corollary of Theorems 5 and 6.
In addition, Type-\(c\) and Type-\(c_{\ell}\) lower bounds are loose on fitness landscapes with shortcuts because shortcuts result in coefficients \(c\) and \(c_{\ell}\) as small as \(o(1)\). This claim has been verified in Case Studies 1 and 2 and is also proved more generally by the following theorem.
**Theorem 7**.: _If a shortcut exists, that is for some \(1\leq\ell<k\leq K\) and \(X_{k}\in S_{k}\), it holds_
\[\frac{p(X_{k},S_{\ell})}{p(X_{k},S_{[0,\ell]})}=o(1), \tag{49}\]
_then coefficients \(c=o(1)\) in (42) and \(c_{\ell}=o(1)\) in (46)._
Proof: It is directly derived from Condition (49) and Inequalities (42) and (46).
## 6 Applications of the general linear lower bound
### Case study 3: Calculating coefficients in lower linear bounds of the (1+1) EA on OneMax
This case study demonstrates different ways to calculate coefficients in the linear lower bound. Consider the (1+1) EA for maximizing OneMax. Assume that the EA starts at the level \(S_{n}\). According to Theorem 5, a Type-\(c_{k,\ell}\) lower bound is
\[d_{n}=\frac{1}{p_{\max}(x_{n},S_{[0,n-1]})}+\sum_{\ell=1}^{n-1}\frac{c_{n,\ell }}{p_{\max}(x_{\ell},S_{[0,\ell-1]})}.\]
where
\[c_{n,\ell}\leq r(x_{n},S_{\ell})+\sum_{k=\ell+1}^{n-1}r(x_{n},S_{k})c_{k,\ell },\quad\ell=1,\cdots,n-1. \tag{50}\]
We focus on explaining different calculation methods of the coefficient \(c_{n,\ell}\), so we do not discuss the bound \(d_{n}\) itself. To avoid unnecessary computation, we only estimate the coefficient \(c_{n,\ell}\) using the constant \(e^{-1}\), but the constant can be improved further. There are two approaches to calculate \(c_{k,\ell}\) via Inequality (50). One is to look for explicit solutions to Inequality (50). The other is recursive calculation level by level. There are different explicit solutions to Inequality (50). The trivial solution is \(c_{n,\ell}=0\) (where \(1\leq\ell\leq n-1\)). A non-trivial explicit solution is \(c_{k,\ell}=c\). According to Corollary 2, the best \(c\) is
\[c=\min_{1<k\leq n}\min_{1\leq\ell<k}\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0, \ell]})}.\]
The two transition probabilities, \(p(x_{k},S_{\ell})\) and \(p(x_{k},S_{[0,\ell]})\), are calculated as follows. The transition from \(x_{k}\) to \(S_{\ell}\) happens if \(k-\ell\) zero-valued bits are flipped and other bits unchanged. Thus
\[p(x_{k},S_{\ell})\geq\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}\left( 1-\frac{1}{n}\right)^{n-k+\ell}. \tag{51}\]
The transition from \(x_{k}\) to \(S_{[0,\ell]}\) happens only if \(k-\ell\) zero-valued bits are flipped. Thus
\[p(x_{k},S_{[0,\ell]})\leq\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}. \tag{52}\]
From (51) and (52), we have
\[\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0,\ell]})}\geq\left(1-\frac{1}{n}\right)^{n- 1}\geq e^{-1},\text{ for }1\leq\ell<k\leq n.\]
Thus we get \(c\geq e^{-1}\). The constant \(e^{-1}\) can be improved further.
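The bound \(c\geq e^{-1}\) can be checked numerically from (51) and (52); the sketch below is for illustration only.

```python
from math import comb, e

# Minimal numerical check of the ratio bound used above for the (1+1) EA
# on OneMax: the lower bound (51) on p(x_k, S_l) divided by the upper
# bound (52) on p(x_k, S_[0,l]) equals (1 - 1/n)^(n-k+l) >= 1/e.

def ratio_bound(n, k, l):
    p_level = comb(k, k - l) * (1 / n) ** (k - l) * (1 - 1 / n) ** (n - k + l)   # (51)
    p_better = comb(k, k - l) * (1 / n) ** (k - l)                               # (52)
    return p_level / p_better

n = 100
worst = min(ratio_bound(n, k, l) for k in range(2, n + 1) for l in range(1, k))
print(worst >= 1 / e)   # True: c >= 1/e as claimed
```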
The third explicit solution is \(c_{k,\ell}=c_{\ell}\). According to Corollary 3, the best \(c_{\ell}\) is
\[c_{\ell}=\min_{\ell<k\leq n}\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0,\ell]})}.\]
From (51) and (52), we have
\[\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0,\ell]})}\geq\left(1-\frac{1}{n}\right)^ {n-1}\geq e^{-1},\text{ for }1\leq\ell<k\leq n.\]
Thus we get \(c_{\ell}\geq e^{-1}\). The constant \(e^{-1}\) can be improved further.
Since OneMax is a level-based fitness landscape to the (1+1) EA, the best Type-\(c_{k,\ell}\) bound is the exact hitting time. From the explicit expression (39), the best coefficient \(c_{n,\ell}^{*}\) is given by (53).
\[\begin{split} c_{n,\ell}^{*}=& r(X_{n},S_{\ell})+ \sum_{\ell<j_{1}<n}r(X_{n},S_{j_{1}})\,r(X_{j_{1}},S_{\ell})+\cdots\\ &+\sum_{\ell<j_{n-\ell-1}<\cdots<j_{1}<n}r(X_{n},S_{j_{1}})\,r( X_{j_{1}},S_{j_{2}})\cdots r(X_{j_{n-\ell-1}},S_{\ell}).\end{split} \tag{53}\]
Unfortunately the calculation of (53) is intractable. But from (53), it is straightforward to obtain some explicit solutions, like \(c_{n,\ell}=r(x_{n},S_{\ell})\). From (51), we get
\[c_{n,\ell}=r(x_{n},S_{\ell})=\frac{p(x_{n},S_{\ell})}{p(x_{n},S_{[0,n-1]})}\geq p(x_{n},S_{\ell})\geq\binom{n}{n-\ell}\left(\frac{1}{n}\right)^{n-\ell}\left(1-\frac{1}{n}\right)^{\ell},\quad\ell=1,\cdots,n-1.\]
Inequality (50) can be solved recursively from \(k=\ell+1\) to \(n\). A recursive solution to (50) is calculated as follows. According to (34), we choose the coefficient
\[c_{\ell+1,\ell}=r(x_{\ell+1},S_{\ell})=\frac{p(x_{\ell+1},S_{\ell})}{p(x_{\ell+1},S_{[0,\ell]})}\geq\left(1-\frac{1}{n}\right)^{n-1}\geq e^{-1}\quad\text{(by (51) and (52))}.\]
Assume that \(c_{\ell+1,\ell},\cdots,c_{k-1,\ell}\geq e^{-1}\). According to (34), we choose the coefficient
\[\begin{split} c_{k,\ell}&=r(x_{k},S_{\ell})+\sum_{j= \ell+1}^{k-1}r(x_{k},S_{j})c_{j,\ell}\\ &\geq r(x_{k},S_{\ell})+r(x_{k},S_{[\ell+1,k-1]})e^{-1}=r(x_{k},S _{\ell})-r(x_{k},S_{[0,\ell]})e^{-1}+e^{-1}.\end{split}\]
We prove that \(r(x_{k},S_{\ell})-r(x_{k},S_{[0,\ell]})e^{-1}\geq 0\) as follows.
\[\begin{split}& r(x_{k},S_{\ell})-r(x_{k},S_{[0,\ell]})e^{-1}=\frac{p(x_{k},S_{\ell})-p(x_{k},S_{[0,\ell]})e^{-1}}{p(x_{k},S_{[0,k-1]})}\\ &\geq\frac{\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}\left(1-\frac{1}{n}\right)^{n-k+\ell}-\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}e^{-1}}{p(x_{k},S_{[0,k-1]})}\geq 0\quad\text{(by (51) and (52))}.\end{split}\]
By induction, \(c_{k,\ell}\geq e^{-1}\) for \(k=\ell+1,\cdots,n\). The constant \(e^{-1}\) can be improved further.
The second recursive solution to (50) is \(c_{k,\ell}=c_{k}\) (where \(1\leq\ell<k\leq n\)) such that \(c_{k}\leq r(x_{k},S_{\ell})+\sum_{j=\ell+1}^{k-1}r(x_{k},S_{j})c_{j}\). As with the first recursive solution, we have \(c_{k}\geq e^{-1}\) for \(k=\ell+1,\cdots,n\). The constant \(e^{-1}\) can be improved further.
Finally, it is allowed to use a mixture of recursive and explicit solutions, that is, some coefficients are calculated recursively and some coefficients come from an explicit solution. For example, a mix of explicit and recursive solutions is
\[c_{k,\ell} =c\leq r(x_{k},S_{\ell})+\sum_{j=\ell+1}^{k-1}r(x_{k},S_{j})c, \qquad 1\leq\ell<k\leq n-1. \tag{54}\] \[c_{n,\ell} =r(x_{n},S_{\ell})+\sum_{j=\ell+1}^{n-1}r(x_{n},S_{j})c,\qquad 1 \leq\ell\leq n-1. \tag{55}\]
Similar to the analysis of the explicit solution \(c\), for (54), we get \(c\geq e^{-1}\). In (55),
\[r(x_{n},S_{j})=\frac{p(x_{n},S_{j})}{p(x_{n},S_{[0,n-1]})}\geq\frac{p(x_{n},S_{j})}{1}\geq\binom{n}{n-j}\left(\frac{1}{n}\right)^{n-j}\left(1-\frac{1}{n}\right)^{j}\quad\text{(by (51))},\]
then we get for \(\ell=1,\cdots,n-1\)
\[c_{n,\ell}\geq\binom{n}{n-\ell}\left(\frac{1}{n}\right)^{n-\ell}\left(1-\frac {1}{n}\right)^{\ell}+e^{-1}\sum_{j=\ell+1}^{n-1}\binom{n}{n-j}\left(\frac{1}{n }\right)^{n-j}\left(1-\frac{1}{n}\right)^{j}.\]
In summary, there exist different ways to calculate coefficients \(c_{k,\ell}\), from the trivial coefficient \(0\) to the exact coefficients \(c_{k,\ell}^{*}\).
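For illustration, the coefficients and bounds discussed above can also be compared against empirical hitting times obtained by simulating the (1+1) EA (standard bit-wise mutation with rate \(1/n\) and elitist selection) on OneMax; the sketch below is a toy experiment, not part of the analysis.

```python
import random

# Minimal simulation (illustration only) of the (1+1) EA with bit-wise
# mutation rate 1/n and elitist selection on OneMax.

def one_max_hitting_time(n, rng):
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        y = [b ^ (rng.random() < 1 / n) for b in x]   # flip each bit w.p. 1/n
        if sum(y) >= sum(x):                          # elitist acceptance
            x = y
        steps += 1
    return steps

rng = random.Random(0)
runs = [one_max_hitting_time(64, rng) for _ in range(20)]
print(sum(runs) / len(runs))   # empirically Theta(n ln n)
```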
### Case study 4: Type-\(c_{k,\ell}\) lower bound of the (1+1) EA on TwoMax1
This case study shows that the Type-\(c_{k,\ell}\) lower bound by Theorem 5 is tight on fitness landscapes with shortcuts through the example of the (1+1) EA on TwoMax1. Assume that the EA starts at \(S_{n-1}\). We prove that the Type-\(c_{k,\ell}\) lower bound by Theorem 5 is \(\Omega(n\ln n)\), that is
\[d_{n-1}=\frac{1}{p(x_{n-1},S_{[0,n-2]})}+\sum_{\ell=1}^{n-2}\frac{c_{n-1,\ell }}{p(x_{\ell},S_{[0,\ell-1]})}\geq\sum_{\ell=1}^{n/2-1}\frac{c_{n-1,\ell}}{p(x _{\ell},S_{[0,\ell-1]})}=\Omega(n\ln n). \tag{56}\]
According to Theorem 5, we choose coefficients
\[c_{n-1,\ell}=\sum_{k=\ell+1}^{n/2}r(x_{n-1},S_{k})c_{k,\ell},\quad\ell=1, \cdots,\frac{n}{2}-1. \tag{57}\]
where (57) is a recursive calculation using \(c_{k,\ell}\) where \(\ell+1\leq k\leq n/2\).
The coefficient \(c_{k,\ell}\) (where \(1\leq\ell<k\leq n/2\)) is calculated using a constant \(c\). According to Theorem 5, for \(1\leq\ell<k\leq n/2\), we let coefficients \(c_{k,\ell}=c\) such that
\[c \leq r(x_{k},S_{\ell})+\sum_{i=\ell+1}^{k-1}r(x_{k},S_{i})c,\] \[c \leq\frac{r(x_{k},S_{\ell})}{1-r(x_{k},S_{[\ell+1,k-1]})}=\frac{r( x_{k},S_{\ell})}{r(x_{k},S_{[0,\ell]})}=\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0, \ell]})}.\]
We choose \(c\) as
\[c=\min_{1<k\leq n/2}\min_{1\leq\ell<k}\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0,\ell ]})}. \tag{58}\]
The above \(c\) is calculated as follows. A state \(x_{k}\in S_{k}\) (where \(1\leq\ell<k\leq n/2\)) has \(k\) zero-valued bits. A state in \(S_{\ell}\) has \(\ell\) zero-valued bits. The transition from \(x_{k}\) to \(S_{\ell}\) happens if \(k-\ell\) zero-valued bits are flipped and other bits are not flipped. Thus
\[p(x_{k},S_{\ell})\geq\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}\left( 1-\frac{1}{n}\right)^{n-k+\ell}\geq\binom{k}{k-l}\left(\frac{1}{n}\right)^{k- \ell}e^{-1}. \tag{59}\]
The transition from \(x_{k}\) to \(S_{[0,\ell]}\) happens only if either (i) at least \(k-\ell\) zero-valued bits are flipped, or (ii) \(x_{k}\) is mutated to \((0,\cdots,0)\). The probability of the first event is at most \(\binom{k}{k-\ell}(\frac{1}{n})^{k-\ell}\). The probability of the second event is \((\frac{1}{n})^{n-k}\). Because \(1\leq\ell<k\leq n/2\), we have \(n-k\geq n/2\geq k-\ell+1\). Thus
\[p(x_{k},S_{[0,\ell]})\leq\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{k-\ell}+ \left(\frac{1}{n}\right)^{n-k}\leq\binom{k}{k-\ell}\left(\frac{1}{n}\right)^{ k-\ell}+\left(\frac{1}{n}\right)^{k-\ell+1}. \tag{60}\]
From (59) and (60), we get for \(k=\ell+1,\cdots,n/2\)
\[\frac{p(x_{k},S_{\ell})}{p(x_{k},S_{[0,\ell]})}\geq\frac{e^{-1}}{1+\frac{1}{ \binom{k}{k-\ell}n}}=e^{-1}(1-o(1)).\]
Then we have \(c\geq e^{-1}(1-o(1))\).
Next the coefficient \(c_{n-1,\ell}\) (where \(1\leq\ell<n/2\)) is calculated by (57). From \(c\geq e^{-1}(1-o(1))\) and (57), we get for \(\ell=1,\cdots,n/2-1\),
\[c_{n-1,\ell}\geq e^{-1}(1-o(1))r(x_{n-1},S_{[\ell+1,n/2]}). \tag{61}\]
The conditional probability \(r(x_{n-1},S_{[\ell+1,n/2]})\) is calculated as follows. Since \(x_{n-1}\) has \(n/2+1\) zero-valued bits, the transition from \(x_{n-1}\) to \(S_{n/2}\) happens if \(1\) zero-valued bit is flipped and other bits are unchanged. Thus
\[p(x_{n-1},S_{[\ell+1,n/2]}) \geq\binom{n/2+1}{1}\frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1}\geq\frac{1}{2e}.\] \[r(x_{n-1},S_{[\ell+1,n/2]}) =\frac{p(x_{n-1},S_{[\ell+1,n/2]})}{p(x_{n-1},S_{[0,n-2]})}\geq\frac{p(x_{n-1},S_{[\ell+1,n/2]})}{1}\geq\frac{1}{2e}.\]
Then from (61), we get \(c_{n-1,\ell}\geq e^{-2}(1-o(1))/2\).
The transition probability \(p(x_{\ell},S_{[0,\ell-1]})\) (where \(1\leq\ell\leq n/2\)) is calculated as follows. A state \(x_{\ell}\in S_{\ell}\) has \(\ell\) zero-valued bits. The transition from \(x_{\ell}\) to \(S_{[0,\ell-1]}\) happens only if either (i) \(1\) zero-valued bit is flipped, or (ii) \(x_{\ell}\) is mutated to \((0,\cdots,0)\). The probability of the first event is at most \(\binom{\ell}{1}\frac{1}{n}\). The probability of the second event happening is at most \((\frac{1}{n})^{n-\ell}\). Thus
\[p(x_{\ell},S_{[0,\ell-1]})\leq\frac{\ell}{n}+\left(\frac{1}{n}\right)^{n-\ell }\leq\frac{\ell+1}{n}. \tag{62}\]
Then we get the lower bound in (56):
\[d_{n-1}\geq\frac{1-o(1)}{2e^{2}}\sum_{\ell=1}^{n/2-1}\frac{n}{\ell+1}=\Omega(n \ln n). \tag{63}\]
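The growth of the right-hand side of (63) can be illustrated numerically (dropping the \(1-o(1)\) factor); the sketch below is for illustration only.

```python
from math import e, log

# Minimal numerical illustration of (63): the sum
# (1/(2e^2)) * sum_{l=1}^{n/2-1} n/(l+1) grows like n*ln(n)/(2e^2).

def bound(n):
    return sum(n / (l + 1) for l in range(1, n // 2)) / (2 * e ** 2)

for n in (100, 1000, 10000):
    print(n, round(bound(n), 1), round(n * log(n) / (2 * e ** 2), 1))
```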
Table 2 compares the lower linear bounds obtained in Case Studies 1, 2 and 4. The Type-\(c_{k,\ell}\) bound is a tight lower bound on TwoMax1, but the Type-\(c\) and Type-\(c_{\ell}\) lower bounds are trivial due to the presence of shortcuts.
## 7 Conclusions and future work
In this paper, we rigorously answer a fundamental question about the fitness level method: what are the tightest lower and upper time bounds that can be constructed using transition probabilities between fitness levels? Drift analysis with fitness levels is developed and the tightest metric bounds from fitness levels are constructed and proven. Based on metric bounds, general linear bounds are constructed. Coefficients in linear bounds can be calculated recursively or explicitly. Different calculation methods result in different fitness level methods. Drift analysis with fitness levels is a framework that can be used to develop different fitness level methods for different types of time bounds. Table 3 summarizes the main bounds discussed in the paper.
The framework is generic and promising. It turns out that Type-\(c_{k,\ell}\) bounds are at least as tight as Type-\(c_{\ell}\) and Type-\(c\) bounds on any fitness landscape, and even tighter on fitness landscapes with shortcuts. This is demonstrated by the case study of the (1+1) EA maximizing the TwoMax1 function. One direction for future research is to simplify the recursive computation in metric and linear bounds.
| Type-\(c_{k,\ell}\) | Type-\(c\) | Type-\(c_{\ell}\) |
| --- | --- | --- |
| \(\Omega(n\ln n)\) | \(O(1)\) | \(O(1)\) |
| by Theorem 5 | by Proposition 3 | by Proposition 5 and Lemma 3 |

Table 2: Comparison of lower bounds of the (1+1) EA on TwoMax1
| Type | a bound on the hitting time \(m(X_{k})\) | source |
| --- | --- | --- |
| \(r_{k,\ell}\) lower | see (25) and (30) | Theorems 1, 3 |
| \(r_{k,\ell}\) upper | see (28) and (32) | Theorems 2, 4 |
| \(c_{k,\ell}\) lower | (33) with coefficients (34) | Theorem 5 |
| \(c_{k,\ell}\) upper | (37) with coefficients (38) | Theorem 6 |
| \(c_{\ell}\) | \(c_{k,\ell}=c_{\ell}\), a special case of Type-\(c_{k,\ell}\) bounds | Corollaries 3, 6 |
| \(c\) | \(c_{k,\ell}=c\), a special case of Type-\(c_{k,\ell}\) bounds | Corollaries 2, 5 |

Table 3: Type-\(c_{k,\ell}\), \(c_{\ell}\) and \(c\) bounds. Notation refers to Table 1.
#### Acknowledgments
The authors thank Dirk Sudholt for his helpful discussion of the fitness level method, and also Benjamin Doerr and Timo Kotzing for their kind explanations of their work, which helped to greatly improve the quality of this paper.
|
2305.17020 | Diable: Efficient Dialogue State Tracking as Operations on Tables | Sequence-to-sequence state-of-the-art systems for dialogue state tracking
(DST) use the full dialogue history as input, represent the current state as a
list with all the slots, and generate the entire state from scratch at each
dialogue turn. This approach is inefficient, especially when the number of
slots is large and the conversation is long. We propose Diable, a new task
formalisation that simplifies the design and implementation of efficient DST
systems and allows one to easily plug and play large language models. We
represent the dialogue state as a table and formalise DST as a table
manipulation task. At each turn, the system updates the previous state by
generating table operations based on the dialogue context. Extensive
experimentation on the MultiWoz datasets demonstrates that Diable (i)
outperforms strong efficient DST baselines, (ii) is 2.4x more time efficient
than current state-of-the-art methods while retaining competitive Joint Goal
Accuracy, and (iii) is robust to noisy data annotations due to the table
operations approach. | Pietro Lesci, Yoshinari Fujinuma, Momchil Hardalov, Chao Shang, Yassine Benajiba, Lluis Marquez | 2023-05-26T15:26:12Z | http://arxiv.org/abs/2305.17020v3 | # _Diable_: Efficient Dialogue State Tracking as Operations on Tables
###### Abstract
Sequence-to-sequence state-of-the-art systems for dialogue state tracking (DST) use the full dialogue history as input, represent the current state as a list with all the slots, and generate the entire state from scratch at each dialogue turn. This approach is inefficient, especially when the number of slots is large and the conversation is long. We propose _Diable_, a new task formalisation that simplifies the design and implementation of efficient DST systems and allows one to easily plug and play large language models. We represent the dialogue state as a table and formalise DST as a table manipulation task. At each turn, the system updates the previous state by generating table operations based on the dialogue context. Extensive experimentation on the MultiWoz datasets demonstrates that _Diable (i)_ outperforms strong efficient DST baselines, _(ii)_ is \(2.4\)x more time efficient than current state-of-the-art methods while retaining competitive Joint Goal Accuracy, and _(iii)_ is robust to noisy data annotations due to the table operations approach.
## 1 Introduction
Dialogue state tracking (DST; Jacqmin et al., 2022) is the task of tracking user requests from the dialogue history in the form of slot-value pairs Henderson et al. (2014); Mrksic et al. (2015); Rastogi et al. (2020). The slots are defined in a domain-specific schema and represent the fields that need to be extracted from the dialogue to execute queries in the backend and generate responses. Recent generative approaches to DST based on language models Wu et al. (2019); Kim et al. (2020) often use the entire dialogue history as input and represent the state, at each turn, as the concatenation of all the slots in the schema, where inactive slots are reported with a placeholder value (see Figure 2). This representation is known as _cumulative state_Hosseini-Asl et al. (2020); Feng et al. (2021); Zhao et al. (2022) and implies the generation of all the states from scratch at each dialogue turn. This approach is computationally inefficient, especially for long conversations and large schemas.
We propose Efficient Dialogue State tracking as Operations on Tables (_Diable_, shown in Figure 1), a novel task formulation and a new DST approach that better uses the generative capabilities of language models. Our approach simplifies the
Figure 1: _Diable_ approach to DST. The figure presents the first two turns of a dialogue (user’s utterances are orange, system’s are green). When the conversation starts, the state table is empty. At each dialogue turn, the system outputs a table update operation (either INSERT or DELETE), and the state is modified accordingly.
design and implementation of DST systems and works with any sequence-to-sequence model. Our intuition is that a DST system translates conversations into filters for database searches. Inspired by formal languages for databases and the recent success in applying sequence-to-sequence models to text-to-SQL tasks (Yin et al., 2020; Scholak et al., 2021), we represent the dialogue state as an implicit table and frame DST as a table manipulation task. At each turn, the system updates the previous state by generating update operations expressed in a simplified formal language based on the current dialogue context (see Figure 1). _Diable_ is the first end-to-end DST system that outputs state operations and values jointly while processing all slots simultaneously.
Based on extensive experimentation using the MultiWoz benchmark (Budzianowski et al., 2018), we show that _Diable_ can successfully and efficiently translate conversations into filters for database searches. Our approach minimises the number of input and output tokens required resulting in a significant efficiency gain (\(2.4\)x reduction in inference time compared to state-of-the-art cumulative state systems).
Our main contributions are as follows:
* We introduce a novel DST task formulation and a new system, _Diable_, specifically designed to enhance efficiency and leverage the capabilities of state-of-the-art sequence-to-sequence models.
* We show that our DST task formulation does not require ad-hoc data preprocessing, the full history, or extra supervision and works with any sequence-to-sequence model without requiring any architectural modification.
* We demonstrate that _Diable_ achieves better Joint Goal Accuracy on MultiWoz than other efficient baselines while being competitive with state-of-the-art cumulative state systems.
* We show that _Diable_ is robust to noise in the training data, resulting in more stable results across three versions of MultiWoz.
## 2 A Taxonomy of DST Approaches
The goal of DST systems is to handle long, diverse conversations in multiple domains with large schemas and unrestricted vocabulary, potentially without extra supervision (Eric et al., 2020; Rastogi et al., 2020). Achieving this goal has prompted the development of different DST approaches.
_Ontology-based approaches_ _treat DST as either a classification or a token classification task. They assume that all possible slot-value pairs are restricted to a fixed set, or ontology, either predefined or extracted from the training data. Classification-based approaches output a probability distribution over values given the dialogue context and a slot (Henderson et al., 2014) while token classification approaches output a probability distribution over slots for each token (Liao et al., 2021)._
The ontology-based formulation simplifies the DST task considerably, thus the performance of these systems is usually relatively high for specific datasets (Zhong et al., 2018; Ye et al., 2021, 2022). Complex dialogues with large schemas pose a significant challenge for traditional ontology-based approaches as they do not easily generalise to new domains nor scale to large ontologies (Mrksic et al., 2017; Rastogi et al., 2017; Zhong et al., 2018; Ye et al., 2021). For this reason, ontology-based approaches are out of scope for our paper.
_Open-vocabulary approaches_ _address these limitations by formulating DST as either a reading comprehension task wherein for each slot a span is extracted from the dialogue context (Gao et al., 2019; Chao and Lane, 2019), or as a generation task wherein a value for each slot is generated
Figure 2: Cumulative state approach to DST. At each dialogue turn, the system outputs all the slots. Inactive slots are filled with a placeholder value (none).
based on the dialogue history Wu et al. (2019)._
By leveraging sequence-to-sequence models Brown et al. (2020); Lewis et al. (2020); Raffel et al. (2020), generative approaches have recently achieved state-of-the-art results Xu and Hu (2018); Lee et al. (2019); Chao and Lane (2019); Gao et al. (2019); Wu et al. (2019); Kumar et al. (2020); Heck et al. (2020); Hosseini-Asl et al. (2020); Lee et al. (2021); Zhao et al. (2021, 2022). However, these methods predict the dialogue state from scratch at each turn and generate a value for each slot, even when a slot is not active (Figure 2). We argue (§6) that these are the main sources of inefficiencies of current DST systems. We compare _Diable_ with these methods in the "Cumulative State Models" section of Table 1.
_Efficient approaches_ _seek efficiency by minimising the number of values to generate, thus decomposing DST into two successive sub-tasks: state operation prediction and value generation. In this way, only the slots that need to be changed are considered for value generation Kim et al. (2020)._
These approaches are the most related to _Diable_ in that they target efficiency. We compare them against _Diable_ in the "Efficient Models" section of Table 1. Often, these methods Ren et al. (2019); Zhu et al. (2020) use the cumulative state representation, which is the primary source of inefficiencies (we discuss this issue in the context of §5, Table 2), and need to output operations for all slots.
For example, Kim et al. (2020) and Zeng and Nie (2020) predict an operation for each slot in the input by adding a classification head on top of the tokens representing the individual slots and predict four kinds of state operations: "carryover", "delete", "dontcare", and "update". For those slots categorised as "update", the contextual representation is further processed to decode the slot value. However, this approach limits the ability of such systems to deal with large schemas because the full schema needs to fit in the input context. Differently from these approaches, we remove the two-component structure by adopting a sequence-to-sequence approach that allows us to jointly generate operations and values for all slots simultaneously and works with any sequence-to-sequence model. Importantly, we only need to predict operations for the active slots (i.e., the slots actually mentioned in the conversation).
Lin et al. (2020) seek efficiency differently by introducing the notion of "Levenshtein belief span", which builds on the concept of _belief span_ (Lei et al., 2018): the dialogue state is reformatted into a text span, allowing models to generate slot values dynamically. They propose to focus only on the _differences_ between states at subsequent turns. We take this approach a step further by explicitly outputting operations for all slots changing from one turn to another simultaneously while retaining our minimal state representation.
## 3 _Diable_: Dialogue State Tracking as Operations on Tables
We introduce a novel efficient formulation of the DST task and a system, _Diable_, specifically designed to enhance efficiency and optimise the capabilities of state-of-the-art sequence-to-sequence models. In this section, we describe our approach, formalise the DST problem, and introduce the concepts of _state as a table_ and _state operations_.
### Problem Definition
The goal of DST is defined as learning a mapping from a dialogue context to a dialogue state. Specifically, let \(D_{1:T}=(D_{1},\ldots,D_{T})\) denote a dialogue of \(T\) turns, where \(D_{t}=(U_{t},R_{t})\) represents the utterance composed of the user query and system response at turn \(t\), respectively. At turn \(t\), the dialogue context, \(C_{t}\), is defined as the set of all the information available up to that turn. It always includes the current dialogue utterance, but can additionally contain the previous state, utterances from the dialogue history, and extra supervision (e.g., slot descriptions and the schema). We consider a dialogue context composed of only the previous dialogue turn(s) and the previous state, that is \(C_{t}=(D_{t},\mathcal{B}_{t-1})\). We do not use any schema information and let the model learn it during training.1 The dialogue state at turn \(t\) is defined as a set \(\mathcal{B}_{t}=\{(s,v_{t})|s\in\mathcal{S}_{t}\}\), where \(\mathcal{S}_{t}\subseteq\mathcal{S}\) denotes the subset of active slots at that turn out of all the predefined slots in the schema and \(v_{t}\) is the value corresponding to slot \(s\).
Footnote 1: Our preliminary study showed that passing the schema to the model has little effect on performance but hurts the model’s efficiency as it needs to encode more tokens.
### The _Diable_ Approach
In our approach, instead of directly outputting the dialogue state \(\mathcal{B}\), we learn a mapping from the dialogue context, \(C\), to a set of operations \(\mathcal{O}\). At the beginning of each conversation the state table,
\(\mathcal{B}_{0}\), is initialised empty. At turn \(t\), based on the dialogue context, \(C_{t}\), the DST system generates the set of operations, \(\mathcal{O}_{t}=\{O_{1},\ldots,O_{N_{t}}\}\), where \(N_{t}\) is the number of slots that change between turn \(t-1\) and \(t\). Finally, the generated operations are applied to the previous state to get the new state. The tracker is expected to carry over the previously extracted slots into the current state, i.e., the state at each turn includes all the slots active since the beginning of the conversation up to that point.
We operationalise this process by framing it as a sequence-to-sequence task in which a model, \(f_{\theta}\), receives a textual representation of the dialogue context, \(\tau_{c}(C_{t})\), and outputs a textual representation of the operations needed, \(\tau_{s}(\mathcal{O}_{t})\), where \(\tau_{c}\) and \(\tau_{s}\) are the templating functions that convert the dialogue context and state operations to a string, respectively. We provide more details about these functions in Appendix B. The structure of the system can be described as follows
\[\tau_{s}(\mathcal{O}_{t}) =f_{\theta}(\tau_{c}(C_{t})) \tag{1}\] \[\mathcal{B}_{t} =\text{Interpreter}(\tau_{s}(\mathcal{O}_{t}),\mathcal{B}_{t-1}) \tag{2}\]
where in Eq. (2) we use an operation interpreter to parse the string representation of the operations and apply them to the previous state. Based on the definition of the state operations, the operation interpreter can be based on different formal languages (e.g., Regular Expressions, SQL).
We use T5v1.1 (Raffel et al., 2020) as the backbone for _Diable_. During training, we use teacher forcing (Goyal et al., 2016) and pass the oracle dialogue state in the input, \(\mathcal{B}_{t-1}\). At test time, we pass the previously predicted state, \(\mathcal{\tilde{B}}_{t-1}\), instead. To learn the model, we optimise the negative log-likelihood of the state operations given the dialogue context, that is
\[\mathcal{L}(\theta)_{t}=-\log P(\tau_{s}(\mathcal{O}_{t})|\tau_{c}(C_{t})) \tag{3}\]
where \(f_{\theta}\) is used to parameterise the probabilistic model \(P\). We use the Adafactor optimiser (Shazeer and Stern, 2018) with no weight decay and we set the learning rate to \(10^{-4}\) with a constant schedule. We fix the training budget at \(40\)k optimiser steps and set the batch size to 32. We generate the output sequence using beam search decoding with 4 beams. We describe in detail the training and inference processes in Appendix C.
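For illustration, a single optimisation step with a Hugging Face T5 checkpoint can be sketched as follows; the checkpoint identifier and the example input/output strings are illustrative placeholders (the exact templates are given in Appendix B), not our implementation.

```python
# Minimal sketch of one training step; names and strings are assumptions.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# tau_c(C_t): previous state plus current turn; tau_s(O_t): target operations.
source = "state: hotel-area = east ; user: i also need a cheap restaurant ; system: ..."
target = "INSERT restaurant-pricerange = cheap"

batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss   # negative log-likelihood as in Eq. (3)
loss.backward()
```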
### Representing the State as a Table
In our approach, we represent the dialogue state as a table that is sequentially updated. Specifically, a state, \(\mathcal{B}\), is represented by a simple two-column table in which the first column is used for the slot name and the second for the slot value (Figure 1). We define the slot name as the concatenation of domain and slot separated by a dash, e.g., restaurant-area (see Appendix B). The state table is passed into the dialogue context by simple "linearisation" (Suhr et al., 2020; Scholak et al., 2021; Shaw et al., 2021): the rows are converted to slot-value tuples, cast to a string using the template {slot} = {value}, and concatenated together using ; as a separator.2 During the linearisation, we randomise the order of the rows to avoid overfitting to specific positional biases.
Footnote 2: More complex table encoding methods can be applied (Herzig et al., 2020; Yang et al., 2022; Nassar et al., 2022). See the discussion in §6.5.
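A minimal sketch of this linearisation (whitespace and separator details are illustrative; the exact templates are given in Appendix B):

```python
import random

# Minimal sketch: each table row becomes a "{slot} = {value}" string,
# row order is randomised, and rows are joined with ";".
def linearise_state(state):
    rows = [f"{slot} = {value}" for slot, value in state.items()]
    random.shuffle(rows)
    return " ; ".join(rows)

print(linearise_state({"restaurant-area": "centre", "restaurant-pricerange": "cheap"}))
# e.g. "restaurant-pricerange = cheap ; restaurant-area = centre"
```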
### State Tracking via Table Operations
We introduced how we operationalise table operations as a combination of strings defining the operations and an interpreter that applies the operations to the state. Specifically, in our implementation of _Diable_, we use a simplified formal language consisting of two slot-level operations, INSERT and DELETE, and a simple regex-based interpreter. The choice of the available operations is motivated by the nature of the MultiWoz datasets which include mostly insertions and deletions. We also use the INSERT operation to update the value of slots that are already present in the state. When no operation is needed, the target is defined as the literal string none. Updates are less frequent and are caused mostly by inconsistent annotations. In our preliminary experiments, we empirically found that adding an UPDATE operation does not improve performance despite adding complexity; thus, we decided not to use it. We emphasise that the specific definition of the operations is not critical for the efficiency of our method and it can be easily adapted to any specific use case. To convert operations to strings we use the template {command} {slot} = {value}. If multiple operations need to be applied, we concatenate them using ; as a separator (see Appendix B). We define the target sequence as the concatenation of all the slot-level operations. Since the order in which the operations are applied does not affect the output, we randomise their position during training.
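A minimal sketch of such a regex-based interpreter (illustrative only; it is not the exact implementation):

```python
import re

# Parse "{command} {slot} = {value}" operations separated by ";" and
# apply them to the previous state.
OP_RE = re.compile(r"(INSERT|DELETE)\s+([\w-]+)\s*=\s*(.+)")

def apply_operations(ops_string, state):
    new_state = dict(state)
    if ops_string.strip() == "none":            # no state change at this turn
        return new_state
    for op in ops_string.split(";"):
        match = OP_RE.match(op.strip())
        if match is None:
            continue                            # ignore malformed operations
        command, slot, value = match.groups()
        if command == "INSERT":
            new_state[slot] = value.strip()     # insert a new slot or overwrite
        else:
            new_state.pop(slot, None)           # DELETE removes the slot
    return new_state

print(apply_operations("INSERT hotel-stars = 4 ; DELETE hotel-area = east",
                       {"hotel-area": "east"}))
# {'hotel-stars': '4'}
```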
## 4 Experiments
In this section, we present our experimental setup and provide details about the baselines approaches.
### Datasets
The MultiWoz dataset Budzianowski et al. (2018) is a collection of \(10\)k multi-domain task-oriented human-to-human conversations. It is one of the most used benchmarks in the DST literature Jacqmin et al. (2022). Nonetheless, it is known to contain annotation errors and previous work proposed different versions Eric et al. (2020); Han et al. (2021); Ye et al. (2022) and data normalisation procedures3 to mitigate this issue. Thus, it is difficult to have a fair comparison of results across the literature. Following the MultiWoz convention Wu et al. (2019), we filter out dialogues in the "bus", "police", and "hospital" domains (and the respective slots from multi-domain dialogues), and remove the invalid dialogue SNG01862.json. We experiment with multiple versions (2.1, 2.2, and 2.4) and use the data as-is (see Appendix B). To construct the training set, we extract the operations automatically from the dataset.
Footnote 3: Most notably the TRADE scripts from Wu et al. (2019) to normalise both text and labels.
### Evaluation
We use Joint Goal Accuracy (JGA; Henderson et al., 2014) as the main metric for all experiments: it measures the proportion of turns for which the predicted state (slot-value pairs) exactly matches the gold label. At each turn, for each slot, a list of acceptable values is included in the annotation (e.g., hotel-name: ["marriott", "marriott hotel"]). We consider a value correct if it exactly matches one of the available options. Importantly, we perform an uncased evaluation since the annotation casing is not consistent.
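A minimal sketch of this uncased, multi-reference exact-match computation:

```python
# Joint Goal Accuracy: a turn counts as correct only if the predicted
# slots coincide with the gold slots and every value (uncased) matches
# one of the acceptable gold values for that slot.

def turn_correct(predicted, gold):
    if set(predicted) != set(gold):
        return False
    return all(predicted[s].lower() in {v.lower() for v in gold[s]} for s in gold)

def joint_goal_accuracy(predictions, references):
    return sum(turn_correct(p, g) for p, g in zip(predictions, references)) / len(references)

preds = [{"hotel-name": "Marriott"}, {"hotel-name": "Hilton"}]
golds = [{"hotel-name": ["marriott", "marriott hotel"]}, {"hotel-name": ["hamilton lodge"]}]
print(joint_goal_accuracy(preds, golds))   # 0.5
```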
### Cumulative State and Efficient Baselines
We compare our results with a set of strong cumulative state models (i.e., models that use all previous turns and output a value for each slot at each turn, see Figure 2), and efficient baseline models. We also implement our own version of a cumulative state model and its "lighter" variant, _light_Cumulative: the state does not include the inactive slots. In all our experiments, the full cumulative models underperform _light_Cumulative while being less efficient (\(\sim\)\(1.18\)x slower). Thus, we only report the results of _light_Cumulative, effectively selecting a stronger baseline.
In the upper part of Table 1 ("Cumulative State Models"), we include results from state-of-the-art generative cumulative state models. In each section,
\begin{table}
\begin{tabular}{l l|c c c c c} \hline \hline
**Model** & **Architecture** & **Extra Supervision** & **Context (\(C_{i}\))** & **2.1** & **2.2** & **2.4** \\ \hline \hline \multicolumn{6}{l}{**Cumulative State Models**} \\ \hline TRADE Wu et al. (2019) & BERT-base (110M) & Schema & \(D_{1:t}\) & \({}^{\dagger}\)45.60 & \({}^{\dagger}\)45.40 & \({}^{\dagger}\)55.10 \\ SUMBT Lee et al. (2019) & BERT-base (110M) & Schema & \(D_{1:t}\) & \({}^{\dagger}\)49.20 & \({}^{\dagger}\)49.70 & \({}^{\dagger}\)61.90 \\ DS-DST Zhang et al. (2020) & BERT-base (110M) & Schema + Pick. & \(D_{1:t}\) & \({}^{\dagger}\)51.21 & \({}^{\dagger}\)51.70 & - \\ TripPy Heck et al. (2020) & BERT-base (110M) & - & \(D_{1:t}\) & \({}^{\dagger}\)55.30 & \({}^{\dagger}\)53.52 & \({}^{\dagger}\)59.60 \\ SAVN Wang et al. (2020) & BERT-base (110M) & Schema & \(D_{1:t}\) & \({}^{\dagger}\)54.50 & - & \({}^{\dagger}\)60.10 \\ Seq2Seq-DU Feng et al. (2021) & 2x BERT-base (220M) & Schema + Desc. & \(D_{1:t}\) & \({}^{\dagger}\)56.10 & \({}^{\dagger}\)54.40 & - \\ \hline SimpleTOD Hosseini-Asl et al. (2020) & GPT-2 (117M) & Schema & \(D_{1:t}\) & \({}^{\dagger}\)50.3/55.7 & \({}^{\dagger}\)54.02 & - \\ AG-DST Tian et al. (2021) & PLATO-2 (310M) & Schema & \(D_{t-1:t}+\mathcal{B}_{t-1}\) & - & 57.26 & - \\ \hline DaP Seo et al. (2021) & T5-base (220M) & Schema + Desc. & \(D_{1:t}\) & - & 51.20 & - \\ DaP (ind) Lee et al. (2021) & T5-base (220M) & Schema + Desc. & \(D_{1:t}\) & 56.66 & 57.60 & - \\ Seq2seq Zhao et al. (2021) & T5-base (220M) & Pre-training & \(D_{1:t}\) & \({}^{\dagger}\)52.80 & 57.60 & 67.10 \\ _light_Cumulative (Our impl.) & T5v1.1-base (247M) & - & \(D_{1:t}\) & \({}^{\dagger}\)53.91\({}_{\pm 0.63}\) & 57.01\({}_{\pm 0.65}\) & 67.56\({}_{\pm 0.52}\) \\ D3ST Zhao et al. (2022) & T5-base (220M) & Schema + Pick. + Desc. & \(D_{1:t}\) & 54.20 & 56.10 & \({}^{\dagger}\)72.10 \\ \hline \hline \multicolumn{6}{l}{**Efficient Models**} \\ \hline MinTL Lin et al. (2020) & BART-large (406M) & - & \(\mathcal{B}_{t-1}\) & 53.62 & - & - \\ SOM-DST Kim et al. (2020) & BERT-base + GRU (113M) & Schema & \(D_{t-4:t}+\mathcal{B}_{t-1}\) & \({}^{\dagger}\)33.68 & \({}^{\dagger}\)53.81 & \({}^{\dagger}\)66.80 \\ Transformer-DST Zeng and Nie (2020) & BERT-base (110M) & Schema & \(D_{t-4:t}+\mathcal{B}_{t-1}\) & \({}^{\dagger}\)**55.35** & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: JGA on the test sets of MultiWoz (2.1, 2.2 and 2.4) for models trained on the respective training sets (note that 2.1 and 2.4 share the same training data). Baseline results are reported from the original papers or, when not available, from \(\star\): Tian et al. (2021), \(\dagger\): Wang et al. (2022), \(\ddagger\): Zhao et al. (2021). The column “Context” reports the dialogue context: \(D\) and \(\mathcal{B}\) denote the dialogue utterances and the set of previous states, respectively. The notation \({}_{i:j}\) indicates turns from \(i\) to \(j\) (included). The column “Extra supervision” reports the additional information used (e.g., data augmentation, pre-training, etc.). For MultiWoz 2.1, most baselines use data preprocessing; we denote methods that do not use data preprocessing with \(\diamond\). Underlined: best results overall; **bold**: best results within efficient methods.
we report details and results for encoder-based, sequence-to-sequence, and T5-based models, respectively. The latter class of models based on T5 is related to our implementation of _Diable_ in that they share the same backbone. However, they are not directly comparable due to the additional text preprocessing and label normalisation. The results of our own re-implementation of a cumulative state model, _light_Cumulative, are directly comparable as we adopted the same experimental conditions.
In the bottom part of Table 1 ("Efficient Models"), we report the JGA of the latest generative efficient DST approaches in the literature. Despite being related to our implementation of _Diable_, these approaches are not directly comparable since they rely on additional information (e.g. the schema) or are based on a different backbone model.
## 5 Results
In this section, we discuss our experimental results. In Table 1, we summarise the JGA on three versions of MultiWoz (2.1, 2.2, and 2.4) for both _Diable_ and the baseline models. The results for the baseline models are taken from previous work Tian et al. (2021); Wang et al. (2022); Zhao et al. (2021) when they improve on, or are missing from, the numbers reported in the original papers for a particular version. The results for _Diable_ and our _light_Cumulative implementation are averaged across 5 random seeds.
### _Diable_ and Cumulative State Models
We compare _Diable_'s performance to cumulative state models, i.e., models that have access to all previous turns in the conversation. We emphasise that _Diable_ uses none or a limited number of previous turns and thus has less context than these models. On the one hand, our goal is to evaluate the trade-off between efficiency and performance; on the other, to study the capability of the system to generate correct table operations.
The cumulative state model results are shown in the first part of Table 1. First, D3ST Zhao et al. (2022) achieves the best JGA score on MultiWoz 2.4 when the backbone is T5-base. Similarly to _Diable_, D3ST is based on a T5 model; however, it has access to more information such as the schema, the slot descriptions, and the list of possible values ("picklist") for categorical slots. Nonetheless, _Diable_ scores within 1 standard deviation in terms of JGA, while being more than \(2\)x more efficient.
When the backbone model of D3ST Zhao et al. (2022) is T5-xxl (\(11\)B), it scores \(57.80\), \(58.70\), and \(75.90\), respectively, on the three versions of the MultiWoz dataset. These scores are significantly higher than all other baselines. However, this improvement is solely due to increasing the model size, and we argue that the same performance improvement can be achieved by scaling the backbone of _Diable_ to larger models. In particular, error analysis shows that most of the errors in our instantiation of _Diable_-based systems are due to the model not recognising active slots ("under-predictions"). A larger backbone model can alleviate this issue by picking up less obvious contextual cues. Finally, the difference is more significant for version 2.1 because D3ST also applies text preprocessing, as used in other baselines. Moreover, baselines that use smaller models (the first part of Table 1) consistently score lower than those based on the larger and better pre-trained T5 checkpoints. The only exception is AG-DST Tian et al. (2021) but their backbone model has \(310\)M parameters.
We further compare _Diable_ to our implementation of a cumulative state T5-based model (_light_Cumulative). This comparison is fairer, as the models are put in the exact same experimental conditions. Our goal here is to quantify the improvements due to our proposed approach, isolating them from additional effects of model pre-training and architectural changes. The results show that our _Diable_ approach has a significantly better JGA (\(+3\) absolute points) on the less noisy version of MultiWoz (i.e., 2.4) and has similar performance on 2.1 and 2.2, while still being more efficient.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Context** & **Runtime (ms)** & **Speedup (\(\uparrow\))** \\ \hline \hline
**Cumulative** & & \\ \hline \(D_{1:t}\) & 40.83 & 1.00 \\ \hline \hline _light_**Cumulative** & & \\ \hline \(D_{1:t}+\mathcal{B}_{t-1}\) & 34.28 & 1.19 \\ \(D_{1:t}\) & 33.65 & 1.21 \\ \(D_{t-4:t}+\mathcal{B}_{t-1}\) & 32.56 & 1.25 \\ \(\mathcal{B}_{t-1}\) & 30.62 & 1.33 \\ \hline \hline _Diable_ & & \\ \hline \(D_{1:t}+\mathcal{B}_{t-1}\) & 19.59 & 2.09 \\ \(D_{1:t}\) & 19.41 & 2.11 \\ \(D_{t-4:t}+\mathcal{B}_{t-1}\) & 18.29 & 2.23 \\ \(\mathcal{B}_{t-1}\) & 17.17 & 2.38 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Median instance-level runtime in milliseconds and relative speed-up _vs_ a cumulative state baseline.
Next, we compare _Diable_ with two other strong models based on the T5 architecture (making them directly comparable, besides the preprocessing steps): _DaP (ind)_ (Lee et al., 2021) and _Seq2seq_ (Zhao et al., 2021). Both models achieve a slightly higher JGA than _Diable_ on 2.2 (1 point absolute); however, they are again less efficient and have access to a larger context. _DaP_ relies on slot descriptions (thus, the schema) and runs inference once for every slot, which is not scalable to large schemas. The improvements in _Seq2seq_ are likely due to its extensive DST-specific pre-training.
Our results confirm that _Diable_-based systems, while being efficient, achieve competitive JGA on MultiWoz compared to both other strong efficient DST baselines and state-of-the-art cumulative state DST systems, without requiring any ad-hoc data preprocessing, access to the full history, extra supervision, or large backbone models.
### _Diable_ and Efficient Models
Comparing _Diable_ to other efficient state-of-the-art DST models that are based on state operations, we see significant improvements of up to almost 4 JGA points on version 2.4 (shown in the "Efficient Models" section of Table 1). Only Transformer-DST (Zeng and Nie, 2020) is able to outperform our model on 2.1. However, it uses data preprocessing (text and label normalisation) and extra supervision (the schema). This model is an improved version of SOM-DST (Kim et al., 2020); therefore, the same argument applies to the latter, which achieves slightly lower performance even when using the same extra supervision and text normalisation.
### Latency Analysis
Table 2 reports the median4 inference time and the speed-up factor of _Diable_ relative to _light_Cumulative. Our approach is more than \(2\)x faster, even when using the full history as context. These results clearly show that the biggest efficiency gains are obtained by shortening the output sequence, that is, by replacing the cumulative state with state operations. Consequently, adding only the last \(N\) turns comes at a small cost for _Diable_, while potentially helping the model recover values not present in the current dialogue context. Using the Inference Time Complexity (ITC) notation introduced by Ren et al. (2019), our proposed approach has \(\Omega(1)\) and \(O(N)\), where \(N\) is the number of slots in the schema, as the best- and worst-case ITC, respectively, whereas SOM-DST and Transformer-DST have a best-case ITC of \(\Omega(N)\).
Footnote 4: We report the median as the distribution of the inference time is left-skewed.
### Robustness to Noisy Annotations
Table 3 compares the performance of models trained on MultiWoz 2.2 with different context and state representations. Notably, when evaluated on the cleaner 2.4 version (bottom row for both parts of the table), _Diable_ consistently outperforms _light_Cumulative. In fact, regardless of the dialogue context, _Diable_ achieves a better JGA on 2.4. We hypothesise that the lower accuracy of _light_Cumulative is due to overfitting the noisy annotations of the training set. In particular, we think that since it generates the full state from scratch at every turn, the decoder might learn spurious correlations amongst slots that are inconsistently annotated in the training set. For example, hotel-type and attraction-type are inconsistently and sparsely annotated in the training set, while in the test set of version 2.4 they tend to appear almost always together with the respective hotel-name and attraction-name slots. Thus, a cumulative state model can learn to not generate one when the other is present. Instead, since _Diable_ is based on state _changes_, we presume that it learns to treat slots more independently.
## 6 Discussion
Our task formalisation is intuitively simple and is especially beneficial for large pre-trained sequence-to-sequence models. First, the state is expanded sequentially and thus only includes the necessary slots. This minimises the size of the input context, allowing the models to scale to larger schemas before reaching their maximum input length. Second, since the model needs to focus on the state _changes_, the decoder only needs to generate operations for a
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Train \(\rightarrow\) Test** & _light_**Cumulative** & _Diable_ \\ \hline \hline \multicolumn{3}{c}{**Context:**\(D_{1:t}\)} \\ \hline \(2.2\to 2.2\) & \(57.01_{\pm 0.45}\) & \(55.63_{\pm 0.68}\) \\ \(2.2\to 2.4\) & \(63.11_{\pm 0.83}\) & \(64.95_{\pm 0.55}\) \\ \hline \hline \multicolumn{3}{c}{**Context:**\(\mathcal{B}_{t-1}\)} \\ \hline \(2.2\to 2.2\) & \(56.50_{\pm 0.47}\) & \(56.30_{\pm 0.67}\) \\ \(2.2\to 2.4\) & \(63.52_{\pm 0.96}\) & \(66.13_{\pm 0.97}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effect on JGA (mean \(\pm 1\) standard deviation) of different context and state representations.
limited number of slots (previous slots persist implicitly in the state, no need for explicit "carryover" operations). Third, our system is general in that it deals with span-based and categorical slots in the same way, and outputs both the operations and the slot-value pairs in a single forward pass, without the need for specialised architectures. Finally, since not all pre-defined slots are needed in the input, we do not have to access the schema beforehand, and thus it can be learned from the data directly.
### Impact of the Dialogue History
Table 3 compares the effect of the context size for both _light_Cumulative and _Diable_ trained on version 2.2. Comparing the results from the upper and bottom parts of the table, we see that using only the previous state barely changes the JGA of _light_Cumulative but benefits _Diable_. We hypothesise that the previous state, being a cleaner and more compact representation of the conversation, introduces less noise than the full history. This is especially true in conversations where the value of a slot is changed or removed along the way. However, completely removing the dialogue history reduces the ability of the model to recover values referenced at the beginning of the conversation. We hypothesise that this negative effect is not too evident because of the entity bias present in the MultiWoz dataset Qian et al. (2021), which allows the model to memorise and correctly predict values for certain slots even when they are not present in the dialogue context (§6.4). Finally, when evaluated on the cleaned version 2.4, _Diable_ consistently matches or outperforms _light_Cumulative.
### Impact of the Model Size
Table 4 compares the performance of the base and large versions of T5v1.1 for both _light_Cumulative and _Diable_ models. We find that scaling up the model size does not improve JGA; however, we hypothesise that scaling it further can improve performance, similarly to D3ST Zhao et al. (2022).
### Impact of the State Representation
When replacing the tabular state representation with a cumulative one in _Diable_, _ceteris paribus_, we find a \(3\%\) reduction in JGA for version 2.4 and up to \(5\%\) for other versions. Specifically, in this representation the state at the beginning of the conversation includes all the slots with the none value; the INSERT operation is unchanged, while the DELETE operation becomes an update with a none value.
### Error Propagation
_Diable_, like any recursive state model Zhao et al. (2021), is affected by error propagation: since we pass the previous predicted state at each turn, errors can persist. We measure the potential gains stemming from completely avoiding error propagation by using the _gold_ previous state rather than the predicted one in the dialogue context. Table 5 reports the resulting upper bound on JGA for our simple _Diable_ instantiation and highlights that there is potential to improve JGA by adopting recent methodologies targeted at reducing error propagation Zhang et al. (2022).
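The upper-bound experiment can be summarised with a short sketch: at every turn the model is fed either its own previously predicted state or the gold one, with everything else unchanged. The `generate_fn` callable and the data layout are placeholders for our sequence-to-sequence interface, and `apply_operations` refers to the interpreter sketched in Section 3.

```python
def track_dialogue(generate_fn, turns, gold_states, use_gold_previous_state=False):
    """Track the state turn by turn, optionally feeding the gold previous state.

    generate_fn(turn_context, previous_state) stands in for the
    sequence-to-sequence call that returns a string of table operations,
    which apply_operations (sketched in Section 3) turns into a state.
    """
    predicted_states, previous = [], {}
    for i, turn_context in enumerate(turns):
        context = gold_states[i - 1] if (use_gold_previous_state and i > 0) else previous
        current = apply_operations(dict(context), generate_fn(turn_context, context))
        predicted_states.append(current)
        previous = current  # in the predicted setting, errors persist from here on
    return predicted_states
```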
In our experiments, we identify two main sources of error propagation that account for more than \(60\%\) of the total mistakes: slot under-prediction (i.e., the model does not recognise that a certain slot is active) and value misprediction. Under-prediction happens when the system is unable to recognise that specific slots are active. Since MultiWoz presents a strong entity bias (e.g., "Cambridge" appears in \(50\%\) of the destination cities in the training data, Qian et al. (2021)), a possible direction to address this issue is to use data augmentation methods targeted at reducing entity bias and annotation inconsistency Summerville et al. (2020); Lai et al. (2022), thereby improving the overall slot recall. Value misprediction happens when
\begin{table}
\begin{tabular}{l c|c c} \hline \hline
**Context** & **Test Set** & \multicolumn{2}{c}{**JGA**} \\ \hline \hline
**Cumulative state** & & base & large \\ \hline \(D_{1:t}\) & 2.2 & \(57.37\) & \(57.01\) \\ \(D_{1:t}\) & 2.4 & \(65.82\) & \(63.11\) \\ \hline \hline _Diable_ & & base & large \\ \hline \(D_{t-4:t}+\mathcal{B}_{t-1}\) & 2.2 & \(56.74\) & \(56.48\) \\ \(D_{t-4:t}+\mathcal{B}_{t-1}\) & 2.4 & \(65.01\) & \(65.35\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: MultiWoz 2.2 and 2.4 test set JGA for T5v1.1 base and large trained on the MultiWoz 2.2.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Train \(\rightarrow\) Test** & **Predicted** & **Gold** \\ \hline \(2.1\to 2.1\) & \(53.91_{\pm 0.70}\) & \(80.65_{\pm 0.24}\) \\ \(2.1\to 2.4\) & \(70.03_{\pm 0.95}\) & \(90.14_{\pm 0.30}\) \\ \(2.2\to 2.2\) & \(56.30_{\pm 0.67}\) & \(82.50_{\pm 0.28}\) \\ \(2.2\to 2.4\) & \(66.13_{\pm 0.97}\) & \(88.29_{\pm 0.35}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: JGA (mean \(\pm 1\) standard deviation) with gold and the predicted previous state in the input context.
the value for a correctly predicted slot is wrong. This is especially evident when the same slot is discussed in multiple turns and its value can potentially change. One way to address this limitation is to automatically pre-select the previous dialogue turns that contain relevant information about a specific slot and include them in the context window Yang et al. (2021); Guo et al. (2022); Wang et al. (2022).
We do not constrain the generation in any way, and thus _Diable_ can generate invalid slots or values (e.g., attraction-time). In our experiments, errors due to invalid states are rare (less than 2% of the total mistakes): in fact, using the schema to filter incorrectly predicted slots at each turn did not improve the JGA significantly (less than 1%). There are several promising techniques that can further improve the performance of our system, at a minor efficiency cost, such as amendable generation Tian et al. (2021), constrained generation Lin et al. (2020), and schema descriptions Lee et al. (2021); Zhao et al. (2022). Finally, with larger schemas and more diverse conversations, constraining the set of values that the model can predict can potentially further improve performance and safety.
### Future Directions
In §5, we showed that _Diable_ is an effective DST approach that is competitive with budget-matched (in terms of parameter count) cumulative state baselines. We emphasise that our goal is not to reach state-of-the-art JGA on the MultiWoz dataset. We intentionally keep our _Diable_-based models as simple as possible, by not adding extra supervision signals, to clearly measure the effectiveness of our approach. However, the benefits coming from _Diable_ can be easily added on top of other methods. We believe our approach can be improved and expanded in several ways.
**Explicitly Modelling Slot Dependence.**_Diable_ treats slots independently of each other and implicitly relies on the model's capability of learning their co-occurrence patterns. However, as the schema becomes larger and the dialogues longer, slot dependence becomes more complex and the model might fail to learn it effectively. Explicitly modelling the slot dependence can potentially improve performance, robustness (to spurious correlations), and efficiency, for example by selecting only relevant turns from the dialogue history as context to predict slot values. In our experiments, we show consistent improvement across all MultiWoz versions by adding the previous 4 dialogue turns in the dialogue context (Table 1 - last 2 rows). However, this simple heuristic might be suboptimal when the schema is large and the dialogue is long because relevant turns may not be the immediately preceding ones, and we might add irrelevant context or omit relevant information. Instead, adopting a more granular turn selection method based on slot dependence Yang et al. (2021); Guo et al. (2022) can improve both performance and efficiency.
**Improving Table Representations.** When passing the previous state in the context, we simply linearise the table. That is, we represent the previous states as discrete tokens passed in the input context for the next turn. This allowed us to use the T5 architecture without modification. A promising direction for future work is to use continuous representations for the state table Wu et al. (2022). This representation can potentially require fewer or no tokens to represent the state, thus further improving the efficiency of our approach.
## 7 Conclusions
In this paper, we introduce a novel efficient formulation of the DST task and a new system, _Diable_, specifically designed to enhance efficiency and leverage the capabilities of state-of-the-art sequence-to-sequence models. _Diable_ represents the dialogue state as an implicit table and updates it using a sequence-to-sequence model that generates table operations at each turn. Our task formalisation provides a significant efficiency gain (up to \(2.4\mathrm{x}\) speed-up in inference time) compared to the cumulative state approaches adopted by current state-of-the-art DST systems. Moreover, this sizeable improvement comes with a minimal efficiency-accuracy trade-off. In fact, _Diable_ outperforms other efficient DST approaches in the literature by more than 3 absolute JGA points on MultiWoz 2.4 and shows competitive performance with respect to current state-of-the-art DST systems. _Diable_ comes with other advantages: it is simple and general (it makes no assumptions about the schema and does not require any specialised architecture) and it is robust to noise. Moreover, it allows one to easily plug and play sequence-to-sequence models without any architectural modification. Finally, our approach goes beyond the dialogue setting and can be adapted to the sequential processing of long documents for information extraction tasks with memory-constrained language models.
## Acknowledgements
We thank the anonymous reviewers for their helpful questions and comments, which have helped us improve the quality of the paper. We sincerely thank Miguel Ballesteros, Yassine Benajiba, Yogarshi Vyas, Neha Anna John, Yi Zhang, Paula Czarnowska, Laura Aina, Thomas Mueller, and Marcello Federico for their constructive and detailed feedback on the early versions of the paper. We thank Tiago Pimentel, Josef Valvoda, Davide Lesci, and Marco Lesci for their feedback on the final version.
## Limitations
In Section 6.4, we already discussed the limitations and challenges of the proposed model (e.g., the model has access to less contextual information from the conversation history, errors can propagate more easily as it does not re-predict the entire cumulative state at each step, and mistakes can only be fixed by an explicit delete or update operation). In the following, we concentrate on the limitations that concern the scope of this work.
**Languages.** We experimented with a single language (English) and a limited number of datasets (MultiWoz 2.1, 2.2 and 2.4). We do not have experimental evidence that our method can work for other languages, including languages with a richer morphology. Still, our system has been built without any language-specific constraints or resources, other than the T5 checkpoints and the manually annotated training set. Our method can be applied to any other language (without modification) for which these resources are available, or by applying cross-lingual techniques and resources (e.g., multilingual language models, translation/projection of the training set) to transfer to other languages zero-shot. In those cases, the expected quality is lower, but the efficiency advantage of _Diable_ remains.
**Models.** We experimented with two models (T5v1.1 base and large). This is due to restricting our computational budget to be both economically and environmentally friendly, which made it infeasible to conduct thorough experiments using larger-scale language models. However, we re-emphasise that _Diable_ allows one to easily plug and play arbitrary language models, and the efficiency advantage of _Diable_ remains.
**Diversity in the Evaluation Dataset.** We experimented with three different versions of the MultiWoz dataset (2.1, 2.2, and 2.4). Although this is the current benchmark for DST accepted by the community, and we followed the standard evaluation methodology and metrics, we are aware that the results presented might not be directly generalisable to other datasets or real-world scenarios with a considerable data shift with respect to MultiWoz. Additionally, MultiWoz has a certain level of noise and this can have an impact on the evaluation and the generalisation capabilities of the model trained.
|
2307.06754 | Ranking Handball Teams from Statistical Strength Estimation | In this work, we present a methodology to estimate the strength of handball
teams using a statistical method. We propose the use of the
Conway-Maxwell-Poisson distribution to model the number of goals scored by a
team as a flexible discrete distribution which can handle situations of non
equi-dispersion. From its parameters, we derive a mathematical formula to
determine the strength of a team. We propose a ranking based on the estimated
strengths to compare teams across different championships. Applied to female
handball club data from European competitions over the season 2022/2023, we
show that our new ranking can have an echo in real sports events and the
results. | Florian Felice | 2023-07-13T13:46:34Z | http://arxiv.org/abs/2307.06754v1 | # Ranking Handball Teams from Statistical Strength Estimation
###### Abstract
In this work, we present a methodology to estimate the strength of handball teams using a statistical method. We propose the use of the Conway-Maxwell-Poisson distribution to model the number of goals scored by a team, as a flexible discrete distribution which can handle situations of non equi-dispersion. From its parameters, we derive a mathematical formula to determine the strength of a team. We propose a ranking based on the estimated strengths to compare teams across different championships. Applied to female handball club data from European competitions over the season 2022/2023, we show that our new ranking can have an echo in real sports events and their results.
## 1 Background and related work
Handball is a popular sport with growing interest across the world. To date, there does not exist any official ranking tool to compare clubs' and players' performances; the only quantitative metrics provided by the European and International Handball Federations (EHF and IHF) are coefficient ranks that compare countries (based on championships). Handball also suffers from a lack of literature (Saavedra, 2018), in particular in the predictive and analytical fields. In this work, we aim to establish a methodology to estimate the strength of a team via a statistical procedure.
Estimating the strength of a team has long been discussed in the literature, particularly for football. Rating methods often assume some probability distribution to represent the number of goals scored by a team. Some methods are based on the Thurstone-Mosteller model (Thurstone, 1927) or the Bradley-Terry model (Bradley and Terry, 1952) to model the outcome of a match, based on some probability distribution whose location parameter corresponds to the strengths of the modeled teams. These popular techniques, however, assume that the underlying probability distribution is continuous, which is, by nature, in contradiction with the structure of the majority of sports data.
The choice of the underlying probability distribution to represent the number of goals scored by a team also leads to debates. Reep et al. (1971) demonstrated that the Negative Binomial is a suitable distribution to model scores in several ball games. Maher (1982), however, argued that tests for goodness-of-fit plead in favor of the independent Poisson distribution to model football scores. Ley et al. (2019) further investigated the idea of Poisson distributions and, based on a broad comparison of models, suggested the bivariate Poisson model (Karlis and Ntzoufras, 2003) to represent the outcome of football games. From the estimated parameter \(\lambda\) obtained via Maximum Likelihood Estimation, they assume a structure for the parameter of team \(i\) facing opponent team \(j\) as
\[\log(\lambda_{i})=\beta_{0}+(r_{i}-r_{j})+h\cdot\mathds{1}(\text{team $i$ playing at home}) \tag{1}\]
where \(\beta_{0}\in\mathbb{R}\) is a common intercept and \(h>0\) is the effect of playing at home. The parameters \(r_{i}>0\) and \(r_{j}>0\) represent the abilities of team \(i\) and \(j\) that are used as estimation of team's strength.
In the context of handball, Groll et al. (2020) analyzed historical international games to determine the best probability distribution to model the number of goals scored in handball matches. Given the level of under-dispersion observed,
they concluded that the standard Poisson distribution cannot be used and a Gaussian distribution with low variance is the most appropriate.
In this article, we propose a method to derive a ranking based on handball teams strengths. These strengths are obtained using the estimated parameters of an appropriate discrete probability distribution by means of maximum likelihood. We define formulae to transform such statistical estimates into sports abilities and shall observe how mathematical expressions can translate into actual sports facts. To illustrate our results, we apply our method to historical European female matches over the season 2022/2023 and obtain a ranking which is linked to the end of season standings.
Our work is organized as follows. In Section 2, we will compare methods from the existing literature with the Conway-Maxwell-Poisson distribution. After motivating the use of this flexible discrete probability distribution, we will generate a metric representing the strength of a team. In Section 3, we will illustrate the results of the proposed methodology on female club data and propose a ranking of the best performing team based on statistical facts. Finally, we will discuss next steps and future considerations in Section 4 and conclude in Section 5.
## 2 Methodology
In this section, we present the methodology for modeling handball data to represent the strength of a team. We first justify why the classical Poisson distribution cannot be used as the underlying probability distribution. As an alternative, we propose the Conway-Maxwell-Poisson distribution as a flexible probability distribution to estimate, from its parameters, the strength of a team.
### Non equi-dispersion from handball data
When analyzing historical data from female handball matches, one can observe situations with non equi-dispersion. We define the dispersion index \(DI\) as the ratio between the expectation \(\mathbb{E}(X)\) and the variance \(\mathbb{V}(X)\) of a random variable:
\[DI=\frac{\mathbb{E}(X)}{\mathbb{V}(X)}. \tag{2}\]
When \(DI<1\), we are in the situation of over-dispersion, the variance being larger than the expectation. When \(DI>1\), the variance is lower than the average which corresponds to under-dispersion. The final situation where \(DI=1\) leads to equi-dispersion.
To measure this index for female handball data, we analyzed games over the season 2022/2023 in European championships and observed that the empirical mean \(\tilde{\mathbb{E}}(X)=27.9\) is lower than the empirical variance \(\tilde{\mathbb{V}}(X)=31.5\). This leads to a dispersion index \(DI=0.88\), suggesting over-dispersion. Therefore, aligned with conclusions from Groll et al. (2020), the equi-dispersed Poisson distribution cannot be used to model scored goals during handball matches.
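For completeness, the dispersion index of Equation (2) can be checked directly from a vector of per-match goal counts; the counts below are made-up placeholders, not the data used in our analysis.

```python
import numpy as np

def dispersion_index(goals):
    """DI = empirical mean / empirical variance, as in Equation (2)."""
    goals = np.asarray(goals, dtype=float)
    return goals.mean() / goals.var(ddof=1)

# Hypothetical per-match goal counts for one team over part of a season.
goals = [31, 27, 24, 35, 29, 22, 30, 33, 26, 28]
di = dispersion_index(goals)
print(f"DI = {di:.2f} ->", "over-dispersion" if di < 1 else "under- or equi-dispersion")
```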
### Modelling handball games with Conway-Maxwell-Poisson
As an alternative to the standard Poisson distribution, we consider the Conway-Maxwell-Poisson (CMP) distribution (Sellers, 2022). It is a generalization of the common Poisson distribution, but can handle situations with under- and over-dispersion. Its probability mass function is defined by
\[\mathbb{P}(X=x|\lambda,\nu)=\frac{\lambda^{x}}{(x!)^{\nu}}\frac{1}{\sum_{j=0}^ {\infty}\frac{\lambda^{j}}{(j!)^{\nu}}}. \tag{3}\]
The parameter \(\nu\geq 0\) represents the level of dispersion. When \(\nu=1\), one retrieves the equi-dispersed Poisson distribution. When \(\nu<1\), we are in the situation of over-dispersion while \(\nu>1\) represents under-dispersion. Though it does not have an explicit interpretation, \(\lambda>0\) can be seen as a location parameter whose value gets closer to the mean as \(\nu\to 1\). Other special cases of the Conway-Maxwell-Poisson distribution approach the Bernoulli with parameter \(\lambda/(1+\lambda)\) as \(\nu\rightarrow\infty\) and the geometric distribution with probability of success \(1-\lambda\) when \(\lambda<1\) and \(\nu=0\). The distribution can thus be a good alternative to the classical Poisson distribution given its flexibility to handle different levels of dispersion.
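To illustrate how Equation (3) can be evaluated in practice, the sketch below computes the CMP log-probability with a truncated normalising constant and obtains maximum likelihood estimates numerically. It is a simplified stand-in for the library routines we actually rely on, and the truncation limit and optimiser choice are arbitrary assumptions.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def cmp_log_pmf(x, lam, nu, max_terms=200):
    """log P(X = x | lambda, nu) with the infinite sum truncated at max_terms."""
    x = np.asarray(x)
    j = np.arange(max_terms)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    log_z = np.logaddexp.reduce(log_terms)          # log normalising constant
    return x * np.log(lam) - nu * gammaln(x + 1) - log_z

def fit_cmp(goals):
    """Maximum likelihood estimates of (lambda, nu) for a vector of goal counts."""
    goals = np.asarray(goals)

    def neg_log_lik(params):
        lam, nu = params
        if lam <= 0 or nu < 0:
            return np.inf
        return -cmp_log_pmf(goals, lam, nu).sum()

    result = minimize(neg_log_lik, x0=[goals.mean(), 1.0], method="Nelder-Mead")
    return result.x  # estimated (lambda, nu)
```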
To evaluate the goodness of fit of the distribution on handball data, we compare the CMP with the Gaussian and Negative Binomial distributions as mentioned in Groll et al. (2020). In Table 1, we report the estimated log-likelihood
(\(\hat{L}\)) and the associated Akaike Information Criterion (AIC) estimated for the club of Metz Handball over the season 2022/2023.
We observe from Table 1 that, although the three distributions seem to fit the data similarly, the Conway-Maxwell-Poisson distribution minimizes the AIC. Although the AIC aims to penalize complex distributions with numerous parameters to estimate, given that \(k=2\) for all three distributions, minimizing the AIC or maximizing the log-likelihood leads to the same conclusion. One can also argue that, given its flexibility to handle under-, equi- and over-dispersion situations, the Conway-Maxwell-Poisson distribution is the most appropriate choice. We also noted from our experiments that these results and conclusions apply to other teams as well.
We represent in Figure 1 the relation between the empirical mean from a CMP distribution and its associated parameters \(\lambda\) and \(\nu\). We notice a logarithmic relation between the parameter \(\lambda\) and the empirical mean. This relation is of particular interest in the next Section 2.3 when defining the team's strength.
### Estimation of team strengths
The strength of a team can be expressed by its ability to perform in attack and in defense. We thus introduce different formulae to represent defense and attack strengths of a team. We then define the overall strength of a team as a combination of attack and defense abilities.
#### 2.3.1 Defense strength
Adopting the selected Conway-Maxwell-Poisson (CMP) distribution, we use its parameters to represent the strength of a team in defense. The distribution of goals conceded by a team, denoted by \(Y_{d}\), is assumed to follow a \(CMP(\lambda_{d},\nu_{d})\), where the parameter \(\lambda_{d}>0\) can act as a location parameter and \(\nu_{d}\geq 0\) as the dispersion parameter. We then define the defense strength as
\[s_{d}=\frac{\nu_{d}}{\log(\lambda_{d})}. \tag{4}\]
The strength of a team's defense is inversely proportional to the goals it concedes. This is reflected in equation (4) in the sense that the higher the average number of conceded goals (i.e. the higher \(\lambda_{d}\)) the lower the strength \(s_{d}\). We notice
\begin{table}
\begin{tabular}{c c c} \hline \hline Distribution & Log-likelihood & AIC \\ \hline Conway-Maxwell-Poisson & -127.36 & 258.72 \\ Gaussian & -127.39 & 258.78 \\ Negative Binomial & -127.66 & 259.32 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of log-likelihood and AIC evaluated on scored goals by Metz Handball over season 2022/2023.
Figure 1: Relation between CMP parameters \(\lambda\) and \(\nu\) and the empirical expectation
the logarithmic transformation \(\log(\lambda_{d})\) to account for the relation with the empirical mean, as mentioned and illustrated in Figure 1. On the other hand, we want to penalize irregularities of a team; therefore, we want the parameter \(\nu_{d}\) to be as large as possible, corresponding to under-dispersion. We can thus interpret formula (4) as follows: a team is a strong defender if it consistently concedes few goals during matches.
#### 2.3.2 Attack strength
We also assume that the distribution of scored goals follows a CMP distribution, \(Y_{a}\sim CMP(\lambda_{a},\nu_{a})\). A team is considered strong in attack if the average number of scored goals is large. The logic can therefore be considered as the inverse from equation (4). We define the attack strength of a team as
\[s_{a}=\frac{\log(\lambda_{a})}{\nu_{a}} \tag{5}\]
where the location parameter \(\lambda_{a}\) is used as the numerator to show that a high number of goals scored on average increases the attack strength. The parameter \(\nu_{a}\) is used as a penalty, as we expect teams to have regular performances over the season, but they should also be capable of occasionally scoring numerous goals when facing weaker teams. From these mixed requirements, the dispersion parameter is used as the denominator.
#### 2.3.3 Global strength
A team is considered strong when it can perform well in attack and defense. We consider the overall strength of a team as the combination of attack and defense strengths by
\[s=s_{a}\cdot s_{d}=\frac{\log(\lambda_{a})\cdot\nu_{d}}{\nu_{a}\cdot\log( \lambda_{d})}. \tag{6}\]
We observe that a high score for overall strength can be driven by two factors. On the one hand, the team should have a high average of scored goals while demonstrating constant defense performances over time. On the other hand, a team should be able to adapt its attack strategies to teams and be able to score more than expected, taking their opponent by surprise. It should also be able to prevent conceding too many goals and have a low average of conceded goals.
In other words, the goal difference in the competition's ranking should be as large as possible. This can usually be verified in different competitions, where leading teams tend to have a high goal difference (+229 goals for Metz Handball in the French female championship at the end of season 2022/2023, or +257 for Vipers Kristiansand in Norway) while teams at the bottom of the season standings have highly negative goal differences (-107 for Toulon Metropole Var Handball in France for the same season, or -170 for Volda in Norway).
We can now note the importance of the nonlinear transformation for \(\lambda_{a}\) and \(\lambda_{d}\). Given the logarithmic rate of these parameters, a team may have to record a much higher average of scored goals to distinguish itself from other teams. Indeed, considering the slope of the attack strength with respect to scored goals as \(\frac{\partial s_{a}}{\partial\lambda_{a}}\propto\frac{1}{\lambda_{a}}\), as teams get stronger, \(\lambda_{a}\) gets higher and differentiators between teams become marginal since \(\lim_{\lambda_{a}\rightarrow\infty}(\frac{1}{\lambda_{a}})=0\). On the contrary, any improvement in defense performances can lead to more important improvements in the overall strength. Because \(\frac{\partial s}{\partial\lambda_{d}}\propto\lambda_{d}\), the strength will grow linearly as the average of conceded goals decreases.
These statements can also have an echo in sports terms. It is common knowledge for handball players and coaches that the best way to improve a team's performance is to start by improving defense.
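Putting Equations (4)-(6) together, the ranking follows directly from the two fitted CMP distributions of each team. The sketch below reuses the `fit_cmp` helper from Section 2.2 and ranks a couple of entirely fictional teams; real seasons obviously involve more matches and more teams.

```python
import numpy as np

def team_strength(scored, conceded):
    """Attack, defense, and overall strength from per-match goal counts."""
    lam_a, nu_a = fit_cmp(scored)      # CMP fitted on scored goals
    lam_d, nu_d = fit_cmp(conceded)    # CMP fitted on conceded goals
    s_a = np.log(lam_a) / nu_a         # Equation (5)
    s_d = nu_d / np.log(lam_d)         # Equation (4)
    return s_a, s_d, s_a * s_d         # Equation (6)

# Fictional season data: team -> (goals scored per match, goals conceded per match).
season = {
    "Team A": ([34, 31, 37, 29, 33, 36, 30], [24, 26, 22, 25, 23, 24, 26]),
    "Team B": ([27, 25, 30, 26, 28, 24, 29], [27, 29, 26, 30, 28, 27, 31]),
}
ranking = sorted(((team_strength(s, c)[2], name) for name, (s, c) in season.items()),
                 reverse=True)
for strength, name in ranking:
    print(f"{name}: overall strength {strength:.2f}")
```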
## 3 Illustrative applications
As illustrated in Table 1 from Section 2.2, the CMP distribution seems to be the most appropriate choice to model goals scored during a handball match. We plot in Figure 2 the histogram of scored goals over the season 2022/2023 for the female club of Metz Handball (France) and compare it with the fitted theoretical CMP distribution. Furthermore, we estimate the strength parameters for European female clubs and display the ranking in Table 2. The estimations are derived from all games played over the season 2022/2023 in all first division female competitions (from friendly games to the regular championships and the Champions League).
We can observe that the teams considered the strongest are mostly strong competitors in the female European Champions League. In particular, the top clubs Gyori Audi ETO KC and Vipers Kristiansand were part of the EHF final four
in June 2023 and the latter club won the competition. Other clubs are leading their championships in their respective countries.
We also notice that in Table 2, even though the ranking is sorted by the overall estimated strength, the average numbers of goals scored and conceded seem to follow a hierarchy. The top clubs clearly show a high average number of scored goals and a relatively lower number of conceded goals. Some clubs (e.g., SG BBM Bietigheim) can record a higher average of scored goals and still be ranked lower (e.g., below Team Esbjerg) due to a lower defense ability but also more irregularity. Indeed, such teams suffer from a higher value for \(\nu_{a}\) or a lower value for \(\nu_{d}\), suggesting irregularities in attack or defense and penalizing them in the final strength ranking. This justifies the need for formulae (4) and (5) instead of purely relying on the average of scored goals.
## 4 Discussion
Our proposal offers an estimation of attack and defense strengths in order to rank teams and generate features that can be informative and meaningful in subsequent modelling tasks. Provided that one has access to such data, the presented exercise can be extended to other objectives such as estimation of player abilities or generalized to other sports.
### From team strengths to player abilities
Using more granular data (not publicly available) on player performances for each game and over several seasons, one can also estimate the attack strength of a player. Considering that the data will most likely also suffer from under- or over-dispersion, the CMP distribution seems a good choice to fit the number of goals scored by a player. Using formula (5), we can therefore estimate the attack strength of an individual player. Not focusing only on goals scored, playing ability could also include components such as passing ability, combining scoring and passing abilities into a global attack
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Team & Avg. scored & Avg. conceded & Attack strength & Defense strength & Strength \\ \hline Gyóri Audi ETO KC & 33.32 & 24.32 & 3.49 & 3.16 & 11.00 \\ Vipers Kristiansand & 37.62 & 26.38 & 3.57 & 3.07 & 10.96 \\ Podravka Vegeta & 30.50 & 21.75 & 3.39 & 3.21 & 10.89 \\ Metz Handball & 33.58 & 24.00 & 3.47 & 3.12 & 10.85 \\ Team Esbjerg & 33.33 & 24.67 & 3.48 & 3.11 & 10.83 \\ SG BBM Bietigheim & 34.63 & 25.21 & 3.54 & 3.05 & 10.80 \\ HC Dunajska Streda & 29.37 & 22.62 & 3.38 & 3.15 & 10.63 \\ Herning-Ikast Handbold & 28.71 & 23.29 & 3.39 & 3.13 & 10.61 \\ DVSC Schaeffler & 30.83 & 24.11 & 3.39 & 3.12 & 10.59 \\ CSM Bucuresti & 33.13 & 25.83 & 3.48 & 3.05 & 10.58 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Top 10 strongest female teams in Europe for season 2022/2023.
Figure 2: Histogram of goals scored by Metz Handball over season 2022/2023 vs. theoretical CMP distribution
strength. Accessing data such as interceptions and successful blocks (e.g., faults without a sanction such as a yellow card or a 2-minute penalty), the defense ability can be modeled in a similar fashion, and one can derive a defense ability at the player level.
Therefore, combining attack and defense abilities as defined by equation (6), one can estimate individual abilities and derive a ranking. Such a ranking can help subsequent modelling exercises by adding informative variables regarding the strength of individual players and not only the global strength of a team. Additionally, the individual ranking can be used as a new source of information for team managers to assess the potential abilities of a player when recruiting. Indeed, one can obtain a time-dependent ranking and observe the evolution of a player over several seasons. This can further lead to forecasting exercises in order to identify players with high potential to be added to the squad.
### Generalization to other sports from Conway-Maxwell-Poisson distribution
Modelling sports requires relying on discrete distributions, though the issue of over- or under-dispersion is a recurrent problem (Karlis and Ntzoufras, 2008; Van Bommel et al., 2021). Given constraints similar to those in the present work, one can replicate the discussed logic on other sports' data. The methodology from Ley et al. (2019) can be merged with our proposed methodology in order to obtain football team abilities based on a distribution that can handle the problem of under-dispersion. One can thus define new rankings and generate new informative features to include in predictive Machine Learning models. Using the framework of Statistically Enhanced Learning (SEL) (Felice et al., 2023), one can include such generated features in the feature set to improve the predictive model.
## 5 Conclusion
Handball is a fast-paced sport whose goal counts cannot be analyzed via standard count distributions due to the problem of under- or over-dispersion. We showed that, using an appropriate probability distribution, one can define meaningful statistical estimates that approximate the strength of a team.
The proposed methodology allows us to generate very informative features that can be included in predictive models in the spirit of Statistically Enhanced Learning. It also offers the possibility of data-driven analyses of a team's performance to later support team managers in the definition of their sports strategies. With access to more granular data, this methodology can be adapted to the estimation of player abilities and offer tools that allow coaches to make data-driven decisions in their recruitment processes.
|
2303.05972 | Classifying the evolution of COVID-19 severity on patients with combined
dynamic Bayesian networks and neural networks | When we face patients arriving to a hospital suffering from the effects of
some illness, one of the main problems we can encounter is evaluating whether
or not said patients are going to require intensive care in the near future.
This intensive care requires allotting valuable and scarce resources, and
knowing beforehand the severity of a patient's illness can improve both its
treatment and the organization of resources. We illustrate this issue in a
dataset consisting of Spanish COVID-19 patients from the sixth epidemic wave
where we label patients as critical when they either had to enter the intensive
care unit or passed away. We then combine the use of dynamic Bayesian networks,
to forecast the vital signs and the blood analysis results of patients over the
next 40 hours, and neural networks, to evaluate the severity of a patient's
disease in that interval of time. Our empirical results show that the
transposition of the current state of a patient to future values with the DBN
for its subsequent use in classification obtains better accuracy and g-mean
scores than a direct application of a classifier. | David Quesada, Pedro Larrañaga, Concha Bielza | 2023-03-10T15:05:32Z | http://arxiv.org/abs/2303.05972v1 | # Classifying the evolution of COVID-19 severity on patients with combined dynamic Bayesian networks and neural networks
###### Abstract
When we face patients arriving to a hospital suffering from the effects of some illness, one of the main problems we can encounter is evaluating whether or not said patients are going to require intensive care in the near future. This intensive care requires allotting valuable and scarce resources, and knowing beforehand the severity of a patient's illness can improve both its treatment and the organization of resources. We illustrate this issue on a dataset consisting of Spanish COVID-19 patients from the sixth epidemic wave, where we label patients as critical when they either had to enter the intensive care unit or passed away. We then combine the use of dynamic Bayesian networks, to forecast the vital signs and the blood analysis results of patients over the next 40 hours, and neural networks, to evaluate the severity of a patient's disease in that interval of time. Our empirical results show that the transposition of the current state of a patient to future values with the DBN for its subsequent use in classification obtains better accuracy and g-mean scores than a direct application of a classifier.
keywords: Dynamic Bayesian networks, Neural networks, Forecasting, Classification, COVID-19 +
Footnote †: journal: Journal of LaTeX Templates
## 1 Introduction
Throughout the COVID-19 pandemic, healthcare systems all around the world have suffered a staggering pressure due to the sheer number of infected patients that arrived to medical centers. The nature of this pandemic was such that patients could range from completely asymptomatic to presenting
critical respiratory issues. As such, and given that the amount of resources in medical centers is limited, it was a crucial task to discern whether or not a patient presented symptomatology that could devolve into a critical condition or into only mild afflictions.
The issue of predicting the clinical outcome of COVID-19 patients has seen much interest in recent years. Some authors opted for discerning the severity of the illness depending on certain comorbidities like heart failure [1], neurodegenerative diseases [2], cardiovascular diseases [3], or chronic pulmonary diseases [4]. These studies have shown that comorbidities related to COVID-19 increase the risk of death of a patient. As such, many efforts are also put into preprocessing clinical data and selecting an appropriate set of variables that define the effect of the illness.
From the point of view of predicting the outcome from data, many machine learning approaches have been tested in the literature. Some authors opted for performing a statistical analysis and applying logistic regression to classify mortality [2; 5; 6; 7]. Another popular approach consists of training simple perceptron or multilayer neural network models to approximate a function that relates the variables in the system and classifies patient instances [8; 9; 10; 11]. Tree-based models like random forests [10; 12; 13; 14] or XGBoost [10; 15; 16; 17] are also some of the most popular and best performing tools for this task. In the case of interpretable models, Bayesian networks have also been applied to predicting the severity of COVID-19 on patients while also trying to gain some insight on the problem at hand [18; 19].
Another possible approach is to view the problem as a time series forecasting issue. Each patient that arrives at a hospital has their vital signs measured and blood analysis performed. Afterwards, if the patient is not discharged and requires further care, new recordings are performed on a semi-regular basis. This generates time series data for each patient, where measurements are taken every several hours until either the patient overcomes the illness or passes away. In this scenario, time series models can be applied to forecast the state of a patient and predict whether they will be suffering from severe symptoms in the near future or not. This approach has also been explored in the literature with models like dynamic Bayesian networks [20], recurrent neural networks [21] and dynamic Markov processes [22].
In this work, we took a hybrid approach between static and dynamic models. We used data from patients infected during the sixth Spanish COVID-19 wave who arrived at the Fundacion Jimenez Diaz hospitals in
Madrid. After preprocessing this data and selecting an appropriate variable set, we trained hybrid models combining dynamic Bayesian networks (DBN) as forecasting models and neural networks (NN) as classifier models. The main idea of our proposal is to obtain the first vital signs and blood analysis from a patient and then forecast these variables with the DBN model up to a certain point in the near future. Afterwards, we can use the classifier model to identify the forecasted values as critical or not critical. This procedure can help identify whether a patient who has just arrived at triage in a medical center is going to worsen significantly over the following days.
The rest of this paper is organized as follows. Section 2 gives some background on dynamic Bayesian network models. Section 3 explains the architecture of the hybrid model with the neural network, where this classifying model is interchangeable with any other static classifier. Section 4 shows the experimental results of the tested models. Finally, Section 5 gives some conclusions and introduces future work.
## 2 Dynamic Bayesian networks
Dynamic Bayesian networks [23] are a type of probabilistic graphical model that represent conditional dependence relationships between variables using a directed acyclic graph. They extend the framework of Bayesian networks to the case of time series. Similarly to static BNs, each of the nodes in the graph represents a variable in the original system and the arcs represent their probabilistic relationships. In the case of DBNs, time is discretized into time slices that represent consecutive instants. This way, we have a representation of all the variables in our system across time. Let \(\mathbf{X}^{t}=\{X_{0}^{t},X_{1}^{t},\ldots,X_{n}^{t}\}\) be the set of all the variables in the time slice \(t\). Then, we can define the joint probability distribution of the network up to some horizon \(T\) as:
\[p(\mathbf{X}^{0},\ldots,\mathbf{X}^{T})\equiv p(\mathbf{X}^{0:T})=p(\mathbf{ X}^{0})\prod_{t=0}^{T-1}p(\mathbf{X}^{t+1}|\mathbf{X}^{0:t}), \tag{1}\]
where \(p(\mathbf{X})=\prod_{i=0}^{n}p(X_{i}|\mathbf{Pa}_{i})\) represents the probability distribution of a set of nodes \(\mathbf{X}\) and \(\mathbf{Pa}_{i}\) represents the set of parent nodes of \(X_{i}\) in the graph. However, in Equation 1 all time slices \(\mathbf{X}^{0:T}\) have to be taken into account to calculate the joint probability distribution. In this scenario, it is
very common to assume that the future state of the system is independent of the past given the present. A DBN that follows this assumption is called a first-order Markovian network. This implies that only the last instant is used to calculate the next one and it simplifies the calculation of the joint probability distribution greatly:
\[p(\mathbf{X}^{0:T})=p(\mathbf{X}^{0})\prod_{t=0}^{T-1}p(\mathbf{X}^{t+1}| \mathbf{X}^{t}). \tag{2}\]
An example of the structure of a DBN with Markovian order 1 is shown in Fig. (1). One advantage that DBN models present is that they do not need to be trained with time series of constant length. Due to the Markovian order assumption in Equation (2), we only need to recover several batches of two consecutive instants from the original dataset to learn the structure and parameters of the network. We can use several time series with different lengths recovered from the same stochastic process to train a DBN model from data. The reason for this is that we only need the values of the variables inside the temporal window defined by the Markovian order to train our model, so the total length of the time series is not relevant in the
Figure 1: Example of the structure of a first-order Markovian DBN with two time slices \(t_{0}\) and \(t_{1}\). To calculate the future values in \(t_{1}\), we would only need to know the current values of our variables in \(t_{0}\).
learning phase. This helps when applying this kind of model to real-world problems, where the length of the data from processes can vary depending on circumstances outside of the system.
## 3 Combining DBNs and static classifiers
When predicting the near-future severity of COVID-19 in patients, we face several issues. On one hand, we only have the data of their vital signs and blood analysis from when the patient first arrives at the hospital. As we are interested in their state over the following days, we need to forecast the evolution of these variables over time. On the other hand, we need a mechanism that, given a patient's state vector, identifies whether or not they are in a critical state.
### Forecasting the state vector
When a patient afflicted with COVID-19 stays in intensive care for a prolonged period of time, they are monitored and new readings of their vital signs and blood analysis are recorded on a semi-regular basis of several hours. All the variables in these instances form a state vector \(\mathbf{S}=[s_{0},s_{1},\ldots,s_{n}]\) at each point in time, and the final data recovered from a patient \(k\) is a vector of instances \(\mathbf{P}_{k}=\left[\mathbf{S}^{0},\mathbf{S}^{1},\ldots,\mathbf{S}^{T}\right]\) ordered in time from the oldest vital sign readings and blood analysis to the most recent ones. Combining the data from several patients generates a time-series dataset that can be used to train a time-series forecasting model. It is worth noting that the length \(T\) of the data from each patient depends on the time they spent in the hospital. If a patient is discharged with only one vital sign reading and blood analysis, then we do not have data with a time component. In this situation, this patient cannot be used to train our temporal model.
Given that in our case all the variables in a state vector \(\mathbf{S}^{t}\) are continuous, we will use a Gaussian DBN to model the dependencies and to perform forecasting. A DBN model can help us gain some insight on which variables have a greater impact on the evolution of a patient. Furthermore, the ability of DBNs to be trained with different length time series after deciding a Markovian order is also relevant in this problem, given that the number of instances per patient varies greatly. By setting a Markovian order 1, we will be able to use the data from all patients except the aforementioned ones with a single reading, where no temporal data at all can be used.
After training the DBN model, we can use it to forecast the state vector of a patient up to a certain point in the future. This forecasting represents an estimate of the evolution that the patient will undergo, and it can be used to assess whether it will lead to severe symptoms or not. This process effectively gives an estimate of the future vital signs and blood analysis of a patient without spending additional resources and time on it.
### Classifying critical values
The task of evaluating whether a patient is in a critical state of the COVID-19 infection has been performed in the literature mainly through some kind of medical score [24] or by labelling instances due to some external indicator, for example being transferred to the intensive care unit. If we obtain a labelled dataset of patients through any of these methods, we can then take a machine learning approach by training classifier models that identify whether a patient is in a critical state given their state vector \(\mathbf{S}\).
If we combine this approach with the forecasting of the state vector, we get a hybrid model between static classifiers and time series models that is capable of evaluating the present and near-future condition of a person suffering from COVID-19. When a patient arrives at a hospital and gets their vital signs and blood analysis recorded, we obtain the state vector \(\mathbf{S}^{0}\) of the very first instant of time. Then we can feed \(\mathbf{S}^{0}\) to a trained classifier model to evaluate whether this patient is already in a critical state or not. If this is not the case, we can then use \(\mathbf{S}^{0}\) as the starting point for our DBN to perform forecasting. This returns the values of \(\mathbf{S}^{1},\mathbf{S}^{2},\ldots,\mathbf{S}^{t}\) up to a certain point \(t\) in time. All these state vectors can in turn be classified to evaluate the expected severity of the symptoms in that patient. With this method, we can see whether a patient is expected to end up suffering from critical COVID-19 and approximately when this situation will occur. To illustrate this whole process, a schematic representation of this framework can be seen in Fig. (2).
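To make the pipeline concrete, the following minimal Python sketch outlines the forecast-then-classify loop of Fig. (2). The `dbn.forecast` and `classifier.predict` calls are hypothetical stand-ins for the trained DBN and classifier interfaces; our actual implementation is written in R.

```python
def assess_patient(s0, dbn, classifier, horizon=10):
    """Forecast-then-classify loop of the hybrid framework.

    `dbn.forecast` and `classifier.predict` are hypothetical interfaces for the
    trained DBN and static classifier. Returns the first instant (0 = arrival,
    1..horizon = forecasted, each 4 hours apart) at which a critical state is
    predicted, or None if no critical state is expected."""
    if classifier.predict(s0) == 1:                  # already critical on arrival
        return 0
    trajectory = dbn.forecast(s0, steps=horizon)     # [S^1, ..., S^horizon]
    for t, s_t in enumerate(trajectory, start=1):
        if classifier.predict(s_t) == 1:             # forecasted state is critical
            return t
    return None
```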
Our proposed framework supports any kind of classifier that is able to produce a discrete prediction given a continuous state vector \(\mathbf{S}^{t}\). We used a modular implementation where the classifier can be a support vector machine, an XGBoost model, a neural network or a Bayesian classifier. All these classifiers have seen use in the literature and could find applications where one is more effective than the others. Due to this architecture, any other classifier model could potentially be introduced as a new module if the need arises.
In our case, the architecture that was most effective was the combination with a neural network. The network had an internal structure of 5 hidden dense layers with 64, 32, 16, 16 and 8 neurons each. They all used ReLU activation functions and had their weights initialized with the identity. The last layer used a single neuron with a sigmoid activation function for binary classification. An output greater than 0.5 is interpreted as predicting a critical status for the patient, and an output less than or equal to 0.5 as predicting a non-critical scenario. A representation of this structure can be seen in Fig. (3).
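For reference, a minimal Keras sketch of this architecture in Python is shown below; the model in this work was actually built through the R interface to keras, and `n_features` is a placeholder for the dimension of the state vector.

```python
from tensorflow import keras
from tensorflow.keras import layers, initializers

def build_classifier(n_features):
    """Critical/non-critical classifier described in the text: 5 hidden dense
    layers (64, 32, 16, 16, 8) with ReLU activations and identity weight
    initialization, plus a single sigmoid output neuron."""
    inputs = keras.Input(shape=(n_features,))
    x = inputs
    for units in (64, 32, 16, 16, 8):
        x = layers.Dense(units, activation="relu",
                         kernel_initializer=initializers.Identity())(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```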
## 4 Experimental results
For our experiments, we used a dataset consisting of anonymous data recovered from 4 different Spanish hospitals from the Fundacion Jimenez Diaz in Madrid. After preprocessing it, we used this data to fit our proposed model and evaluate its capabilities to predict the future critical status of patients suffering from COVID-19 infections.
Figure 2: Schematic representation of the classifier-DBN framework. After obtaining a state vector \(\mathbf{S}_{0}\) from a patient, we can use it to forecast the next \(t\) state vectors with the DBN model and check if they are critical with our static classifier.
### Preprocessing
Our raw dataset covers the period from the 27th of October 2021 to the 23rd of March 2022. In total, there are 21,032 rows with incomplete data from 15,858 patients and 532 variables, most of which present missing values for the majority of patients. This is a common occurrence in a medical dataset of these characteristics, given that the same tests are not performed on all patients and some of the results have to be recorded manually. The data covers patients with cases of COVID-19 confirmed via a positive PCR test.
The consecutive rows in the dataset that correspond to the same patient are ordered in time, forming time-series sequences. However, the frequency at which the instances were recorded is uneven. This is because performing blood analysis on patients and obtaining the results does not take a fixed amount of time and is not always done at fixed intervals. To tackle this issue, we established a period of 4 hours between each row and formed batches of instances where missing data was filled with the average values of the rest of the instances in the same batch. This 4-hour period was chosen because new tests were performed roughly every 4 hours on average in our dataset.
Of the 21,032 rows, 13,971 came from patients that appear in only a single instance; the vast majority of these were discharged from the hospital
Figure 3: Structure of the neural network model used in the experiments.
afterwards due to mild symptomatology, and only 48 of these patients passed away. This data cannot be used to train the DBN models, given that a single record is not enough to form a time-series sequence. However, it will be used to train the classifier models. Of the remaining patients with more than a single instance, the majority have either two or three rows of recorded values. To illustrate this, we show a histogram with the distribution of the number of instances per patient in Fig. (4).
Regarding the 532 variables in our dataset, most of them correspond to specific values in uncommon tests and analysis, and they have over 70% of missing values across all instances. In our case, we have opted for reducing the number of variables to only those that are obtained from the vital signs of a patient, like their body temperature and their heart rate, their descriptive characteristics like age, gender and body mass index, and the variables from
Figure 4: Histogram with the number of instances per patient greater than 1 in the dataset. Inside the last bracket we have grouped all the patients with 10 or more instances. A higher number of instances indicates a longer stay in the hospital and as such a more severe case of COVID-19, which is far less common than a mild case.
a regular blood analysis, like the albumin and D-dimer values. All these variables are routinely collected when a patient arrives at urgent care, and obtaining them does not require a significant expense of resources. This reduced the number of variables to 62, and from those we chose to retain the vital sign readings and the descriptive characteristics, while allowing feature subset selection on the blood-analysis-related variables. This subset selection was performed via random forest feature importance for classifying our objective variable, namely whether or not a patient was admitted to the intensive care unit or passed away. This is what defines our critical cases of COVID-19, which represent only 18.8% of the total number of patients in our dataset.
### Experiment results
In this section we show the experimental results obtained with different combinations of classifier-DBN models. For our experiments, we used an XGBoost, a support vector machine, a neural network and a Bayesian classifier. In particular, this Bayesian classifier is a tree-augmented naive Bayes built following the hill climbing super-parent (HCSP) algorithm [25]. The whole project was coded in R and is publicly available online in a GitHub repository1. The dataset used is not made public due to privacy and legal reasons.
Footnote 1: [https://github.com/dkesada/Class-DBN](https://github.com/dkesada/Class-DBN)
Regarding the software we used in our experiments, the DBN models were trained using our own public package "dbnR"2, the XGBoost models were trained with the "xgboost" package [26], the support vector machines were trained with the "e1071" package [27], the neural networks with the "keras" R interface [28] and the Bayesian classifiers were trained with the "bnclassify" package [29]. The parameters of each classifier were optimized using differential evolution with the R package "DEoptim" [30] based on the geometric mean (g-mean) [31] of the models. This metric is defined as \(g_{m}=\sqrt{recall*specificity}\), which uses all values in the resulting confusion matrix when calculating the final score. Using both the recall and the specificity of the predictions ensures that the imbalance between critical cases and non-critical cases is taken into account when optimizing the parameters. We do not want a model optimized solely on accuracy, because it would lead to models that only predict the majority class of non-critical for all patients.
Footnote 2: [https://github.com/dkesada/dbnR](https://github.com/dkesada/dbnR)
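For clarity, the g-mean score used as the optimization target can be computed directly from the confusion matrix; the following is a small Python sketch assuming scikit-learn is available (our experiments computed it in R).

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def g_mean(y_true, y_pred):
    """Geometric mean of recall (sensitivity) and specificity,
    g_m = sqrt(recall * specificity), which penalizes classifiers that
    ignore the minority (critical) class."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    specificity = tn / (tn + fp) if (tn + fp) > 0 else 0.0
    return np.sqrt(recall * specificity)
```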
To alleviate the issue of imbalanced data, we also applied SMOTE oversampling with the 'DMwR' package [32; 33] to synthetically generate instances of both critical and non-critical cases. This is a common practice that creates synthetic data to offset the difference between the number of instances of the majority and minority classes. In our case, we use SMOTE to create modified datasets for training our classifiers. This helps the models avoid getting stuck predicting the majority non-critical class for almost all instances.
To test our hybrid models, we take the state vector of a patient in an instance and forecast up to 10 instants into the future with the DBN model. Then, we use the classifier model to classify each of these forecasts as critical or not and compare the predicted label with the true label of the instance. Given that each instance is separated from the next one by 4 hours, in total we forecast 40 hours into the future with the DBN model. With this method, we are able to see the behaviour of the classifiers and the changes in accuracy and g-mean as we use state vectors from further into the future. The average results obtained across all forecasts of the models can be seen in Table 1.
The results in Table 1 show that, on average, the best model is the neural network in terms of both accuracy and g-mean. The performance of the SVM and the HCSP is very similar in terms of accuracy, but the higher g-mean score of the SVM shows that it is better able to discern the less common critical instances. For this particular case, although the XGBoost model is very popular in the literature, it obtains worse overall results than the rest of the classifiers. In our experiments, due to the imbalance between classes, we had to find a compromise between the global accuracy and the accuracy of the minority class. If left unchecked, the models would become biased toward the majority class and predict nearly every
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **Accuracy** & **g-mean** & **Train (h)** & **Exec (s)** \\ \hline XGBoost & 0.698 & 0.455 & 1.950 & 9.634 \\ SVM & 0.735 & 0.522 & 1.145 & 9.654 \\ NN & 0.771 & 0.541 & 1.384 & 9.863 \\ HCSP & 0.736 & 0.468 & 1.046 & 9.878 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean results in terms of the accuracy, g-mean score, training and execution time of the models on average for all the experiments. It is worth noting that training time includes optimization of parameters, which involves the creation of multiple models to evaluate different configurations.
single instance as non-critical, invalidating the use of the model while obtaining accuracies close to 90%. By using the g-mean as the optimization metric in combination with SMOTE oversampling, we were able to alleviate this problem. A high accuracy on the majority class of non-critical patients helps reduce the oversaturation of ICU resources, given that all models can evaluate whether or not a patient will reach a critical state of the COVID-19 infection in less than 10 seconds. On the other hand, being able to discern the few critical cases that arise is also needed to help doctors determine which patients need more specific care to try to reduce the mortality rate. Regarding training time, training and tuning the models takes on average between one and two hours. Given that these kinds of models should not need to be retrained until some significant change happens with the disease, such as a new variant or new symptoms appearing in patients that differ from the training data used, these training times are reasonable for a procedure performed only once.
Given that the model with the NN obtains the best average results, we show in Fig. (5) the details of its performance depending on the time horizon. The first instant at 0 hours is equivalent to performing classification with the NN model directly on the state vector obtained from the patient. From
Figure 5: Classification results of the neural network model as we feed it state vectors further ahead in time with the DBN model. The classification performance of the neural network improves monotonically by combining it with the DBN forecastings.
there, we forecast this state vector up to 40 hours ahead with the DBN model and use the results as input for the NN model. We can see that the NN model performs considerably better if we pair it with the DBN to classify the forecasted state of the patients rather than their initial state. As we forecast the state vector of patients further into the future, the NN improves its classification performance monotonically.
In addition, DBNs perform multivariate inference and are interpretable models. This allows them to offer doctors the forecasted values of any variable in the system as well as the underlying relationships with the rest of the variables that led to those results. In the case of relevant values like the oxygen saturation of a patient, which is a good indicator of the state of a patient
Figure 6: Subset of relevant variables to the forecasting of maximum oxygen saturation (light blue) in the DBN model. The initial and maximum oxygen saturation variables from the last instant (in red) affect the calculation of the next maximum oxygen saturation value. Other variables like body temperature, systolic and diastolic blood pressures and heart rate also influence this value in the forecast.
suffering from respiratory issues, we show an example of the relationships present in the DBN model in Fig. (6). This subgraph shows the variables directly related to the maximum oxygen saturation registered in a 4-hour interval. We can see the previous maximum value of oxygen saturation from the last instant, which is to be expected due to the autoregressive component of time series. On a similar note, the initial values of oxygen saturation registered help the model define the range of the maximum value: lower initial oxygen saturation will likely lead to lower maximums and vice versa. Additionally, we also find that body temperature indicating fever, maximum diastolic blood pressure and minimum heart rate are direct indicators of maximum oxygen saturation and play an important role in its forecasting. Lastly, minimum systolic and diastolic blood pressures are also affected. A lower level of oxygen saturation will cause higher blood pressure, increasing both minimums. This situation is reflected in the fact that these values are child nodes that depend on the current value of oxygen saturation.
## 5 Conclusions
In this work we have presented a hybrid model between DBNs and static classifiers where the state vector recovered from a patient suffering from COVID-19 is used to forecast their future state. This information is then used to assess how severe their infection will be over the following 40 hours based on their current vital signs and blood analysis. This method shows the best performance when combining DBN and NN models. While the NN is capable of discerning whether or not a patient will reach a critical state with better accuracy than the other classifiers, the DBN adds an explainable layer regarding the variables extracted from the patient. This model could help doctors decide whether or not a patient needs further specialized care and allow for a better organization of the resources available in medical centers. Additionally, we offer the code of all our models online for future reference and use.
For future work, this model could be applied in different industrial environments that require forecasting time series and classifying the state of the system. The combination of a generative model that forecasts the state of a system with a classifier model that evaluates this expected future state is a promising framework that could prove useful in applications like remaining useful life estimation. Another possible improvement is the use of the DBN model as a simulator, introducing interventions into the forecasting in order to see the effects that possible actions could have on the expected future. In the medical case, the effects of specific medications or treatments could potentially be reflected in the DBN predictions, and in other industrial cases this could lead to optimizing the expected future based on possible interventions in the initial state.
## Acknowledgements
This work was partially supported by the Madrid Autonomous Region through the "MadridDataSpace4Pandemics-CM" (REACT-EU) project. We are also grateful to the Fundacion Jimenez Diaz for providing the data for this work and to the doctors Sara Heili and Lucia Llanos for their valuable insights.
|
2309.01597 | Revealing the True Cost of Locally Differentially Private Protocols: An
Auditing Perspective | While the existing literature on Differential Privacy (DP) auditing
predominantly focuses on the centralized model (e.g., in auditing the DP-SGD
algorithm), we advocate for extending this approach to audit Local DP (LDP). To
achieve this, we introduce the LDP-Auditor framework for empirically estimating
the privacy loss of locally differentially private mechanisms. This approach
leverages recent advances in designing privacy attacks against LDP frequency
estimation protocols. More precisely, through the analysis of numerous
state-of-the-art LDP protocols, we extensively explore the factors influencing
the privacy audit, such as the impact of different encoding and perturbation
functions. Additionally, we investigate the influence of the domain size and
the theoretical privacy loss parameters $\epsilon$ and $\delta$ on local
privacy estimation. In-depth case studies are also conducted to explore
specific aspects of LDP auditing, including distinguishability attacks on LDP
protocols for longitudinal studies and multidimensional data. Finally, we
present a notable achievement of our LDP-Auditor framework, which is the
discovery of a bug in a state-of-the-art LDP Python package. Overall, our
LDP-Auditor framework as well as our study offer valuable insights into the
sources of randomness and information loss in LDP protocols. These
contributions collectively provide a realistic understanding of the local
privacy loss, which can help practitioners in selecting the LDP mechanism and
privacy parameters that best align with their specific requirements. We
open-sourced LDP-Auditor in \url{https://github.com/hharcolezi/ldp-audit}. | Héber H. Arcolezi, Sébastien Gambs | 2023-09-04T13:29:19Z | http://arxiv.org/abs/2309.01597v3 | # Revealing the True Cost of Local Privacy: An Auditing Perspective
###### Abstract.
While the existing literature on Differential Privacy (DP) auditing predominantly focuses on the centralized model (_e.g._, in auditing the DP-SGD algorithm), we advocate for extending this approach to audit Local DP (LDP). To achieve this, we introduce the LDP-Auditor framework for empirically estimating the privacy loss of locally differentially-private mechanisms. This approach leverages recent advances in designing privacy attacks against LDP frequency estimation protocols. More precisely, through the analysis of eight state-of-the-art LDP protocols we extensively explore the factors influencing the privacy audit, such as the impact of different encoding and perturbation functions. Additionally, we investigate the influence of the domain size and the theoretical privacy loss parameter \(\epsilon\) on local privacy estimation. In-depth case studies are also conducted to explore specific aspects of LDP auditing, including distinguishability attacks on LDP protocols for longitudinal studies and multidimensional data. Finally, we present a notable achievement of our LDP-Auditor framework, which is the discovery of a bug in a state-of-the-art LDP Python package. Overall, our LDP-Auditor framework as well as our study offer valuable insights into the sources of randomness and information loss in LDP protocols. These contributions collectively provide a realistic understanding of the local privacy loss, which can help practitioners in selecting the LDP mechanism and privacy parameters that best align with their specific requirements.
Local differential privacy, Privacy auditing, Privacy attacks
noisy output, enabling LDP-Auditor to directly evaluate the privacy guarantees offered by LDP mechanisms, which makes it well-suited for privacy auditing. Expanding beyond (Beng et al., 2017; Li et al., 2018), we also propose distinguishability attacks for two additional LDP protocols.
Figure 1 exemplifies an instance of our auditing results for a theoretical upper bound of \(\epsilon=2\) (indicated by the dashed red line) across eight LDP frequency estimation protocols: Generalized Randomized Response (GRR) (Krishnan et al., 2017), Subset Selection (SS) (Shi et al., 2017; Li et al., 2018), Symmetric Unary Encoding (SUE - a.k.a. Basic One-time RAPPOR) (Shi et al., 2017), Optimal Unary Encoding (OUE) (Shi et al., 2017), Thresholding with Histogram Encoding (THE) (Shi et al., 2017), Summation with Histogram Encoding (SHE) (Shi et al., 2017), Binary Local Hashing (BLH) (Li et al., 2018) and Optimal Local Hashing (OLH) (Shi et al., 2017). Among these, GRR demonstrated a tight lower bound estimation for \(\epsilon_{lb}\) as it does not require a specific encoding. On the other hand, other LDP protocols presented lower bounds within \(\leq 2\)x of the theoretical \(\epsilon\) (such as SUE, THE and SHE), and even within \(\leq 4\)x of the theoretical \(\epsilon\) (like BLH). _These results indicate that either the state-of-the-art attacks are still not representative of the worst-case scenario or that the upper bound analyses of these LDP protocols are not tight. The latter assumption might occur for LDP protocols that incorporate sources of randomness not captured in the worst-case definition of LDP in Equation_ (1).
_We have investigated several factors influencing the audit_, including the effect of the theoretical privacy loss \(\epsilon\) in low, mid and high privacy regimes as well as the impact of the domain size \(k\) on local privacy estimation. Additionally, _we conducted an analysis of in-depth case studies to explore specific aspects of LDP auditing_. Notably, given that protocols employing Local Hashing (LH) encoding, such as BLH and OLH, exhibited the least tight empirical \(\epsilon_{lb}\), we investigated the privacy cost of LH without LDP randomization. Furthermore, we examined the degradation of local privacy loss in repeated data collections compared to the theoretical upper bound imposed by (L)DP sequential composition (Li et al., 2018). In this context, within a generic framework, we proposed distinguishability attacks on LDP protocols in _longitudinal studies_ (_cf._ Algorithm 2). Additionally, we addressed the case of _multidimensional data_, proposing distinguishability attacks for LDP protocols following the RS+FD (Beng et al., 2017) solution (_cf._ Algorithm 3). Finally, we also show how LDP-Auditor successfully identified a bug in one state-of-the-art LDP Python package, in which the estimated lower bound \(\epsilon_{lb}\) contradicts the upper bound \(\epsilon\) (see Figure 8).
Taking all these aspects into account, the coverage of our analysis is broadened, allowing for a more comprehensive assessment of the robustness of various LDP protocols in realistic data collection scenarios. More precisely, our main contributions of this paper can be summarized as follows:
* We introduce the LDP-Auditor framework, which is designed to estimate empirical lower bounds for the privacy loss of LDP mechanisms. This framework provides a realistic assessment of privacy guarantees, which is essential for making informed decisions about privacy-utility trade-offs and on stimulating the research of new privacy attacks.
* We introduce novel distinguishability attacks specifically tailored to LDP protocols for longitudinal studies and multidimensional data. These attacks enrich the set of privacy analysis techniques available for evaluating LDP mechanisms in practical settings.
* We conduct an extensive audit of various LDP protocols, analyzing the impact of factors such as privacy regimes, domain size and multiple data collections. This comprehensive analysis provides valuable insights into the resilience and effectiveness of eight state-of-the-art LDP mechanisms, fundamental building blocks for applications such as frequency monitoring (Beng et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018), heavy hitter estimation (Li et al., 2018; Li et al., 2018) and machine learning (Li et al., 2018; Li et al., 2018).
* We demonstrate the bug detection capabilities of LDP-Auditor by identifying an issue in a state-of-the-art LDP Python package. This highlights the practical significance of our framework in validating LDP implementations.
### Related Work
Differential privacy auditing, as introduced by Jagielski et al. (Jagielski et al., 2018), involves employing various techniques to empirically assess the extent of privacy leakage in a DP algorithm and estimate the \(\epsilon\) privacy parameter. These techniques are particularly valuable when known analytical bounds on the DP loss lack precision, allowing for empirical measurements of privacy in such cases. For instance, DP auditing has been extensively investigated in evaluating the mathematical analysis for the well-known DP-SGD algorithm proposed by Abadi et al. (Abadi et al., 2018) in the domain of privacy-preserving machine learning. The research literature on DP-SGD auditing covers both centralized (Jagielski et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018) and federated (Beng et al., 2017; Li et al., 2018) learning.
Beyond privacy-preserving machine learning, privacy auditing has also been studied for standard DP algorithms (Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). For instance, some of these works consider a fully black-box scenario (_i.e._, unknown DP mechanism) with the goal of estimating the \(\epsilon\)-(L)DP guarantee provided (Li et al., 2018; Li et al., 2018; Li et al., 2018). Another line of research (Li et al., 2018; Li et al., 2018) has been tailored to identify errors in algorithm analysis or code implementations, especially when derived lower bounds contradict theoretical upper bounds.
Figure 1. Comparison of estimated privacy loss \(\epsilon_{lb}\) with theoretical upper bound \(\epsilon=2\) for eight LDP frequency estimation protocols. The dashed red line corresponds to the certifiable upper bound. While GRR closely aligns with the theoretical bound, others exhibit lower bounds within \(\leq 2\)x (_e.g.,_ SHE) or even \(\leq 4\)x (_i.e.,_ BLH) of the theoretical value.
While the works in [14; 21] could also be used to certify the \(\epsilon\)-LDP guarantee through Monte Carlos estimations, our work considers realistic privacy attacks to LDP mechanisms to estimate lower bounds for the privacy loss \(\epsilon_{lb}\). In other words, they would be able to answer "_is the claimed \(\epsilon\)-LDP correct in this code implementation?_", whereas we alternatively answer "_is the claimed \(\epsilon\)-LDP worst-case guarantee tight under state-of-the-art attacks?_".
This distinction highlights our emphasis on assessing the tightness of privacy guarantees under stringent adversarial conditions. Consequently, we envision our auditing analysis as a stimulus for advancing the current state-of-the-art in privacy attacks on LDP protocols and achieving tight lower bound \(\epsilon_{lb}\) estimates. In this context, the existing literature on privacy attacks in the context of LDP comprises several categories: (i) Distinguishability attacks [8; 16; 25] (adopted in this work), which enable adversaries to differentiate between two distinct inputs based on the noisy outputs; (ii) Pool inference attacks [27], allowing adversaries to deduce a user's preferences or attributes from the aggregated data, such as inferring a user's preferred skin tone used in emojis; and (iii) Re-identification attacks [8; 43], aiming to uniquely identify a specific user within a larger population. In this paper, we also contribute by introducing distinguishability attacks tailored to two additional LDP frequency estimation protocols, as well as general distinguishability attacks on LDP protocols for longitudinal studies (see Algorithm 2) and on LDP protocols for multidimensional data (see Algorithm 3). These new attacks enrich the privacy analysis techniques available for examining the robustness of LDP mechanisms in practical settings.
**Outline.** The rest of this paper is organized as follows. First, in Section 2, we review the background on the audited LDP frequency estimation protocols before introducing in Section 3 the proposed LDP-Auditor framework. Afterwards, in Section 4, we detail the experimental settings considered before presenting our results. Finally, we conclude with a discussion and perspectives in Section 5.
## 2. LDP frequency estimation protocols
In this section, we review the necessary notation and background information of the LDP frequency estimation protocols. Throughout the paper, let \([n]=\{1,2,\ldots,n\}\) denote the set of integers and \(V=\{v_{1},\ldots,v_{k}\}\) represent a sensitive attribute with a discrete domain of size \(k=|V|\). We consider a distributed setting with \(n\) users and one untrusted server collecting the data reported by these users. The fundamental premise of \(\epsilon\)-LDP, as stated in Equation (1), is that the input to \(\mathcal{M}\) cannot be confidently determined from its output, with the level of confidence determined by the parameter \(\epsilon\). Therefore, the user's privacy is considered compromised if the adversary can distinguish between inputs \(v_{1}\) and \(v_{2}\).
In recent works [8; 25], the authors introduced **distinguishability attacks** to evaluate state-of-the-art LDP frequency estimation protocols. These attacks enable an adversary to predict the users' value \(\hat{v}=\mathcal{A}(y)\), in which \(y=\mathcal{M}_{\epsilon}(v)\) represents the reported value obtained through the \(\epsilon\)-LDP protocol. In essence, although each LDP protocol employs different encoding and perturbation functions, the adversary's objective remains the same, namely to predict the user's true value by identifying the most likely value that would have resulted in the reported value \(y\). The notion of distinguishability attacks provides a unified approach to evaluate the privacy guarantees offered by different LDP protocols.
We now provide a brief overview of eight state-of-the-art LDP frequency estimation protocols, along with their respective attack approaches denoted as \(\mathcal{A}\). The attack \(\mathcal{A}\) generally relies on a "support set" [57], denoted as \(\mathbb{1}\), which is built upon the reported value \(y\). The combination of these protocols and attack strategies will enable us to comprehensively audit the privacy protection provided by various LDP mechanisms in practical scenarios.
**Generalized Randomized Response (GRR).** The GRR [30] mechanism generalizes the randomized response surveying technique proposed by Warner [60] for \(k\geq 2\) while satisfying \(\epsilon\)-LDP. Given a value \(v\in V\), \(\text{GRR}(v)\) outputs the true value \(v\) with probability \(p\), and any other value \(v^{\prime}\in V\setminus\{v\}\), otherwise. More formally:
\[\Pr\left[\mathcal{M}_{\text{GRR}(\epsilon,k)}(v)=y\right]=\begin{cases}p=\frac{e^{\epsilon}}{e^{\epsilon}+k-1}&\text{ if }y=v\\ q=\frac{1}{e^{\epsilon}+k-1}&\text{ if }y\neq v,\end{cases} \tag{2}\]
in which \(y\in V\) is the perturbed value sent to the server. The support set for GRR is simply \(\mathbb{1}_{\text{GRR}}=\{y\}\). From Equation (2), \(\Pr[y=v]>\Pr[y=v^{\prime}]\) for all \(v^{\prime}\in V\setminus\{v\}\). Therefore, the attack strategy \(\mathcal{A}_{\text{GRR}}\) is to predict \(\hat{v}=y\) [25].
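For illustration, a minimal Python sketch of GRR and its distinguishability attack could look as follows; the function names are ours and do not correspond to any particular LDP library.

```python
import numpy as np

def grr_perturb(v, k, eps, rng=None):
    """GRR: report the true value v in {0, ..., k-1} with probability
    p = e^eps / (e^eps + k - 1); otherwise report one of the other
    k - 1 values chosen uniformly at random."""
    rng = rng or np.random.default_rng()
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return v
    y = int(rng.integers(k - 1))      # uniform over the domain without v
    return y if y < v else y + 1

def grr_attack(y):
    """Distinguishability attack on GRR: the reported value is the most likely input."""
    return y
```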
**Subset Selection (SS).** The SS [56; 63] mechanism was proposed for the case in which the obfuscation output is a subset of values \(\Omega\) of the original domain \(V\). The optimal subset size that minimizes the variance is \(\omega=|\Omega|=\max\left(1,\left\lfloor\frac{k}{e^{\epsilon}+1}\right\rfloor\right)\). Given an empty subset \(\Omega\), the true value \(v\) is added to \(\Omega\) with probability \(p=\frac{\omega e^{\epsilon}}{\omega e^{\epsilon}+k-\omega}\). Finally, values are added to \(\Omega\) as follows:
* If \(v\in\Omega\), then \(\omega-1\) values are sampled from \(V\setminus\{v\}\) uniformly at random (without replacement) and are added to \(\Omega\);
* If \(v\notin\Omega\), then \(\omega\) values are sampled from \(V\setminus\{v\}\) uniformly at random (without replacement) and are added to \(\Omega\).
Afterward, the user sends the subset \(\Omega\) to the server. The support set for SS is the subset of all values in \(\Omega\), _i.e._, \(\mathbb{1}_{\text{SS}}=\{v\mid v\in\Omega\}\). Therefore, the attack strategy \(\mathcal{A}_{\text{SS}}\) is to predict \(\hat{v}=\text{Uniform}\left(\mathbb{1}_{\text{SS}}\right)\) [8; 25].
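A minimal Python sketch of SS and its attack, under the same conventions as the GRR sketch above, could be:

```python
import numpy as np

def ss_perturb(v, k, eps, rng=None):
    """Subset Selection: report a random subset of size
    omega = max(1, floor(k / (e^eps + 1))); the true value v is included
    with probability p = omega * e^eps / (omega * e^eps + k - omega)."""
    rng = rng or np.random.default_rng()
    omega = max(1, int(np.floor(k / (np.exp(eps) + 1))))
    p = omega * np.exp(eps) / (omega * np.exp(eps) + k - omega)
    others = [u for u in range(k) if u != v]
    if rng.random() < p:   # true value goes into the subset
        subset = {v} | {int(u) for u in rng.choice(others, size=omega - 1, replace=False)}
    else:                  # subset contains only other values
        subset = {int(u) for u in rng.choice(others, size=omega, replace=False)}
    return subset

def ss_attack(subset, rng=None):
    """Distinguishability attack on SS: guess uniformly among the reported subset."""
    rng = rng or np.random.default_rng()
    return int(rng.choice(sorted(subset)))
```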
**Unary Encoding (UE).** UE protocols [26; 57] encode the user's input data \(v\in V\), as a one-hot \(k\)-dimensional vector before obfuscating each bit independently. More precisely, let \(\mathbf{v}=[0,\ldots,0,1,0,\ldots,0]\) be a binary vector with only the bit at the position \(v\) set to \(1\) while the other bits are set to \(0\). The obfuscation function of UE mechanisms randomizes the bits from \(\mathbf{v}\) independently to generate \(\mathbf{y}\) as follows:
\[\forall i\in[k]:\quad\Pr[\mathbf{y}_{i}=1]=\begin{cases}p,\text{ if }\mathbf{v}_{i}=1,\\ q,\text{ if }\mathbf{v}_{i}=0,\end{cases} \tag{3}\]
in which \(\mathbf{y}\) is sent to the server. There are two variations of UE mechanisms: (i) Symmetric UE (SUE) [26] that selects \(p=\frac{e^{\epsilon/2}}{e^{\epsilon/2}+1}\) and \(q=\frac{1}{e^{\epsilon/2}+1}\) in Equation (3), such that \(p+q=1\); and (ii) Optimal UE (OUE) [57] that selects \(p=\frac{1}{2}\) and \(q=\frac{1}{e^{\epsilon}+1}\) in Equation (3). With \(\mathbf{y}\), the adversary can construct the subset of all values \(v\in V\) that are set to \(1\), \(\mathbb{1}_{\text{UE}}=\{v\mid\mathbf{y}_{v}=1\}\). There are two possible attack strategies [8; 25]:
* \(\mathcal{A}_{\text{UE}}^{0}\) is a random choice \(\hat{v}=\text{Uniform }([k])\), if \(\mathbb{1}_{\text{UE}}=\emptyset\);
* \(\mathcal{A}_{\text{UE}}^{1}\) is a random choice \(\hat{v}=\text{Uniform }(\mathbb{1}_{\text{UE}})\), otherwise.
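The UE perturbation and the two attack strategies above can be sketched together as follows (again, an illustrative Python sketch with our own function names):

```python
import numpy as np

def ue_perturb(v, k, eps, optimal=True, rng=None):
    """Unary Encoding: one-hot encode v and flip every bit independently.
    optimal=True gives OUE (p = 1/2, q = 1/(e^eps + 1));
    optimal=False gives SUE (p = e^(eps/2)/(e^(eps/2)+1), q = 1 - p)."""
    rng = rng or np.random.default_rng()
    if optimal:
        p, q = 0.5, 1.0 / (np.exp(eps) + 1)
    else:
        p = np.exp(eps / 2) / (np.exp(eps / 2) + 1)
        q = 1.0 - p
    y = (rng.random(k) < q).astype(int)   # 0-bits are set to 1 with probability q
    y[v] = int(rng.random() < p)          # the true bit remains 1 with probability p
    return y

def ue_attack(y, rng=None):
    """Attack: guess uniformly among the bits set to 1, or over the whole domain if none."""
    rng = rng or np.random.default_rng()
    support = np.flatnonzero(y)
    return int(rng.choice(support)) if support.size else int(rng.integers(len(y)))
```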
**Local Hashing (LH).** LH protocols (Hernandez et al., 2017; Goyal et al., 2018) use hash functions to map the input data \(v\in V\) to a new domain of size \(g\geq 2\), and then apply GRR to the hashed value. Let \(\mathcal{H}\) be a universal hash function family such that each hash function \(\mathrm{H}\in\mathcal{H}\) hashes a value \(v\in V\) into \([g]\) (_i.e._, \(\mathrm{H}:V\rightarrow[g]\)). There are two variations of LH mechanisms: (i) Binary LH (BLH) (Hernandez et al., 2017) that just sets \(g=2\), and (ii) Optimal LH (OLH) (Goyal et al., 2018) that selects \(g=\lfloor e^{\epsilon}+1\rfloor\). Each user first selects a hash function \(\mathrm{H}\in\mathcal{H}\) at random and obfuscates the hash value \(h=\mathrm{H}(v)\) with GRR. In particular, the LH reporting mechanism is \(\mathcal{M}_{\mathrm{LH}(\epsilon)}(v):=\langle\mathrm{H},\mathcal{M}_{ \mathrm{GRR}(\epsilon,g)}(h)\rangle\), in which \(\mathcal{M}_{\mathrm{GRR}(\epsilon,g)}\) is given in Equation (2) while operating on the new domain \([g]\). Each user reports the hash function and obfuscated value \(\langle\mathrm{H},y\rangle\) to the server. With these elements, the adversary can construct the subset of all values \(v\in V\) that hash to \(y\), _i.e._, \(\mathbb{1}_{\mathrm{LH}}=\{v|\mathrm{H}(v)=y\}\). There are two possible attack strategies (Hernandez et al., 2017; Goyal et al., 2018):
* \(\mathcal{A}^{0}_{\mathrm{LH}}\) is a random choice \(\hat{v}=\mathrm{Uniform}\left([k]\right)\), if \(\mathbb{1}_{\mathrm{LH}}=\emptyset\);
* \(\mathcal{A}^{1}_{\mathrm{LH}}\) is a random choice \(\hat{v}=\mathrm{Uniform}\left(\mathbb{1}_{\mathrm{LH}}\right)\), otherwise.
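An illustrative Python sketch of LH follows; the seeded SHA-256 hash is only a stand-in for a proper universal hash family (practical implementations typically use a fast non-cryptographic hash such as xxhash), and the function names are ours.

```python
import hashlib
import numpy as np

def lh_hash(seed, v, g):
    """Seeded hash standing in for a universal hash family H: V -> [g]."""
    digest = hashlib.sha256(f"{seed}:{v}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % g

def lh_perturb(v, k, eps, optimal=True, rng=None):
    """Local Hashing: hash v into [g] (g = 2 for BLH, g = floor(e^eps) + 1 for OLH)
    and perturb the hashed value with GRR over the smaller domain [g]."""
    rng = rng or np.random.default_rng()
    g = int(np.floor(np.exp(eps))) + 1 if optimal else 2
    seed = int(rng.integers(2**31))
    h = lh_hash(seed, v, g)
    p = np.exp(eps) / (np.exp(eps) + g - 1)
    if rng.random() < p:
        y = h
    else:
        y = int(rng.integers(g - 1))
        y = y if y < h else y + 1
    return seed, y, g           # the hash function (seed) and perturbed hash value

def lh_attack(report, k, rng=None):
    """Attack: guess uniformly among the domain values hashing to the reported value."""
    rng = rng or np.random.default_rng()
    seed, y, g = report
    support = [v for v in range(k) if lh_hash(seed, v, g) == y]
    return int(rng.choice(support)) if support else int(rng.integers(k))
```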
**Histogram Encoding (HE).** HE protocols (Goyal et al., 2018) encode the user value as a one-hot \(k\)-dimensional histogram, \(\mathbf{v}=[0.0,0.0,\ldots,1.0,0.0,\ldots,0.0]\), in which only the \(v\)-th component is 1.0. HE(\(\mathbf{v}\)) perturbs each bit of \(\mathbf{v}\) independently using the Laplace mechanism (Hernandez et al., 2017). Two different input values \(v_{1},v_{2}\in V\) will result in two vectors with an L1 distance of \(\Delta=2\). Thus, HE will output \(\mathbf{y}\) such that \(\mathbf{y}_{i}=\mathbf{v}_{i}+\mathrm{Lap}\left(\frac{2}{\epsilon}\right)\) for each \(i\in[k]\). In this paper, we propose distinguishability attacks on two variations of HE protocols:
* **Summation with HE (SHE) (Hernandez et al., 2017)**. With SHE, there is no post-processing of \(\mathbf{y}\). Instead of constructing a support set, we describe our attack strategy for SHE as follows. Let \(P_{V}(v)\) be the prior probability of input value \(v\), and let \(P_{Y}(\mathbf{y}|v)\) be the likelihood of observing \(\mathbf{y}\) given the true input value \(v\). By Bayes' theorem, the posterior probability of input value \(v\) given the observed \(\mathbf{y}\) is: \[P_{V}(v|\mathbf{y})=\frac{P_{Y}(\mathbf{y}|v)P_{V}(v)}{\sum_{i=1}^{k}P_{Y}(\mathbf{y}|i)P_{V}(i)}.\] We can compute the likelihood \(P_{Y}(\mathbf{y}|v)\) as follows. For a given \(v\), the corresponding one-hot encoded histogram is \(\mathbf{v}\). The reported value \(\mathbf{y}\) is the sum of \(\mathbf{v}\) and noise from a Laplace distribution with scale \(b=2/\epsilon\). Therefore, the likelihood of observing \(\mathbf{y}\) given \(v\) is: \[P_{Y}(\mathbf{y}|\mathbf{v})=\frac{1}{(2b)^{k}}\exp\left(-\frac{|\mathbf{y}-\mathbf{v}|_{1}}{b}\right),\] in which \(|\mathbf{y}-\mathbf{v}|_{1}\) is the \(L_{1}\) distance between \(\mathbf{y}\) and \(\mathbf{v}\). To perform the attack, we compute the posterior probability \(P_{V}(v|\mathbf{y})\) for each possible input value \(v\in V\) and output the most probable input value. In other words, given the reported \(\mathbf{y}\), our Bayes optimal attack outputs: \[\hat{v}=\arg\max_{v\in V}P_{V}(v|\mathbf{y}).\] Note that this attack requires knowledge of the prior probability distribution \(P_{V}(v)\). If the prior is unknown (assumed in this paper), one can use a uniform prior.
* **Thresholding with HE (THE) (Goyal et al., 2018)**. With THE, the server can construct the support set as \(\mathbb{1}_{\mathrm{THE}}=\{v\mid\mathbf{y}_{v}>\theta\}\), _i.e._, every noisy count whose value is \(>\theta\). The optimal threshold value for \(\theta\) that minimizes the protocol's variance lies within \((0.5,1)\). With \(\mathbb{1}_{\mathrm{THE}}=\{v\mid\mathbf{y}_{v}>\theta\}\), we propose an adversary with two attack strategies: \(\mathcal{A}^{0}_{\mathrm{THE}}\) is a random choice \(\hat{v}=\mathrm{Uniform}\left([k]\right)\), if \(\mathbb{1}_{\mathrm{THE}}=\emptyset\); and \(\mathcal{A}^{1}_{\mathrm{THE}}\) is a random choice \(\hat{v}=\mathrm{Uniform}\left(\mathbb{1}_{\mathrm{THE}}\right)\), otherwise.
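The Bayes-optimal attack on SHE under a uniform prior reduces to picking the candidate whose one-hot encoding minimizes the \(L_1\) distance to the report; a minimal Python sketch (with our own function names) is given below.

```python
import numpy as np

def she_perturb(v, k, eps, rng=None):
    """SHE: one-hot encode v and add Laplace(2/eps) noise to every component."""
    rng = rng or np.random.default_rng()
    hist = np.zeros(k)
    hist[v] = 1.0
    return hist + rng.laplace(scale=2.0 / eps, size=k)

def she_bayes_attack(y, eps):
    """Bayes-optimal attack under a uniform prior: choose the candidate v whose
    one-hot encoding maximizes the Laplace likelihood of the report y, i.e.
    minimizes the L1 distance ||y - v||_1."""
    k = len(y)
    b = 2.0 / eps
    log_lik = np.empty(k)
    for v in range(k):
        hist = np.zeros(k)
        hist[v] = 1.0
        log_lik[v] = -np.abs(y - hist).sum() / b   # log-likelihood up to a constant
    return int(np.argmax(log_lik))
```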
## 3. Local Differential Privacy Auditing
In this section, we introduce our LDP-Auditor framework (Section 3.1) and our distinguishability attacks considering multiple data collections (Section 3.2 and Section 4.5).
### LDP-Auditor
The LDP-Auditor framework, presented in Algorithm 1, builds upon previous work on central DP auditing (Hernandez et al., 2017) with slight modifications tailored for LDP auditing. The main difference between DP-SGD auditing (Hernandez et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018) and LDP auditing is the test statistic, _i.e._, the distinguishability of input values in LDP versus the distinguishability of neighboring datasets in central DP. Given an LDP mechanism \(\mathcal{M}\), our objective is to estimate the probabilities \(\hat{p}_{0}=\mathrm{Pr}[\mathcal{M}(v_{1})=y]\) and \(\hat{p}_{1}=\mathrm{Pr}[\mathcal{M}(v_{2})=y]\) from Equation (1) in order to compute the empirical privacy loss \(\epsilon_{lb}=\ln\left(\hat{p}_{0}/\hat{p}_{1}\right)\). To account for statistical uncertainty, we use the well-established Clopper-Pearson confidence intervals (Goyal et al., 2018), as commonly adopted in the DP auditing literature (Hernandez et al., 2017; Goyal et al., 2018; Goyal et al., 2018).
```
Input: theoretical ε, LDP protocol M_ε, values v1, v2 ∈ V, trial count T, confidence level α
Output: estimated lower bound ε_lb
1: c0 = 0, c1 = 0
2: for i ∈ [T] do
3:     if M_ε(v1) = y then c0 = c0 + 1
4:     if M_ε(v2) = y then c1 = c1 + 1
5: end for
6: p̂0 = ClopperPearsonLower(c0, T, α/2)
7: p̂1 = ClopperPearsonUpper(c1, T, α/2)
8: return ε_lb = ln(p̂0 / p̂1)
```
**Algorithm 1** LDP-Auditor.
Without loss of generality, in this work, we use distinguishability attacks (Goyal et al., 2018; Goyal et al., 2018) to establish our test statistic as: "\(y\) comes from \(v_{1}\)". In other words, this involves determining whether an adversary \(\mathcal{A}\) can distinguish between two different input values based on the noisy output (_i.e._, whether \(\mathcal{A}\) returns the same prediction value for two distinct inputs). For better understanding, consider an example using the GRR protocol with a binary input space \(V=\{0,1\}\). Assume that we fix two different input values as \(v_{1}=0,v_{2}=1\), and the output \(y=0\). Equation (2) then provides the theoretical guarantee that \(e^{\epsilon}=\frac{p}{q}\). Following Algorithm 1, LDP-Auditor estimates \(p=\hat{p}_{0}\) and \(q=\hat{p}_{1}\) for a fixed trial count \(T\) and confidence level \(\alpha\), ultimately returning the estimated empirical privacy loss \(\epsilon_{lb}\). Similar procedures can be applied for other LDP protocols using their respective distinguishability attacks described in Section 2.
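For concreteness, an illustrative Python re-implementation of Algorithm 1 is sketched below; it is not the code of the LDP-Auditor package itself, and `grr_perturb`/`grr_attack` refer to the hypothetical GRR sketch from Section 2.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(count, trials, alpha):
    """Two-sided Clopper-Pearson (lower, upper) bounds for a binomial proportion."""
    lower = beta.ppf(alpha / 2, count, trials - count + 1) if count > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, count + 1, trials - count) if count < trials else 1.0
    return lower, upper

def ldp_auditor(mechanism, attack, v1, v2, trials=10**5, alpha=0.01):
    """Estimate an empirical lower bound eps_lb: count how often the
    distinguishability attack predicts v1 when the true input is v1 (c0)
    versus when the true input is v2 (c1), then take confidence bounds."""
    c0 = sum(attack(mechanism(v1)) == v1 for _ in range(trials))
    c1 = sum(attack(mechanism(v2)) == v1 for _ in range(trials))
    p0_lower, _ = clopper_pearson(c0, trials, alpha)
    _, p1_upper = clopper_pearson(c1, trials, alpha)
    return np.log(p0_lower / p1_upper)

# Example: audit GRR with k = 2 and theoretical eps = 2
# eps_lb = ldp_auditor(lambda v: grr_perturb(v, k=2, eps=2.0), grr_attack, v1=0, v2=1)
```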
By leveraging distinguishability attacks and adopting an empirical approach, LDP-Auditor provides an effective and practical means of auditing LDP mechanisms and estimating their privacy guarantees. The framework's versatility allows it to be readily applied
to various locally private protocols, advancing the understanding and evaluation of local privacy protection in practical scenarios. Moreover, the choice of the test statistic is generic, and one can envision instantiating LDP-Auditor with other existing adversarial analyses, such as re-identification attacks (Kumar et al., 2017; Wang et al., 2018), pool inference attacks (Kumar et al., 2017) and data change detection attacks (Kumar et al., 2017).
**Limits on the estimated privacy loss.** The \(\epsilon_{lb}\) reported by Algorithm 1 has two upper bounds: the theoretical \(\epsilon\) and an upper bound imposed by the Monte Carlo estimation. Let \(\alpha=0.01\) to get a \(99\%\)-confidence bound and \(T=10^{4}\) trials. Even if we get perfect inference accuracy with \(c_{0}=T\) and \(c_{1}=0\), the Clopper-Pearson confidence interval would produce \(\hat{p}_{0}=0.9994\) and \(\hat{p}_{1}=0.0006\), which implies a lower bound of \(\epsilon_{lb}=7.42\). In this paper, similar to (Kumar et al., 2017), we refer to this upper bound as \(\epsilon_{OPT}\).
### LDP-Auditor for Longitudinal Studies
In practice, the server often needs to collect users' data periodically throughout multiple data collections (_i.e._, _longitudinal studies_). Nevertheless, one known result in (L)DP is that **repeated data collections have a linear privacy loss due to the sequential composition (Kumar et al., 2017)**. This occurs because attackers can exploit "averaging attacks" to distinguish the user's actual value from the added noise. For this reason, well-known LDP mechanisms for longitudinal studies such as RAPPOR (Kumar et al., 2017) (deployed in Google Chrome) and \(d\)BitFlipM (Krishnan et al., 2017) (deployed in Windows 10), were designed with a _memoization-based_ solution.
Memoization enables longitudinal collections by memorizing a randomized version of the true value \(v\) and consistently reusing it (Kumar et al., 2017; Krishnan et al., 2017). Alternatively, it can be employed by using this memorized value as the input for a subsequent round of sanitization, essentially chaining two LDP protocols (Kumar et al., 2017; Wang et al., 2018; Kumar et al., 2018; Wang et al., 2018). When auditing memoization-based protocols that reuse the same randomized value, our LDP-Auditor operates equivalently to auditing in a single data collection scenario (_i.e._, Algorithm 1). Conversely, in the audit of mechanisms utilizing memoization that chains two LDP frequency estimation protocols, there are two levels of privacy guarantees (Kumar et al., 2017): \(\epsilon_{1}\) for the first report and \(\epsilon_{\infty}\) for infinitely many reports. In essence, our LDP-Auditor framework can also be directly applied to audit the empirical privacy loss in \(t\rightarrow\infty\) data collections.
Therefore, in this work, our aim is to precisely investigate the privacy leakage of LDP frequency estimation protocols across multiple data collections (denoted as \(\tau\)) _without relying on memoization_. This approach will enable us to audit the realistic privacy loss of each LDP protocol in comparison to the upper bound of \(\tau\)-LDP, which is imposed by the (L)DP sequential composition.
In Algorithm 2, we present the extension of distinguishability attacks on LDP protocols to longitudinal studies. In this context, the adversary's objective remains the same: to predict the user's true value by determining the most probable value that would have generated the reported value \(y^{t}\) in each \(t\in[\tau]\) data collection. Notably, the adversary now possesses increased knowledge, because random fresh noise is added to the user's value \(v\) over \(\tau\) collections. To perform the "averaging attack", in each data collection the adversary constructs the "support set" based on the reported value \(y^{t}\) and the LDP mechanism \(\mathcal{M}_{\epsilon}\). The support set is then used to increment the knowledge (_i.e._, count) about the user's true value and what constitutes noisy data, ultimately predicting \(\hat{v}\).
One exception is the SHE protocol, in which the notion of a support set is not applicable, rendering Algorithm 2 inapplicable. In the SHE protocol, Laplace noise with a mean of \(0\) is added in each data collection. Consequently, the "averaging attack" is straightforward as it involves determining \(\hat{v}\) by taking the argmax of the summation of all reports. Formally, this is expressed as \(\hat{v}=\text{argmax}\left(\sum_{t=1}^{\tau}\mathbf{y}^{t}\right)\).
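The following minimal sketch (our notation, not the authors' code) illustrates the averaging attack of Algorithm 2 instantiated with GRR, whose support set is simply the reported value, together with the argmax-of-sums variant used for SHE; other protocols would plug in their own report and support-set rules.

```python
import numpy as np

rng = np.random.default_rng(0)


def grr_report(v, k, eps):
    """One GRR report: truthful with probability p, otherwise a uniform wrong value."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    return v if rng.random() < p else int(rng.choice([x for x in range(k) if x != v]))


def averaging_attack(v, k, eps, tau):
    """Algorithm 2 with GRR: count support-set hits over tau fresh reports."""
    counts = np.zeros(k)
    for _ in range(tau):
        y = grr_report(v, k, eps)   # fresh noise at every collection
        counts[y] += 1              # GRR support set is {y}; UE/SS/LH would add all supported values
    return int(np.argmax(counts))   # predicted value


def she_attack(reports):
    """SHE variant: Laplace noise has zero mean, so sum the report vectors and take the argmax."""
    return int(np.argmax(np.sum(np.asarray(reports), axis=0)))


print(averaging_attack(v=3, k=10, eps=0.5, tau=100))
```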
Finally, our LDP-Auditor framework (Algorithm 1) can be employed to estimate the privacy loss of LDP protocols in longitudinal studies. To achieve this, one can simply replace \(v_{1}\) and \(v_{2}\) with vectorized versions \(\mathbf{v_{1}}\) and \(\mathbf{v_{2}}\) in Lines 3 and 4 of Algorithm 1. The test statistic remains unchanged as it is derived from distinguishability attacks as per Algorithm 2 and defined as: "\(\mathbf{y}\) comes from \(\mathbf{v}\)".
### LDP-Auditor for Multidimensional Data
Another dimension of interest to the server is _multidimensional data_ (_i.e._, \(d\geq 2\) attributes), aiming to enable more comprehensive decision-making. Considering potential correlations among these attributes, the principles of DP sequential composition (Kumar et al., 2017) remain applicable in this context. Therefore, the existing solutions for multidimensional data, represented as \(\mathbf{v}=[v_{1},v_{2},\ldots,v_{d}]\), include:
* **Splitting (SPL):** This naive method involves partitioning the privacy budget \(\epsilon\) among the \(d\) attributes, collecting each attribute under \(\frac{\epsilon}{d}\)-LDP. Examples based on this SPL solution are the LoPub (Kumar et al., 2017) and Castell (Castell, 2018) mechanisms, which are designed for joint distribution estimation.
* **Sampling (SMP):** In this approach, the population is divided into \(d\) disjoint subgroups. Each sub-group \(j\in[d]\) then reports the \(j\)-th attribute under \(\epsilon\)-LDP. Examples utilizing the SMP solution include the CALM (Kumar et al., 2017) and FELIP (Kumar et al., 2017) mechanisms, proposed for marginal estimation, and works such as (Kumar et al., 2017; Wang et al., 2018), which introduced LDP mechanisms for mean estimation.
* **Random Sampling Plus Fake Data (RS+FD) (Kumar et al., 2017):** In this solution, each user samples a single attribute \(j\in[d]\) to report \(v_{j}\) under \(\epsilon^{\prime}\)-LDP and reports uniform fake data for the \(d-1\) non-sampled attributes (Kumar et al., 2017). Because the sampling result is not disclosed to the aggregator, there is amplification by sampling (Groff et al., 2017; Zhang et al., 2018). For this reason, RS+FD utilizes an amplified privacy budget \(\epsilon^{\prime}=\ln\left(d\cdot(e^{\epsilon}-1)+1\right)\) for the sampled attribute (a small helper computing this budget is sketched after this list). An example based on RS+FD is the GRR-FS mechanism (Groff et al., 2017), designed for node-level LDP on graph data, to enable training of graph neural networks.
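As a small illustration (a sketch, not part of any released package), the amplified per-attribute budget used by RS+FD can be computed as follows.

```python
import numpy as np

def rs_fd_amplified_budget(eps, d):
    """Amplification by sampling used by RS+FD: eps' = ln(d * (e^eps - 1) + 1)."""
    return np.log(d * (np.exp(eps) - 1) + 1)

print(rs_fd_amplified_budget(eps=1.0, d=10))  # ~2.9, i.e., larger than the per-report eps = 1
```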
Upon closer examination of the three solutions, one can notice that both SPL and SMP solutions can be considered as straightforward instances of reporting one attribute (at a time for SPL) with a given LDP mechanism. Consequently, our LDP-Auditor framework can be directly used to estimate empirical lower bounds \(\epsilon_{lb}\) for LDP mechanisms following these solutions. Therefore, in this work, our focus shifts to auditing the RS+FD solution, for which there is a privacy amplification effect due to uncertainty on the server side.
In Algorithm 3, we introduce the distinguishability attack designed for LDP protocols employing the RS+FD solution. Here, the adversary's objective is twofold: first, to predict the attribute that the user has sampled, and subsequently, to predict the user's actual value. Since each user selects an attribute \(j\in[d]\) uniformly at random, the Bayes optimal guess for the adversary is \(j=\text{Uniform}([d])\). Once the attribute is predicted, the adversary constructs the "support set" based on the reported value \(y_{j}\) and LDP mechanism \(\mathcal{M}_{\epsilon}\). With the support set, as in Section 2, the adversary predicts the user's value \(\hat{\sigma}_{j}\).
Lastly, our LDP-Auditor framework (Algorithm 1) can be employed to estimate the privacy loss of RS+FD protocols. To achieve this, one should replace \(v_{1}\) and \(v_{2}\) with the multidimensional versions \(\mathbf{v_{1}}\) and \(\mathbf{v_{2}}\) in Lines 3 and 4 of Algorithm 1. The test statistic remains unchanged, as it is derived from distinguishability attacks as per Algorithm 3 and defined as: "\(y_{j}\) comes from \(v_{j}\)". The difference here is that even if the user did not sample the attribute \(\hat{j}\), the attack can still predict the user's value \(v_{j}\) correctly due to the uniform fake data generated for that attribute.
```
Input : User values \(\mathbf{v}=[v_{1},v_{2},\ldots,v_{d}]\), domain sizes \(\mathbf{k}=[k_{1},k_{2},\ldots,k_{d}]\), privacy guarantee \(\epsilon\), RS+FD protocol \(\mathcal{M}_{\epsilon}\).
Output : Predicted value \(\hat{\sigma}_{j}\).
1 User-side randomization \(\mathbf{y}=\mathcal{M}_{\epsilon}(\mathbf{v},\mathbf{k})\)    \(\triangleright\) _cf._ (Bill et al., 2017)
2 Predict user's sampled attribute \(\hat{j}=\text{Uniform}([d])\)
3 Given \(y_{j}\), construct support set \(\mathbb{1}_{\mathcal{M}}\)
4 Predict \(\hat{\sigma}_{j}=\text{Uniform}(\mathbb{1}_{\mathcal{M}})\)    \(\triangleright\) _cf._ Section 2
return : \(\hat{\sigma}_{j}\)
```
**Algorithm 3** Distinguishability Attacks on RS+FD Protocols.
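A minimal sketch (ours, with assumed helper names) of Algorithm 3 instantiated with RS+FD[GRR] is given below: the attacker guesses the sampled attribute uniformly at random and then guesses the value from the support set of the observed report, which for GRR reduces to the report itself.

```python
import numpy as np

rng = np.random.default_rng(0)


def rs_fd_grr_client(v, k, eps, d):
    """User-side RS+FD[GRR]: sanitize one sampled attribute, send uniform fake data for the rest."""
    eps_amp = np.log(d * (np.exp(eps) - 1) + 1)            # amplified budget for the sampled attribute
    p = np.exp(eps_amp) / (np.exp(eps_amp) + k - 1)
    j = rng.integers(d)                                    # privately sampled attribute
    y = rng.integers(k, size=d)                            # uniform fake data everywhere
    y[j] = v[j] if rng.random() < p else rng.choice([x for x in range(k) if x != v[j]])
    return y


def rs_fd_attack(y, d):
    """Algorithm 3: uniform guess of the sampled attribute, then GRR support set {y[j_hat]}."""
    j_hat = int(rng.integers(d))
    return j_hat, int(y[j_hat])


v = np.array([1, 4, 2])
y = rs_fd_grr_client(v, k=5, eps=1.0, d=3)
print(rs_fd_attack(y, d=3))
```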
## 4. Experimental Evaluation
This section presents our experimental setting to assess the proposed audit framework as well as the main results obtained.
### General Setup of Experiments
For all experiments, we have used the following setting:
* **Environment.** All algorithms are implemented in Python 3 with the Numpy (Zhu et al., 2017), Numba (Zhu et al., 2017) and Ray (Ray, 2018) libraries, and run on a local machine with 2.50GHz Intel Core i9 and 64GB RAM. The codes and tool developed will be made publicly available in a GitHub repository.
* **Audited LDP Mechanisms.** We audited the eight LDP frequency estimation protocols described in Section 2 following two state-of-the-art LDP libraries: multi-freq-ldpy (Chen et al., 2017) and pure-ldp (Chen et al., 2017; Zhang et al., 2018).
* **Audit parameters.** We set \(T=10^{6}\) trial counts and use Clopper-Pearson confidence intervals with \(\alpha=0.01\) (_i.e._, our estimates hold with 99% confidence). These parameters establish an upper bound of \(\epsilon_{OPT}=12.025\).
* **Theoretical upper bound.** We evaluated the LDP frequency estimation protocols in high, mid and low privacy regimes over the range \(\epsilon\in\{0.25,0.5,0.75,1,2,4,6,10\}\).
* **Domain size.** We also varied the domain size \(k\in\{25,50,100,150,200\}\) as it influences the performance of the distinguishability attacks.
* **Stability.** Since LDP protocols are randomized, we report average results with standard deviation over 5 runs. A self-contained sketch illustrating this experimental sweep is given after this list.
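Below is a self-contained sketch (ours, not the released tool, with a reduced trial count for brevity) of the sweep described above, instantiating the audit of Algorithm 1 with GRR and reporting the mean and standard deviation of \(\epsilon_{lb}\) over 5 runs.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)


def grr(v, k, eps):
    """GRR report: truthful with probability p, otherwise uniform over the other k-1 values."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    return v if rng.random() < p else int((v + 1 + rng.integers(k - 1)) % k)


def audit_grr(eps, k, trials, alpha=0.01):
    """Audit for GRR: the distinguishability attack simply predicts the observed report."""
    c0 = sum(grr(0, k, eps) == 0 for _ in range(trials))   # attack outputs v1 given input v1
    c1 = sum(grr(1, k, eps) == 0 for _ in range(trials))   # attack outputs v1 given input v2
    p0 = beta.ppf(alpha / 2, c0, trials - c0 + 1) if c0 else 0.0
    p1 = beta.ppf(1 - alpha / 2, c1 + 1, trials - c1) if c1 < trials else 1.0
    return max(0.0, float(np.log(p0 / p1))) if p0 > 0 else 0.0


for eps in [0.5, 1, 2, 4]:              # paper grid: 0.25 ... 10
    for k in [25, 100]:                 # paper grid: 25 ... 200
        runs = [audit_grr(eps, k, trials=10_000) for _ in range(5)]
        print(eps, k, round(float(np.mean(runs)), 3), round(float(np.std(runs)), 3))
```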
### Main Auditing Results
Figure 2 illustrates our main auditing results for eight state-of-the-art LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE and THE. The graph presents the relationship between the theoretical \(\epsilon\) values (x-axis) and the estimated \(\epsilon_{lb}\) values (y-axis), showcasing the comparison of various domain sizes \(k\), using our LDP-Auditor framework (Algorithm 1).
**Comparison of LDP protocols.** On the one hand, from Figure 2, one can notice that _GRR is the unique LDP protocol that achieves tight lower bound estimates_. As exemplified in Section 3.1, auditing GRR's privacy guarantees is straightforward since there is no specific encoding (_i.e._, the input and output spaces are equal). On the other hand, all other LDP protocols (_i.e._, SS, UE-, LH- and HE-based) incorporate specific preprocessing encoding functions, which may result in information loss and/or additional randomness. For instance, BLH hashes the input set \(V\) of size \(k\) to \(\{0,1\}\) and thus results in excessive loss of information due to collisions. Indeed, even if the bit is transmitted correctly after the GRR perturbation, the server can only obtain one bit of information about the input (_i.e._, to which half of the input domain the value belongs). For these reasons, _BLH consistently led to the worst auditing results_ with a flat \(\epsilon_{lb}<1\) estimation for \(\epsilon\geq 2\).
Concerning the SS protocol, the lower bound estimates of \(\epsilon_{lb}\) demonstrated similar results to other LDP protocols in high privacy regimes. However, an exception occurs in low privacy regimes, in which SS equals GRR due to a subset size \(\omega=1\). Regarding UE-based protocols, in high-privacy regimes (\(\epsilon\leq 1\)), both SUE and OUE presented similar lower bound estimates for \(\epsilon_{lb}\). In mid-privacy regimes (\(1<\epsilon\leq 4\)), OUE presented higher lower bound estimates for \(\epsilon_{lb}\) than SUE. However, OUE reached a "plateau" in low privacy regimes (\(\epsilon>4\)), explained by an upper bound on the distinguishability attack (see (Zhu et al., 2017)). In the case of HE-based protocols, both THE and SHE presented similar estimates for \(\epsilon_{lb}\) in all privacy regimes, but with a different sensitivity to the domain size \(k\) (discussed afterwards).
Lastly, although both LH protocols present similar lower bound estimates for \(\epsilon_{lb}\) in high privacy regimes (_the lowest among all other LDP protocols_), the difference is remarkable in favor of OLH in mid to low privacy regimes. Moreover, a similar "plateau" behavior, as observed for OUE, is noted for the OLH protocol in low privacy regimes due to a comparable upper bound on attacker effectiveness (see (Han et al., 2017)). Therefore, _besides OLH being able to preserve more utility than BLH, it also provides tighter estimates when auditing the lower bound \(\epsilon_{lb}\)_.

Figure 2: Theoretical \(\epsilon\) values (x-axis) versus estimated \(\epsilon_{lb}\) values (y-axis) using our LDP-Auditor framework. We compare different domain sizes \(k\) for eight state-of-the-art LDP frequency estimation protocols: GRR [30], SS [56; 63], SUE [26], OUE [57], BLH [12], OLH [57], SHE [23] and THE [57].
**Impact of domain size.** As the domain size \(k\) increases, one can observe a direct impact on the lower bound estimation of \(\epsilon_{lb}\) for all LDP protocols, in which the gap with the theoretical \(\epsilon\) increases. However, the impact is minor for the GRR protocol, even in high privacy regimes. Conversely, for all other LDP protocols, this impact is substantial, with lower bound estimates ranging within \(\leq 2.5\)x of the theoretical \(\epsilon\) (when \(k=25\)) up to \(\leq 5\)x (when \(k=200\)). These results are consistent with the distinguishability attack effectiveness, which decreases according to higher \(k\) (_i.e._, more uncertainty) (Kang et al., 2017; Han et al., 2017). Exceptions exist for both OUE and OLH protocols, in which in low privacy regimes (when \(\epsilon\geq 4\)), a larger domain size \(k\) leads to tighter estimates of \(\epsilon_{lb}\) than smaller domain sizes. Although to a small extent, the THE protocol also yields more accurate estimates for higher \(k\) when \(\epsilon=10\). Taking OUE as an example, these results can be attributed to the fact that the bit corresponding to the user's value is transmitted with a probability of \(\frac{1}{2}\) (_i.e._, fully random). Consequently, if the domain size is small, it results in a higher false positive rate, which subsequently decreases the estimated \(\epsilon_{lb}\).
### Case Study #1: Auditing the Privacy Cost of Local Hashing Encoding
As discussed previously in Section 4.2, both LH protocols presented the least tight estimates for \(\epsilon_{lb}\) in high privacy regimes. Even worse, BLH's estimated privacy loss remained at \(\epsilon_{lb}<1\) for \(\epsilon\geq 2\), leading to lower bounds up to 10x smaller than the theoretical \(\epsilon\). Motivated by these observations, we performed an additional study to audit the impact of local hashing encoding but with no LDP perturbation (_i.e._, \(\epsilon=+\infty\)), which we refer to as Local Hashing Only (LHO). For these experiments, we varied the hash domain size \(g\in\{2,4,6,8,10\}\) and used the same distinguishability attack \(\mathcal{A}_{\text{LH}}\) described in Section 2. With these considerations in mind, Figure 3 presents the estimated \(\epsilon_{lb}\) values (y-axis) for LHO protocols according to the hash domain sizes \(g\) (x-axis) using our LDP-Auditor framework for different domain sizes \(k\).
Observations from Figure 3 underscore that, even for a binary hash domain (\(g=2\)), the estimated privacy loss remains \(\epsilon_{lb}<1\), aligning with high privacy regimes suitable for real-world applications. Even though there is no LDP randomization of the hashed value \(x\in\{0,1\}\), the adversary still has a random guess on the support set \(\mathbb{1}_{\text{LHO}}\). Indeed, given a general (universal) family of hash functions \(\mathcal{H}\), each input value \(v\in V\) is hashed into a value in \([g]\) by a hash function \(\mathrm{H}\in\mathcal{H}\), and the universal property requires:
\[\forall v_{1},v_{2}\in V,v_{1}\neq v_{2}:\quad\Pr_{\mathrm{H}\in\mathcal{H} }\left[\mathrm{H}(v_{1})=\mathrm{H}(v_{2})\right]\leq\frac{1}{g}.\]
In other words, approximately \(k/g\) values can be mapped to the same hashed value \(h=\mathrm{H}(v)\) in \([g]\). This significant loss of information in the encoding step suggests potential privacy gains for LH protocols due to the presence of many random collisions. For instance, in a similar context, DP-Sniper (Sen et al., 2017), a method developed to find violations of DP, also encountered difficulties estimating \(\epsilon\) for the original RAPPOR protocol (Kang et al., 2017), which is based on Bloom filters and employs hash functions.
One could expect a similar privacy gain for other LDP mechanisms based on sketching such as Apple's Count-Mean Sketch (CMS) (Kang et al., 2016) and Hadamard (Bradley et al., 2016) mechanisms, which we leave for future audit investigations. Furthermore, as we increase the hash domain size to \(g>2\) without introducing any LDP perturbation, the estimated \(\epsilon_{lb}\) starts to rise, achieving medium privacy regimes \(1<\epsilon_{lb}\leq 2.5\). This outcome is expected since preserving more information during the encoding step decreases the support set size \(|\mathbb{1}_{\text{LHO}}|\), which naturally enhances the accuracy of the distinguishability attack \(\mathcal{A}_{\text{LH}}\). Therefore, the estimated privacy loss \(\epsilon_{lb}\) for LH-based protocols will be lower if the domain size \(k\) is high and/or if the new hashed domain \(g\) is small.
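The following rough simulation (ours; Python's built-in tuple hash stands in for a universal hash family) illustrates this effect: the attacker knows the hash seed and the hashed value, rebuilds the pre-image support set and guesses uniformly inside it, so its accuracy stays close to \(g/k\).

```python
import numpy as np

rng = np.random.default_rng(0)


def lho_attack_accuracy(k, g, trials=2_000):
    """Local hashing only (no LDP perturbation): guess uniformly within the observed pre-image."""
    hits = 0
    for _ in range(trials):
        v = int(rng.integers(k))
        seed = int(rng.integers(2**31))                       # hash function index, known to the attacker
        h = np.array([hash((seed, x)) % g for x in range(k)])
        support = np.flatnonzero(h == h[v])                   # candidates mapping to the observed hash
        hits += int(rng.choice(support)) == v
    return hits / trials


for g in [2, 4, 8]:
    print(g, lho_attack_accuracy(k=100, g=g))                 # roughly g/k, hence a weak attack for small g
```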
### Case Study #2: Auditing the LDP Sequential Composition in Longitudinal Studies
As discussed in Section 3.2, we aim to audit the realistic privacy loss of each LDP protocol in longitudinal studies (_i.e._, \(\tau\) data collections). More precisely, this will allow us to assess the gap between the empirical local privacy loss \(\epsilon_{lb}\) and the theoretical upper bound imposed by the (L)DP sequential composition, which is \(\tau\epsilon\)-LDP. For these experiments, we use both Algorithms 1 and 2 with the following parameter values:
* The number of data collections within the range \(\tau\in\{5,10,25,50,75,100\}\).
* The per report privacy guarantee in high privacy regimes, within the range \(\epsilon\in\{0.25,0.5,0.75,1\}\).
* The domain size \(k\in\{2,100\}\).
Figure 4 (for \(k=2\)) and Figure 5 (for \(k=100\)) illustrate the estimated \(\epsilon_{lb}\) values (y-axis) for the eight LDP protocols according to the number of data collections \(\tau\) (x-axis) and the per-report \(\epsilon\), using our LDP-Auditor framework.
Figure 3. Estimated \(\epsilon_{lb}\) (y-axis) versus hash domain \(g\) (x-axis) using our LDP-Auditor framework comparing different domain sizes \(k\) for LH encoding with no LDP randomization.
Figure 4: Estimated \(\epsilon_{lb}\) (y-axis) versus the number of data collections \(\tau\) (x-axis) using our LDP-Auditor framework for a domain size \(k=2\). We compare different per report \(\epsilon\)-LDP guarantee considering the eight LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE and THE.
Figure 5: Estimated \(\epsilon_{lb}\) (y-axis) versus the number of data collections \(\tau\) (x-axis) using our LDP-Auditor framework for a domain size \(k=100\). We compare different per report \(\epsilon\)-LDP guarantee considering the eight LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE and THE.
In Figure 4, both GRR and SS protocols have equal \(\epsilon_{lb}\) estimates, as for \(k=2\), the subset size \(\omega=1\) (_i.e.,_ GRR). Moreover, it is evident that even after \(\tau=100\), none of the LDP protocols, when \(\epsilon<1\), achieves the optimal upper bound \(\epsilon_{OPT}\) imposed by the Monte Carlo estimation. Despite GRR presenting tight estimates in Section 4.2, it only aligns with \(\epsilon_{lb}=\epsilon_{OPT}\) when \(\tau=100\) and the per-report \(\epsilon=1\). Although not explicitly experimented with, as the number of data collections becomes sufficiently large (_i.e.,_\(\tau\to\infty\)), we anticipate that \(\epsilon_{lb}\) will converge to \(\epsilon_{OPT}\) for all LDP protocols.
From Figure 5, one can notice that for a higher domain size of \(k=100\), the results obtained are reversed. The GRR protocol now yields the lowest \(\epsilon_{lb}\) estimation for all experimented \(\tau\) values, followed by the SHE protocol. The reason for this is that the probability of "being honest" \(p=\frac{e^{\epsilon}}{e^{\epsilon}+k-1}\) in Equation (2) decreases as the domain size \(k\) grows. Therefore, even after many data collections \(\tau\), the adversary still has too many noisy reports to filter, which makes the distinguishability attack less efficient. While all other LDP protocols exhibit similar \(\epsilon_{lb}\) estimations across both domain sizes, none of them reaches the Monte Carlo upper bound \(\epsilon_{OPT}\).
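A tiny numerical illustration (assuming the GRR expression above) of this effect:

```python
import numpy as np

def grr_p_honest(eps, k):
    """Probability that GRR reports the true value."""
    return np.exp(eps) / (np.exp(eps) + k - 1)

for k in [2, 100]:
    print(k, round(float(grr_p_honest(eps=1.0, k=k)), 4))   # ~0.731 for k=2 vs ~0.027 for k=100
```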
_These results are quite surprising since one would imagine the privacy leakage to be higher for repeated data collections when random fresh noise is added per report._ Nevertheless, as the domain size increases, the performance of the distinguishability attack decreases (Brandt et al., 2013; D'Alessio et al., 2014). Consequently, for real-world deployments with substantial domain sizes (_e.g.,_ list of Internet domains), exclusively relying on theoretical \(\epsilon\)-LDP guarantees may prove unrealistic. Privacy auditing becomes imperative in such scenarios, aiding in the establishment of appropriate privacy parameters, to avoid adding more noise than required, when considering realistic attackers.
_Notably, these auditing results emphasize a crucial aspect for longitudinal studies: a substantial gap exists between theory (sequential composition) and practice (LDP auditing)._ To narrow this gap, one could consider designing more powerful attacks for longitudinal studies beyond those proposed here in Algorithm 2. Alternatively, research efforts could be directed towards advancing the theory to develop more sophisticated compositions for \(\epsilon\)-LDP mechanisms.
### Case Study #3: LDP Auditing with Multidimensional Data
As discussed in Section 3.3, our audit results outlined in Section 4.2 are also valid for LDP mechanisms based on the standard SPL and SMP solutions for multidimensional data. Thus, in this section, we aim to audit LDP protocols following the RS+FD (Brandt et al., 2013) solution. For these experiments, we use both Algorithms 1 and 3 using the following configurations:
* We audit five RS+FD protocols: RS+FD[GRR], RS+FD[SUE-z], RS+FD[SUE-r], RS+FD[OUE-z] and RS+FD[OUE-r]. The difference between UE-z and UE-r lies on how to generate the fake data (Brandt et al., 2013). More precisely, UE-z initializes a zero-vector and UE-r initializes a random one-hot-encoded vector. Next, SUE or OUE is used to sanitize these vectors.
* We vary the domain size over \(k\in\{2,100\}\) and we vary the number of attributes over \(d\in\{2,10\}\). More specifically, when \(d=2\), \(\mathbf{k}=[2,2]\) for \(k=2\) and \(\mathbf{k}=[100,100]\) for \(k=100\) and, correspondingly, ten-entry vectors when \(d=10\).
Figure 6. Theoretical \(\epsilon\) (\(\mathbf{x}\)-axis) versus estimated \(\epsilon_{lb}\) (\(\mathbf{y}\)-axis) using our LDP-Auditor framework comparing different number of attributes \(d\) for five RS+FD (Brandt et al., 2013) protocols and \(k=2\).
Figure 7. Theoretical \(\epsilon\) (\(\mathbf{x}\)-axis) versus estimated \(\epsilon_{lb}\) (\(\mathbf{y}\)-axis) using our LDP-Auditor framework comparing different number of attributes \(d\) for five RS+FD (Brandt et al., 2013) protocols and \(k=100\).
The comparison of theoretical \(\epsilon\) values (x-axis) with estimated \(\epsilon_{lb}\) values (y-axis) for the five RS+FD protocols, based on the number of attributes \(d\), is presented in Figure 6 (for \(k=2\)) and Figure 7 (for \(k=100\)), utilizing our LDP-Auditor framework.
In these figures, it is evident that, once again, GRR exhibits tighter \(\epsilon_{lb}\) lower bounds than UE-based protocols following the RS+FD solution. However, in contrast to Section 4.2, the estimated lower bound \(\epsilon_{lb}\) for GRR now displays a "plateau behavior" for theoretical \(\epsilon\geq 4\). This plateau arises because the probability of reporting the true value under GRR reaches high values with \(\epsilon\geq 4\), so that further increases have little effect on the audit result.
Notably, among the family of UE protocols, SUE demonstrates a tighter empirical \(\epsilon_{lb}\) than OUE when the domain is binary (see Figure 6). However, SUE exhibits lower \(\epsilon_{lb}\) than OUE when \(k=100\) (see Figure 7). This observation can be attributed to the advantage of SUE in transmitting the true bit with a probability \(p>\frac{1}{2}\), while OUE has \(p=\frac{1}{2}\). Consequently, the distinguishability attack achieves higher accuracy for SUE, increasing the true positive rate and decreasing the false positive rate, resulting in higher \(\epsilon_{lb}\) lower bounds. Moreover, different fake data generation procedures for UE protocols (UE-z _vs_ UE-r) did not result in significant changes in the audit results.
Another intriguing result is that the empirical privacy loss is lower for a binary domain compared to when \(k=100\). This behavior is primarily due to the impact of fake data on distinguishability attacks. _In a binary domain, fake data significantly increases the false positive rate, leading to a decrease in the estimated lower bound \(\epsilon_{lb}\)._ However, for a higher domain size, fake data has a lesser impact on the false positive rate, as the distinguishability attack has more room for error. Overall, these nuanced relationships underscore the intricate interplay between domain size, the use of fake data and the tightness of local privacy loss estimation in the context of RS+FD protocols.
### Case Study #4: Debugging a Python Implementation of UE Protocols
Finally, we show how our LDP-Auditor framework can also serve as a tool for verifying the correctness of LDP implementations. In our case study, we focus on the pure-LDP (Srivastava et al., 2017) package (version 1.1.2) and show that their UE protocols fail to meet the claimed level of \(\epsilon\)-LDP. Our objective here is not to point out issues with respect to a particular code or library but rather to demonstrate the potential of our approach for verifying and debugging LDP protocols. Following the same experimental setup outlined in Section 4.1, Figure 8 presents a comparison of the theoretical \(\epsilon\) values (x-axis) with the estimated \(\epsilon_{lb}\) values (y-axis) using our LDP-Auditor framework. We consider different domain sizes \(k\) for both the SUE and OUE protocols, implemented in the pure-LDP package.
From Figure 8, it is clear that LDP-Auditor has detected inconsistencies between the lower and upper bounds, which are highlighted by the orange rectangle. After conducting an investigation into the pure-LDP code, we were able to identify the specific location of the implementation error. The error arises from the following steps in the _perturb function of the UEClient class:
1. The user initializes a zero-vector \(\mathbf{y}=[0,0,\ldots,0]\) of size \(k\);
2. The user samples indexes of values in \(\mathbf{y}\) that will flip from \(0\) to \(1\) with probability \(q\) (as indicated in Equation (3)).
3. With probability \(p\) (as indicated in Equation (3)), the index at position \(\mathbf{y}_{v}\) (representing the user's true value) is flipped from \(0\) to \(1\).
4. **Missing step:** if \(\mathbf{y}_{v}\) was set to \(1\) in step (2) but not in step (3), there should be a correction to revert it back to \(0\).
This was a simple mistake that _was directly fixed by the authors_ (Srivastava et al., 2017) following our communication with them. However, it is crucial to emphasize that this minor error had implications for the \(\epsilon\)-LDP guarantees. Specifically, the bit corresponding to the user's value was transmitted more often than intended, particularly in high privacy regimes. In mid to low privacy regimes, the bug might go unnoticed, given the already high probability of transmitting the bit as \(1\). This explains why LDP-Auditor failed to detect inconsistencies between the lower and upper bounds for \(\epsilon\geq 1\). In such cases, specialized tools designed for identifying DP violations, like DP-Sniper (Srivastava et al., 2017), would likely have been effective in detecting the bug. Therefore, we strongly encourage end-users of the pure-LDP package experimenting with UE protocols to update to the latest version 1.2.0.
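A corrected client-side UE perturbation, written from the four steps above (our sketch, not the pure-LDP source), makes the fix explicit: the bit at the user's position must be governed by \(p\) alone, so any \(q\)-flip landing on that position is overridden.

```python
import numpy as np

rng = np.random.default_rng(0)


def ue_client(v, k, p, q):
    """Unary-encoding report: background bits flip 0 -> 1 with prob q, position v is 1 with prob p."""
    y = (rng.random(k) < q).astype(int)   # step 2: background flips
    y[v] = int(rng.random() < p)          # steps 3-4: position v depends only on p (the missing correction)
    return y


# SUE: p = e^(eps/2) / (e^(eps/2) + 1), q = 1 - p;   OUE: p = 1/2, q = 1 / (e^eps + 1).
eps, k, v = 1.0, 8, 3
p, q = 0.5, 1.0 / (np.exp(eps) + 1)       # OUE parameters as an example
print(ue_client(v, k, p, q))
```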
Figure 8. Theoretical \(\epsilon\) (x-axis) versus estimated \(\epsilon_{lb}\) (y-axis) using our LDP-Auditor framework comparing different domain sizes \(k\) for both SUE and OUE protocols, implemented in the pure-LDP package. The orange rectangle highlights inconsistencies between the observed empirical lower bound and the theoretical upper bound.
## 5. Conclusion and perspectives
In this work, we have introduced the LDP-Auditor framework as a powerful tool for empirically estimating the privacy loss of LDP mechanisms. In summary, LDP-Auditor provides a clearer picture of the privacy-utility trade-offs of LDP frequency estimation protocols, providing insights into the actual privacy loss in practical scenarios. Through several case studies, we have demonstrated the framework's effectiveness in identifying significant discrepancies between theoretical guarantees and empirical privacy loss. These findings contribute to a nuanced understanding of the challenges and considerations in the design and implementation of LDP mechanisms. As LDP continues to gain prominence in privacy-preserving data analysis, LDP-Auditor can serve as a valuable resource for practitioners and researchers aiming to assess and enhance the privacy guarantees of their systems. In the following two subsections, we summarize the key findings, substantiating the various claims made in our paper, as well as limitations and perspectives.
### Factors Influencing the Audit
**Generality of Our Findings.** The eight LDP frequency estimation protocols we audited in Section 4.2 are the building blocks of LDP mechanisms proposed for more complex tasks such as: heavy hitter estimation (Han et al., 2015; Wang et al., 2016), joint distribution estimation (Han et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018), frequent item-set mining (Wang et al., 2016; Wang et al., 2017), machine learning (Wang et al., 2016; Wang et al., 2017), frequency estimation of multidimensional data (Bahdan et al., 2016; Wang et al., 2017; Wang et al., 2018) and frequency monitoring (Bahdan et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018). Therefore, our audit results provide generic insights that shed light on several critical factors influencing the estimation of the local privacy loss in more complex tasks.
**Effect of Encoding and Perturbation Functions.** It is important to note that LDP frequency estimation protocols employ different encoding and perturbation functions, leading to varying levels of susceptibility to distinguishability attacks. As exemplified by our results, protocols like GRR, which has no specific encoding, exhibit tighter lower bound estimates for \(\epsilon_{lb}\) (see Figure 2). In contrast, protocols utilizing local hashing encoding, such as BLH (Han et al., 2015), tend to have less robust empirical privacy estimates, with lower bounds up to 10x smaller than the theoretical \(\epsilon\). For this reason, we conducted an additional audit study of the privacy guarantees associated with using local hashing without LDP perturbation to gain a better understanding of the sources of randomness and information loss (see Figure 3). Although local hashing by itself has no proven DP guarantees, our findings suggest that under the state-of-the-art distinguishability attacks (Wang et al., 2016; Wang et al., 2017), there is sufficient loss of information when the hash domain is small (_i.e._, \(g=2\)) leading to \(\epsilon_{lb}<1\).
**Impact of Domain Size.** Another lesson learned is the influence of the domain size \(k\) on the local privacy estimation of \(\epsilon_{lb}\). Larger domain sizes generally lead to wider gaps between theoretical \(\epsilon\) values and estimated lower bounds \(\epsilon_{lb}\). Intuitively, as the domain size \(k\) increases, it becomes more challenging to perform a successful distinguishability attack. For instance, in the case of GRR, the probability \(p=\frac{e^{\epsilon}}{e^{\epsilon}+k-1}\) of "being honest" decreases as \(k\) grows. In other mechanisms, there is a higher likelihood of introducing noise in the output \(y\), such as by flipping more bits from 0 to 1 in UE protocols. Moreover, the analysis reveals exceptions for certain protocols, such as OUE and OLH, in which higher domain sizes result in tighter lower bound estimates in low privacy regimes.
**Longitudinal Studies and Sequential Composition.** Our investigation into the privacy loss in longitudinal studies underscores a large discrepancy between theoretical upper bound based on the (L)DP sequential composition and practical LDP auditing. Our results reveal that, even after a significant number of data collections (\(\tau=100\)), none of the LDP frequency estimation protocols achieved the upper bound \(\epsilon_{OPT}\) imposed by Monte Carlo estimation when the theoretical \(\epsilon<1\). This challenges the assumptions of worst-case theoretical guarantees and emphasizes the importance of privacy auditing in real-world applications, for which realistic adversaries could be considered. In addition, this also motivates the need to consider more advanced composition theorems that are tailored to the specific context considered.
**LDP Mechanisms for Multidimensional Data.** Auditing LDP protocols handling multidimensional data, specifically those following the RS+FD solution (Bahdan et al., 2016), uncovers further complexities. The impact of fake data on distinguishability attacks is a critical factor, influencing the false positive rate and, consequently, the estimated lower bound \(\epsilon_{lb}\). For instance, and somewhat intriguingly, a binary domain yields lower empirical privacy loss, emphasizing the nuanced dynamics between domain size, fake data and the precision of local privacy loss estimation in the context of RS+FD protocols.
**Debugging LDP Implementations.** Our investigation of a Python implementation of UE protocols (the pure-LDP package) showed that LDP-Auditor can also serve as a debugging tool: the audit revealed an implementation error affecting the claimed \(\epsilon\)-LDP guarantees in high privacy regimes, which was promptly fixed by the package authors following our report.
## Acknowledgments
The work of Heber H. Arcelezi was partially supported by the European Research Council (ERC) project HYPATIA under the European Union's Horizon 2020 research and innovation programme. Grant agreement n. 835294. Sebastien Gambs is supported by the Canada Research Chair program as well as a Discovery Grant from NSERC.
|
2306.11101 | Metric $f(R)$ gravity with dynamical dark energy as a scenario for the
Hubble tension | We introduce a theoretical framework to interpret the Hubble tension, based
on the combination of a metric $f(R)$ gravity with a dynamical dark energy
contribution. The modified gravity provides the non-minimally coupled scalar
field responsible for the proper scaling of the Hubble constant, in order to
accommodate for the local SNIa pantheon+ data and Planck measurements. The
dynamical dark energy source, which exhibits a phantom divide line separating
the low red-shift quintessence regime ($-1<w<-1/3$) from the phantom
contribution ($w<-1$) in the early Universe, guarantees the absence of
tachyonic instabilities at low red-shift. The resulting $H_0(z)$ profile
rapidly approaches the Planck value, with a plateau behaviour for $z\gtrsim 5$.
In this scenario, the Hubble tension emerges as a low red-shift effect, which
can be in principle tested by comparing SNIa predictions with far sources, like
QUASARS and Gamma Ray Bursts. | Giovanni Montani, Mariaveronica De Angelis, Flavio Bombacigno, Nakia Carlevaro | 2023-06-19T18:08:06Z | http://arxiv.org/abs/2306.11101v2 | # Metric f(R) gravity with dynamical dark energy as a paradigm for the Hubble Tension
###### Abstract
We introduce a theoretical framework to interpret the Hubble tension, based on the combination of a metric \(f(R)\) gravity with a dynamical dark energy contribution. The modified gravity provides the non-minimally coupled scalar field responsible for the proper scaling of the Hubble constant, in order to accommodate for the local SNIa pantheon+ data and Planck measurements. The dynamical dark energy source, which exhibits a phantom divide line separating the low red-shift quintessence regime (\(-1<w<-1/3\)) from the phantom contribution (\(w<-1\)) in early Universe, guarantees the absence of tachyonic instabilities at low red-shift. The resulting \(H_{0}(z)\) profile rapidly approaches the Planck value, with a plateau behaviour for \(z\gtrsim 5\). In this scenario, Hubble tension emerges as a low red-shift effect, which can be in principle tested by comparing SNIa predictions with far sources, like QUASARS and Gamma Ray Bursts.
## I Introduction
Modern Cosmology [1] suffers from a significant number of unexplained features, among which the problem of the late Universe acceleration (dark energy) stands out for its relevance to the incoming observational tasks (see for instance Euclid [2]) [3; 4; 5; 6].
Many different scenarios have been conjectured on the nature of the present acceleration and they could be distinguished into two main classes. On one hand, we have models requiring the presence of exotic sources, leading to matter with negative pressure mimicking vacuum energy density contribution; on the other hand, we can consider modified gravity effects able to reproduce the same phenomenology via non-Einsteinian dynamics [7].
Recently, this puzzling picture has been enriched by the so-called \(4.9\sigma\) "Hubble tension" [8], consisting of the non-concealable discrepancy between the value of the Hubble constant \(H_{0}\), as measured in \(\mathrm{km\,s}^{-1}\mathrm{Mpc}^{-1}\) by the Planck satellite (\(67.4\pm 0.5\)) and by the nearby standard candles like Cepheids and Supernovae Ia, hereafter SNe Ia (\(73.04\pm 1.04\)). This feature suggests that the Hubble constant measurement could be affected by the red-shift in its determination. Indeed, from the Cosmic Microwave Background (CMB) radiation coming from \(z\simeq 1100\) it appears significantly lower in value than from the nearby sources, say, at \(z\gtrsim 3\).
Furthermore, the analysis provided in [9] (see also [10; 11]), outlines the existence of a dependence of the type \(H_{0}(z)\propto(1+z)^{-\alpha}\) where \(\alpha\sim 10^{-2}\) from a red-shift binned analysis of the SNe Ia distribution in the Pantheon sample [12]. This result has been obtained within \(2\sigma\) of statistical significance and it suggests that the Hubble tension can be intended as a monotonic decreasing of the \(H_{0}\) value from \(z=0\) up to \(z\simeq 1100\).
This behavior for the Pantheon sample could be explained by a red-shift evolution of the SNe Ia as astrophysical sources, and, hence, as standard candles. However, this perspective is not entirely shared by the community investigating on these sources (see for instance the Pantheon+ analysis presented in [13]), and it seems legit to interpret the \(H_{0}(z)\) profile of the Pantheon SNe Ia sample as the Hubble tension in its-self, by introducing a new physics background.
In [9], it was argued that the profile \(H_{0}(z)\) is the result of the Einstein constant rescaling via the non-minimally coupled scalar field of metric \(f(R)\) gravity in the Jordan frame. However, a Hu-Sawicki model was tested in [10], and the associated luminosity distance turned out to not account for the desired effect. This is essentially due to the typical behavior of the modified gravity theories providing a Universe acceleration: they reproduce a \(\Lambda\)CDM model [1] only at larger red-shift, while small deviations are today available. This feature is quite in contradiction with the aim of dealing with the Hubble tension, thought as a \(H_{0}(z)\) profile, for which the deviation from the standard \(\Lambda\)CDM model must take place just at large values of the red-shift. This has been successfully implemented in [14], where a metric \(f(R)\) gravity model is constructed able to reproduce the late Universe acceleration and the \(H_{0}(z)\) profile according to the specific law fixed in [9]. A smooth decreasing behavior of \(H_{0}(z)\) is indeed obtained, matching with the detected values of the SNe Ia and the CMB data. As originally conjectured, it is just the non-minimally coupled scalar field of the scalar-tensor representation to be responsible for the Hubble constant scaling, making it apparently depending on the red-shift of the sources used for its determination.
Here, we address the problem of the Hubble tension by searching for a physical effect, able to rescale the Universe expansion rate, but without requiring a priori the specific dependence on the red-shift \(z\) introduced in [14]. Still in the framework of metric \(f(R)\) gravity, we study from a general perspective the evolution in \(z\) of the dynamical system formed by the Hubble function, the non-minimally coupled scalar field, and its potential term. In particular, in order to ensure the physical (non-tachyonic) character of the model, we introduce, in addition to the standard matter energy density contribution, a dynamical dark energy component exhibiting a state parameter running with the red-shift (see also [15; 16; 17; 18; 19; 20; 21])
The model we construct is able to provide the required rescaling of the Hubble constant, via an effective function \(H_{0}(z)\), induced by the red-shift variation of the non-minimally coupled scalar field of the Jordan frame representation of \(f(R)\) gravity. The function \(H_{0}(z)\) rapidly decreases from the SNIa value to the CMB one, which is already reached at \(z\simeq 5\) (even at \(z\gtrsim 2\) if error bars are considered).
The modified gravity theory deviates from General Relativity only at low red-shifts, and Einstein-Hilbert action in the presence of a cosmological constant (up to a rescaling of the gravitational constant) is promptly recovered. The dynamical dark energy is driven by two parameters, ruling the transition from quintessence regime to a phantom energy phase (\(w<-1\)), with the phantom divide line (PDL, [22; 23; 24; 25; 26; 27]) occurring around \(z\simeq 0.8\). The parameters are determined within certain ranges, depending on the error bars of the \(H_{0}\) measurements at low and high red-shifts.
The most important phenomenological feature of our model is that it discriminates between the SNIa as testers, through whose sample \(H_{0}(z)\) fastly runs (also according to the analysis in [9; 10]) and higher red-shift sources, like QUASARS and Gamma Ray Bursts, for which the value of \(H_{0}\) has to coincide with the CMB. This fact offers an interesting validation perspective of the proposed theoretical framework via the expected increase in the statistics for those astrophysical objects. The paper is organized as follows. In Sec. II we introduce the metric \(f(R)\) theory of gravity and we apply it to the late universe. This implementation results in a generalized Friedmann equation that encompasses the presence of both baryonic and non-baryonic matter, along with dynamic dark energy fluid. The aim of Sec. III is to find a dynamical framework which could be understood as a \(\Lambda\)CDM model, wherein the Hubble constant \(H_{0}\) exhibits a red-shift-dependent variation. This feature arises as a consequence of the evolution of the non-minimally coupled scalar field \(\phi(z)\). Moreover, we express the equation of state for the dynamical dark energy. In Sec. IV we show how the presence of a dynamical dark energy source is a viable tool to ensure the absence of tachyonic instabilities which could otherwise impact the modified gravity model under consideration at low red-shifts. Finally, it is discussed the \(f(R)\) profile which manifests small deviations from General Relativity only at low-red-shift. In Sec. V conclusions are drawn. Spacetime signature is chosen mostly plus \((-,+,+,+)\), and we adopted units with \(c=1\), with \(\chi\) denoting the Einstein constant.
## II Modified gravity model
Here, we consider a metric \(f(R)\)-model as described in the Jordan frame [28; 29], in which the non-Einsteinian features are summarized by the presence of a non-minimally coupled scalar field \(\phi\) to gravity. The action of the theory takes the form
\[S=\frac{1}{2\chi}\int d^{4}x\sqrt{-g}\left\{\phi R-V(\phi)+2\chi\mathcal{L}_{ m}\right\}, \tag{1}\]
where \(g\) and \(R\) are the determinant and Ricci scalar associated to the metric tensor \(g_{\mu\nu}\) respectively; eventually, \(\mathcal{L}_{m}\) denotes the matter Lagrangian density. The potential \(V(\phi)\) is fixed by the specific functional form \(f(R)\), according to the following relation
\[V(\phi)\equiv f(R(\phi))-\phi R(\phi)\,,\quad R(\phi)=\left(\frac{df}{dR} \right)^{-1}(\phi), \tag{2}\]
being \(df/dR\) an invertible function.
It is worth noting that, here, the metric tensor \(g_{\mu\nu}\) and the scalar field \(\phi\) are independent degrees of freedom with respect to which the action (1) has to be varied, considering also the matter variables. In particular, the scalar field is a massive propagating mode that could be detected in the polarisation modes of a gravitational wave, see [30].
### Cosmological implementation
We now implement the modified gravity picture depicted above to the late Universe dynamics in which we neglect both the spatial curvature and the radiation contribution [1] so that it is well described by a line element of the Robertson-Walker form
\[ds^{2}=-dt^{2}+a(t)^{2}\left(dx^{2}+dy^{2}+dz^{2}\right), \tag{3}\]
where we adopted Cartesian coordinates and the dynamics of the cosmic scale factor \(a\) is expressed via the synchronous time \(t\). The generalized Friedmann equation is then obtained by considering the \(tt\)-component of the metric field equation derived from (1), resulting in:
\[H^{2}=\frac{1}{3\phi}\left(\chi\rho-3H\dot{\phi}+\frac{V(\phi)}{2}\right)\,, \tag{4}\]
where \(H\equiv\dot{a}/a\) is the Hubble parameter, with a dot denoting time differentiation, and \(\rho\) represents the total energy density associated to the perfect fluid stress-energy tensor for the matter \(\mathcal{L}_{m}\). In our case it accommodates the non-relativistic component \(\rho_{m}(t)=\rho_{m0}a^{-3}\), describing baryonic and non baryonic matter, as well as for a dynamical dark energy fluid characterized by the equation of state \(P_{\Lambda}=w(t)\rho_{\Lambda}\), with \(w(t)\) to be specified (see the discussion below). Eventually, by varying (1) w.r.t. the scalar field \(\phi\), we get the relation
\[\frac{dV}{d\phi}=6\dot{H}+12H^{2}\,. \tag{5}\]
The dynamical system is described by (4) and (5), which together with the continuity equation for the matter
\[\dot{\rho}+3H(\rho+P)=0, \tag{6}\]
completely determine the evolution of the degrees of freedom \(a(t)\), \(\phi(t)\) and \(\rho(t)\), once \(w(t)\) is assigned.
## III Construction of the model
We now search for a solution, able to reproduce the current Universe expansion and the apparent variation of the Hubble constant. In other words, we are searching for a dynamical paradigm that can be interpreted as a \(\Lambda\)CDM model whose Hubble constant \(H_{0}\), here taken as that one measured by the SNe Ia, acquires a variation with the redshift (say \(H_{0}(z)\)) as the effect of the non-minimally coupled scalar field evolution \(\phi=\phi(z)\). Thus, the Hubble constant predicted by the CMB data comes out due to the different value of the scalar field at \(z\simeq 1100\) and that one taken at low red-shift.
It results convenient to rearrange all the quantities in terms of the red-shift variable \(x\equiv\ln(1+z)\), where the red-shift \(z\) is defined from
\[\frac{a_{0}}{a}=\frac{1}{1+z}. \tag{7}\]
With the standard assumption that the present-day scale factor is \(a_{0}=1\), differentiation in \(t\) can be re-expressed as
\[\frac{d}{dt}=H(x)\frac{d}{dx}. \tag{8}\]
Having in mind a scenario that is very close to a \(\Lambda\)CDM model, we impose that the potential identically cancels out (up to a rescaling of the gravitational coupling) the modified gravity contribution led by the scalar field \(\phi\), i.e.
\[V(\phi)=6H\dot{\phi}, \tag{9}\]
and we choose for the dynamical dark energy component the equation of state
\[P_{\Lambda}(x)=-\left(1-\frac{w_{0}+2w_{1}x}{3}\right)\rho_{\Lambda}(x), \tag{10}\]
where the parameters \(w_{0},w_{1}\) account for deviations from the standard cosmological constant scenario, which is recovered for \(w_{0}=w_{1}=0\). At low red-shifts, where \(x\simeq z\), this equation of state reproduces the so-called \(w_{0}w_{a}\)CDM model [12]. Such a dynamical dark energy contribution, once the parameters \(w_{0}\) and \(w_{1}\) are properly determined (see discussion in Sec. IV), is necessary to guarantee a consistent \(f(R)\) gravity devoid of tachyonic degrees of freedom. Taking into consideration (6), it follows that (4) can be rewritten as
\[H^{2}(x)=\frac{H_{0}^{2}}{\phi(x)}\left(\Omega_{m0}e^{3x}+\Omega_{\Lambda 0 }e^{x(w_{0}+w_{1}x)}\right), \tag{11}\]
where \(\Omega_{m0}\) and \(\Omega_{\Lambda}\) denote the present-day values of the matter and vacuum energy density critical parameters respectively, and we used the definitions of the critical energy density of the Universe today \(\rho_{c0}=3H_{0}^{2}/\chi\), \(\Omega_{m0}=\rho_{m0}/\rho_{c0}\) and \(\Omega_{\Lambda 0}=\rho_{\Lambda 0}/\rho_{c0}=1-\Omega_{m0}\) (flat Universe). Thus, expressing (5) and (9) via \(x\), we get
\[\frac{dV}{dx} = \left(12H^{2}(x)-6H(x)\frac{dH}{dx}\right)\frac{d\phi}{dx}\,, \tag{12}\] \[V(x) = -6H^{2}(x)\frac{d\phi}{dx}. \tag{13}\]
The three equations (11), (12) and (13) can be solved for the three unknowns \(H(z)\), \(\phi(z)\) and \(V(z)\). Furthermore, combining the last two functions, we can also infer the profile \(V(\phi)\) and, hence, the nature of the considered \(f(R)\) gravity.
## IV Solution for the Hubble tension
We start our analysis by observing that the ratio of (12) and (13) results in
\[\frac{d\ln V}{dx}=-2+\frac{d\ln H}{dx}\,, \tag{14}\]
which admits the solution
\[V(x)=\frac{H(x)}{\lambda e^{2x}}, \tag{15}\]
where we fixed the integration constant as \(V(0)\equiv\frac{H(0)}{\lambda}\), with \(\lambda\) a negative constant of dimension \([\lambda]=L\). Now, comparing the equation above with (13) and taking into account (11), we get
\[\frac{d\phi}{dx}=-\frac{1}{6\lambda H_{0}e^{2x}}\sqrt{\frac{\phi(x)}{\Omega_{ m0}e^{3x}+\Omega_{\Lambda 0}e^{x(w_{0}+w_{1}x)}}}, \tag{16}\]
where, in solving (11) for \(H(x)\), we take the positive root related to the expanding branch for \(a(t)\). The formal solution is then given by
\[\sqrt{\phi(x)}=1-\frac{1}{12\lambda H_{0}}\int_{0}^{x}\frac{dy}{\sqrt{\Omega_{m0}e^{7y}+\Omega_{\Lambda 0}e^{y(4+w_{0}+w_{1}y)}}}, \tag{17}\]

so that the effective, red-shift dependent Hubble constant reads

\[H_{0,\rm eff}(x)=\frac{H_{0}}{\sqrt{\phi(x)}}. \tag{18}\]
The behaviour of \(\phi(z)\) can be appreciated in Fig. 1, which shows that General Relativity is rapidly recovered for \(z\gtrsim 2\), where the scalar field boils down to a constant value. From the condition \(H_{0,\rm eff}(x_{\rm CMB})=H_{0,\rm CMB}\) it is possible to evaluate the parameter \(\lambda\) for every pair of \(\{w_{0},w_{1}\}\) from the relation
\[\phi(z_{\rm CMB})=\left(\frac{H_{0}}{H_{0,\rm CMB}}\right)^{2}, \tag{19}\]
having set \(\Omega_{m}=0.298\), \(H_{0,\rm CMB}=67.4\,{\rm km\ s^{-1}Mpc^{-1}}\) and \(H_{0}=73.04\,{\rm km\ s^{-1}Mpc^{-1}}\). The values of \(\{w_{0},w_{1}\}\) are selected by ensuring the positiveness of the mass of the scalar mode associated to the value of \(\phi\) close to \(x=0\), which, for a curved background, takes the form1
Footnote 1: We consider the adiabatic limit where time derivatives are neglected, see [30; 31; 32].
\[m_{eff}^{2}=\frac{1}{3}\left(\phi\,\frac{d^{2}V}{d\phi^{2}}-\frac{dV}{d\phi} \right). \tag{20}\]
The potential term \(V(\phi)\), see Fig. 2, is obtained from (15) by numerically inverting (17) for \(x=x(\phi)\). We find for the parameters \(w_{0},w_{1}\) the values
\[w_{0}=0.8\qquad w_{1}=-0.66, \tag{21}\]
associated to the central profile of \(H_{0,\rm eff}\) in Fig. 3. When we look at the maximal and minimal curves of \(H_{0,\rm eff}\), which enclose the red-shaded region and are obtained by considering the errors of Planck and SNe Ia measurements, we find instead for \(\{w_{0},w_{1}\}\) the extremal values
\[w_{0,\rm max}=0.7,\qquad w_{1,\rm max}=-0.51, \tag{22}\]
and
\[w_{0,\rm min}=0.9,\qquad w_{1,\rm min}=-0.83. \tag{23}\]
We see that the non-minimally coupled scalar field is responsible for the scaling of the effective Hubble constant \(H_{0}(z)\), which mostly occurs for \(z\lesssim 2\), where the quintessence properties of the dynamical dark energy fluid (see Fig. 4) ensure that no tachyon modes emerge in the theory. At higher red-shift, the underlying gravitational scenario approaches General Relativity (up to a rescaling of the gravitational constant given by the asymptotic value of the scalar field, see Fig. 1), and a transition from quintessence to phantom dark energy model (\(w<-1\)) takes place in \(z\simeq 0.8\), where \(\rho(z)\) reaches its maximum.
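As a numerical illustration (our sketch, under the stated parameter choices; \(H_{0}\) is treated as a pure number in \(\mathrm{km\,s^{-1}Mpc^{-1}}\), so the quoted \(\lambda\) is expressed in the corresponding units), Eqs. (17)-(19) can be integrated by quadrature, with \(\lambda\) calibrated so that \(\phi(x_{\rm CMB})=(H_{0}/H_{0,\rm CMB})^{2}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H0, H0_CMB = 73.04, 67.4
Om, w0, w1 = 0.298, 0.8, -0.66            # central values of Eq. (21)
OL = 1.0 - Om
x_cmb = np.log(1.0 + 1100.0)


def integrand(y):
    return 1.0 / np.sqrt(Om * np.exp(7 * y) + OL * np.exp(y * (4 + w0 + w1 * y)))


def sqrt_phi(x, lam):
    """Eq. (17): sqrt(phi) as a function of x = ln(1+z) for a given (negative) lambda."""
    integral, _ = quad(integrand, 0.0, x, limit=200)
    return 1.0 - integral / (12.0 * lam * H0)


# Calibrate lambda through Eq. (19): phi(x_CMB) = (H0 / H0_CMB)^2.
target = (H0 / H0_CMB) ** 2
lam = brentq(lambda l: sqrt_phi(x_cmb, l) ** 2 - target, -10.0, -1e-4)

for z in [0.0, 0.5, 1.0, 2.0, 5.0, 1100.0]:
    x = np.log(1.0 + z)
    print(z, round(H0 / sqrt_phi(x, lam), 2))   # Eq. (18): H0_eff(z) decreases toward the Planck value
```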
### The f(R) profile at low-red-shift
Now, we present an approximated solution for the \(f(R)\) function under the low-red-shift condition (\(z\lesssim 1\)). To recover the explicit behaviour of the function \(f(R)\) defined
Figure 1: Behaviour of the scalar field vs the red-shift \(z\).
in (2), we proceed with a complete numerical analysis of the system (11)-(12)-(13), which we solve in terms of \(\phi(x)\) and \(V(\phi(x))\). Thus, we perform the inversion \(x=x(\phi)\) enabling us to calculate \(R=dV/d\phi\) and \(\phi=\phi(R)\). Finally, by fitting such a numerical solution up to \(z=0.5\) and considering a profile of the form
\[f(R)=c_{1}+R+c_{2}R^{2}, \tag{24}\]
where constant, linear, and quadratic terms in R are included, and the recovery of the \(\Lambda\)CDM model occurs when \(\phi=const\) (defined as \(\phi=df/dR\)), we obtain the values \(c_{1}=2.4\,H_{0}^{2}\) and \(c_{2}=4\times 10^{-3}H_{0}^{-2}\). As a check for the theory, the positive \(c_{2}\) coefficient in front of the \(R^{2}\) term ensures the absence of a tachyonic field and well-behaved cosmological solutions with a proper era of matter domination [33].
## V Conclusions
Here, we assumed that Hubble tension is a consolidated feature, emerging from the comparison of SNIa Pantheon and Pantheon+ data [34; 35; 36; 37; 12] with CMB data [38] (see also [39] for a reduction of the error bars in SNIa). As argued in [9; 10], Hubble tension can be associated to an effective \(H_{0}(z)\) profile, as it emerges from the SNIa sample and is reliable up to the recombination red-shift \(z\simeq 1100\) (for a theoretical representation of this scenario via a modified gravity theory, see also [14]). The present study aimed to search for an effective \(H_{0}(z)\) behavior that rapidly decays across the SNIa samples, approaching Planck measurements via a plateau for \(z\sim 5\). This scenario has been reproduced by combining a metric \(f(R)\) gravity, which controls the dynamics at low red-shift, together with a dynamical dark energy density exhibiting a phantom divide line for \(z\gtrsim 0.8\). The whole of these effects concurs in driving the desired dynamical picture, with the non-minimally coupled scalar field responsible for the rescaling of the Hubble constant, and the dynamical dark energy source guaranteeing the absence of tachyonic instabilities, otherwise affecting the considered modified gravity model at low red-shifts. The proposed scenario has the merit to provide a precise marker for its validation, since it discriminates low red-shifts sources, like Cepheid and SNIa, from larger ones, like QUASARS and Gamma Ray Bursts. Within the error bars, indeed, there is a possible agreement between our predictions and data sets from distant sources, see e.g. [40], bearing in mind that the increase of the statistics in the samples of high \(z\) sources will allow a more accurate comparison with our model. At this stage, we simply observe that by taking into account error bars of \(H_{0}\) measurements, the behavior in Fig. 3 is consistent with Planck data already for \(z\simeq 2\), suggesting that the \(H_{0}(z)\) profile could mainly run within the red-shift interval available to SNIa samples.
## Acknowledgment
The work of FB is supported by the postdoctoral grant CIAPOS/2021/169. MDA is supported by an EPSRC studentship. Authors thank Gonzalo J. Olmo for the useful comments and Maria Giovanna Dainotti for valuable suggestions on Fig.3.
|
2307.11228 | From Adaptive Query Release to Machine Unlearning | We formalize the problem of machine unlearning as design of efficient
unlearning algorithms corresponding to learning algorithms which perform a
selection of adaptive queries from structured query classes. We give efficient
unlearning algorithms for linear and prefix-sum query classes. As applications,
we show that unlearning in many problems, in particular, stochastic convex
optimization (SCO), can be reduced to the above, yielding improved guarantees
for the problem. In particular, for smooth Lipschitz losses and any $\rho>0$,
our results yield an unlearning algorithm with excess population risk of
$\tilde O\big(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\rho}\big)$ with unlearning
query (gradient) complexity $\tilde O(\rho \cdot \text{Retraining
Complexity})$, where $d$ is the model dimensionality and $n$ is the initial
number of samples. For non-smooth Lipschitz losses, we give an unlearning
algorithm with excess population risk $\tilde
O\big(\frac{1}{\sqrt{n}}+\big(\frac{\sqrt{d}}{n\rho}\big)^{1/2}\big)$ with the
same unlearning query (gradient) complexity. Furthermore, in the special case
of Generalized Linear Models (GLMs), such as those in linear and logistic
regression, we get dimension-independent rates of $\tilde
O\big(\frac{1}{\sqrt{n}} +\frac{1}{(n\rho)^{2/3}}\big)$ and $\tilde
O\big(\frac{1}{\sqrt{n}} +\frac{1}{(n\rho)^{1/3}}\big)$ for smooth Lipschitz
and non-smooth Lipschitz losses respectively. Finally, we give generalizations
of the above from one unlearning request to \textit{dynamic} streams consisting
of insertions and deletions. | Enayat Ullah, Raman Arora | 2023-07-20T20:46:39Z | http://arxiv.org/abs/2307.11228v1 | # From Adaptive Query Release to Machine Unlearning
###### Abstract
We formalize the problem of machine unlearning as design of efficient unlearning algorithms corresponding to learning algorithms which perform a selection of adaptive queries from structured query classes. We give efficient unlearning algorithms for linear and prefix-sum query classes. As applications, we show that unlearning in many problems, in particular, stochastic convex optimization (SCO), can be reduced to the above, yielding improved guarantees for the problem. In particular, for smooth Lipschitz losses and any \(\rho>0\), our results yield an unlearning algorithm with excess population risk of \(\widetilde{O}\big{(}\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\rho}\big{)}\) with unlearning query (gradient) complexity \(\widetilde{O}(\rho\cdot\text{Retraining Complexity})\), where \(d\) is the model dimensionality and \(n\) is the initial number of samples. For non-smooth Lipschitz losses, we give an unlearning algorithm with excess population risk \(\widetilde{O}\big{(}\frac{1}{\sqrt{n}}+\big{(}\frac{\sqrt{d}}{n\rho}\big{)}^{ 1/2}\big{)}\) with the same unlearning query (gradient) complexity. Furthermore, in the special case of Generalized Linear Models (GLMs), such as those in linear and logistic regression, we get dimension-independent rates of \(\widetilde{O}\big{(}\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{2/3}}\big{)}\) and \(\widetilde{O}\big{(}\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{1/3}}\big{)}\) for smooth Lipschitz and non-smooth Lipschitz losses respectively. Finally, we give generalizations of the above from one unlearning request to _dynamic_ streams consisting of insertions and deletions.
### Our Results and Techniques
**Learning/Unlearning as Query Release:** Iterative procedures are an integral constituent of the algorithmic toolkit for solving machine learning problems and beyond. As in the case of GD above, these often consist of a sequence of _simple_ but _adaptive_ computations. The simple computations are often efficiently undo-able (as in the first iteration of GD), but their _adaptive_ nature - a change in the result of one iteration changes the trajectory of the algorithm - makes it difficult to undo computation, or unlearn, efficiently.
As opposed to designing unlearning (and learning) algorithms for specific (machine learning) problems, we study the design of unlearning algorithms corresponding to (a class of) learning algorithms. We formalize this by considering learning algorithms which perform _adaptive query release_ on datasets. Specifically, this consists of a selection of adaptive queries from structured classes like linear and prefix-sum queries (see Section 3 for details). The above example of GD is an instance of linear query, since the query, which is the average gradient \(\frac{1}{n}\sum_{i=1}^{n}\nabla\ell(w_{t};z_{i})\), is a sum of functions of data-points. With this view, we study how to design _efficient_ unlearning algorithms for such methods.
We use efficiency in the sense of number of queries made (query complexity), ignoring the use of other resources, e.g., space, computation for selection of queries, etc. To elaborate on why this is interesting, firstly note that this does not make the problem trivial, in the sense that even with unlimited access to other resources, it is still challenging to design an unlearning algorithm with query complexity smaller than that of retraining (the naive baseline). Secondly, let us revisit the motivation from solving optimization problems. The standard model to measure computation in optimization is the number of gradient queries a method makes for a target accuracy, often abstracted in an oracle-based setup (Nemirovsky and Yudin, 1983). Importantly, this setup imposes no constraints on other resources, yet it witnesses the optimality of well-known simple procedures like (variants of) GD. We follow this paradigm, and as applications of our results to Stochastic Convex Optimization (SCO), we make progress on the fundamental question of understanding the gradient complexity of unlearning in SCO. Interestingly, our proposed unlearning procedures are simple enough that the improvement over retraining in terms of query complexity also applies even when accounting for the (arithmetic) complexity of all other operations in the learning and unlearning methods.
**Linear queries:** The simplest query class we consider is that of linear queries (details deferred to Appendix B). Herein, we show that the prior work of Ullah et al. (2021), which focused on unlearning in SCO and was limited to the stochastic gradient method, can be easily extended to general linear queries. This observation yields unlearning algorithms for _Federated Optimization/Learning_ and _\(k\)-means clustering_ algorithms. Herein, we give a \(\rho\)-TV stable (see Definition 3) learning procedure with \(T\) adaptive queries and a corresponding unlearning procedure with a \(O(\sqrt{T}\rho)\) relative unlearning complexity (the ratio of unlearning and retraining complexity; see Definition 5).
**Prefix-sum queries:** Our main contribution is the case when we consider the class of prefix-sum queries. These are a sub-class of interval queries which have been extensively studied in differential privacy and are classically solved by the binary tree mechanism (Dwork et al., 2010). We note in passing that for differential privacy, the purpose of the tree is to enable a tight privacy accounting and no explicit tree may be maintained. In contrast, for unlearning, we show that maintaining the binary tree data structure aids for efficient unlearning. We give a binary-tree based \(\rho\)-TV stable learning procedure and a corresponding unlearning procedure with a \(\widetilde{O}(\rho)\) relative unlearning complexity.
**Unlearning in Stochastic Convex Optimization (SCO):** Our primary motivation for considering prefix-sum queries is its application to unlearning in SCO (see Section 2 for preliminaries).
**1) Smooth SCO**: The problem of unlearning in smooth SCO was studied in Ullah et al. (2021) which proposed algorithms with excess population risk of \(\widetilde{O}\left(\frac{1}{\sqrt{n}}+\left(\frac{\sqrt{d}}{n\rho}\right)^{2/ 3}\right)\) where \(\rho\) is the relative unlearning complexity. We show that using a variant of variance-reduced Frank-Wolfe (Zhang et al., 2020), which uses prefix-sum queries, yields an improved excess population risk of \(O\left(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\rho}\right)\). This corresponds to \(\widetilde{O}(\rho n)\) expected gradient computations upon unlearning.
**2) Non-smooth SCO:** In the non-smooth setting, which was not covered in the prior works, we give an algorithm based on Dual Averaging (Nesterov, 2009), which again uses prefix-sum query access, and thus fits into the framework. This algorithm gives us an excess population risk of \(O\left(\frac{1}{\sqrt{n}}+\frac{d^{1/4}}{\sqrt{n\rho}}\right)\) with \(\widetilde{O}(\rho n)\) expected gradient complexity of unlearning.
**3) Generalized Linear Models (GLM):** GLMs are among the most basic machine learning problems, and include the squared loss (in linear regression), logistic loss (in logistic regression), hinge loss (support vector machines), etc. We study unlearning in two classes of GLMs (see below), for which we combine recently proposed techniques based on dimensionality reduction (Arora et al., 2022) with the above prefix-sum query algorithms to get the following _dimension-independent rates_.
**3(a) Smooth GLM:** For the smooth convex GLM setting, we combine the Johnson-Lindenstrauss transform with variance-reduced Frank-Wolfe to get \(O\Big{(}\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{2/3}}\Big{)}\) excess population risk. Note that we get no overhead in statistical rate even with very small relative unlearning complexity, \(\rho\approx n^{-1/4}\). This class of smooth GLMs contains the well-studied problem of logistic regression. Hence, our result demonstrates that it is possible to unlearn logistic regression with _sub-linear_, specifically \(O(n^{3/4})\), unlearning complexity with no sacrifice in the statistical rate.
**3(b) Lipschitz GLM:** Similarly, for the Lipschitz convex GLM setting, we combine Johnson-Lindenstrauss transform with Dual Averaging yielding a rate of \(\widetilde{O}\left(\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{1/3}}\right)\).
Please see Table 1 for a summary of above results.
**SCO in dynamic streams:** Finally, we consider SCO in dynamic streams where we observe a sequence of insertions and deletions and are supposed to produce outputs after each time-point. In this case, we present two methods: one which satisfies the exact unlearning guarantee with worse update time, and the other which satisfies _weak unlearning_ - which only requires the model (and not the metadata) to be indistinguishable (see Definition 2) - with improved update time. The exact unlearning method is inspired by the work of Ullah et al. (2021), which dealt with insertions similarly to deletions. The weak unlearning method is motivated by the observation that the above may be too pessimistic. To elaborate, inserting a new data item does not warrant an (unlearning) guarantee that the algorithm's state be indistinguishable from the case where the point was not inserted. Hence, insertions should require smaller update time, which is indeed the case for our proposed methods.
### Related work
Our work is a direct follow-up of Ullah et al. (2021) which proposed the framework of Total Variation (TV) stability and maximal coupling for the **exact** machine unlearning problem. They applied this to unlearning in smooth stochastic convex optimization (SCO) and obtained a guarantee of \(\frac{1}{\sqrt{n}}+\left(\frac{\sqrt{d}}{n\rho}\right)^{\frac{2}{3}}\) on excess population risk, where \(n\) is the number of data samples, \(d\) is the model dimensionality, and \(\rho\) is the relative unlearning complexity (see Definition 5). We improve upon the results in that work in multiple ways as described in the preceding section. Besides this, the exact unlearning problem has been studied for \(k\)-means clustering (Ginart et al., 2019) and random forests (Brophy and Lowd, 2021). The work of Bourtoule et al. (2021) proposes a general methodology for exact unlearning for deep learning methods. Their focus is to devise _practical methods_ and they do not provide theoretical guarantees on accuracy, even in simple settings. Finally, there are works which consider unlearning in SCO, however they use an _approximate_ notion of unlearning inspired by differential privacy (Guo et al., 2019; Neel et al., 2021; Sekhari et al., 2021; Gupta et al., 2021), and therefore are incomparable to our work.
## 2 Problem Setup and Preliminaries
Let \(\mathcal{Z}\) be the data space, \(\mathcal{W}\) be the model space and \(\mathcal{M}\) be the _meta-data_ space, where meta-data is additional information a learning algorithm may save to aid unlearning. We consider a learning algorithm as a map \(\mathbf{A}:\mathcal{Z}^{*}\to\mathcal{W}\times\mathcal{M}\) and an unlearning algorithm as a map \(\mathbf{U}:\mathcal{W}\times\mathcal{M}\times\mathcal{Z}\to\mathcal{W}\times \mathcal{M}\). We use \(\mathcal{A}\) and \(\mathcal{U}\) to denote the first output (which belongs to \(\mathcal{W}\)) of \(\mathbf{A}\) and \(\mathbf{U}\) respectively.
We recall the definition of exact unlearning which requires that the entire state after unlearning be indistinguishable from the state obtained if the learning algorithm were applied to the dataset without the deleted point.
**Definition 1** (Exact unlearning).: _A procedure \((\mathbf{A},\mathbf{U})\) satisfies exact unlearning if for all datasets \(S\), all \(z\in\mathcal{Z}\), and for all events \(\mathcal{E}\subseteq\mathcal{W}\times\mathcal{M}\), we have, \(\mathbb{P}\left(\mathbf{A}\left(S\backslash\left\{z\right\}\right)\in \mathcal{E}\right)=\mathbb{P}\left(\mathbf{U}\left(\mathbf{A}(S),z\right)\in \mathcal{E}\right)\)_
We next define weak unlearning wherein only the model output and not the entire state is required to be indistinguishable.
**Definition 2** (Weak unlearning).: _A procedure \((\mathbf{A},\mathbf{U})\) satisfies weak unlearning if for all datasets \(S\), all \(z\in\mathcal{Z}\), and for all events \(\mathcal{E}\subseteq\mathcal{W}\), we have, \(\mathbb{P}\left(\mathcal{A}\left(S\backslash\left\{z\right\}\right)\in \mathcal{E}\right)=\mathbb{P}\left(\mathcal{U}\left(\mathbf{A}(S),z\right)\in \mathcal{E}\right)\)_
**Unlearning request:** We consider the setting where we start with a dataset of \(n\) samples and observe **one** unlearning request. We assume that the choice of unlearning request is oblivious to the learning process. In Section 6, we generalize our result to a streaming setting of requests.
**Total Variation stability, maximal coupling and efficient unlearning:** The Total Variation (TV) distance between two probability distributions \(P\) and \(Q\) is
\[\mathsf{TV}(P,Q)=\sup_{\text{measurable }\mathcal{E}}\left|P(\mathcal{E})-Q( \mathcal{E})\right|.\]
Next, we define Total Variation (TV) stability to motivate algorithmic techniques for efficient unlearning.
**Definition 3**.: _An algorithm \(\mathcal{A}\) is said to be \(\rho\) Total Variation (TV) stable if for all datasets \(S\) and \(S^{\prime}\) differing in one point, i.e. \(|S\Delta S^{\prime}|=1\), the total variation distance, \(\mathsf{TV}\left(\mathcal{A}(S),\mathcal{A}(S^{\prime})\right)\leq\rho\)_

\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline
**Problem** & **Base algorithm** & **Rate** \\ \hline \hline Smooth, Lipschitz SCO & VR-FW & \(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\rho}\) \\ \hline Lipschitz SCO & DA & \(\frac{1}{\sqrt{n}}+\frac{d^{1/4}}{\sqrt{n\rho}}\) \\ \hline Smooth, Lipschitz GLM & JL + VR-FW & \(\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{2/3}}\) \\ \hline Lipschitz GLM & JL + DA & \(\frac{1}{\sqrt{n}}+\frac{1}{(n\rho)^{1/3}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Excess population risk guarantees for various problems as well as the base algorithm; \(\rho\): relative unlearning complexity (see Definition 5), VR-FW: Variance-reduced Frank-Wolfe, DA: Dual averaging, JL: Johnson-Lindenstrauss transform.
Given two distributions \(P\) and \(Q\), a **coupling** is a joint distribution \(\pi\) with marginals \(P\) and \(Q\). Furthermore, a **maximal coupling** is a coupling \(\pi\) such that the disagreement probability \(\mathbb{P}_{(x,y)\sim\pi}\left\{x\neq y\right\}=\mathsf{TV}(P,Q)\). In the unlearning context, \(P=\mathcal{A}(S)\), the output on the initial dataset, and \(Q=\mathcal{A}(S^{\prime})\), the output on the updated dataset. Hence, the unlearning problem simply becomes about transporting \(P\) to \(Q\) with small _computational cost_, akin to optimal transport (Villani, 2009). Furthermore, observe that when sampled from a maximal coupling between \(P\) and \(Q\), by definition, we get the **same sample** for both \(P\) and \(Q\), except with probability \(\rho\), and yet satisfying the exact unlearning criterion. The main idea is that for certain learning algorithms of interest, during unlearning, we can **efficiently** construct a (near) maximal coupling of \(P\) and \(Q\), and so the same model output from \(P\) suffices for \(Q\), most of the time. In particular, the fraction of times that we need to change the model is (roughly) the TV-stability parameter \(\rho\) of the learning algorithm. The goal, therefore, is to design an (accurate) TV-stable learning algorithm and a corresponding efficient coupling-based unlearning algorithm. In this work, we use the technique of reflection coupling described below.
**Reflection Coupling (Lindvall and Rogers, 1986):** Reflection coupling is a classical technique in probability to maximally couple symmetric probability distributions. Consider two probability distributions \(P\) and \(Q\) with means \(u\) and \(u^{\prime}\) and let \(r\) be a sample from \(P\). The process involves a rejection sampling step on the two distributions and the sample \(r\) (see line 13 in Algorithm 3). If it results in accept, we use the same \(r\) as the sample from \(Q\); otherwise, we apply the following simple map:
\[\mathsf{Reflect}(u,u^{\prime},r)=u+u^{\prime}-r,\]
which gives the sample from \(Q\), see line 16 in Algorithm 3.
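To make the accept-or-reflect step concrete, the following is a minimal Python sketch (ours, not the paper's code; function names are illustrative) of coupling two isotropic Gaussians \(\mathcal{N}(u,\sigma^{2}\mathbb{I})\) and \(\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})\): a rejection test decides whether the old sample can be reused, and otherwise the \(\mathsf{Reflect}\) map is applied.

```
import numpy as np

rng = np.random.default_rng(0)

def log_density(x, mean, sigma):
    # Log-density of N(mean, sigma^2 I), dropping the normalizing constant,
    # which cancels in the ratio used below.
    return -np.sum((x - mean) ** 2) / (2.0 * sigma ** 2)

def couple(u, u_new, r, sigma):
    # Given r ~ N(u, sigma^2 I), return a sample distributed as N(u_new, sigma^2 I).
    # Reuse r if the rejection test accepts; otherwise point-reflect r about the
    # midpoint of the two means, i.e. Reflect(u, u_new, r) = u + u_new - r.
    accept_logprob = min(0.0, log_density(r, u_new, sigma) - log_density(r, u, sigma))
    if np.log(rng.uniform()) <= accept_logprob:
        return r, True
    return u + u_new - r, False

# Demo: with means that are close relative to sigma, the old sample is almost
# always reused, so downstream computations need not be redone.
u, u_new, sigma = np.zeros(3), 0.05 * np.ones(3), 1.0
reused = [couple(u, u_new, u + sigma * rng.standard_normal(3), sigma)[1]
          for _ in range(10000)]
print("fraction of samples reused:", np.mean(reused))
```

When the two means are close relative to \(\sigma\), the rejection test almost always accepts, which is exactly what makes reusing previous computation possible.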
```
0: Dataset \(S\), steps \(T\), query functions \(\left\{q_{t}(\cdot)\right\}_{t\leq T}\) where \(q_{t}\in\mathcal{Q}\), a query class, update functions \(\left\{U_{t}(\cdot)\right\}_{t\leq T}\), selector function \(\mathcal{S}(\cdot)\)
1: Initialize model \(w_{1}\in\mathcal{W}\)
2:for \(t=1\) to \(T-1\) do
3: Query dataset \(u_{t}=q_{t}\left(\left\{w_{i}\right\}_{i\leq t},S\right)\)
4: Update \(w_{t+1}=U_{t}(\left\{w_{i}\right\}_{i\leq t},u_{t})\)
5:end for
6:\(\widehat{w}=\mathcal{S}\left(\left\{w_{t}\right\}_{t\leq T}\right)\)
```
**Algorithm 1** Template learning algorithm
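Read as code, the template is an adaptive loop over queries, updates, and a final selector. The following Python sketch (ours) instantiates it with a toy linear query, namely the average gradient of a quadratic loss, so all concrete functions below are placeholders rather than anything prescribed by the framework.

```
import numpy as np

def template_learn(S, T, query_fns, update_fns, selector):
    # Sketch of Algorithm 1: adaptively query the dataset, update the model,
    # and select the final output. All function arguments are placeholders.
    iterates = [np.zeros_like(S[0])]                      # w_1
    for t in range(T - 1):
        u_t = query_fns[t](iterates, S)                   # line 3: query the dataset
        iterates.append(update_fns[t](iterates, u_t))     # line 4: update the model
    return selector(iterates)                             # line 6: select the output

# Toy instantiation: gradient descent on 0.5*||w - z||^2; the average gradient
# is a sum of per-point functions of the data, i.e. a linear query.
S = [np.array([1.0, 2.0]), np.array([3.0, 0.0])]
queries = [lambda ws, data: np.mean([ws[-1] - z for z in data], axis=0)] * 10
updates = [lambda ws, u, eta=0.5: ws[-1] - eta * u] * 10
print(template_learn(S, 10, queries, updates, selector=lambda ws: ws[-1]))
```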
Our algorithmic techniques borrow tools from differential privacy (Dwork et al., 2014) such as its relationship with Total Variation stability; we describe these in Appendix A.
**Stochastic Convex Optimization (SCO):** SCO is the dominant framework for computationally-efficient machine learning. Consider a closed convex (constraint) set \(\mathcal{W}\subset\mathbb{R}^{d}\) and let \(D\) denote its diameter. Let \(\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}\) be a loss function, which is convex in its first parameter \(\forall z\in\mathcal{Z}\). Given \(n\) i.i.d. points from an unknown probability distribution \(\mathcal{D}\) over \(\mathcal{Z}\), the goal is to devise an algorithm, the output of which has small _population risk_, defined as
\[L(w;\mathcal{D}):=\operatorname*{\mathbb{E}}_{z\sim\mathcal{D}}\ell(w;z).\]
The _excess population risk_ is then \(L(w;\mathcal{D})-L(w^{*};\mathcal{D})\) where \(w^{*}\) denotes a population risk minimizer over \(\mathcal{W}\).
**Generalized Linear Models (GLM):** Generalized Linear Models (GLMs) are loss functions popularly encountered in supervised learning problems, like linear and logistic regression. Herein, \(\ell(w;(x,y))=\phi_{y}\left(\left\langle w,x\right\rangle\right)\), where \(\phi_{y}:\mathbb{R}\rightarrow\mathbb{R}\) is some _link function_. We use \(\left\|\mathcal{X}\right\|\) to denote the radius bound on data points, i.e. for \(x\in\mathcal{X}\subseteq\mathbb{R}^{d}\), \(\left\|x\right\|\leq\left\|\mathcal{X}\right\|\). In this case, we consider the unconstrained setup i.e. \(\mathcal{W}=\mathbb{R}^{d}\), as it allows to get dimension-independent rates for GLMs, similar to what happens under differential privacy (Jain and Thakurta, 2014; Arora et al., 2022).
We introduce the Johnson-Lindenstrauss property below which is crucial to our construction.
**Definition 4** (Johnson-Lindenstrauss property).: _A random matrix \(\Phi\in\mathbb{R}^{k\times d}\) satisfies the \((\beta,\gamma)\)-JL property if for any \(u,v\in\mathbb{R}^{d}\), \(\mathbb{P}\left(\left|\left\langle\Phi u,\Phi v\right\rangle-\left\langle u,v\right\rangle\right|\geq\beta\left\|u\right\|\left\|v\right\|\right)\leq\gamma\)._
There exist many efficient constructions of such random matrices (Nelson, 2011).
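For intuition, a quick empirical check (ours) of Definition 4: a Gaussian matrix with i.i.d. \(\mathcal{N}(0,1/k)\) entries approximately preserves inner products, with typical relative distortion on the order of \(1/\sqrt{k}\).

```
import numpy as np

rng = np.random.default_rng(0)
d, k, trials = 2000, 200, 200
u, v = rng.standard_normal(d), rng.standard_normal(d)
scale = np.linalg.norm(u) * np.linalg.norm(v)

errs = []
for _ in range(trials):
    Phi = rng.standard_normal((k, d)) / np.sqrt(k)   # E[<Phi u, Phi v>] = <u, v>
    errs.append(abs((Phi @ u) @ (Phi @ v) - u @ v) / scale)

# Typical distortion beta is on the order of 1/sqrt(k); gamma is the tail probability.
print("median relative distortion:", float(np.median(errs)), "vs 1/sqrt(k) =", 1 / np.sqrt(k))
```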
## 3 Unlearning for Adaptive Query Release
We now set up the framework of adaptive query release, which is a lens to view (existing) iterative learning procedures; this view is useful in our design of corresponding unlearning algorithms. Iterative procedures run on datasets consist of a sequence of _interactions_ with the dataset; each interaction computes a certain function, or query, on the dataset. The chosen query is typically adaptive, i.e., dependent on the prior query outputs. We consider iterative learning procedures which are composed of adaptive queries from a specified query class. Formally, consider a query class \(\mathcal{Q}\subseteq\mathcal{W}^{\mathcal{W}^{*}\times\mathcal{Z}^{*}}\); herein, each query in \(\mathcal{Q}\) is a function of a sequence of \(\left\{w_{i}\right\}_{i<t}\) (typically, prior query outputs), and the dataset \(S\), with output in \(\mathcal{W}\). With this view, we give a general template of a learning procedure as Algorithm 1, where \(\left\{U_{t}\right\}_{t}\) and \(\mathcal{S}\) are the update and selector functions internal to the algorithm.
**Query model:** We describe the query model which we use to measure computational complexity. Under the model, a query function \(q(\left\{w_{i}\right\}_{i},S)\) takes \(|S|\) unit computations (or queries, for brevity) for any \(q\) and \(\left\{w_{i}\right\}_{i}\). In our applications to SCO, this will correspond to the gradient oracle complexity.
Our algorithmic approach to unlearning is rooted in the relationship between TV stability and maximal couplings. With this view, for a specified query class, we have the following requirements.
1. **TV-stability:** We want a \(\rho\)-TV stable "modification" of the learning Algorithm 1, in the sense that it responds to the queries (line 3) while satisfying TV stability.
2. **Efficient unlearning algorithm**: We measure efficiency as the average number of queries the unlearning algorithm makes relative to the learning algorithm (retraining), defined as follows.
**Definition 5** (Relative Unlearning Complexity).: _The Relative Unlearning Complexity is defined as,_
\[\frac{\mathbb{E}_{\left(\mathbf{A},\mathbf{U}\right)}\left[\text{Query complexity of unlearning algorithm }\mathbf{U}\right]}{\mathbb{E}_{\mathbf{A}}\left[\text{Query complexity of learning algorithm }\mathbf{A}\right]}\]
For a \(\rho\)-TV stable learning algorithm, we want that the relative unlearning complexity is (close to) \(\rho\). This is motivated by the relationship between maximal coupling and TV distance. In the following, our proposed unlearning algorithm constructs a (near) maximal coupling of the learning algorithm's output under the original and updated dataset. This means that the unlearning algorithm changes the original output (under the original dataset) with probability at most \(\rho\) - in this case, the unlearning algorithm makes a number of queries akin to retraining. In the other case, when it does not change the output, it makes a small (ideally, constant) number of queries. The above imply that the relative unlearning complexity is (close to) \(\rho\). We note that relative unlearning complexity, in itself, does not completely capture whether the unlearning algorithm is _good_, since it may be the case that the corresponding learning algorithm is computationally more expensive than other existing methods. However, in our applications to SCO (Section 5), our learning algorithms are linear time, so the denominator, in the definition above, is as small as it can be (asymptotically), i.e. \(\Theta(n)\).
3. **Accuracy:** We will primarily be concerned with correctness of the unlearning algorithm and its efficiency. In the applications (Section 5), we will give accuracy guarantees for specific problems, where we will see our proposed TV stable modified algorithms are still accurate.
## 4 Prefix-sum Queries
We now consider prefix-sum queries, which is the main contribution of this work. The reason for this choice is that two powerful (family of) algorithms for SCO, Dual Averaging and Recursive Variance Reduction based methods, fit into this template (detailed in Section 5). We start by defining a prefix-sum query.
**Definition 6**.: _A set of queries \(\left\{q_{t}\right\}_{t\geq 1}\) where \(q_{t}:\mathcal{W}^{t}\times\mathcal{Z}^{n}\rightarrow\mathcal{W}\) are called prefix-sum queries if \(q_{1}(w_{1},S)=p_{1}(w_{1},z_{1})\) and for all \(t>1\), \(q_{t}(\left\{w_{i}\right\}_{i\leq t},S)=q_{t-1}(\left\{w_{i}\right\}_{i<t},S)+p_{t}\big{(}\left\{w_{i}\right\}_{i\leq t},z_{t}\big{)}\) for some functions \(\left\{p_{t}\right\}_{t\geq 1}\) where \(p_{t}:\mathcal{W}^{*}\times\mathcal{Z}\rightarrow\mathcal{W}\)._
Simply put, prefix-sum queries sequentially query **new** data points and add them to the previously accumulated query. A simple example is computing partial sums of data points \((z_{1},z_{1}+z_{2},\ldots)\). Note that in the above definition, we can equivalently represent the prefix-sum queries using the sequence \(\left\{p_{t}\right\}_{t}\). We also assume that the queries have bounded sensitivity, defined as follows.
**Definition 7**.: _A query \(q:\mathcal{W}^{*}\times\mathcal{Z}^{n}\rightarrow\mathcal{W}\) is \(B\)-sensitive if_
\[\sup_{\left\{w_{i}\right\}_{i}}\sup_{S,S^{\prime}:|S\Delta S^{\prime}|=1}\|q\left(\left\{w_{i}\right\}_{i},S\right)-q\left(\left\{w_{i}\right\}_{i},S^{\prime}\right)\|\leq B.\]
We note that the bounded sensitivity condition is satisfied in a variety of applications; see Section 5.
### Learning with Binary Tree Data-Structure
The learning algorithm, given as Algorithm 2, is based on answering the adaptive prefix-sum queries with the binary tree mechanism (Dwork et al., 2010). For \(n\) samples (assume \(n\) is a power of two, otherwise we can append dummy "zero" samples without any change in asymptotic complexity), the binary tree mechanism constructs a complete binary tree \(\mathcal{T}\) with the leaf nodes corresponding to the data samples. The key idea in the binary tree mechanism is that instead of adding _fresh_ independent noise to each prefix-sum query, it is _better_ to add correlated noise, where the correlation structure is described by a binary tree. For example, suppose we want to release the **seventh** prefix-sum query, \(\sum_{i=1}^{7}p_{i}(\left\{w_{j}\right\}_{j\leq i},z_{i})\), then consider the dyadic decomposition of \(7\) as \(4,2\) and \(1\), and release the sum,
\[\big{(}\sum_{i=1}^{4}p_{i}(\left\{w_{j}\right\}_{j\leq i},z_{i})+\xi_{1}\big{)}+\big{(}\sum_{i=5}^{6}p_{i}(\left\{w_{j}\right\}_{j\leq i},z_{i})+\xi_{2}\big{)}+\big{(}p_{7}(\left\{w_{j}\right\}_{j\leq 7},z_{7})+\xi_{3}\big{)},\]
where \(\xi_{i}\)'s denote the added noise, which may have also been used in prior prefix-sum query responses. See Figure 1 (left) for a simplified description of the process.
We index the nodes of the tree using binary strings \(B=\left\{0,1\right\}^{\log(n)}\) which describe the path from the root. Let the tree \(\mathcal{T}=\left\{v_{b}\right\}_{b\in B}\) denote the contents stored by the learning algorithm. Herein, each node contains the tuple \((u,r,w,z)\) where \(u\in\mathbb{R}^{d}\) is the query response, \(r\in\mathbb{R}^{d}\) is the _noisy_ response, \(w\in\mathbb{R}^{d}\) a model and \(z\in\mathcal{Z}\) a data point. In fact, only the leaf nodes store the model and data sample. The size of the tree is the space complexity of the learning procedure. Finally, define \(\mathsf{leaf}:[n]\rightarrow\left\{0,1\right\}^{\log(n)}\) which gives the binary representation of the input leaf node.
This binary tree data structure supports the following operations:
1. \(\mathsf{Append}(u,\sigma;\mathcal{T})\): Add a new leaf to \(\mathcal{T}\), which consists of setting its query response and noisy query response to \(u\), and \(u+\mathcal{N}(0,\sigma^{2}\mathbb{I})\) respectively. Further, update tree to add \(u\) to \(u_{b}\), corresponding to nodes \(v_{b}\) in the path from this leaf to root, and add noise to their noisy response \(r_{b}\) for nodes which are left child in the path.
2. \(\mathsf{GetPrefixSum}(t;\mathcal{T})\), where \(t\in\mathbb{N}\): Get the \(t\)-th noisy response from \(\mathcal{T}\), which consists of traversing the tree from \(t\)-th leaf to root, and adding the noisy responses of nodes which are left child.
3. \(\mathsf{Get}(b;\mathcal{T})\) where \(b\in\left\{0,1\right\}^{\log(n)}\): Get all items in the vertex of \(\mathcal{T}\) indexed by \(b\).
4. \(\mathsf{Set}(b,v;\mathcal{T})\) where \(b\in\left\{0,1\right\}^{\log(n)}\): Set the contents of vertex \(b\) in the \(\mathcal{T}\) as \(v\).
Following Guha Thakurta and Smith (2013), we give pseudo-codes of the above operations in Appendix C, with minor modifications to aid the unlearning process.
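To illustrate how \(\mathsf{Append}\) and \(\mathsf{GetPrefixSum}\) interact, here is a compact Python sketch (ours; it keeps only per-level block sums and omits the models, data points, and \(\mathsf{Get}\)/\(\mathsf{Set}\) bookkeeping of Algorithm 2) of the binary tree mechanism with correlated noise.

```
import numpy as np

class BinaryTreeMechanism:
    # Compact sketch of the binary tree mechanism for noisy prefix sums: each
    # dyadic block of query responses gets one reusable Gaussian noise term, so
    # any prefix sum touches only O(log n) noisy blocks.

    def __init__(self, d, sigma, levels=32, seed=0):
        self.d, self.sigma = d, sigma
        self.rng = np.random.default_rng(seed)
        self.exact = [np.zeros(d) for _ in range(levels)]   # open block sums per level
        self.noisy = [np.zeros(d) for _ in range(levels)]   # block sums plus frozen noise
        self.t = 0

    def append(self, u):
        # Append(u): fold the completed lower-level blocks into a new node and noise it.
        self.t += 1
        level = (self.t & -self.t).bit_length() - 1         # lowest set bit of t
        self.exact[level] = sum(self.exact[:level], np.zeros(self.d)) + u
        self.noisy[level] = self.exact[level] + self.sigma * self.rng.standard_normal(self.d)
        for j in range(level):                               # lower blocks are now closed
            self.exact[j] = np.zeros(self.d)
            self.noisy[j] = np.zeros(self.d)

    def get_prefix_sum(self):
        # GetPrefixSum(t): add the noisy blocks in the dyadic decomposition of [1, t].
        return sum((self.noisy[j] for j in range(len(self.noisy)) if (self.t >> j) & 1),
                   np.zeros(self.d))

tree = BinaryTreeMechanism(d=2, sigma=0.1)
for t in range(1, 8):
    tree.append(np.ones(2))            # query response p_t = (1, 1)
    print(t, tree.get_prefix_sum())    # approximately (t, t), with O(log t) noise terms
```

Each dyadic block receives one noise term that is frozen when the block closes and reused by every later prefix sum covering it; this is the correlation structure exploited during unlearning.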
```
0: Dataset \(S\), steps \(T\), \(B\)-sensitive prefix-sum queries \(\left\{p_{t}\right\}_{t\leq T}\), update functions \(\left\{U_{t}\right\}_{t\leq T}\), noise std. \(\sigma\)
1:if \(t_{0}=1\) then Permute dataset and initialize \(\mathcal{T}\) end if
2:\(\left(\cdot,\cdot,w_{t_{0}},\cdot\right)=\mathsf{Get}(\mathsf{leaf}(t_{0});\mathcal{T})\)
3:for \(t=t_{0}\) to \(\left|S\right|-1\) do
4:\(u_{t}=p_{t}(\left\{w_{i}\right\}_{i\leq t},z_{t})\)
5:\(\mathsf{Append}(u_{t},\sigma;\mathcal{T})\)
6:\(r_{t}=\mathsf{GetPrefixSum}(t;\mathcal{T})\)
7:\(w_{t+1}=U_{t}\left(\left\{w_{i}\right\}_{i\leq t},r_{t}\right)\)
8:\(\mathsf{Set}(\mathsf{leaf}(t),\left(u_{t},r_{t},w_{t},z_{t}\right);\mathcal{T})\)
9:end for
10:\(\widehat{w}=\mathcal{S}\left(\left\{w_{t}\right\}_{t}\right)\)
```
**Algorithm 2** \(\mathsf{TreeLearn}(t_{0};\mathcal{T})\)
### Unlearning by Maximally Coupling Binary Trees
The unlearning Algorithm 3 is based on constructing a (near) maximal coupling of the binary trees under the current and updated datasets. Let \(z_{j}\) be the element to be deleted and let \(v_{s}\) be the leaf node which contains \(z_{j}\) (we use \(z\) in place of \(z_{j}\) from here on, for simplicity). During unlearning, we simulate (roughly speaking) the dynamics of the learning algorithm if the deleted point was not present to begin with. In that case, in place of the deleted point, some other point would have been used. Now, since the dataset was randomly permuted, every point is equally likely to have been used, and thus we can use the point \(z^{\prime}\) in the last leaf node, say \(v_{l}\), in the tree - this choice of the last point is important for unlearning efficiency. Firstly, the computations associated with the last point \(z^{\prime}\) need to be undone - towards this, we update the contents of the nodes in the path from node \(v_{l}\) to root (line 5), finally removing node \(v_{l}\) from the tree (line 6). Then, we need to _replace_ all the computations which used the deleted point \(z\) with the same computation under \(z^{\prime}\). Since the learning algorithm was based on the binary tree mechanism, the point \(z\) was only **explicitly** used in the nodes lying on the path from leaf \(v_{s}\) to the root (so, at most \(\log\left(n\right)\) nodes). We say explicitly above because due to the adaptive nature of the process, in principle, all nodes after \(v_{s}\) depend on it, in the sense that their contents would change if the response in \(v_{s}\) were to change. However, importantly,
the binary search structure of our learning algorithm and our coupling technique (details below) would enable us to (mostly) only care about explicit computations.
We first compute **two** new queries, under the data points \(z\) and \(z^{\prime}\), with responses \(g=p_{j}(\{w_{q}\}_{q\leq s},z)\) and \(g^{\prime}=p_{j}(\{w_{q}\}_{q\leq s},z^{\prime})\) respectively (line 3). Starting with leaf node \(v_{s}\), we update the original unperturbed prefix-sum query response under \(z\), i.e. \(u\), to what it would have been under data-point \(z^{\prime}\): \(u^{\prime}=u-g+g^{\prime}\) (line 11). Further, since the training method adds noise \(\mathcal{N}(0,\sigma^{2}\mathbb{I})\) to \(u\) to produce the original noisy response \(r\), we now need to produce a sample from \(\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})\) to satisfy exact unlearning. Naively, we could simply get a _fresh_ independent sample from \(\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})\), however, this would change the noisy response \(r\), and hence require all subsequent computations to be redone (the adaptive nature). So, ideally, we want to reuse the same \(r\) and yet generate a sample from \(\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})\). This is precisely the problem of constructing a maximal coupling, discussed in Section 2, wherein we also discussed the method of reflection coupling to do it.
This amounts to doing a rejection sampling which (roughly) ascertains if the response \(r\) is still sufficient under the new distribution \(\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})\). Specifically, we compute the ratio of the probability densities at \(r\) under the noise added to \(u\) and \(u^{\prime}\), i.e. \(\frac{\phi_{\mathcal{N}(u,\sigma^{2}\mathbb{I})}(r)}{\phi_{\mathcal{N}(u^{\prime},\sigma^{2}\mathbb{I})}(r)}\), and compare it against a randomly sampled Unif(0,1); if it results in accept, we move to the parent of the node \(v_{s}\), and repeat. If any step fails, we reflect, which generates a different noisy response \(r^{\prime}\), and continue retraining from the next leaf w.r.t. the post order traversal of the tree (the variable ct in Algorithm 3 keeps track of this _next_ node). See Figure 1 for a simplified description of the process.
The main result of this section is as follows.
**Theorem 1**.: _The following are true for Algorithms 2 and 3,_
1. _The learning Algorithm_ 2 _with_ \(\sigma^{2}=\frac{64B^{2}\log^{2}(n)}{\rho^{2}}\) _satisfies_ \(\rho\)_-TV stability._
2. _The corresponding unlearning Algorithm_ 3 _satisfies exact unlearning._
3. _The relative unlearning complexity is_ \(\widetilde{O}\left(\rho\right)\)._
As discussed in the preceding section, the theorem above gives us all the properties we require of the unlearning process. We now move on to applications and give accuracy guarantees.
## 5 Applications
In the following, we describe some problems and learning algorithms. The corresponding unlearning algorithms and its correctness simply follow as application of the result of the preceding section, provided we show that it uses a bounded sensitivity prefix-sum query. The only other thing to show is the accuracy guarantee of the TV stable modification of the learning algorithm (Algorithm 2).
From here on, we use runtime to mean gradient complexity as is standard in convex optimization (Nemirovsky and Yudin, 1983). But, as pointed out before, our proposed unlearning algorithm yields similar improvements over retraining, even accounting for other operations in the method.
### Smooth SCO with Variance Reduced Frank-Wolfe
Figure 1: A simplified schematic of the learning (left) and unlearning (right) procedures for prefix-sum queries. On the left, the leaves contain (noisy, if \(+\xi\)) prefix-sum queries applied to the randomly permuted data points (\(z_{i}\)'s) below them. The intermediate nodes marked + add the un-noised values of their children, whereas the others add noise. On the right, the deleted point \(z_{i}\) is replaced with the point from the last leaf, which amounts to adjusting the queries with \(-g+g^{\prime}\) (see Algorithm 3 for details) and performing Rejection Sampling (abbreviated RS\({}_{i}\), where \(i\) indicates the order in which the rejection sampling steps occur) along the height of the tree.

We assume that the loss function \(w\mapsto\ell(w;z)\) is \(H\)-smooth and \(G\)-Lipschitz for all \(z\). The algorithm we use is the variance-reduced Frank-Wolfe method, where the variance-reduced gradient estimate \(u_{t}\) is the Hybrid-SARAH estimate (Tran-Dinh et al., 2019) with \(\gamma_{t}=\frac{1}{t+1}\), given as,
\[u_{t} =(1-\gamma_{t})\left(u_{t-1}+\nabla\ell(w_{t};z_{t})-\nabla\ell(w_{ t-1};z_{t})\right)+\gamma_{t}\nabla\ell(w_{t};z_{t})\] \[=\frac{1}{t+1}\sum_{i=1}^{t}\left((i+1)\,\nabla\ell(w_{i};z_{i})-i \nabla\ell(w_{i-1};z_{i})\right)\]
We show that the above is a prefix sum query with sensitivity \(B=2\left(HD+G\right)\), thus fits into our framework. The full pseudo-code is given as Algorithm 12 in Appendix E. We state the main result below where the accuracy guarantee follows from modifications to the analysis in Zhang et al. (2020).
**Theorem 2**.: _Let \(\rho\leq 1\) and \(\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}\) be an \(H\)-smooth, \(G\)-Lipschitz convex function over a closed convex set \(\mathcal{W}\) of diameter \(D\). Algorithm 12, as the learning algorithm, run with \(\sigma^{2}=\frac{64(HD+G)^{2}\log^{2}(n)}{\rho^{2}}\), \(t_{0}=1\) and \(\eta_{t}=\frac{1}{t+1}\) on a dataset \(S\) of n i.i.d. samples from \(\mathcal{D}\) outputs \(\widehat{w}\), with excess population risk bounded as,_
\[\mathbb{E}\left[L(\widehat{w};\mathcal{D})-L(w^{*};\mathcal{D})\right]= \widetilde{O}\left(\left(G+HD\right)D\left(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{ n\rho}\right)\right).\]
_Furthermore, the corresponding unlearning Algorithm 3 (with query and update functions as specified in the learning algorithm), satisfies exact unlearning with \(\widetilde{O}\left(\rho n\right)\) expected runtime._
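As a quick numerical sanity check (ours, on a toy quadratic loss with arbitrary iterates), the recursive Hybrid-SARAH update and the prefix-sum form displayed above coincide:

```
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 8
Z = rng.standard_normal((T + 1, d))     # data points z_1, ..., z_T (index 0 unused)
W = rng.standard_normal((T + 1, d))     # arbitrary iterates w_0, ..., w_T

def grad(w, z):
    return w - z                        # gradient of the toy loss 0.5 * ||w - z||^2

# Recursive form: u_t = (1 - g)(u_{t-1} + grad(w_t, z_t) - grad(w_{t-1}, z_t)) + g * grad(w_t, z_t)
u = np.zeros(d)
for t in range(1, T + 1):
    g = 1.0 / (t + 1)
    u = (1 - g) * (u + grad(W[t], Z[t]) - grad(W[t - 1], Z[t])) + g * grad(W[t], Z[t])

# Prefix-sum form: (1/(T+1)) * sum_{i=1}^{T} [ (i+1) grad(w_i, z_i) - i grad(w_{i-1}, z_i) ]
prefix = sum((i + 1) * grad(W[i], Z[i]) - i * grad(W[i - 1], Z[i]) for i in range(1, T + 1))
print(np.allclose(u, prefix / (T + 1)))   # True
```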
### Convex GLM with JL Method
**Input:** Dataset \(S\), loss function \(\ell\), base algorithm \(\mathcal{A}\), JL matrix \(\Phi\in\mathbb{R}^{k\times d}\), noise variance \(\sigma^{2}\)
```
1:\(\Phi S=\{\Phi x_{i}\}_{i=1}^{n}\)
2:\(\widetilde{w}=\mathcal{A}(\ell,\Phi S,2G\left\|\mathcal{X}\right\|,2H\left\|\mathcal{X}\right\|^{2},\sigma)\)
3:\(\widehat{w}=\Phi^{\top}\widetilde{w}\)
```
**Algorithm 4** JL Method
This JL method, proposed in Arora et al. (2022), is a general technique to get dimension-independent rates for unconstrained convex GLMs from algorithms giving dimension-dependent rate for constrained (general) convex losses. The method, described in Algorithm 4, simply embeds the dataset into a low dimensional space, via a JL matrix \(\Phi\), and then runs a base algorithm on the low dimensional dataset.
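A minimal Python sketch (ours) of this reduction is given below; the base solver is a stand-in ridge regression on the embedded data rather than the noisy, TV-stable base algorithms of Appendix E, so it only illustrates the embed-solve-lift structure of Algorithm 4.

```
import numpy as np

def jl_method(X, y, k, base_solver, seed=0):
    # Sketch of Algorithm 4: embed the data with a JL matrix Phi in R^{k x d},
    # run a base GLM solver in k dimensions, and lift the solution back with Phi^T.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Phi = rng.standard_normal((k, d)) / np.sqrt(k)
    w_low = base_solver(X @ Phi.T, y)      # \tilde{w} in R^k
    return Phi.T @ w_low                   # \hat{w} = Phi^T \tilde{w} in R^d

def ridge(A, y, lam=0.1):
    # Stand-in base solver (regularized least squares), not the paper's
    # noisy TV-stable base algorithm.
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))
w_star = np.zeros(500)
w_star[:5] = 1.0
y = X @ w_star + 0.01 * rng.standard_normal(200)
w_hat = jl_method(X, y, k=100, base_solver=ridge)
print("in-sample prediction error:", float(np.mean((X @ w_hat - y) ** 2)))
```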
**Smooth, Lipschitz GLMs:** We assume that \(\phi_{y}:\mathbb{R}\rightarrow\mathbb{R}\) is convex, \(H\)-smooth and \(G\)-Lipschitz for all \(y\in\mathcal{Y}\). We give the following result in this case using VR-Frank Wolfe as the base algorithm.
**Theorem 4**.: _Let \(\rho\leq 1\) and \(\ell:\mathcal{W}\times\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}\) be an \(H\)-smooth, \(G\)-Lipschitz convex GLM loss function. Algorithm 4 instantiated with Algorithm 12, as the learning algorithm, run with \(\sigma^{2}=\widetilde{O}\left(\frac{\left(H\left\|\mathcal{X}\right\|^{2}\left\| w^{*}\right\|+G\left\|\mathcal{X}\right\|\right)^{2}}{\rho^{2}}\right)\), \(t_{0}=1\), \(\eta_{t}=\frac{1}{t+1}\) and \(k=\widetilde{O}\left(\left(\frac{H\left\|\mathcal{X}\right\|^{2}\left\|w^{*} \right\|}{\left(H\left\|\mathcal{X}\right\|^{2}\left\|w^{*}\right\|+G\left\| \mathcal{X}\right\|\right)}\right)^{2/3}\left(n\rho\right)^{2/3}\right)\) on a dataset \(S\) of \(n\) samples, drawn i.i.d. from \(\mathcal{D}\), outputs \(\widehat{w}\) with excess population risk bounded as,_
\[\mathbb{E}\left[L(\widehat{w};\mathcal{D})-L(w^{*};\mathcal{D}) \right]=\widetilde{O}\Bigg{(}\frac{\left(G\left\|\mathcal{X}\right\|+H\left\| \mathcal{X}\right\|^{2}\left\|w^{*}\right\|\right)\left\|w^{*}\right\|}{\sqrt{ n}}\] \[+\frac{H^{1/3}G^{2/3}\left\|w^{*}\right\|^{4/3}\left\|\mathcal{X }\right\|^{4/3}+H\left\|\mathcal{X}\right\|^{2}\left\|w^{*}\right\|^{2}}{(n \rho)^{2/3}}\Bigg{)}.\]
_Furthermore, the corresponding unlearning Algorithm 3 (with query and update functions as specified in the learning algorithm), satisfies exact unlearning with \(\widetilde{O}\left(\rho n\right)\) expected runtime._
### Non-smooth SCO with Dual Averaging
In this section, we only assume that the loss function \(w\mapsto\ell(w;z)\) is \(G\)-Lipschitz and convex \(\forall\)\(z\in\mathcal{Z}\). Herein, we use the dual averaging method (Nesterov, 2009), where the model is updated as follows:
\[w_{t+1}=\Pi_{\mathcal{W}}\Big{(}w_{0}-\eta\sum_{i=1}^{t}\nabla\ell(w_{i};z_{i })\Big{)},\]
where \(\Pi\) denotes the Euclidean projection on to the convex set \(\mathcal{W}\). The above again is a prefix-sum query with sensitivity \(G\), thus fits into our framework. The full pseudo-code is given as Algorithm 13 in Appendix E. The accuracy guarantee mainly follows from Kairouz et al. (2021).
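The following Python sketch (ours) shows the update structure on a toy quadratic loss over a Euclidean ball; for brevity it adds fresh per-step noise to the gradient prefix sum instead of using the tree, and all parameter values are arbitrary.

```
import numpy as np

def project_ball(w, radius):
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else (radius / nrm) * w

def noisy_dual_averaging(Z, eta, radius, sigma, seed=0):
    # Dual averaging on the toy loss 0.5 * ||w - z||^2 over a Euclidean ball:
    # the iterate depends on the data only through a (noisy) prefix sum of gradients.
    rng = np.random.default_rng(seed)
    d = Z.shape[1]
    w0 = np.zeros(d)
    w = w0.copy()
    grad_prefix = np.zeros(d)
    for z in Z:
        grad_prefix += w - z                                   # prefix-sum increment p_t
        noisy_prefix = grad_prefix + sigma * rng.standard_normal(d)
        w = project_ball(w0 - eta * noisy_prefix, radius)
    return w

Z = np.random.default_rng(2).standard_normal((2000, 4)) + 1.0
print(noisy_dual_averaging(Z, eta=0.02, radius=5.0, sigma=0.5))  # drifts toward the mean of Z
```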
**Theorem 3**.: _Let \(\rho\leq 1\) and \(\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}\) be a \(G\)-Lipschitz convex function over a closed convex set \(\mathcal{W}\) of diameter \(D\). Algorithm 13, as the learning algorithm, run with \(\sigma^{2}=\frac{64G^{2}\log^{2}(n)}{\rho^{2}}\), \(t_{0}=1\) and \(\eta=\frac{D^{d/4}\sqrt{\log(n)}}{G\sqrt{n\rho}}\) on a dataset \(S\) of \(n\) samples, drawn i.i.d. from \(\mathcal{D}\), outputs \(\widehat{w}\) with excess population risk bounded as,_
\[\mathbb{E}\left[L(\widehat{w};\mathcal{D})-L(w^{*};\mathcal{D})\right]= \widetilde{O}\Bigg{(}GD\Bigg{(}\frac{1}{\sqrt{n}}+\sqrt{\frac{\sqrt{d}}{n \rho}}\Bigg{)}\Bigg{)}.\]
_Furthermore, the corresponding unlearning Algorithm 3 (with query and update functions as specified in the learning algorithm), satisfies exact unlearning with \(\widetilde{O}\left(\rho n\right)\) expected runtime._
**Lipschitz GLMs:** We assume that \(\phi_{y}:\mathbb{R}\rightarrow\mathbb{R}\) is convex and \(G\)-Lipschitz for all \(y\in\mathcal{Y}\). We give the following result in this case using Dual Averaging as the base algorithm.
**Theorem 5**.: _Let \(\rho\leq 1\) and \(\ell:\mathcal{W}\times\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}\) be a \(G\)-Lipschitz convex GLM loss function. Algorithm 4 with Algorithm 13 as the sub-routine, as the learning algorithm, run with \(\sigma^{2}=O\left(\frac{G^{2}\left\|\mathcal{X}\right\|^{2}}{\rho^{2}}\right)\), \(t_{0}=1\), \(\eta=\frac{\left\|w^{*}\right\|^{d/4}\sqrt{\log(n)}}{G\left\|\mathcal{X}\right\| \sqrt{n\rho}}\) and \(k=\sqrt{n\rho}\) on a dataset \(S\) of \(n\) samples sampled i.i.d. from \(\mathcal{D}\) outputs \(\widehat{w}\), with excess population risk bounded as,_
\[\mathbb{E}\left[L(\widehat{w};\mathcal{D})-L(w^{*};\mathcal{D})\right]= \widetilde{O}\Big{(}G\left\|\mathcal{X}\right\|\left\|w^{*}\right\|\left(\frac{1 }{\sqrt{n}}+\frac{1}{\left(n\rho\right)^{1/3}}\right)\Big{)}.\]
_Furthermore, the corresponding unlearning Algorithm 3 (with query and update functions as specified in the learning algorithm), satisfies exact unlearning with \(\widetilde{O}\left(\rho n\right)\) expected runtime._
## 6 SCO in Dynamic Streams
In this section, we extend our previous results to dynamic streams wherein we observe a sequence of insertions and deletions, starting with potentially zero data points. We assume that the number of available points throughout is positive, that the data points are i.i.d. from an unknown distribution, and that the requests are chosen independently of the algorithm.
To give a simple and unified presentation, let the accuracy, say expected excess population risk, of the \(\rho\)-TV stable Algorithm 2 with a dataset \(S\) be denoted as, \(\alpha(\rho,\left|S\right|;\mathcal{P})\) where \(\mathcal{P}\) denotes problem specific parameters such as Lipschitzness, diameter etc.
We present two techniques for dynamic streams; one of them satisfies exact unlearning but has a worse update time; this is similar to Ullah et al. (2021) and is deferred to Appendix F. The other, presented below, satisfies weak unlearning (see Definition 2) with better update time. A key component of both is the use of _anytime_ guarantees, which hold at every time-point in the stream, for any length of the stream.
**Anytime binary tree mechanism:** In the previous section, the depth of the initialized tree and the noise variance \(\sigma^{2}\) were both chosen as a function of the dataset size \(n\). However, the tree can be easily built in an online manner as in the prior work of Guha Thakurta and Smith (2013). For setting the noise variance: for target \(\rho\)-TV stability, we distribute the noise budget exponentially along the height of the tree; specifically, the leaf nodes contribute \(\rho/2\) TV stability, the nodes above them \(\rho/4\), and so on. In this way, the final tree satisfies \(\rho\)-TV stability for any value of \(n\).
**Anytime accuracy:** The other problem with changing data size is that the internal parameters of the algorithm (step size, in our case) may be set as a function of \(n\) for desirable accuracy guarantees. Fortunately, the two algorithms that we consider, VR-Frank-Wolfe and Dual Averaging, have known horizon-oblivious parameter settings (Orabona, 2019). Their JL counterparts, on the other hand, require setting the embedding dimension as a function of \(n\), and are thus not applicable unless we assume that the number of data points throughout the stream is \(\Theta(n)\).
### Weak Unlearning in Dynamic Streams
We first argue in what way the handling of insertions in Ullah et al. (2021) is deficient. The main reason is that they require insertions to also satisfy the unlearning criterion: the state of the system upon insertion must be indistinguishable from the state had the inserted point been present to begin with. However, this is overkill; adding new points simply serves to yield improved statistical accuracy. Furthermore, methods which allow adding new points abound, particularly in the stochastic optimization setting, sometimes known as _incremental_ methods. Importantly, in most cases, the insertion time of these methods is constant (in \(n\)). Hence, a natural question is whether, for dynamic streams, we can design unlearning methods in which we pay for update time only in proportion to the number of deletions. Our result shows that we can, albeit under the weak unlearning (see Definition 2) guarantee.
Specifically, our procedure requires _hiding_ the order in which data points are processed. Intuitively, an incremental method typically processes the newest data point last. This ordering is problematic for our unlearning procedure, since if some point is to be deleted, then we can no longer replace it with the last point, as we did before, since that would result in a different order. Our main result is as follows.
**Theorem 6**.: _In the dynamic streaming setting with \(R\) requests, using anytime incremental learning and unlearning algorithms, Algorithm 2 and 3, without permuting the dataset, the following are true._
1. _It satisfies weak unlearning at every time point in the stream._
2. _The accuracy of the output_ \(\widehat{w}_{i}\) _at time point_ \(i\)_, with corresponding dataset_ \(S_{i}\)_, is_ \[\mathbb{E}[L(\widehat{w}_{i};\mathcal{D})]-\min_{w}L(w;\mathcal{D})=\alpha( \rho,\left|S_{i}\right|;\mathcal{P})\]
3. _The number of times retraining is triggered, for_ \(V\) _unlearning requests, is at most_ \(\widetilde{O}(\rho V)\)._
Importantly, in the above guarantee, we only pay for the number of unlearning requests \(V\) rather than the number of requests \(R\).
## 7 Conclusion
In this paper, we proposed a general framework for designing unlearning algorithms for learning algorithms which can be viewed as performing adaptive query release on datasets. We applied this to yield improved guarantees for unlearning in various settings of stochastic convex optimization. All of our results (in the main text) are obtained by studying the class of prefix-sum queries, so a natural future direction is to extend it to more query classes, which could be useful for other problems.
## Acknowledgements
This research was supported, in part, by NSF BIGDATA award IIS-1838139 and NSF CAREER award IIS-1943251. |
2305.07233 | Dual Forgetting Operators in the Context of Weakest Sufficient and
Strongest Necessary Conditions | Forgetting is an important concept in knowledge representation and automated
reasoning with widespread applications across a number of disciplines. A
standard forgetting operator, characterized in [Lin and Reiter'94] in terms of
model-theoretic semantics and primarily focusing on the propositional case,
opened up a new research subarea. In this paper, a new operator called weak
forgetting, dual to standard forgetting, is introduced and both together are
shown to offer a new more uniform perspective on forgetting operators in
general. Both the weak and standard forgetting operators are characterized in
terms of entailment and inference, rather than a model theoretic semantics.
This naturally leads to a useful algorithmic perspective based on quantifier
elimination and the use of Ackermman's Lemma and its fixpoint generalization.
The strong formal relationship between standard forgetting and strongest
necessary conditions and weak forgetting and weakest sufficient conditions is
also characterized quite naturally through the entailment-based, inferential
perspective used. The framework used to characterize the dual forgetting
operators is also generalized to the first-order case and includes useful
algorithms for computing first-order forgetting operators in special cases.
Practical examples are also included to show the importance of both weak and
standard forgetting in modeling and representation. | Patrick Doherty, Andrzej Szalas | 2023-05-12T04:01:21Z | http://arxiv.org/abs/2305.07233v1 | # Dual Forgetting Operators in the Context of Weakest Sufficient and Strongest Necessary Conditions
###### Abstract
_Forgetting_ is an important concept in knowledge representation and automated reasoning with widespread applications across a number of disciplines. A standard forgetting operator, characterized in [26] in terms of model-theoretic semantics and primarily focusing on the propositional case, opened up a new research subarea. In this paper, a new operator called _weak forgetting_, dual to standard forgetting, is introduced and both together are shown to offer a new more uniform perspective on forgetting operators in general. Both the weak and standard forgetting operators are characterized in terms of entailment and inference, rather than a model theoretic semantics. This naturally leads to a useful algorithmic perspective based on quantifier elimination and the use of Ackermann's Lemma and its fixpoint generalization. The strong formal relationship between standard forgetting and strongest necessary conditions and weak forgetting and weakest sufficient conditions is also characterized quite naturally through the entailment-based, inferential perspective used. The framework used to characterize the dual forgetting operators is also generalized to the first-order case and includes useful algorithms for computing first-order forgetting operators in special cases. Practical examples are also included to show the importance of both weak and standard forgetting in modeling and representation.
keywords: Knowledge representation and Reasoning, Forgetting, Weakest Sufficient Conditions, Strongest Necessary Conditions, Quantifier Elimination
## 1 Introduction and Motivation
From a knowledge representation and automated reasoning perspective, _remembering_ is essentially what an agent system does when adding new logical statements to a knowledge or belief
base. Remembering, in this context, is a powerful way of modeling and is the basis for decision making in many agent systems. On the surface, remembering appears to be straightforward, simply add a new statement to a knowledge or belief base. But what if one wants to retain consistency or some other property of the knowledge or belief base upon assertion of additional statements? The knowledge or belief base would then need to be modified in various ways. Then the problem becomes more complex and leads to different subareas in Knowledge Representation, such as belief revision [3; 18; 29] or research with consistency preserving operators [3; 24].
This setting also leads naturally to the dual concept of _forgetting_. Given a knowledge or belief base, what does it mean to forget parts of it permanently, or temporarily for reasons of expedience? Here, on the surface also, forgetting appears to be straightforward, simply remove a statement from a knowledge or belief base. But, as in the case of remembering, there is a great deal of subtlety and choice concerning why and how one might remove a statement, or parts of statements from a knowledge or belief base.
The spectrum between explicit remembering and explicit forgetting and the operators that would be needed for specifying the different degrees in between, offer a complex set of research topics in Knowledge Representation. Lin and Reiter [26] opened up a new subarea of Knowledge Representation with the introduction of a (_standard_) forgetting operator applied to a knowledge or belief base. The forgetting operator is specified in terms of model-theoretic semantical criteria, as are its properties. The focus in their work is primarily propositional, but there is consideration of the first-order case. The basic question asked and answered is "what does it mean to forget certain concepts (propositional variables) in a knowledge base and how does this influence entailment of formulas in that knowledge base?".
This context is the starting point for this paper. Here the interest is in exploring whether there are other well-behaved forgetting operators in the spectrum discussed above and how they may relate to the original standard forgetting operator. Such operators should also be useful representationally and pragmatically.
Consider the following motivating example, where the need for an additional forgetting operator arises. Let \(lt\) and \(lp\) stand for "low temperature" and "low pressure", respectively. Assume we are modeling a physical system and want to maintain the property:
\[lt\lor lp. \tag{1}\]
Consider a situation when a temperature sensor associated with the system is broken and we receive no meaningful information about \(lt\). To adapt the model for this situation, we would then want to temporarily forget \(lt\). According to standard approaches of forgetting, this would result in the second-order formula [26]:
\[\exists\,lt\,(lt\lor lp). \tag{2}\]
Formula (2) is equivalent to _true_, so this would leave us empty-handed when reasoning about maintaining (1) with the associated changes in the system. No additional consequences of the change in the system can be derived. This happens since standard forgetting has the property of preserving entailment (see Proposition 10, point 2 in [26]), where one is interested in what a given theory entails, i.e., in the necessary conditions of the theory. On the other hand, in this situation one would expect that \(lp\) itself should still be maintained, as it is a _sufficient_ condition for (1). However, this weaker form of reasoning is not covered by standard forgetting.
Although this is a simple example, it allows us to target what this paper is about. We are interested in this weaker form of reasoning associated with forgetting and its relation to the
stronger standard form of reasoning with forgetting and how these two forms of forgetting can be used in various applications.
The original contributions of the paper include:
* complementing the standard forgetting operator with a new one, the _weak_ forgetting operator, that is dual to standard forgetting, useful in applications and, surprisingly, not explicitly considered in the literature so far;3 Footnote 3: In the paper, we call the new operator the _weak_ forgetting operator and the standard operator found in the literature, either the _standard_ or the _strong_ forgetting operator.
* specifying forgetting operators in a general, principled framework that is directly related to entailment and inference rather than through a semantic construction of model equivalence as is typically used in defining standard forgetting (see, e.g., [26, Definition 1] and other related work);
* the formal framework introduced shows the strong dual relationship between standard forgetting and strongest necessary conditions, and weak forgetting and weakest sufficient conditions. This relationship follows naturally from the entailment-based, inferential perspective used;
* a computational framework that leverages the inferential perspective and is used for computing the result of forgetting operators is presented. It is based on the use of Ackermann's Lemma and tautology preserving formula transformations. This framework is introduced for the propositional case of forgetting and then later extended to the first-order case;
* it is also shown that computing the propositional or first-order (or fixpoint) equivalent of the dual weak forgetting operator is typically more efficient than computing the standard forgetting operator, or computing both the weakest sufficient and strongest necessary conditions. In the light of complexity results on standard forgetting (see, e.g., [23]), even from that standpoint alone, it is beneficial to consider the dual weak forgetting operator as a separate operator when it is feasible to use representationally.
The rest of the paper is structured as follows. Section 2 presents preliminaries related to propositional logic and standard forgetting. Section 3 discusses forgetting operators in general and provides their second-order characterization. In Section 4, some examples are presented illustrating the approach, where the weak forgetting operator is shown to be very useful representationally. Section 5 considers the strong relationship between the dual forgetting operators and strongest necessary and weakest sufficient conditions. Section 6 shows how the formalism using dual forgetting operators can be extended to the first-order case, whereas the approach to standard forgetting and its use has been predominantly propositional in nature. In Section 7, we discuss relevant, related work. Finally, Section 8 concludes the paper with a summary and some final remarks.
## 2 Preliminaries
### Classical Propositional Logic
For the sake of simplicity, we initially present ideas starting with classical propositional logic, \(\mathcal{L}_{0}\), with truth constants \(\mathbb{T}\) (true) and \(\mathbb{F}\) (false), an enumerable set of propositional variables \(\mathcal{V}_{0}\)
and standard connectives \(\neg\), \(\wedge\), \(\vee\), \(\rightarrow\), \(\equiv\). We shall also use second-order quantifiers \(\exists p\), \(\forall p\), where \(p\in\mathcal{V}_{0}\). The meaning of quantifiers in the propositional context is:
\[\exists p(A(p)) \stackrel{{\mathrm{def}}}{{=}}A(p=\mathbb{F})\lor A (p=\mathbb{T}); \tag{3}\] \[\forall p(A(p)) \stackrel{{\mathrm{def}}}{{=}}A(p=\mathbb{F})\wedge A (p=\mathbb{T}), \tag{4}\]
where \(A(p=expr)\) denotes a formula obtained from \(A\) by substituting all occurrences of \(p\) in \(A\) by expression \(expr\).
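To make the meaning of (3)-(4) concrete, the following is a minimal executable sketch (an illustration added here, assuming the sympy library; it is not part of the formal development) that eliminates a propositional quantifier by the substitutions above:

```python
from sympy import symbols
from sympy.logic.boolalg import Or, And

def exists(p, A):
    # ∃p(A(p)) ≡ A(p=F) ∨ A(p=T), cf. (3)
    return Or(A.subs(p, False), A.subs(p, True))

def forall(p, A):
    # ∀p(A(p)) ≡ A(p=F) ∧ A(p=T), cf. (4)
    return And(A.subs(p, False), A.subs(p, True))

lt, lp = symbols('lt lp')
print(exists(lt, lt | lp))   # True: cf. forgetting lt in (1), formula (2)
print(forall(lt, lt | lp))   # lp: the sufficient condition for (1) discussed above
```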
By a _theory_ we mean a finite set of formulas. A theory is identified with a conjunction of formulas it contains. We often write \(\bar{p}\) to denote a tuple of propositional variables, and \(Th(\bar{p})\) to indicate that theory \(Th\) is formed over a vocabulary consisting of variables in \(\bar{p}\). Similarly, we often write \(A(\bar{p})\) to indicate that formula \(A\) is formed over a vocabulary consisting of \(\bar{p}\).
We say that a formula \(A\) is _stronger (wrt \(\rightarrow\))_ than a formula \(B\), if \(A\to B\) is a tautology (\(\models A\to B\)). In such a case, we also say that \(B\) is _weaker (wrt \(\rightarrow\))_ than \(A\), \(A\) is a _sufficient condition_ for \(B\), and \(B\) is a _necessary condition_ for \(A\).
A formula \(A(p)\) is _positive_ wrt \(p\in\mathcal{V}_{0}\), if all occurrences of \(p\) in \(A\) are in the scope of an even number of negations.4\(A(p)\) is _negative_ wrt \(p\in\mathcal{V}_{0}\), if all occurrences of \(p\) in \(A\) are in the scope of an odd number of negations. By a _literal_, we mean a propositional variable or its negation.
Footnote 4: As usual, we consider \(B\to C\) to stand for \(\neg B\lor C\), and \(B\equiv C\) to stand for \((\neg B\lor C)\wedge(B\lor\neg C)\).
### Standard Forgetting
Standard forgetting has been introduced in [26] using a model-theoretic framework. The intuition behind this operator is to forget a part of the vocabulary of a theory and remember as much as possible using the remaining vocabulary, where logical consequences are concerned. Theorem 8 and Proposition 10 in [26] provide, among others, the following important properties of forgetting, where \(\mathit{forget}(Th(\bar{p},\bar{q}),\bar{p})\) denotes _forgetting_ about \(\bar{p}\) in \(Th(\bar{p},\bar{q})\).
**Theorem 2.1** (Lin, Reiter).: _Let \(\bar{p}\) and \(\bar{q}\) be disjoint tuples of propositional variables, \(A\) be a formula not containing occurrences of variables from \(\bar{p}\) and \(Th(\bar{p},\bar{q})\) be a theory. Then:_
\[-\mathit{forget}(Th(\bar{p},\bar{q}),\bar{p})\equiv\exists\bar{p} (Th(\bar{p},\bar{q})); \tag{5}\] \[-Th(\bar{p},\bar{q})\models A\ \mathit{iff}\ \mathit{forget}(Th(\bar{p},\bar{q}), \bar{p})\models A. \tag{6}\]
Notice that (5) can serve as an alternative definition of standard forgetting. We therefore omit discussion and use of the model-theoretic definition used in [26]. One can also observe that due to the deduction theorem for classical propositional logic, (6) can be expressed as:
\[\models Th(\bar{p},\bar{q})\to A\ \mathrm{iff}\ \models\mathit{ forget}(Th(\bar{p},\bar{q}),\bar{p})\to A. \tag{7}\]
### An Ackermann-Like Approach to Second-Order Quantifier Elimination
Theorem 2.1(5) indicates that computing forgetting using the Lin&Reiter-like forgetting operator is equivalent to eliminating second-order quantifiers \(\exists\bar{p}(\ldots)\). While definitions (3)-(4) allow one to eliminate second-order quantifiers, they lead to an exponential growth of the resulting formula wrt the number of quantifiers. Therefore, as a computationally more appropriate
tool, we will use the following lemma of Ackermann, already proved in [2] (see also [11, 19]). Its use typically results in much shorter formulas after quantifier elimination in many cases.
**Lemma 2.2** (Propositional Ackermann Lemma).: _Let \(A\) be a propositional formula without occurrences of propositional variable \(p\), and \(B(p)\) be a propositional formula on a vocabulary containing \(p\):5_
Footnote 5: For the sake of clarity we assume that \(B\) contains \(p\), but the lemma is trivially true also when this is not the case.
\[-\text{ if }B(p)\text{ is positive wrt }p\text{ then: }\exists p((p\to A)\wedge B(p))\;\equiv\;B(p=A); \tag{8}\] \[-\text{ if }B(p)\text{ is negative wrt }p\text{ then: }\exists p((A\to p)\wedge B(p))\;\equiv\;B(p=A). \tag{9}\]
\(\Box\)
The lemma remains true when \(p\) is replaced by a second-order variable, say \(P\), representing propositional formulas. For example, (8) can be formulated as:
\[-\text{ if }B(P)\text{ is positive wrt }P\text{ then: }\exists P((P\to A)\wedge B(P))\;\equiv\;B(P=A).\]
Lemma (2.2) serves as a blueprint for specifying an algorithm to eliminate 2nd-order quantifiers in many cases. It shows that if one can syntactically transform a formula \(F(\bar{p})\), where \(p\in\bar{p}\), into an equivalent formula with the syntactic structure on the lhs of equivalences (8) or (9), respectively, then one can eliminate \(\exists p\) by substitution of \(A\) for \(p\) in \(B\), resulting in the equivalent \(B(p=A)\). This syntactic technique can be iterated for all 2nd-order quantifiers in \(F(\bar{p})\), resulting in a logically equivalent propositional formula.
To transform a formula into a form required in (8) or (9), one can use the Dls algorithm of [11]. For propositional formulas, one of the forms on the left-hand sides of equivalences (8) or (9) can always be obtained, thus guaranteeing removal of all 2nd-order quantifiers in an arbitrary propositional theory.
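As a small worked illustration of how (8) is applied: to eliminate \(\exists p\) from \(\exists p\big{(}(p\to(q\wedge s))\wedge(r\lor p)\big{)}\), take \(A\stackrel{\mathrm{def}}{=}q\wedge s\) and \(B(p)\stackrel{\mathrm{def}}{=}r\lor p\); since \(B(p)\) is positive wrt \(p\), equivalence (8) yields \(B(p=A)\), i.e., \(r\lor(q\wedge s)\). Expanding the quantifier directly via (3) gives the same result, but in general the Ackermann-based route avoids the exponential blow-up caused by (3)-(4).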
Figure 1 illustrates the idea behind Ackermann-like lemmas, where the terms "grows" and "shrinks", illustrated by dash arrows within ovals, refer to the standard ordering on truth values, \(\mathbb{F}<\mathbb{T}\), compatible with the semantics of implication, which can be given by:6
Footnote 6: More formally, we deal here with the construction of a partial order among formulas for the Lindenbaum and Tarski algebra [21]. The Lindenbaum–Tarski algebra of a theory \(T\) consists of the equivalence classes of formulas of the theory, where two formulas are equivalent when the theory \(T\) proves that each implies the other. The partial order in question is then defined as: \(\|A\|\leq_{T}\|B\|\) iff \(T\vdash A\to B\), where \(\|C\|\) denotes an equivalence class of a formula \(C\).
\[(\text{the truth value of }p\to q)\stackrel{{\text{def}}}{{=}}( \text{'the truth value of }p\text{'}\leq\text{'the truth value of }q\text{'}).\]
Given the constraint \(p\to A\) in (8) (or, respectively, \(A\to p\) in (9)), the greatest value of \(B(p)\) is obtained when \(p\) takes its greatest (respectively, smallest) value, i.e., the value given by \(A\). Of course, in such cases, the existential quantifier \(\exists p(\ldots)\) obtains the greatest value of \(B(p)\), achieved for \(p=A\). In this case, each of the constraints \(p\to A\) (in (8)) as well as \(A\to p\) (in (9)) evaluate to \(\mathbb{T}\), and so disappear from the result.
## 3 The Entailment-Based Inferential Perspective on Forgetting
### Some Intuitions
Let \(Th(\bar{p},\bar{q})\) be a propositional theory over a vocabulary consisting of \(\bar{p},\bar{q}\).7 When forgetting \(\bar{p}\) in the theory \(Th(\bar{p},\bar{q})\), one can delineate two alternative views, one existing and one new, with both resulting in a theory expressed in the vocabulary containing \(\bar{q}\) only:
Footnote 7: When we refer to tuples of variables as arguments, like in \(Th(\bar{p},\bar{q})\), we always assume that \(\bar{p}\) and \(\bar{q}\) are disjoint.
* _strong (standard) forgetting_\(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\): a theory that preserves the entailment of necessary conditions over \(\bar{q}\);
* _weak forgetting_\(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\): a theory that preserves the entailment of sufficient conditions over \(\bar{q}\).
The rationale behind the operator \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\) is that one wants to remember a theory on vocabulary \(\bar{q}\), whose consequences are also consequences of the original theory. That is, for any formula \(A\) on a vocabulary disjoint with \(\bar{p}\),
\[\models Th(\bar{p},\bar{q})\to A\ \mathrm{iff}\ \models F^{NC}(Th(\bar{p},\bar{q}); \bar{p})\to A. \tag{10}\]
In this case, a formula \(A\) is a consequence of the result of forgetting, \(F^{NC}()\), iff it is a consequence of the original theory \(Th(\bar{p},\bar{q})\). Notice the similarity of (7) and (10) indicating that _forget_() and \(F^{NC}()\) act in the same manner. However, while in the approach of [26], the property (7) is a derived theorem, in our approach it is a fundamental starting point.
Figure 1: An illustration of intuitions behind Ackermann-like lemmas.
The rationale behind the weak forgetting operator \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) is that one wants to remember a theory on vocabulary \(\bar{q}\) such that a formula implies \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) iff it implies the original theory. That is, for any formula \(A\) on a vocabulary disjoint with \(\bar{p}\),
\[\models A\to Th(\bar{p},\bar{q})\ \text{iff}\ \models A\to F^{SC}(Th(\bar{p},\bar{q});\bar{p}). \tag{11}\]
That is, a formula \(A\) implies the result of weak forgetting, \(F^{SC}()\), iff it implies the original theory \(Th(\bar{p},\bar{q})\).
### The Operator \(F^{Nc}()\)
The requirement (10) that \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\) preserves the entailment of necessary conditions over a vocabulary disjoint with \(\bar{p}\) can be expressed as:
\[\forall P\Big{(}\forall\bar{p}(Th(\bar{p},\bar{q})\to P)\equiv\big{(}F^{NC}(Th(\bar{p},\bar{q});\bar{p})\to P\big{)}\Big{)}, \tag{12}\]
where \(P\) is a second-order variable representing an arbitrary formula over a vocabulary disjoint with \(\bar{p}\).
Using (12) as a basis, let us derive Theorem 3.1, which characterizes the operator \(F^{NC}()\). By a standard propositional tautology, we represent the equivalence \(\equiv\) in (12) as the conjunction of the left-to-right (\(\rightarrow\)) and right-to-left (\(\leftarrow\)) implications.
#### 3.2.1 The Analysis of the Left-to-Right Direction
Let us start with the analysis of the left-to-right direction (\(\rightarrow\)) of (12). Since variables \(\bar{p}\) occur only in \(Th(\bar{p},\bar{q})\), the implication is equivalent to:
\[\forall P\Big{(}(\exists\bar{p}(Th(\bar{p},\bar{q}))\to P) \rightarrow\big{(}F^{NC}(Th(\bar{p},\bar{q});\bar{p})\to P\big{)} \Big{)}, \tag{13}\]
In order to apply Lemma 2.2, we have to transform (13) into an equivalent form:
\[\neg\exists P\Big{(}(\exists\bar{p}(Th(\bar{p},\bar{q}))\to P )\wedge F^{NC}(Th(\bar{p},\bar{q});\bar{p})\wedge\neg P\Big{)}. \tag{14}\]
We eliminate the second-order quantifier \(\exists P\) from (14) using Lemma 2.2(9). As a result we obtain the following formula equivalent to (14):
\[\neg\Big{(}F^{NC}(Th(\bar{p},\bar{q});\bar{p})\wedge\neg\exists \bar{p}(Th(\bar{p},\bar{q}))\Big{)}, \tag{15}\]
which in turn is equivalent to:
\[F^{NC}(Th(\bar{p},\bar{q});\bar{p})\rightarrow\exists\bar{p}(Th( \bar{p},\bar{q})). \tag{16}\]
#### 3.2.2 The Analysis of the Right-to-Left Direction
For the right-to-left direction (\(\leftarrow\)) of (12) we proceed as follows. Since variables \(\bar{p}\) occur only in \(Th(\bar{p},\bar{q})\), the implication (\(\leftarrow\)) is equivalent to:
\[\forall P\Big{(}\big{(}F^{NC}(Th(\bar{p},\bar{q});\bar{p})\to P \big{)}\rightarrow\big{(}\exists\bar{p}(Th(\bar{p},\bar{q}))\to P \big{)}\Big{)}, \tag{17}\]
which is equivalent to:
\[\neg\exists P\Big{(}\big{(}F^{NC}(Th(\bar{p},\bar{q});\bar{p}) \to P\big{)}\wedge\exists\bar{p}(Th(\bar{p},\bar{q}))\wedge\neg P \Big{)}. \tag{18}\]
As before, we eliminate the second-order quantifier \(\exists P\) from (18) using Lemma 2.2(9). As a result, we obtain the following formula equivalent to (18):
\[\neg\Big{(}\exists\bar{p}(Th(\bar{p},\bar{q}))\wedge\neg F^{NC}(Th (\bar{p},\bar{q});\bar{p})\Big{)}, \tag{19}\]
which in turn is equivalent to:
\[\exists\bar{p}(Th(\bar{p},\bar{q}))\to F^{NC}(Th(\bar{p},\bar{q}) ;\bar{p}). \tag{20}\]
#### 3.2.3 A Characterization of \(F^{nc}()\)
Using (16), (20) and Theorem 2.1 (5), we have the following characterization of \(F^{NC}()\).
**Theorem 3.1**.: _For arbitrary tuples of propositional variables \(\bar{p},\bar{q}\) and \(Th(\bar{p},\bar{q})\),_
1. \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\equiv\exists\bar{p}(Th(\bar{p},\bar{q}))\)_._
2. \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\equiv\text{forget}(Th(\bar{p},\bar{q}); \bar{p})\)_._
3. \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\) _is the strongest (wrt \(\rightarrow\)) formula over vocabulary_ \(\bar{q}\)_, satisfying (12)._ \(\square\)
### The Operator \(F^{sc}()\)
By analogy to \(F^{NC}()\), the requirement (11) that \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) preserves entailment by sufficient conditions can be expressed as:
\[\forall P\Big{(}\big{(}P\to F^{SC}(Th(\bar{p},\bar{q});\bar{p})\big{)} \equiv\forall\bar{p}(P\to Th(\bar{p},\bar{q}))\Big{)}, \tag{21}\]
where \(P\) is again a second-order variable representing an arbitrary formula over a vocabulary disjoint with \(\bar{p}\). As in Section 3.2, we represent equivalence \(\equiv\) in (21) as the conjunction of two implications (\(\rightarrow\)) and (\(\leftarrow\)).
#### 3.3.1 The Analysis of the Left-to-Right Direction
Let us first consider implication (\(\rightarrow\)). Since \(P\) is \(\bar{p}\) free, the implication is equivalent to:
\[\forall P\Big{(}\big{(}P\to F^{SC}(Th(\bar{p},\bar{q});\bar{p})\big{)} \rightarrow\big{(}P\rightarrow\forall\bar{p}(Th(\bar{p},\bar{q}))\big{)} \Big{)}. \tag{22}\]
To apply Lemma 2.2, we transform (22) to an equivalent form:
\[\neg\exists P\Big{(}\big{(}P\to F^{SC}(Th(\bar{p},\bar{q});\bar{p}) \big{)}\wedge P\wedge\neg\forall\bar{p}(Th(\bar{p},\bar{q}))\Big{)}. \tag{23}\]
We eliminate the second-order quantifier \(\exists P\) from (23), using Lemma 2.2(8). As a result, we obtain the following formula equivalent to (23):
\[\neg\Big{(}F^{SC}(Th(\bar{p},\bar{q});\bar{p})\wedge\neg\forall\bar{p}(Th(\bar{p},\bar{q}))\Big{)}, \tag{24}\]
which in turn is equivalent to:
\[F^{SC}(Th(\bar{p},\bar{q});\bar{p})\rightarrow\forall\bar{p}(Th(\bar{p},\bar{q})). \tag{25}\]
#### 3.3.2 The Analysis of the Right-to-Left Direction
The implication (\(\leftarrow\)) of (21) is equivalent to:
\[\forall P\Big{(}\big{(}P\rightarrow\forall\bar{p}(Th(\bar{p},\bar{q})) \big{)}\rightarrow\big{(}P\to F^{SC}(Th(\bar{p},\bar{q});\bar{p})\big{)} \Big{)}. \tag{26}\]
To apply Lemma 2.2, we transform (26) to an equivalent form:
\[\neg\exists P\Big{(}\big{(}P\rightarrow\forall\bar{p}(Th(\bar{p},\bar{q})) \big{)}\wedge P\wedge\neg F^{SC}(Th(\bar{p},\bar{q});\bar{p})\Big{)}. \tag{27}\]
We eliminate the second-order quantifier \(\exists P\) from (27), using Lemma 2.2(8). As a result, we obtain the following formula equivalent to (27):
\[\neg\Big{(}\forall\bar{p}(Th(\bar{p},\bar{q}))\wedge\neg F^{SC}(Th(\bar{p}, \bar{q});\bar{p})\Big{)}. \tag{28}\]
which in turn is equivalent to:
\[\forall\bar{p}(Th(\bar{p},\bar{q}))\to F^{SC}(Th(\bar{p},\bar{q});\bar{p})). \tag{29}\]
#### 3.3.3 A Characterization of \(F^{sc}()\)
Combining (25) and (29), we have the following characterization of \(F^{SC}()\).
**Theorem 3.2**.: _For arbitrary tuples of propositional variables \(\bar{p},\bar{q}\) and \(Th(\bar{p},\bar{q})\),_
1. \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\equiv\forall\bar{p}\,(Th(\bar{p},\bar{q}))\)_._
2. \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) _is the weakest (wrt_ \(\rightarrow\)_) formula over vocabulary_ \(\bar{q}\)_, satisfying (_21_)._ \(\Box\)__
### Combining \(F^{nc}()\) and \(F^{sc}()\) in a dual or complementary perspective
The net result of this analysis is that standard (strong) forgetting and weak forgetting are complementary in an intuitive and formally concise manner.
Weak forgetting, \(F^{SC}()\), allows one to _remember_ more than using solely standard (strong) forgetting, \(F^{NC}()\), relative to a specific theory. This can be shown to be very useful in applications of forgetting operators as exhibited in Section 4.
In the original example (1), the weak forgetting operator \(F^{SC}()\) can be applied naturally:
\[F^{SC}((lt\lor lp);lt)\equiv\forall\,lt\,(lt\lor lp)\equiv lp. \tag{30}\]
As suggested in Section 1, \(lt\) is forgotten, while retaining additional information \(lp\) about the physical system.
It is interesting to observe that \(F^{NC}()\) and \(F^{SC}()\) partition the theory \(Th(\bar{p},\bar{q})\) into three nicely related classes of formulas (see Figure 2):
* the innermost oval, \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\): formulas in vocabulary \(\bar{q}\) that imply the original theory (sufficient conditions of the theory);
* the central oval: the original theory \(Th(\bar{p},\bar{q})\);
* the outermost oval, \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\): formulas in vocabulary \(\bar{q}\) implied by the original theory (necessary conditions of the theory).
From another perspective,
* \(F^{NC}(Th(\bar{p},\bar{q});\bar{p})\) is "the best" upper approximation of \(Th(\bar{p},\bar{q})\) using the restricted vocabulary (as shown in point 3 of Theorem 3.1);
* \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) is "the best" lower approximation of \(Th(\bar{p},\bar{q})\) using the restricted vocabulary (as shown in point 2 of Theorem 3.2).
Figure 2: The relationships between forgetting operators and the original theory in terms of entailment: the ovals inclusion indicates that the formula in an inner oval entails the formula in an outer one.
### The Computational Perspective for Weak Forgetting
The computational overhead involved in computing weak forgetting \(F^{SC}()\) is, in the worst case, the same as for computing standard forgetting \(F^{NC}()\), which is exponential in the size of the input formula, yet there are some pragmatic distinctions. Transformations of formulas made for \(F^{NC}()\) can often be reused in computing \(F^{SC}()\). Moreover, frequently there may be a more substantial complexity gain. Notice that in some applications, theories are typically presented as sets of formulas, \(Th(\bar{p},\bar{q})=\{A_{1}(\bar{p},\bar{q}),\ldots,A_{k}(\bar{p},\bar{q})\}\), interpreted as the conjunction \(Th(\bar{p},\bar{q})\equiv A_{1}(\bar{p},\bar{q})\wedge\ldots\wedge A_{k}(\bar{p},\bar{q})\). By Theorem 3.2(1),
\[F^{SC}(Th(\bar{p},\bar{q});\bar{p})\equiv\forall\bar{p}(Th(\bar{p},\bar{q}))\equiv\forall\bar{p}(A_{1}(\bar{p},\bar{q})\wedge\ldots\wedge A_{k}(\bar{p},\bar{q}))\equiv\] \[\forall\bar{p}(A_{1}(\bar{p},\bar{q}))\wedge\ldots\wedge\forall\bar{p}(A_{k}(\bar{p},\bar{q})).\]
This partitioning allows one to eliminate second-order quantifiers \(\forall\bar{p}\) in each conjunct separately. Though this has to be repeated \(k\) times, the formulas involved are smaller, which in typical cases results in a significant complexity reduction.
Notice also that frequently in applications, the formulas \(A_{1}(\bar{p},\bar{q}),\ldots\), \(A_{k}(\bar{p},\bar{q})\) are syntactically structured in the form of rules/implications (or sequents) of the form:
\[\big{(}\ell_{1}\wedge\ldots\wedge\ell_{r}\big{)}\to\big{(}\ell_{r+1}\lor\ldots \lor\ell_{s}\big{)}, \tag{31}\]
where \(\ell_{i}\) (\(i=1,\ldots,s\)) are literals. Of course, (31) is equivalent to \(\neg\ell_{1}\vee\ldots\vee\neg\ell_{r}\vee\ell_{r+1}\vee\ldots\vee\ell_{s}\). Eliminating the second-order quantifiers \(\forall\bar{p}\) from:
\[\forall\bar{p}(\neg\ell_{1}\vee\ldots\vee\neg\ell_{r}\vee\ell_{r+1}\vee\ldots \vee\ell_{s}) \tag{32}\]
is then straightforward, as shown in Algorithm 1.
**Algorithm 1**. Data: a formula \(A\) of the form (32).
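A small executable sketch of this clause-wise elimination (an added Python illustration, not the paper's Algorithm 1 verbatim) may be useful: literals over forgotten variables are simply dropped, unless a complementary pair of such literals makes the clause a tautology.

```python
def forall_eliminate_clause(literals, forgotten):
    """Eliminate ∀p, for every p in `forgotten`, from a clause (a disjunction of
    literals).  A literal is a pair (variable, sign), sign True meaning the
    positive literal.  Returns True, False, or the list of remaining literals."""
    kept, seen_sign = [], {}
    for var, sign in literals:
        if var in forgotten:
            # Complementary literals over a forgotten variable: the clause is a
            # tautology, so the result of forgetting is T.
            if var in seen_sign and seen_sign[var] != sign:
                return True
            seen_sign[var] = sign
            # Otherwise ∀p(p ∨ C) ≡ C (dually for ¬p): drop the literal.
        else:
            kept.append((var, sign))
    return kept if kept else False

# ∀mt ∀ht (¬mt ∨ lp ∨ mp)  -->  lp ∨ mp   (rule (34) of Example 4.1 below)
print(forall_eliminate_clause([('mt', False), ('lp', True), ('mp', True)],
                              {'mt', 'ht'}))
```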
## 4 Some Examples
The first example shows that standard forgetting can be too strong in scenarios where formulas (rules) depend on some uncontrolled parameters, that is, where propositions whose truth values depend on the parameters' values occur only in the rules' premises. In such cases, when a parameter's value becomes unknown, e.g., due to a failure of a measuring device, one would like to forget the corresponding proposition as it becomes meaningless. In standard forgetting, all rules involving such propositions disappear, reducing to \(\mathbb{T}\).
**Example 4.1**.: _Consider a toy expert system for maintaining a proper balance between temperature and pressure in a production process. While the pressure can be controlled, we assume that outside temperature is not controllable. Let:_
* \(mt,ht\) _stand for "medium" and "high temperature";_
* \(lp,mp\) _stand for "maintain low" and "maintain medium pressure"._
_The considered theory, \(Th(mt,ht,lp,mp)\), consists of the following formulas:_
\[mt\to lp\lor mp; \tag{34}\] \[ht\to lp. \tag{35}\]
_Assume the temperature sensor is broken and one wants to forget related propositions \(mt,ht\). Using Theorems 3.1(1) and Theorem 3.2(1), simple transformations of formulas, and Lemma 2.2, we obtain that:_
* \(F^{NC}(Th(mt,ht,lp,mp);mt,ht)\) _is equivalent to_ \(\mathbb{T}\)_._
* \(F^{SC}(Th(mt,ht,lp,mp);mt,ht)\) _is equivalent to_ \((lp\lor mp)\wedge lp\)_, being equivalent to_ \(lp\)_._
_Notice that \(F^{SC}()\) is much more informative: when the temperature is unknown, for safety reasons it is better to maintain low pressure (\(lp\)) as a sufficient condition for satisfying rules (34), (35). \(\square\)_
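These computations can be reproduced mechanically with the naive expansion (3)-(4). The following sketch (an added illustration assuming the sympy library, rather than the Ackermann-based transformations advocated above) checks both claims:

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Implies, simplify_logic

def exists(vars_, X):
    for p in vars_:
        X = Or(X.subs(p, False), X.subs(p, True))
    return X if X in (True, False) else simplify_logic(X)

def forall(vars_, X):
    for p in vars_:
        X = And(X.subs(p, False), X.subs(p, True))
    return X if X in (True, False) else simplify_logic(X)

mt, ht, lp, mp = symbols('mt ht lp mp')
Th = And(Implies(mt, lp | mp), Implies(ht, lp))   # rules (34) and (35)
print(exists([mt, ht], Th))   # True: F^NC(Th; mt, ht)
print(forall([mt, ht], Th))   # lp:   F^SC(Th; mt, ht)
```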
The following example illustrates the use of forgetting when there are restrictions on accessing sensitive, personal data directly from a specific person or belief base. One gets around this by leveraging implicit information about that person through an inferential process that involves the use of forgetting.
**Example 4.2**.: _Assume Eve works in human resources in a company where Joe is employed. Eve faces some cultural barriers, or legislative restrictions, in asking Joe about his potential addiction to alcohol or drugs. In order to find the answer, she may use some extra knowledge/beliefs in addition to some neutral questions or observations in probing for an answer. To illustrate the approach consider a belief, expressed by the following simple formula, where \(fdd\) stands for "frequently denies driving", \(ld\) stands for "likes driving", and pa stands for "potentially addicted":_
\[fdd\rightarrow(\neg ld\lor pa). \tag{36}\]
_When using (36) to detect Joe's potential addiction to alcohol or drugs, Eve may wonder when this formula implies pa, without referring to pa itself. That is, she might be interested in the theory:_
\[Th(pa,ld,fdd)\stackrel{\mathrm{def}}{\equiv}\underbrace{(fdd\rightarrow(\neg ld\lor pa))}_{(36)}\ \ldots\]
of specific costs for the different centers and courts. Consequently, the consultant believes that building an additional indoor squash court will not require a loan (\(isq\rightarrow\neg\)loan), while for a gym center a loan will be needed (\(gc\to loan\)). Summing up, the external consultant's beliefs can be expressed as:_
\[\big{(}(tc\lor sp)\rightarrow(isq\wedge gc)\big{)}\wedge(isq\rightarrow\neg loan )\wedge(gc\to loan). \tag{42}\]
_Jack's beliefs, expressed by (41), merged with the consultant's beliefs, which are expressed by (42), are jointly inconsistent on 'loan'. The investor decides to forget about 'loan'. According to consistency restoring strategies considered in [24], in order to derive meaningful conclusions, Jack can forget about 'loan' in selected parts of the merged belief bases. Assume that he considers forgetting 'loan' in (41) or (42):_
\[-F^{NC}(\{(41)\};loan)\equiv tc\wedge sp, \tag{43}\] \[F^{SC}(\{(41)\};loan)\equiv tc\wedge sp\wedge(bdg\lor inv);\] (44) \[-F^{NC}(\{(42)\};loan)\equiv\big{(}(tc\lor sp)\rightarrow(isq \wedge gc)\big{)}\wedge(isq\rightarrow\neg gc),\] (45) \[F^{SC}(\{(42)\};loan)\equiv\big{(}(tc\lor sp)\rightarrow(isq \wedge gc)\big{)}\wedge\neg isq\wedge\neg gc. \tag{46}\]
_Notice that weak forgetting \(F^{SC}()\) adds substantial additional informative content in comparison to standard forgetting \(F^{NC}()\), by providing sufficient conditions for the respective formulas:_
* (44) _extends (43) by information that to satisfy the formula (41) after forgetting about 'loan', it suffices that additionally (\(bdg\lor inv\)) holds;_
* (46) _extends (45) by information that to satisfy the formula (42) after forgetting about 'loan', it suffices that additionally \(\neg isq\) and \(\neg gc\) hold. _
Although the examples considered may seem somewhat contrived due to their brevity, they do target deeper generic applications in which forgetting operators can play a substantial inferential role. Additionally, the pragmatic use and importance of the weak forgetting operator is clearly shown as a useful inferential complement to the standard forgetting operator.
## 5 Relationship with Strongest Necessary and Weakest Sufficient Conditions
Strongest necessary and weakest sufficient conditions have been introduced in [25]. Let us recall the definitions with minor adjustments.
By _a necessary condition of a formula \(A(\bar{p},\bar{q})\) on propositional variables \(\bar{q}\) under theory \(Th(\bar{p},\bar{q})\)_ we shall understand any formula \(B(\bar{q})\) containing only symbols in \(\bar{q}\) such that \(Th(\bar{p},\bar{q})\models A(\bar{p},\bar{q})\to B(\bar{q})\). Such a formula \(B(\bar{q})\) is the _strongest necessary condition_, denoted by \(snc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\) if, additionally, for any necessary condition \(C(\bar{q})\) of \(A(\bar{p},\bar{q})\) on \(\bar{q}\) under \(Th(\bar{p},\bar{q})\), we have that \(Th(\bar{p},\bar{q})\models B(\bar{q})\to C(\bar{q})\).
By _a sufficient condition of a formula \(A(\bar{p},\bar{q})\) on propositional variables \(\bar{q}\) under theory \(Th(\bar{p},\bar{q})\)_ we shall understand any formula \(B(\bar{q})\) containing only symbols in \(\bar{q}\) such that \(Th(\bar{p},\bar{q})\models B(\bar{q})\to A(\bar{p},\bar{q})\). It is the _weakest sufficient condition_, denoted by \(wsc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\) if, additionally, for any sufficient condition \(C(\bar{q})\) of \(A(\bar{p},\bar{q})\) on \(\bar{q}\) under \(Th(\bar{p},\bar{q})\), we have that \(Th(\bar{p},\bar{q})\models C(\bar{q})\to B(\bar{q})\).
The following second-order characterization of \(snc()\) and \(wsc()\) has been provided in [13].
**Lemma 5.1**.: _For any \(A(\bar{p},\bar{q})\), \(\bar{q}\) and \(Th(\bar{p},\bar{q})\):_
\[snc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\equiv\exists \bar{p}(Th(\bar{p},\bar{q})\wedge A(\bar{p},\bar{q})); \tag{47}\] \[wsc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\equiv\forall \bar{p}(Th(\bar{p},\bar{q})\to A(\bar{p},\bar{q})). \tag{48}\]
By use of Theorem 3.1(1), Theorem 3.2(1) and Lemma 5.1, the following corollary shows that the dual forgetting operators and strongest necessary and weakest sufficient conditions are mutually definable.
**Corollary 5.2**.: \[snc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\!\equiv\!F^{ NC}(Th(\bar{p},\bar{q})\wedge A(\bar{p},\bar{q});\bar{p});\] (49) \[wsc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\!\equiv\!F^{ SC}(Th(\bar{p},\bar{q})\!\to\!A(\bar{p},\bar{q});\bar{p});\] (50) \[F^{NC}(Th(\bar{p},\bar{q});\bar{p})\equiv snc(Th(\bar{p},\bar{q} );\mathbb{T};\bar{q});\] (51) \[F^{SC}(Th(\bar{p},\bar{q});\bar{p})\equiv wsc(\neg Th(\bar{p}, \bar{q});\mathbb{F};\bar{q}).\] (52)
Despite the mutual definability established in Corollary 5.2, the forgetting operators typically serve purposes different from those of strongest necessary and weakest sufficient conditions. The operators \(snc()\) and \(wsc()\) focus on queries \(A(\bar{p},\bar{q})\) to, or observations complementing, a theory \(Th()\) when the underlying language is reduced, whereas in applying \(F^{NC}()\) and \(F^{SC}()\) one looks for the "best" approximations of the theory itself in a reduced sublanguage. For example,
* the weakest sufficient condition \(wsc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\) can be used to _explain the query (observation)_, \(A(\bar{p},\bar{q})\), given the theory \(Th(\bar{p},\bar{q})\), in a sublanguage consisting of \(\bar{q}\);
* on the other hand, the associated weak forgetting operator \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\), can be used to _explain the theory_\(Th(\bar{p},\bar{q})\) in the same sublanguage, consisting of \(\bar{q}\).
As an added distinction, computing \(F^{SC}(Th(\bar{p},\bar{q});\bar{p})\) is usually more efficient than computing \(wsc(Th(\bar{p},\bar{q});A(\bar{p},\bar{q});\bar{q})\) (see also Sections 3.5 and 6.4).
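To see the four notions side by side on a small concrete case, the following sketch (an added illustration assuming the sympy library; the theory and query are chosen here purely for demonstration) computes them by naive quantifier expansion for \(Th\stackrel{\mathrm{def}}{=}(p\lor q)\wedge(\neg p\lor s)\), forgetting \(p\) and taking \(A\stackrel{\mathrm{def}}{=}s\):

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Implies, simplify_logic

def exists(p, X): return Or(X.subs(p, False), X.subs(p, True))
def forall(p, X): return And(X.subs(p, False), X.subs(p, True))
def simp(X): return X if X in (True, False) else simplify_logic(X)

p, q, s = symbols('p q s')
Th = (p | q) & (~p | s)
A = s
print(simp(exists(p, Th)))              # F^NC(Th; p): equivalent to q ∨ s
print(simp(forall(p, Th)))              # F^SC(Th; p): equivalent to q ∧ s
print(simp(exists(p, Th & A)))          # snc(Th; A; {q,s}): equivalent to s
print(simp(forall(p, Implies(Th, A))))  # wsc(Th; A; {q,s}): equivalent to q → s
```

In line with the discussion above, \(F^{SC}(Th;p)\) approximates the theory itself from below, while \(wsc(Th;A;\{q,s\})\) explains when the observation \(A\) is guaranteed by the theory.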
## 6 A First-Order Extension
Generalization of forgetting operators to the first-order case is an important topic of research and is also essential for many applications [9; 14; 26]. Although some limited work has been done on standard forgetting in this setting, there are no general, generic approaches in this respect. In approaching this topic, classical first- and second-order logics [7; 19] will be used, assuming the following languages:
* classical first-order logic, \(\mathcal{L}_{1}\), extending propositional logic \(\mathcal{L}_{0}\) with the set \(\mathcal{V}_{1}\) of _individual variables_ representing domain objects, \(\mathcal{R}\) of relation symbols, and first-order quantifiers \(\forall\), \(\exists\) binding individual variables;
* second-order logic, \(\mathcal{L}_{2}\), extending \(\mathcal{L}_{1}\) with _relational variables_, \(\mathcal{V}_{2}\), and second-order quantifiers, also denoted by \(\forall,\exists\), but binding relational variables.
Some additional terminology is also required. The part of a logical formula to which a quantifier is applied is called the _scope_ of this quantifier. A quantifier _binds_ its variable within its scope. An occurrence of a variable in a formula is bound if it occurs in the scope of a quantifier binding the same variable. Otherwise the occurrence of this variable is _free_. A formula not containing free variables is called _closed_.
### First-Order Standard and Weak Forgetting
When shifting from entailment (6) to implication (7), the deduction theorem for propositional logic was applied. In first-order logic, the deduction theorem requires that the formulas moved from the left-hand side to the right-hand side of \(\models\), are closed. That is, in the first-order case there will be the requirement that the theories considered contain only closed formulas. In practice, this is not really a restriction. Typically, belief bases are defined using closed formulas. Even if the formulas in question contain free variables, such as in rule-based theories, they are assumed to be implicitly universally quantified.
Given the above requirement pertaining to closed theories, the considerations about standard and weak forgetting provided in Section 3, including Theorems 3.1 and 3.2, are preserved. This is formulated in the following theorems, where for tuples of relation symbols \(\bar{r}\), \(\bar{s}\), \(Th(\bar{r},\bar{s})\) is a closed first-order theory over vocabulary \(\bar{r}\), \(\bar{s}\) and, rather than the propositional versions (12), (21), there is the requirement that for every second-order variable \(P\) representing first-order formulas over a vocabulary disjoint with \(\bar{r}\):
* \(F^{NC}(Th(\bar{r},\bar{s});\bar{r})\) preserves the entailment of necessary conditions: \[\forall P\Big{(}\forall\bar{r}(Th(\bar{r},\bar{s})\to P)\equiv \big{(}F^{NC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\to P\big{)}\Big{)};\] (53)
* \(F^{SC}(Th(\bar{r},\bar{s});\bar{r})\) preserves the entailment by sufficient conditions: \[\forall P\Big{(}(P\to F^{SC}(Th(\bar{r},\bar{s});\bar{r}))\equiv\forall\bar{r}(P\to Th(\bar{r},\bar{s}))\Big{)}.\] (54)
**Theorem 6.1**.: _For arbitrary tuples of relation symbols \(\bar{r}\), \(\bar{s}\) and a closed first-order theory \(Th(\bar{r},\bar{s})\),_
1. \(F^{NC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\equiv\exists\bar{r}\,(Th(\bar{ r},\bar{s}))\)_._
2. \(F^{NC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\equiv\text{forget}(Th(\bar{r}, \bar{s});\bar{r})\)_._
3. \(F^{NC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\) _is the strongest (wrt \(\rightarrow\)) formula over vocabulary_ \(\bar{s}\)_, satisfying (53)._ \(\square\)
Notice that, similarly to the case of Theorem 3.1.2, Theorem 6.1.2 follows from the first statement in the above theorem together with Theorem 8 of [26].
**Theorem 6.2**.: _For arbitrary tuples of relation symbols \(\bar{r}\), \(\bar{s}\) and a closed first-order theory \(Th(\bar{r},\bar{s})\),_
1. \(F^{SC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\equiv\forall\bar{r}(Th(\bar{r},\bar{s}))\)_._
2. \(F^{SC}\big{(}Th(\bar{r},\bar{s});\bar{r}\big{)}\) _is the weakest (wrt_\(\rightarrow\)_) formula over vocabulary_ \(\bar{s}\)_, satisfying (_54_)._ \(\square\)__
### First-Order Ackermann Lemma
As in the propositional case, the first-order Ackermann Lemma provides a powerful technique for eliminating second-order quantifiers from standard and weak forgetting formulas. The Dls algorithm of [11] uses Lemma 6.3, described below.
In order to formulate the first-order version of the Ackermann lemma, let us extend the notation \(A(p=expr)\) from Section 2. If \(A\) is a formula, \(r\) is a \(k\)-argument relation symbol occurring in \(A\), \(expr(x_{1},\ldots,x_{k})\) is an expression including variables among \(x_{1},\ldots,x_{k}\), then:
\[A\big{(}r(x_{1},\ldots,x_{k})=expr(x_{1},\ldots,x_{k})\big{)}\]
denotes a formula obtained from \(A\) by substituting all occurrences of \(r\) by \(expr(x_{1},\ldots,x_{k})\) in which variables \(x_{1},\ldots,x_{k}\) are in each instance replaced by arguments of the occurrence of \(r\) being substituted. For example, let \(A\stackrel{{\mathrm{def}}}{{=}}\big{(}s(x_{1},a)\lor r(a,b)\lor r (b,c)\big{)}\) and \(expr(x_{1},x_{2})\stackrel{{\mathrm{def}}}{{=}}s(x_{1},x_{2}) \wedge t(x_{2},d)\). When \(r(x_{1},x_{2})=expr(x_{1},x_{2})\), we have:
\[\text{`}r(a,b)=expr(a,b)\text{' is }s(a,b)\wedge t(b,d)\text{ \ and \ `}r(b,c)=expr(b,c)\text{' is }s(b,c)\wedge t(c,d).\]
Therefore we have:
\[A\big{(}r(x_{1},x_{2})=\underbrace{(s(x_{1},x_{2})\wedge t(x_{2},d))}_{expr( x_{1},x_{2})}\big{)}=s(x_{1},a)\vee\underbrace{(s(a,b)\wedge t(b,d))}_{r(a,b)=expr(a,b)} \vee\underbrace{(s(b,c)\wedge t(c,d))}_{r(b,c)=expr(b,c)}\]
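The substitution notation can also be read operationally. The following sketch (an added Python illustration, with formulas encoded ad hoc as nested tuples; only \(\wedge\)/\(\vee\) are handled) reproduces the example above:

```python
def substitute(formula, r, expr):
    """Replace every occurrence r(t1,...,tk) in `formula` by expr(t1,...,tk),
    i.e., compute A(r(x1,...,xk) = expr(x1,...,xk)).  Formulas are nested
    tuples: ('atom', name, args), ('and', ...), ('or', ...)."""
    op = formula[0]
    if op == 'atom':
        _, name, args = formula
        return expr(*args) if name == r else formula
    return (op, *[substitute(sub, r, expr) for sub in formula[1:]])

# A = s(x1,a) ∨ r(a,b) ∨ r(b,c);  expr(x1,x2) = s(x1,x2) ∧ t(x2,d)
A = ('or', ('atom', 's', ('x1', 'a')),
           ('atom', 'r', ('a', 'b')),
           ('atom', 'r', ('b', 'c')))
expr = lambda x1, x2: ('and', ('atom', 's', (x1, x2)), ('atom', 't', (x2, 'd')))
print(substitute(A, 'r', expr))
# -> s(x1,a) ∨ (s(a,b) ∧ t(b,d)) ∨ (s(b,c) ∧ t(c,d)), as in the formula above
```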
We can now formulate the first-order version of the Ackermann lemma.
**Lemma 6.3** (First-order Ackermann Lemma).: _Let \(r\) be a \(k\)-ary relation symbol, \(A\) be a first-order formula without occurrences of \(r\), \(B\) be a first-order formula and \(\bar{x}\) be a \(k\)-tuple of distinct variables. Then:_
\[-\text{ if }B\text{ is positive wrt }r\text{ then: \ }\exists r(\forall\bar{x}(r(\bar{x}) \to A(\bar{x}))\wedge B)\ \equiv\ B(r(\bar{x})=A(\bar{x})); \tag{55}\] \[-\text{ if }B\text{ is negative wrt }r\text{ then: }\exists r(\forall\bar{x}(A(\bar{x}) \to r(\bar{x}))\wedge B)\ \equiv\ B(r(\bar{x})=A(\bar{x})). \tag{56}\]
Observe that Figure 1 also illustrates Lemma 6.3, where \(A\) grows when \(r\) grows (respectively shrinks).8 In the first-order case, Figure 1 is even more intuitive since growing and shrinking pertains to the tuples that satisfy the relation \(r\) (\(p\) in the figure), which are maximized (minimized) relative to (55) and (56), respectively.
Footnote 8: In Figure 1, \(r\) is represented by \(p\).
To transform a formula into a form required in (55) or (56), one can again use the Dls algorithm [11]. Unlike the propositional case, in the first-order case such a transformation is not always doable in general, but it has been shown to work for large classes of formulas [6; 11; 19].
The following example illustrates the use of Lemma 6.3 in the context of standard and weak forgetting.
**Example 6.4**.: _To illustrate the use of the first-order Ackermann lemma, consider the following belief base, where \(ms(x)\) stands for "person \(x\) has mild symptoms of a disease", \(ss(x)\) - for "person \(x\) has severe symptoms of the disease", \(h(x)\) - for "\(x\) should stay home", \(t(x)\) - for
_"x needs a test for the disease", and \(ich(x)-\) for "x should immediately consult a health care provider":_
\[Th(ms,h,t,ss,ich)=\Big{\{}\forall x\Big{(}ms(x)\to\big{(}h(x)\wedge t(x)\big{)}\Big{)},\ \forall x\Big{(}(ss(x)\lor t(x))\to ich(x)\Big{)}\Big{\}}. \tag{57}\]
_When a test is not available, it is useful to forget about it, so one can consider standard and weak forgetting operators \(F^{NC}\big{(}Th(ms,h,t,ss,ich);t)\big{)}\) and \(F^{SC}\big{(}Th(ms,h,t,ss,ich);t)\big{)}\):_
\[\begin{array}{l}\vspace{0.2cm}\bullet F^{NC}\big{(}Th(ms,h,t,ss,ich);t) \big{)}\equiv\exists t\Big{(}\forall x\Big{(}ms(x)\to\big{(}h(x)\wedge t(x) \big{)}\Big{)}\wedge\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \forall x\Big{(}(ss(x)\lor t(x))\to ich(x)\Big{)}\Big{)}.\end{array} \tag{58}\]
_In order to eliminate_ \(\exists t\) _from (_58_) using Lemma_ 6.3_, it suffices to transform it to the equivalent form:_9__ Footnote 9: Notice that in this case, the form can be automatically obtained using the Dls algorithm. \[\exists t\Big{(}\forall x\Big{(}ms(x)\to t(x)\Big{)}\wedge\forall x\Big{(}ms(x )\to h(x)\Big{)}\wedge\forall x\Big{(}(ss(x)\lor t(x))\to ich(x)\Big{)}\Big{)}.\] (59) _An application of Lemma_ 6.3_(_56_) _results in:_ \[\forall x\Big{(}ms(x)\to h(x)\Big{)}\wedge\forall x\Big{(}(ss(x)\lor ms(x)) \to ich(x)\Big{)}.\] (60) _That is, a person with minor symptoms should stay at home. If, under the circumstances, it is not possible to test for the disease, then the severe or mild symptoms should suffice to immediately consult (by phone or a visit) the health care provider. Intuitively, both causes are correct:_ * _by the second formula of the belief base (_57_), severe symptoms suffice for a need to immediately consult a health care provider;_ * _by the first formula of the belief base (_57_), mild symptoms imply a need for making a test which, in turn, by the second formula of the belief base, suffices for a need to immediately consult a health care provider._
* \(F^{SC}\big{(}Th(ms,h,t,ss,ich);t)\big{)}\equiv\forall t\Big{(}\forall x\Big{(} ms(x)\to\big{(}h(x)\wedge t(x)\big{)}\Big{)}\wedge\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall x\Big{(}(ss(x)\lor t (x))\to ich(x)\Big{)}\Big{)}.\)__ (61) _After a transformation similar to (_59_), and distributing_ \(\forall t\) _over conjunctions, one obtains the following equivalent formula:_ \[\forall t\forall x\Big{(}ms(x)\to t(x)\Big{)}\wedge\forall t\forall x\Big{(} ms(x)\to h(x)\Big{)}\wedge\forall t\forall x\Big{(}(ss(x)\lor t(x))\to ich(x)\Big{)}.\] (62) _One can now eliminate each conjunct separately:_
* _Notice that subformula_ \(\forall x(\mathbb{F}\to t(x))\) _is added artificially to make the Ackermann Lemma work. It is actually a tautology so, in conjunction with the other formulas, it does not affect their truth value. Now an application of Lemma_ 6.3_(_56_), results in_ \(\forall x\neg\Big{(}ms(x)\wedge\neg\mathbb{F}\Big{)}\)_, which is equivalent to_ \(\forall x\Big{(}\neg ms(x)\Big{)}\)_;_
* _in_ \(\forall t\forall x\big{(}ms(x)\to h(x)\big{)}\) _the quantifier_ \(\forall t\) _is redundant, so it can simply be removed;_ * \(\forall t\forall x\big{(}(ss(x)\lor t(x))\to ich(x)\big{)}\equiv\forall x\neg\exists t\Big{(}(ss(x)\lor t(x))\land\neg ich(x)\Big{)}\)_. We add an artificial conjunct_ \(\forall x\big{(}t(x)\to\mathbb{T}\big{)}\) _which is a tautology, apply Lemma_ 6.3_(_55_), and obtain an equivalent formula_ \(\forall x\neg\Big{(}(ss(x)\lor\mathbb{T})\land\neg ich(x)\Big{)}\)_. This is equivalent to_ \(\forall x\Big{(}ich(x)\Big{)}\)_._ _Thus, we obtain the following first-order result, equivalent to (_62_):_ \[\forall x\Big{(}\neg ms(x)\Big{)}\land\forall x\Big{(}ms(x)\to h(x)\Big{)}\land\forall x\Big{(}ich(x)\Big{)},\] (63) _which is equivalent to_ \(\forall x\Big{(}\neg ms(x)\Big{)}\land\forall x\Big{(}ich(x)\Big{)}\)_. Indeed, without access to a test,_ \(t(x)\)_, one can only make sure that the first formula of (_57_) is true by assuring that_ \(\forall x\Big{(}\neg ms(x)\Big{)}\) _is true. The second formula of (_57_) can only be guaranteed when_ \(\forall x\Big{(}ich(x)\Big{)}\) _is true. Though_ \(\forall x\Big{(}\neg ms(x)\Big{)}\land\forall x\Big{(}ich(x)\Big{)}\) _looks rather useless, it can actually be used to select persons satisfying_ \(\neg ms(x)\land ich(x)\)_, thus also satisfying (_57_). That is, when one reduces the domain to objects satisfying_ \((\neg ms(x)\land ich(x))\)_, the initial theory is guaranteed to hold for those individuals._ \(\Box\)
The use of \(F^{SC}()\) in Example 6.4 has interesting generic potential in that it allows one to isolate a subdomain of individuals ensuring the validity of a restricted part of the original theory in question.
### The Fixpoint Lemma
Notice that Lemma 6.3 requires that formula \(A\) does not contain occurrences of the eliminated relation symbol. On the other hand, when it does, one may still obtain useful results for a large class of formulas. To see this, consider once again the cases shown in Figure 1, where rather than \(A\) we consider \(A(p)\). As is well known,10 given that \(A(p)\) is positive wrt \(p\), there exists a smallest and greatest fixpoint for \(A(p)\) wrt \(p\). This observation is all that is required to generalize the first-order extension. The idea is formulated in the following lemma, proved in [28] (see also, e.g., [13; 19]), where:
Footnote 10: See, e.g., [1].
* \(\operatorname{Lfp}p.(A(p))\) is the _least_ wrt implication (that is, in the terminology used in the paper, the _strongest_) formula \(B\) being a fixpoint of \(A(p)\), i.e., satisfying \(\models B\equiv A(B)\);
* \(\operatorname{Gfp}p.(A(p))\) is the _greatest_ wrt implication (that is, in the terminology used in the paper, the _weakest_) formula being a fixpoint of \(A(p)\).
Intuitively, when the extension of a formula _shrinks_, it becomes _stronger_ because it implies more formulas. Likewise, when the extension of a formula _grows_, it becomes _weaker_ because it is implied by more formulas. Therefore, _least_ wrt to implication means _strongest_ and _greatest_ wrt implication means _weakest_.
**Lemma 6.5** (Fixpoint Lemma).: _Let \(r\) be a \(k\)-ary relation symbol, \(A(r)\) be a first-order formula with positive occurrences of \(r\) only, \(B\) be a first-order formula, and \(\bar{x}\) be a \(k\)-tuple of distinct variables. Then:_
\[-\text{if $B$ is positive wrt $r$ then:}\quad\exists r(\forall\bar{x}(r(\bar{x}) \to A(r))\wedge B)\ \equiv\ B(r(\bar{x})=\operatorname{Gfp}r(\bar{x}).(A(r))); \tag{64}\] \[-\text{if $B$ is negative wrt $r$ then:}\quad\exists r(\forall\bar{x}(A(r) \to r(\bar{x}))\wedge B)\ \equiv\ B(r(\bar{x})=\operatorname{Lfp}r(\bar{x}).(A(r))). \tag{65}\]
Notice that Figure 1 applies also to the fixpoint case formulated in Lemma 6.5, where \(A\) grows when \(r\) grows (respectively shrinks).11 Therefore we look for the greatest (respectively least) \(r\) such that \(r\equiv A(r)\), i.e., \(r\) is a (greatest or least) fixpoint of \(A(r)\).
Footnote 11: As in the first-order case, in Figure 1\(r\) is represented by \(p\).
The following example illustrates an application of this lemma.
**Example 6.6**.: _A communication network is being designed. Due to a specific application area, the designers have to consider special security requirements. In particular they consider two networks: an internal and an external one. The internal network nodes should not be externally reachable unless they are protected by a specialized expensive security component. A part of the underlying belief base contains, among others, the following formula, where \(con(x,y)\) stands for "nodes \(x\) and \(y\) are directly connected", and \(r(x,y)\) stands for "\(y\) is reachable from \(x\)".12_
Footnote 12: Notice that formula (66) can be obtained as a translation of a rule defining \(r()\) as a transitive closure of \(con()\).
\[\forall x\forall y\Big{(}\big{(}con(x,y)\vee\exists z(con(x,z)\wedge r(z,y)) \big{)}\to r(x,y)\Big{)}, \tag{66}\]
_Due to the security requirements, the following integrity constraint has to be preserved when choosing direct network connections, where \(ex(x)\) and \(in(x)\) denote that node \(x\) belongs to the external or internal network, respectively, and \(sec(x)\) denotes that node \(x\) is equipped with the security component:_
\[\forall y\Big{(}\exists x(ex(x)\wedge r(x,y))\to\big{(}in(y)\to sec(y)\big{)} \Big{)}. \tag{67}\]
_That is, when \(y\) is reachable from a node \(x\) of an external network (\(\exists x(ex(x)\wedge r(x,y))\)) then whenever \(y\) is an internal node (\(in(y)\)) then it is to be equipped with the security protecting component (\(sec(y)\)). To focus on the design of \(con()\), the designers prefer to abstract from \(r()\) for the time being, so forget about \(r\)._
_Let us first compute \(F^{NC}(Th(con,r,ex,in,sec);r)\), where \(Th(con,r,ex,in,sec)\) denotes the conjunction (66) \(\wedge\) (67):_
\[F^{NC}(Th(con,r,ex,in,sec);r)\equiv\] \[\exists r\,\Big{(}\forall x\forall y\big{(}\big{(}con(x,y)\vee\exists z(con(x,z)\wedge r(z,y))\big{)}\to r(x,y)\big{)}\wedge \tag{68}\] \[\forall y\Big{(}\exists x(ex(x)\wedge r(x,y))\to\big{(}in(y)\to sec(y)\big{)}\Big{)}\Big{)}. \tag{69}\]
_Observe that Lemma 6.3 cannot be applied due to (68). However, Lemma 6.5(65) can still be used and results in:_
\[\forall y\Big{(}\exists x(ex(x)\wedge\operatorname{Lfp}r(x,y).( con(x,y)\vee\exists z(con(x,z)\wedge\,r(z,y)))) \tag{70}\] \[\to\big{(}in(y)\to sec(y)\big{)}\Big{)}.\]
_The least fixpoint in (70) actually defines \(r()\) as the transitive closure of \(con()\)._
_Using the well-known Knaster and Tarski theorem, the least fixpoint \(\operatorname{Lfp}r(x,y).(A(r))\), where \(A(r)\stackrel{{\mathrm{def}}}{{\equiv}}con(x,y)\vee\exists z(con(x, z)\wedge r(z,y))\), is equivalent to a disjunction \(\bigvee_{i}A^{i}(\mathbb{F})\), where \(A^{i}\) stands for applying \(A\)\(i\) times. Therefore, the formula (70) can be represented by the conjunction:_
\[\forall y\Big{(}\] \[\exists x(ex(x)\wedge A^{0}(\mathbb{F}))\rightarrow(in(y) \rightarrow\ sec(y))\wedge\] \[\exists x(ex(x)\wedge A^{1}(\mathbb{F}))\rightarrow(in(y) \rightarrow\ sec(y))\wedge\] \[\cdots\] \[\exists x(ex(x)\wedge A^{i}(\mathbb{F}))\rightarrow(in(y) \rightarrow\ sec(y))\wedge\] \[\cdots\] \[\Big{)}.\]
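The least fixpoint just unfolded can also be computed directly over a finite domain. The following sketch (an added illustration; the relation and domain are hypothetical) iterates the defining operator from the empty relation, which by the Knaster-Tarski construction yields \(\operatorname{Lfp}r(x,y).(con(x,y)\vee\exists z(con(x,z)\wedge r(z,y)))\), i.e., the transitive closure of \(con\):

```python
def least_fixpoint_reachability(con):
    """con: a finite set of (x, y) pairs.  Iterates A(r) = con ∪ {(x, y) :
    ∃z (x, z) ∈ con and (z, y) ∈ r} from the empty relation until stable."""
    r = set()                                    # A^0(F): the empty relation
    while True:
        step = {(x, y) for (x, z) in con for (z2, y) in r if z == z2}
        new_r = con | step                       # one application of A(.)
        if new_r == r:                           # fixpoint reached
            return r
        r = new_r

# hypothetical network: node 1 -> 2 -> 3
print(sorted(least_fixpoint_reachability({(1, 2), (2, 3)})))
# [(1, 2), (1, 3), (2, 3)]
```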
_To compute \(F^{SC}(Th(con,r,ex,in,sec);r)\), we consider:_
\[F^{SC}(Th(con,r,ex,in,sec);r)\equiv\] \[\forall r\Big{(}\forall x\forall y\Big{(}(con(x,y)\vee\exists z(con(x,z)\wedge r(z,y)))\to r(x,y)\Big{)}\wedge \tag{71}\] \[\forall y\Big{(}\exists x(ex(x)\wedge r(x,y))\rightarrow(in(y)\rightarrow sec(y))\Big{)}\Big{)}. \tag{72}\]
_The quantifier \(\forall r\) can be distributed over the conjunction, so one can consider:_
\[\forall r\forall x\forall y\Big{(}(con(x,y)\vee\exists z(con(x,z)\wedge r(z,y)))\to r(x,y)\Big{)}\wedge \tag{73}\] \[\forall r\forall y\Big{(}\exists x(ex(x)\wedge r(x,y))\rightarrow(in(y)\rightarrow sec(y))\Big{)}. \tag{74}\]
_To eliminate \(\forall r\) from (73), we transform it to the following equivalent form:_
\[\neg\exists x\exists y\exists r\Big{(}(con(x,y)\vee\exists z( con(x,z)\wedge r(z,y)))\wedge\neg r(x,y)\Big{)}, \tag{75}\]
_equivalent to:_
\[\neg\exists x\exists y\exists r\Big{(}\forall u\forall v\big{(}r(u,v)\to(u\neq x\lor v\neq y)\big{)}\wedge\big{(}con(x,y)\vee\exists z(con(x,z)\wedge r(z,y))\big{)}\Big{)}, \tag{76}\]
_applying Ackermann's Lemma 6.3(55) to (76) one obtains:_
\[\neg\exists x\exists y\Big{(}(con(x,y)\vee\exists z(con(x,z)\wedge(z \neq x\lor y\neq y)))\Big{)}, \tag{77}\]
_equivalent to:_
\[\forall x\forall y\Big{(}(con(x,y)\rightarrow\forall z(con(x,z) \to z=x))\Big{)}, \tag{78}\]
_which in turn is equivalent to:_
\[\forall x\forall z\Big{(}con(x,z)\to z=x\Big{)}. \tag{79}\]
_To eliminate \(\forall r\) from (74), we transform it to the following equivalent form:_
\[\neg\exists y\exists r\Big{(}\exists x(ex(x)\wedge r(x,y))\wedge in (y)\wedge\neg sec(y)\Big{)}. \tag{80}\]
_Adding an artificial conjunct \(\forall x\forall y(r(x,y)\rightarrow\mathbb{T})\), equivalent to \(\mathbb{T}\) (to ensure the right syntactic structure), and applying Ackermann's Lemma 6.3(55) to (80), we obtain:_
\[\forall y\Big{(}\exists x(ex(x))\rightarrow(in(y)\to sec(y))\Big{)}. \tag{81}\]
_Combining (79) and (81) we obtain that:_
\[F^{SC}\big{(}Th(con,r,ex,in,sec);r\big{)}\equiv\] \[\forall x\forall z\Big{(}con(x,z)\to z=x\Big{)}\wedge \tag{82}\] \[\forall y\Big{(}\exists x(ex(x))\rightarrow(in(y)\to sec(y))\Big{)}. \tag{83}\]
_That is, to make sure that \(Th(con,r,ex,in,sec)\) holds when \(r()\) is forgotten, nodes can only be connected to themselves and, if there is an external node, then every internal node is to be equipped with the security component. Though this guarantees that the security requirements are satisfied, the resulting theory is rather strong. \(\Box\)_
### Computational Aspects
Forgetting is typically applied to finite domain knowledge or belief bases and rule languages. As indicated by the first points of Theorems 3.1 and 3.2, computing queries expressed by \(F^{NC}()\) is NP-complete, and computing those expressed by \(F^{SC}()\) is co-NP-complete. On the other hand, the data complexity of first-order queries obtained using Lemma 6.3 is PTime and LogSpace [1]. Data complexity of fixpoint queries, thus also of queries obtained from Lemma 6.5, is PTime [1]. Therefore, the approach based on Ackermann's Lemma and its fixpoint extension is computationally friendly.
The following approaches to second-order quantifier elimination have previously been formulated and implemented:
* using Lemma 6.3 as a basis for the Dls algorithm [11], which first attempts to transform an arbitrary formula into a form suitable for applying this lemma, and then uses this lemma as a basis for eliminating quantifiers;
* using Lemma 6.5 as a basis for the Dls* algorithm [12], which extends the Dls algorithm to work with formulas that are suitable for an application of the fixpoint lemma.
In fact, all calculations carried out in this paper that involve elimination of second-order quantifiers reflect selected steps used in the Dls or Dls* algorithms.
As has been shown in Section 3.5, computing propositional equivalents for the weak forgetting operator is often more efficient than computing such equivalents for standard forgetting and for weakest sufficient and strongest necessary conditions. This reasoning can also be extended to the first-order case as follows, where we consider eliminating the quantifier \(\forall r\) from the formula:
\[\forall r\forall\bar{x}\Big{(}r(\bar{x}_{1})\vee\ldots\lor r(\bar{x}_{m})\lor \neg r(\bar{x}_{m+1})\vee\ldots\vee\neg r(\bar{x}_{n})\lor A(\bar{z})\Big{)}, \tag{84}\]
where \(0\leq m\leq n\), \(\bar{x}\) contains at least all variables in \(\bar{x}_{1},\ldots,\bar{x}_{n}\), and formula \(A\) does not contain relation symbol \(r\). Formula (84) is equivalent to:
\[\forall\bar{x}\neg\exists r\Big{(}\neg r(\bar{x}_{1})\wedge\ldots\wedge\neg r (\bar{x}_{m})\wedge r(\bar{x}_{m+1})\wedge\ldots\wedge r(\bar{x}_{n})\wedge \neg A(\bar{z})\Big{)}, \tag{85}\]
which can be transformed into the equivalent form (e.g., using the Dls algorithm):
\[\forall\bar{x}\neg\exists r\Big{(}\forall\bar{y}\big{(}r(\bar{y})\to(\bar{y}\neq\bar{x}_{1}\wedge\ldots\wedge\bar{y}\neq\bar{x}_{m})\big{)}\wedge r(\bar{x}_{m+1})\wedge\ldots\wedge r(\bar{x}_{n})\wedge\neg A(\bar{z})\Big{)}. \tag{86}\]
After applying Ackermann's Lemma (6.3) we obtain the following formula equivalent to (86), thus also equivalent to (84):
\[\forall\bar{x}\neg\Big{(}(\bar{x}_{m+1}\neq\bar{x}_{1}\wedge\ldots\wedge\bar{x }_{m+1}\neq\bar{x}_{m})\wedge\ldots\wedge(\bar{x}_{n}\neq\bar{x}_{1}\wedge \ldots\wedge\bar{x}_{n}\neq\bar{x}_{m})\wedge\neg A(\bar{z})\Big{)}. \tag{87}\]
Formula (87) can be simplified to:
\[\forall\bar{x}\Big{(}\bar{x}_{m+1}=\bar{x}_{1}\vee\ldots\vee\bar{x}_{m+1}=\bar {x}_{m}\vee\ldots\vee\bar{x}_{n}=\bar{x}_{1}\vee\ldots\vee\bar{x}_{n}=\bar{x}_ {m}\lor A(\bar{z})\Big{)}. \tag{88}\]
That is, the length of the resulting formula (88) is at most quadratic in the length of the input formula (84). Given variables \(\bar{x}_{1},\ldots,\bar{x}_{m},\bar{x}_{m+1},\ldots,\bar{x}_{n}\), one typically can still substantially simplify the result. Notice that in similar cases, the fixpoint lemma is not needed (but may be needed for other shapes of formulas).
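As an added concrete instance of (84)-(88): eliminating \(\forall r\) from \(\forall r\forall x\forall y\forall z\big{(}r(x,y)\vee\neg r(y,z)\lor A(x,z)\big{)}\), i.e., taking \(m=1\), \(n=2\), \(\bar{x}_{1}=(x,y)\) and \(\bar{x}_{2}=(y,z)\), yields by (88) the first-order formula \(\forall x\forall y\forall z\big{(}(y=x\wedge z=y)\lor A(x,z)\big{)}\); indeed, for any \(x,y,z\) with \(\neg A(x,z)\), choosing \(r=\{(y,z)\}\) falsifies the clause unless \((x,y)=(y,z)\).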
When dealing with a conjunction of formulas of the form (84), the quantifier \(\forall r\) can be distributed over conjunction (as, e.g., in (62)), and each conjunct can be processed separately. This provides a sharp contrast in comparison with computing \(F^{NC}()\), \(wsc()\) and \(snc()\), where such a distribution is generally not possible. This is indicated by their syntactic second-order characterizations given by point 1 of Theorem 3.1, and formulas (47), (48) of Lemma 5.1, respectively.
It is worth emphasizing that the problem as to whether a second-order formula is equivalent to a first-order (or fixpoint) formula is highly undecidable. So algorithms like Dls, Dls*, as well as any other algorithms attempting the same quantifier elimination, have to fail for some input formulas. However, pragmatic application has shown that these algorithms work for large classes of formulas (see Section 7 for a more detailed discussion).
## 7 Related Work
Standard forgetting, _forget_(), has been introduced in the foundational paper [26], where model theoretical definitions and analysis of properties of the standard _forget_() operator are provided. Its second-order characterization as well as its entailment preserving property, as quoted in Theorem 2.1 above, are consequences of this definition. The paper [26] opened a research subarea, summarized, e.g., in [10] or more recently, in [14]. In the approach used in the current paper, one begins with an entailment-based, inferential perspective, as expressed by (10) and (11). This is beneficial for pragmatic application, since the approach leads directly to algorithms and implementations of the dual forgetting operators. In addition to introducing a new weak forgetting operator, standard forgetting is also characterized in this context.
The paper [10] concentrates on _introspective_ forgetting ("forgetting as becoming ignorant"). It extends modal epistemic logic with modal operators allowing one to express what is known before and what remains known after forgetting. It therefore essentially deals with necessary conditions, corresponding to standard forgetting.
In [14] two types of forgetting are distinguished: (1) forgetting part of the signature, corresponding to standard forgetting, and (2) forgetting a formula, related to _contracting_ in the AGM theory of belief change [3; 18]. In the current paper, both dual operators, weak and strong forgetting, belong to the first type of forgetting. The extension of this work to forgetting formulas is an interesting research direction.
A general framework, covering a range of belief changes involving forgetting, is proposed in [5]. In particular, a commonsense perspective based on belief change, involving contraction, ignorance introduction, revision/update, (deductive) abstraction at the level of rules, marginalization, etc., is proposed. Another general framework for standard-like forgetting is investigated in [9], where forgetting is regarded as a belief change operator, independent of the underlying logic. Forgetting is achieved by reducing a part of a signature in a theory. In that approach, forgetting is specified by the set of logical consequences of the input theory over the reduced language. Though providing very interesting characterizations and contexts of forgetting, the frameworks of [5] and [9] deal with consequences (necessary conditions) of the considered theories and do not deal with sufficient conditions, expressed by the weak forgetting operator in this paper.
Papers which consider relations between forgetting and weakest sufficient and strongest necessary conditions include [16; 17]. In the first of these papers, forgetting is applied in the context of strongest necessary and weakest sufficient conditions in Computation Tree Logic (CTL). Axiomatic characterization of forgetting and an algorithm for computing forgetting are provided. In addition, weakest sufficient and strongest necessary conditions are characterized using forgetting (Theorem 9 in [16]). The characterization is similar in spirit to the Corollaries (49) and (50) in the current paper. In [17] the results of [16] are transferred into the context of \(\mu\)-calculus. Both in CTL and \(\mu\)-calculus, typical reasoning problems are intractable. Forgetting in multiagent modal logics, sharing a similar methodology, has been addressed in [15]. Unlike in [15; 16; 17], in the current paper, both standard and weak forgetting are considered. The approach isolates broad classes of formulas for which one can compute first-order or fixpoint equivalents of both forgetting operators. This in turn, leads to a tractable reasoning-by-querying machinery.
In [32] a _weak_ forgetting operator has been introduced which differs from the weak forgetting operator, \(F^{SC}()\), used in this paper, and it has characteristics much more related to the standard forgetting operator, \(F^{NC}()\). While standard forgetting is not always first-order expressible, the weak operator of [32] has been developed to make sure that the result of forgetting can always be expressed in first-order logic. Weak forgetting in the sense of [32] and standard forgetting differ only in the cases where the result of standard forgetting is not first-order expressible. This operator does not coincide with the dual weak forgetting operator \(F^{SC}()\). The terminology used in the current paper, reflects the fact that \(F^{SC}()\) and \(F^{NC}()\) are dual operators that formally show the relationship of \(F^{SC}()\) to weakest sufficient conditions and \(F^{NC}()\) to strongest necessary conditions.
A series of papers [8; 22; 33] (see also references there) has addressed forgetting in description logics. Though description logics are strongly related to modal logics [4], the papers [8; 22; 33] are methodologically closer to the approach used in this paper. In [8], forgetting-based abduction is investigated using weakest sufficient and strongest necessary conditions. The idea, inspired by [13], is implemented and experimentally verified using resolution. As in the approach proposed in the current paper, the papers [22; 33] use the Ackermann [2] and fixpoint [28] Lemmas, adjusted to the description logic formalism. Though directly dealing with abduction, the approaches in [8; 22; 33] concentrate on (valuable and important) experimental research and do not separate the weak forgetting operator from the standard forgetting operator. Forgetting in description logics has also been addressed in [27] via uniform interpolants, where mixed model-theoretic and automata-theoretic approaches are applied. It is shown that computing uniform interpolants in the context of the description logics considered, is generally highly intractable.
Forgetting is particularly useful in rule-based languages, when one simplifies a belief base to improve querying performance, or protect its parts [20; 30; 31]. This is especially useful for
Answer Set Programs, where the corresponding entailment tasks, centering around necessary conditions, are typically intractable. In [30]_semantic forgetting_ is proposed. It preserves skeptical and credulous consequences on unforgotten variables, as well as strong equivalence of ASP programs. It is shown that computing the forgetting result is intractable even for Horn logic programs. The paper [20], provides a comprehensive survey of the area as pertains to Answer Set Programs.
In summary, while the topic of forgetting in knowledge representation has gained considerable attention with numerous publications, the dual weak forgetting operator which is proposed and investigated in this paper, has not been explicitly considered in the literature. Additionally, the fundamental principles of forgetting founded on the entailment-based, inferential perspective, which is the starting point of this paper, is somewhat unique in comparison to other papers, where theoretical results are a consequence of model-theoretical, or other considerations. This perspective, which results in entailment preservation on a respective sublanguage, naturally leads to the consideration of both standard and weak forgetting operators.
Second-order quantifier elimination is used in this paper as the main logical tool for computing propositional, first-order and fixpoint equivalents of both standard and weak forgetting operators. Although there are several alternative techniques that could be used (see, e.g., [19]), Ackermann-like approaches have been selected for their expressive power and pragmatic application. Combining Ackermann-like lemmas with tautology-preserving formula transformations, as exhibited in this paper, has been shown to be very powerful and useful in many different contexts, including that of computing dual forgetting operators.
In particular, as shown in [11], the Dls algorithm subsumes most other known techniques that have been developed for computing circumscription. The paper [6], shows that the Dls algorithm covers all Sahlqvist formulas, an important class of formulas used in modal correspondence theory. The fixpoint lemma of [28], has been used originally for application to modal correspondence theory. This in turn has led to the development of the Dls* algorithm for fixpoint computations, which has been used for computing various forms of domain circumscription [12]. Both the Dls and Dls* algorithms have been shown to be useful in computing weakest sufficient and strongest necessary conditions [13]. The previously mentioned works [8; 22; 33], in addition to other papers by these authors, have shown that Ackermann-like techniques are also powerful tools for computing forgetting in description logics. The experimental results of these papers also show that such approaches are very efficient and applicable to real-world problems. For other uses of Ackermann-like approaches, including use of the Dls and Dls* algorithms, see [19].
## 8 Conclusions
This paper provides a general characterization of forgetting founded on an entailment-based, inferential framework. In doing so, an interesting forgetting operator, weak forgetting, is identified which is dual to the standard forgetting operator studied in recent work [26]. Due to the entailment-based, inferential framework presented, it is shown how quantifier elimination techniques based on Ackermann's Lemma, using existing algorithms, \(DLS\) and \(DLS^{*}\), can be used to compute output of both forgetting operators. Additionally, the tight relationship between weak forgetting and weakest sufficient conditions, and standard forgetting and strongest necessary conditions is characterized in a straightforward manner.
This paper first approaches the topics using the propositional logic case and then generalizes these results for the first-order and fixpoint logic case, offering new expressivity in the pragmatic
use of such operators. Throughout, the paper provides examples justifying the introduction of the weak forgetting operator and also shows how the computational framework is used to derive inferences from application of both forgetting operators. Similarities and distinctions between related work and work presented in this paper are provided. This work opens up a new way to think about forgetting operators and their pragmatic application in general, by integrating diverse threads of previous research in this area and new results from this paper, in one uniform, formal, entailment-based, inferential framework.
Based on this approach, it is also shown in Theorems 3.1(p.3), 3.2(p.2), 6.1(p.3) and 6.2(p.2) that from the perspective of entailment, the operators \(F^{SC}()\) and \(F^{NC}()\), generate the formally best possible (maximal) results of forgetting, as expressed by the restricted target language in any application.13
Footnote 13: See also Figure 2 and the explanations in Section 3.4.
|
2303.11907 | An Approach to the Primordial Universe Using Colombeau's Simplified
Algebra | The proposal "no boundary" of physicists Hartle and Hawking seeks to build a
satisfactory model of the early Universe, in a way that avoids the singularity
"Big Bang" of the beginning of the Universe. As a consequence of this proposal,
the concept of metric signature change arises, which is approached in different
ways in the literature. Here, we reinterpret the Mansouri-Nozari approach,
which modifies the FLRW metric, and uses the formalism of Colombeau's Algebras,
to develop its equations. In addition, we write the function that changes the
sign in terms of redshift. Finally, we developed Friedmann's equations, of the
modified metric, as well as the equation of state and other relevant equations
in Cosmology. | Jonatas A. Silva, Fábio C. Carvalho, Antonio R. G. Garcia | 2023-03-21T14:56:43Z | http://arxiv.org/abs/2303.11907v1 | # An Approach to the Primordial Universe Using Colombeau's Simplified Algebra
###### Abstract
The _no boundary_ proposal of physicists Hartle and Hawking seeks to build a satisfactory model of the early Universe, in a way that avoids the singularity (_Big Bang_) at the beginning of the Universe. As a consequence of this proposal, the concept of metric signature change arises, which is approached in different ways in the literature. Here, we reinterpret the Mansouri-Nozari approach, which modifies the FLRW metric, and use the formalism of Colombeau's algebras to develop its equations. In addition, we write the function that changes the sign in terms of redshift. Finally, we develop Friedmann's equations for the modified metric, as well as the equation of state and other relevant equations in Cosmology.
## Introduction
The idea of changing the signature had its origins in Hartle-Hawking's no-boundary proposal [1, 2]. This proposal investigates the wave function of the Universe and seeks to build a satisfactory model of the Universe in order to avoid the primordial spacetime singularity predicted by the standard cosmological model, using a combination of general relativity and quantum mechanics. One of the intriguing features of this proposal is the idea that the spacetime signature must change in very primitive times, resulting in an origin of the Universe in a regime where there is no time, so that "spacetime" was initially Euclidean and, by changing the signature of the "space-time" metric, the transition to the usual Lorentzian space-time occurred [3, 4, 5].
To investigate the consequences of such a situation, we use the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric modified by a function \(f(t)\), responsible for the signature change, following the works [4, 5]. Since Einstein's
equations of general relativity are nonlinear PDEs, nonlinear operations between distributions in discontinuous metrics are unavoidable. In this sense, we make use of Colombeau's algebras to skirt this difficulty.
Colombeau's theory proved to be very consistent with applications to physical problems. Cosmic strings [6], Reissner-Nordstrom metric [7], general relativity [8, 9]. In addition, there is a vast production of articles related to problems intrinsic to mathematics, as well as the development of basic concepts, which makes the formalism even more solid. Generalized numbers [10], generalized solutions of nonlinear parabolic equations [11], generalized quaternions [12], off-diagonal condition [13]. These are just some of the applications of the formalism of the Colombeau algebras, as well as being stopping points to appreciate and understand the theory and the wide spectrum of applications.
In this paper, as announced, we use Colombeau generalized functions to deal with the nonlinear operations that arise in the development of FLRW solutions of Einstein's equations. Colombeau's algebras [14, 15] are commutative, associative differential algebras in which multiplication distributes over addition, the Leibniz rule for the derivative of a product of functions holds, and which contain the distributions via continuous embedding.
In this article, we develop a cosmological model from the FLRW metric modified by [4, 5], in the context of the signature change. We reinterpret the function \(f(t)\) responsible for the sign change and express it in terms of the redshift. With this, we develop the Friedmann equations and other relevant expressions in cosmology.
This paper is organized as follows. Section 1 presents an overview of the fundamentals of Colombeau algebra. In Section 2, we develop our proposal. Section 3 presents the development of Friedmann's equations, the conservation equation and the equation of state. In Section 4, some cosmological parameters are derived. Summary and conclusions are provided in Section 5. We use the signature \((-\ +\ +\ +)\) for Lorentzian manifolds and follow the curvature conventions of Misner et al. [17].
## 1 Preliminary
Colombeau algebras are commutative, associative differential algebras in which multiplication distributes over addition, the Leibniz rule for the derivative of a product of functions holds, and the distributions are contained via continuous embedding. These algebras are defined as follows.
**Definition 1.1**.: _Let \(\Omega\) be an open subset of \(\mathbb{R}^{n}\). We define \(\mathcal{E}(\Omega)\coloneqq\mathscr{C}^{\infty}(\Omega)^{I}\),_
_which is a ring under pointwise operations, and hence we may consider the subring_
\[\mathcal{E}_{M}(\Omega)\coloneqq\{(u_{\varepsilon})_{\varepsilon} \in\mathcal{E}(\Omega)|\forall\ K\subset\subset\Omega,\ \forall\ \alpha\in\mathbb{N}_{0}^{n},\ \exists\ p\in\mathbb{N}\] \[\text{with}\ \sup_{x\in K}|\partial^{\alpha}u_{\varepsilon}(x)|=O( \varepsilon^{-p})\ \text{as}\ \varepsilon\to 0\} \tag{1}\]
_and its maximal ideal_
\[\mathcal{N}(\Omega)\coloneqq\{(u_{\varepsilon})_{\varepsilon} \in\mathcal{E}_{M}(\Omega)|\forall\ K\subset\subset\Omega,\ \forall\ \alpha\in\mathbb{N}_{0}^{n}\ \text{and}\ \forall\ q\in\mathbb{N}\] \[\sup_{x\in K}|\partial^{\alpha}u_{\varepsilon}(x)|=O(\varepsilon^{q})\ \text{as}\ \varepsilon\to 0\}. \tag{2}\]
_The "simplified" Colombeau algebra \(\mathscr{G}(\Omega)\) is defined as the quotient space_
\[\mathscr{G}(\Omega)\coloneqq\mathcal{E}_{M}(\Omega)/\mathcal{N}(\Omega).\]
By Definition 1.1, we observe that a simplified generalized Colombeau function is therefore a family of moderate functions \((f_{\varepsilon}(\cdot))\in\mathscr{C}^{\infty}\) modulo the null (negligible) functions.
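For instance (a standard illustration, not specific to this paper): for a test function \(\varphi\in\mathscr{D}(\mathbb{R}^{n})\), the net \(u_{\varepsilon}(x)=\varepsilon^{-n}\varphi(x/\varepsilon)\) satisfies \(\sup_{x\in K}|\partial^{\alpha}u_{\varepsilon}(x)|=O(\varepsilon^{-n-|\alpha|})\) on every compact \(K\), so it belongs to \(\mathcal{E}_{M}(\mathbb{R}^{n})\) and is a representative of the image of the Dirac distribution \(\delta\) under the convolution embedding; on the other hand, any net whose derivatives are bounded on compact sets by every positive power of \(\varepsilon\) belongs to \(\mathcal{N}(\mathbb{R}^{n})\) and is identified with \(0\).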
In order to define the product of two given distributions the route taken here is to substitute for one or both factors a smooth regularization (obtained by convolution with a so-called mollifier), compute the product in \(\mathscr{C}^{\infty}\times\mathscr{D}^{\prime}\) or \(\mathscr{C}^{\infty}\times\mathscr{C}^{\infty}\), and then pass to the limit, if possible.
**Definition 1.2**.: _For \(u,v\in\mathscr{D}^{\prime}\) set_
* \(u\cdot[v]=\lim_{\varepsilon\to 0}u(v\ast\rho_{\varepsilon});\)
* \([u]\cdot v=\lim_{\varepsilon\to 0}(u\ast\rho_{\varepsilon})v;\)
* \([u]\cdot[v]=\lim_{\varepsilon\to 0}(u\ast\rho_{\varepsilon})(v\ast\sigma_{\varepsilon});\)
* \([u\cdot v]=\lim_{\varepsilon\to 0}(u\ast\rho_{\varepsilon})(v\ast\rho_{\varepsilon})\)
_if the limit exists in_ \(\mathscr{D}^{\prime}\) _for all strict delta nets_ \((\rho_{\varepsilon})_{\varepsilon}\) _or_ \((\rho_{\varepsilon})_{\varepsilon}\) _and_ \((\sigma_{\varepsilon})_{\varepsilon}\)_, respectively._
The test functions are \(\varphi\in\mathscr{C}^{\infty}_{0}\), and the regularizing net is defined by \(\varphi_{\varepsilon}(x)=\varepsilon^{-n}\varphi\left(\frac{x}{\varepsilon}\right),\ \forall\ \varepsilon\in]0,1]\). Definition 1.2 is of great importance in our work because it is through it that we understand the powers and products of distributions.
The construction of the space of the simplified generalized Colombeau functions, \(\mathscr{G}(\Omega)\), was organized so as to obtain an immersion by means of the convolution operation with a suitable mollifier. Now, from what we saw earlier, this will require that such a mollifier satisfies the following properties:
* \(i)\ \int_{\mathbb{R}^{n}}\rho(x)dx=1;\)
* \(ii)\ \int_{\mathbb{R}^{n}}x^{\alpha}\rho(x)dx=0,\ \forall\ |\alpha|\geq 1.\)
Due to the construction of the algebras of generalized Colombeau functions, we have that, given \(u\in\mathscr{D}^{\prime}(\Omega),\) the net of functions \(u_{\varepsilon}=(u*\varphi_{\varepsilon})\in\mathscr{G}(\Omega),\ \forall\ \varepsilon\in]0,1],\) where, as defined earlier, for \(\varphi\in\mathscr{D}(\Omega)\) we obtain its regularizer \(\varphi_{\varepsilon}(x)=\varepsilon^{-n}\varphi\left(\frac{x}{\varepsilon}\right)\). Thus we have that \(\mathscr{D}^{\prime}(\Omega)\hookrightarrow\mathscr{G}(\Omega)\). This section follows the structure and presentation of [18].
## 2 Change of signature of the metric
The signature change addressed by [4] has as its starting point the modified FLRW metric, which is given by
\[ds^{2}=-f(t)dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}d\theta^{2}+r^{ 2}\sin^{2}\theta d\phi\right], \tag{3}\]
where the function \(f(t)\) is responsible for changing the signature of the metric and is defined by
\[f(t)=\theta(t)-\theta(-t) \tag{4}\]
and \(\theta(t)\) is a "Heaviside-like" function; we call it that because it differs from how the Heaviside function (or step function) is normally defined. Furthermore, it is considered that \(a^{2}(t)=a_{+}^{2}(t)\theta(t)-a_{-}^{2}(t)\theta(-t).\) The function \(\theta(t)\) is given by
\[\theta(t)=\left\{\begin{array}{ccc}1&\mbox{if}&t>0\\ \tau&\mbox{if}&t=0\\ 0&\mbox{if}&t<0\end{array}\right., \tag{5}\]
where \(\tau>\frac{1}{2}\). It is important to note that \(\tau\) should not be confused with the notation traditionally adopted in the literature for the proper time. With this, we have that
\[\theta(-t)=\left\{\begin{array}{ccc}1&\mbox{if}&t<0\\ \tau&\mbox{if}&t=0\\ 0&\mbox{if}&t>0\end{array}\right., \tag{6}\]
and satisfies the following property \(\theta(-t)=1-\theta(t)\). With this, we have that the function \(f(t)\) is given by
\[f(t)=\left\{\begin{array}{ccc}1&\mbox{if}&t>0\\ 2\tau-1&\mbox{if}&t=0\\ -1&\mbox{if}&t<0\end{array}\right.. \tag{7}\]
Due to the abrupt jump at \(t=0\), the regularized function \(f_{\varepsilon}\), obtained through convolution, is considered.
In our approach, we take the scale factor \(a(t)\) to have the same form and characteristics as in the standard model, so we do not need to write it as proposed by Mansouri-Nozari. We make this choice because the sign-change transition in our proposal takes place in a different way than in Mansouri-Nozari, as will become clearer below.
In the Mansouri-Nozari approach we can observe that the function \(f(t)\) changes the signature of the metric only when \(t<0\), which physically does not make sense. With this in mind, we reinterpret the function so that it is physically plausible.
### Reinterpretation of function \(f(t)\).
The function \(\theta(t)\) can assume any value \(c\), such that \(0<c<1\), at \(t=0\) [19]. Here, we choose \(0<c<\frac{1}{2}\). Therefore, our function \(f(t)\) is
\[f(t)=\left\{\begin{array}{cc}1&\mbox{if}\ \ \ t>0\\ 2c-1&\mbox{if}\ \ \ t=0\\ -1&\mbox{if}\ \ \ t<0\end{array}\right.. \tag{8}\]
Where the curve touches the vertical axis depends on the value of \(c\) taken. By convolution, we can show that, no matter how small \(t\) is, that is, as \(\varepsilon\to 0\), the regularized function of \(f(t)\) converges to \(2c-1\); that is, it changes sign without going through the origin. To see this, consider the following: from the definition of convolution, we have
\[(f*\varphi_{\varepsilon})(t)=\int_{-\varepsilon}^{+\varepsilon}f(t-y)\varphi _{\varepsilon}(y)dy, \tag{9}\]
where \(\varphi_{\varepsilon}(x)=\varepsilon^{-1}\varphi\left(\frac{x}{\varepsilon}\right),\ \forall\ \varepsilon\in]0,1]\), is constructed from the test function \(\varphi\in\mathscr{C}_{0}^{\infty}(\Omega),\ \Omega\subset\mathbb{R}\). Therefore, we have that
\[(f*\varphi_{\varepsilon})(t) = \int_{-\varepsilon}^{+\varepsilon}f(t-y)\varepsilon^{-1}\varphi \left(\frac{y}{\varepsilon}\right)dy \tag{10}\] \[= \varepsilon^{-1}\int_{-\varepsilon}^{+\varepsilon}f(t)\varphi \left(\frac{y}{\varepsilon}\right)dy\]
For the case where \(\varepsilon\to 0\), we have
\[(f*\varphi_{\varepsilon})(t) = \varepsilon^{-1}\int_{-\varepsilon}^{+\varepsilon}(2c-1)\varphi \left(\frac{y}{\varepsilon}\right)dy \tag{11}\]
Making a change of variable \(u=y/\varepsilon,\) we have
\[(f*\varphi_{\varepsilon})(t) = (2c-1)\int_{-1}^{+1}\varphi(u)du \tag{12}\] \[= 2c-1,\]
where \(\int_{-1}^{+1}\varphi(u)du=1.\)
Note that in the vicinity of zero the convolution of \(f(t)\) equals \(2c-1,\) while outside this vicinity it is \(-1\) or \(+1;\) this is the correct reading of the convolution of \(f(t)\). Also note that at the _Big Bang_ (\(t=0\)) the convolution of \(f(t)\) equals \(2c-1.\) This means that the signature-change transition does not occur on the surface \(t=0\) but after a sufficiently small interval \(0\leq t\leq\varepsilon,\) where \(\varepsilon\in]0,1];\) after this interval the convolution assumes the value \(+1\) and we recover the usual metric of the standard model.
The condition \(c<\frac{1}{2}\) is taken so that the function and its powers do not vanish at any point, making possible operations such as \(\frac{1}{f}\) and \(\frac{1}{f^{2}}\). Later, we will come across operations of the type \(\frac{\dot{f}}{f(t)}\) and \(\frac{\dot{f}}{f^{2}(t)}\). According to the standard calculus of distributions we have
\[\dot{f}(t) = \dot{\theta}(t)-\dot{\theta}(-t) \tag{13}\] \[= \delta(t)-\delta(-t)(-1)\] \[= \delta(t)+\delta(t)\] \[= 2\delta(t),\]
since \(\delta(-t)=\delta(t).\) The multiplication of the distribution \(\delta(t)\) with the generalized functions \(\frac{1}{f(t)}\) and \(\frac{1}{f^{2}(t)}\) is not defined in the theory of distributions but is well defined in the sense of Colombeau algebras, as in Definition 1.2.
Thus, with the regularization of the function \(f(t)\) it is possible to have a change of signature in the FLRW metric in a period that \(t>0\). Making possible a study in the interval in which the space-time had a Euclidean region, i.e., positive signature.
According to [3], this change may have occurred at the Planck epoch in the FLRW universe, or at a less extreme period, such as the GUT epoch. Regardless of the stage, the way in which we construct the function \(f(t)\) allows for a freedom in when the signature change may have occurred. To do this, simply assign different values to \(\varepsilon,\) where \(\varepsilon\in]0,1];\) similarly there is
a freedom for \(f(t)\) to take on different values at t=0 by simply choosing a different value for \(c\), \(0<c<\frac{1}{2}\).
For the sake of analysis, without worrying about the period in which the transition occurred, let us look at the convolution plot of \(f(t)\) for different values of \(\varepsilon\) and \(c=0.35\), that is, at the origin the function assumes the value \(-0.3\) (see Fig. 2.1).
It is evident that there is a smooth evolution of the function in a period \(t>0\), as well as the sign transition. This is an important point for our approach, since the smoothing proposed by [4, 5] occurs at \(t<0\), and does not allow for a satisfactory physical description.
Figure 1: Function \(f(t)\) regularized by convolution.
### The function \(f(t)\) in terms of redshift.
It is convenient to represent the function \(f\) in terms of redshift because of the redshift's importance in Cosmology. For this, we will use the relationship between the cosmic time \(t\) and the redshift \(z\) given by [20]. The conditions for the validity of the relation expressed in (14) are established in the previous reference. According to [20], we have
\[t(z)=\frac{2H_{0}^{-1}}{1+(1+z)^{2}}. \tag{14}\]
From that, we get
\[z(t)=\sqrt{\frac{2H_{0}^{-1}}{t}-1}-1. \tag{15}\]
Note that \(t\circ z(t)=t(z(t))=t\) and \(z\circ t(z)=z(t(z))=z\).
Thus, the function \(z(t)\) is the inverse of \(t(z)\). Therefore,
\[t^{-1}(z)=\sqrt{\frac{2H_{0}^{-1}}{z}-1}-1, \tag{16}\]
where \(0<z<H_{0}^{-1}\) and \(z\leq 2H_{0}^{-1}\). With this, we can now define the functions \(\theta(t^{-1})\) and \(\theta(-t^{-1})\). Using Eq.(16), we have
\[\theta(t^{-1})\equiv\gamma(z)=\left\{\begin{array}{ll}1&\mbox{if}\ \ \sqrt{\frac{2H_{0}^{-1}}{z}-1}-1>0\Rightarrow z<H_{0}^{-1}\\ 0&\mbox{if}\ \ \sqrt{\frac{2H_{0}^{-1}}{z}-1}-1=0\Rightarrow z=H_{0}^{-1}\\ c&\mbox{if}\ \ \sqrt{\frac{2H_{0}^{-1}}{z}-1}-1<0\Rightarrow z>H_{0}^{-1} \end{array}\right.. \tag{17}\]
Similarly, we obtain
\[\gamma(-z)=\left\{\begin{array}{ll}1&\mbox{if}\ \ \ z>H_{0}^{-1}\\ 0&\mbox{if}\ \ \ z=H_{0}^{-1}\\ c&\mbox{if}\ \ \ z<H_{0}^{-1}\end{array}\right.. \tag{18}\]
Having in hand the functions \(\gamma(z)\) and \(\gamma(-z)\), we will define a new function \(g\), which is given by
\[g(z)=\gamma(z)-\gamma(-z), \tag{19}\]
that is,
\[g(z)=\left\{\begin{array}{cl}1&\mbox{if}\ \ \ z<H_{0}^{-1}\\ 2c-1&\mbox{if}\ \ \ z=H_{0}^{-1}\\ -1&\mbox{if}\ \ \ z>H_{0}^{-1}\end{array}\right.. \tag{20}\]
Analogously to what we did for \(f(t)\), we can do the same for \(g(z)\); that is, for a sufficiently small value of \(\varepsilon\) the function \(g(z)\) converges to \(2c-1\), so the function does not vanish at the origin and does not cause problems at that point. This way it is possible to analyze the change of the signature of the metric from a new perspective.
In view of the relation of cosmic time \(t\) to redshift \(z\) expressed by Eq.(14) we define a new function \(g(z)\). With this it was possible to analyze the signature change from the perspective of redshift. Similarly to the function \(f(t)\), we can adjust the value of \(\varepsilon\) so that the change occurs at a different \(z\). Let us see the graph of the convolution of \(g(z)\) in Fig. 2.2.
With the regularization of these functions by convolution, they belong to the space of generalized Colombeau functions, and due to the properties of \(\varphi_{\varepsilon}\), they become \(\mathscr{C}_{0}^{\infty}(\mathbb{R})\) functions.
Figure 2: Function \(g(z)\) regularized by convolution.
## 3 Friedmann equations
From the modified FLRW metric, given by Eq.(3), we can deduce the Friedmann equations, and thus study the expansion of the universe in this context. First, we calculate the relevant components of the Einstein tensor for the metric by Eq.(3)
\[G_{00}=\frac{3\dot{a}^{2}}{a^{2}}-\frac{3kf}{a^{2}}. \tag{21}\]
\[G_{11}=\frac{1}{f(1-kr^{2})}\left(-2a\ddot{a}-\dot{a}^{2}+kf+\frac{\dot{f}}{f} a\dot{a}\right). \tag{22}\]
\[G_{22}=\frac{r^{2}}{f}\left(-2a\ddot{a}-\dot{a}^{2}+kf+\frac{\dot{f}}{f}a\dot{ a}\right). \tag{23}\]
\[G_{33}=\frac{r^{2}\sin^{2}\theta}{f}\left(-2a\ddot{a}-\dot{a}^{2}+kf+\frac{\dot{f}}{f}a\dot{a}\right). \tag{24}\]
The energy-momentum tensor of a perfect fluid is given by
\[T_{\alpha\beta}=(\rho+p)u_{\alpha}u_{\beta}+pg_{\alpha\beta} \tag{25}\]
Since the time component of the energy-momentum tensor is simply the energy density, that is, \(T_{00}=\rho\), we obtain the first Friedmann equation
\[\left(\frac{\dot{a}}{a}\right)^{2}-\frac{kf}{a^{2}}=\frac{8\pi G\rho f}{3}, \tag{26}\]
for the case where the Universe is flat, that is, \(k=0\), we have
\[\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G\rho f}{3}. \tag{27}\]
We also obtain the second Friedmann equation by considering the spatial components of the Einstein equations (recall that the spatial components of the energy-momentum tensor are simply the pressure, that is, \(T_{ij}=p\) for \(i=j\)), that is,
\[G_{ij}=8\pi GT_{ij}. \tag{28}\]
Substituting equations (22) - (24), into the previous equation, and using \(T_{ij}=p,\mbox{for }i=j\), we obtain
\[\frac{2}{f}\left(\frac{\ddot{a}}{a}\right)+\frac{1}{f}\left(\frac{\dot{a}}{a} \right)^{2}-\frac{k}{a^{2}}-\frac{\dot{f}\dot{a}}{f^{2}a}=-8\pi Gp. \tag{29}\]
In the particular case where \(k=0\) the equation is given by
\[\frac{2}{f}\left(\frac{\ddot{a}}{a}\right)+\frac{1}{f}\left(\frac{\dot{a}}{a} \right)^{2}-\frac{\dot{f}\dot{a}}{f^{2}a}=-8\pi Gp. \tag{30}\]
Substituting Eq.(27) into Eq.(30), we have
\[\left(\frac{\ddot{a}}{a}\right)=-\frac{4\pi G}{3}\left[\rho f+3pf-\frac{\dot{f} }{f}\left(\frac{3}{8\pi G}\right)^{\frac{1}{2}}\sqrt{\rho f}\right]. \tag{31}\]
We can rewrite this equation in a more compact form as follows
\[\left(\frac{\ddot{a}}{a}\right)=-\frac{4\pi G}{3}\left(\rho_{T}+3P_{T}\right), \tag{32}\]
where \(\rho_{T}\equiv\rho f-\frac{\dot{f}}{f}\left(\frac{3}{8\pi G}\right)^{\frac{1}{2}}\sqrt{\rho f}\) and \(P_{T}\equiv pf\) are the total density and the total pressure, respectively, written in terms of the function \(f\). Note that the previously obtained equations reduce to the standard case when the function is equal to one.
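As a quick consistency check of the algebra above, the following sympy sketch (ours, not part of the original paper; all symbol values are placeholders for quantities at a fixed instant) substitutes Eq.(27) with \(k=0\) into Eq.(30) and compares the result with the right-hand side of Eq.(31).

```python
import sympy as sp

# Consistency sketch (not from the paper): symbols stand for the values of
# a, f, rho, p, G and \dot{f} at a fixed instant of time.
a, f, rho, p, G, fdot = sp.symbols('a f rho p G fdot', positive=True)
pi = sp.pi

adot = a * sp.sqrt(8 * pi * G * rho * f / 3)                 # Eq. (27), k = 0

# Eq. (30) solved for \ddot{a}/a:
addot_over_a = (f / 2) * (-8 * pi * G * p - (adot / a)**2 / f + fdot * adot / (f**2 * a))

# Right-hand side of Eq. (31):
rhs31 = -(4 * pi * G / 3) * (rho * f + 3 * p * f
                             - (fdot / f) * sp.sqrt(3 / (8 * pi * G)) * sp.sqrt(rho * f))

print(sp.simplify(addot_over_a - rhs31))                     # expected: 0
vals = {a: 1.0, f: 0.7, rho: 2.3, p: 0.4, G: 1.0, fdot: 0.2}
print((addot_over_a - rhs31).subs(vals).evalf())             # ~0 up to round-off
```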
Now, let us express the conservation equation. For this, we differentiate Eq.(27) with respect to \(t\). Thus, we have
\[\frac{8\pi G}{3}\dot{\rho}=-\frac{\dot{f}}{f^{2}}\left(\frac{\dot{a}}{a} \right)^{2}+\left(\frac{\dot{a}}{a}\right)\left[\frac{2\ddot{a}}{fa}-\frac{2} {f}\left(\frac{\dot{a}}{a}\right)^{2}\right]. \tag{33}\]
Substituting equations (27) and (30) into Eq.(33) and performing some operations, we have
\[\frac{8\pi G}{3}\dot{\rho}=-8\pi G\frac{\dot{a}}{a}(p+\rho). \tag{34}\]
Therefore, we obtain the conservation equation
\[\dot{\rho}=-3H(p+\rho), \tag{35}\]
which describes the evolution of the energy density in time, where \(H=\frac{\dot{a}}{a}\).
The set of equations obtained here describes the dynamics of the Universe. We can evaluate the conditions for the accelerated expansion of the Universe: for \(\ddot{a}>0\), we must have \(\rho_{T}+3P_{T}<0\), so
\[P_{T}<-\frac{1}{3}\rho_{T}. \tag{36}\]
We now express this condition in terms of the functions \(f\) and \(\dot{f}\). From Eq.(31), we have
\[\rho f+3pf-\frac{\dot{f}}{f}\left(\frac{3}{8\pi G}\right)^{\frac{1}{2}}\sqrt{ \rho f}<0. \tag{37}\]
Using \(p=w\rho\) in (37) we obtain
\[\rho f(1+w)-\frac{\dot{f}}{f}\left(\frac{3}{8\pi G}\right)^{\frac{1}{2}}\sqrt{ \rho f}<0. \tag{38}\]
Hence, we can rewrite this inequality in terms of parameter \(w\), obtaining the following expression
\[w<\frac{\dot{f}}{3}\left(\frac{3}{8\pi Gf^{3}}\right)^{\frac{1}{2}}\frac{1}{ \sqrt{\rho}}-\frac{1}{3}. \tag{39}\]
We can write this expression more compactly using the energy density given by Friedmann's first equation, Eq.(27). This allows us to obtain the condition for \(w\) in terms of the Hubble parameter. Thus, we have
\[w<\frac{\dot{f}}{3}\left(\frac{3}{8\pi Gf^{3}}\right)^{\frac{1}{2}}\left( \frac{8\pi Gf}{3}\right)^{\frac{1}{2}}-\frac{1}{3}. \tag{40}\]
Finally, we obtain
\[w<\frac{1}{3H}\frac{\dot{f}}{f^{2}}-\frac{1}{3}. \tag{41}\]
Note that the result is similar to the usual one, differing by extra terms that arise as a result of the change in the metric. Note also that this condition varies over time, as \(f\) and \(H\) also depend on time. In the case where \(f\) equals 1, we obtain the already known result.
From the regularization of the functions \(f(t)\) and \(g(z)\) and the conditions imposed on them, it was possible to construct the equations presented here. Initially, we used specifically the function \(f(t)\) and derived the first Friedmann equation and the equation of acceleration influenced by it so that in
a primitive universe, \(f(t)\) assumes a different value and as time evolves it assumes value 1 and returns to the standard case.
Moreover, we also observe that the conservation equation (Eq.35) remains invariant. This shows that regardless of the period, whether in the Euclidean or Lorentzian regime, the conservation equation remains, that is, it is not influenced by the change of signature.
Another interesting result is that, due to the function \(f(t)\), new terms appear in the acceleration equation, written more compactly using \(\rho_{T}\) and \(P_{T}\). Thus, we describe the conditions necessary for the accelerated expansion of the universe, given by Eq.(36). With this, we expressed this condition in terms of the parameter \(w\) of the equation of state. In our case, the parameter varies with time and is written in terms of \(f(t)\) and the Hubble parameter, and it is possible to estimate the condition for \(w\) in a Euclidean regime (when \(t\) is small enough).
### Equation of state
The parameter \(w\) of the state equation provides the relationship between energy density and pressure as follows [21]
\[w_{i}=\frac{p_{i}}{\rho_{i}}. \tag{42}\]
This dimensionless parameter can be used in the FLRW equations to define the evolution of an isotropic universe filled with a perfect fluid [22]. The \(w\) parameter can be a constant or time-dependent function [23].
Here, let's consider \(w=w(t)\) and determine an expression in terms of the Hubble parameter \(H\) and the function \(g(z)\). Substituting Eq.(42) in Eq.(35), we have
\[\dot{\rho}=-3H\rho(1+w), \tag{43}\]
we can rewrite it as
\[\frac{1}{\rho}\frac{d\rho}{dt}=-\frac{3}{a}\frac{da}{dt}(1+w). \tag{44}\]
Integrating with respect to time, we have
\[\int_{\rho_{0}}^{\rho^{\prime}}\frac{d\rho}{\rho}=-3\int_{a_{0}}^{a^{\prime}} \frac{da}{a}(1+w), \tag{45}\]
where \(\rho^{\prime}\) and \(a^{\prime}\) represent arbitrary values of the energy density and the scale factor, respectively. Therefore, it follows that
\[\ln\left(\frac{\rho}{\rho_{0}}\right)=-3\int_{a_{0}}^{a^{\prime}}\frac{da}{a} (1+w). \tag{46}\]
With this, we have that
\[\xi=\exp\left(-3\int_{a_{0}}^{a^{\prime}}\frac{da}{a}(1+\omega)\right), \tag{47}\]
where we define \(\xi=\frac{\rho}{\rho_{0}}\). Differentiating \(\xi\) with respect to the scale factor, we have
\[\frac{d\xi}{da}=\xi\left(-\frac{3(1+\omega)}{a}\right). \tag{48}\]
Rearranging the terms, this expression gives us
\[w=-1-\frac{a}{3\xi}\frac{d\xi}{da}. \tag{49}\]
Using Eq.(27) in terms of function \(g=g(z)\), that is, \(3H^{2}=8\pi G\rho g\) and \(\xi=\frac{\rho}{\rho_{0}}\), we have
\[w=-1-\frac{a}{3}\left(\frac{2}{H}\frac{dH}{da}-\frac{1}{g}\frac{dg}{da}\right). \tag{50}\]
Through the relation \(a=(1+z)^{-1}\), we have
\[da=-(1+z)^{-2}dz, \tag{51}\]
which allows us to rewrite Eq.(50) in terms of the redshift derivative. Therefore, after substituting the relation \(a=(1+z)^{-1}\) and Eq.(51) in Eq.(50), performing the necessary operations, we obtain
\[w=-1+\frac{2}{3}\frac{(1+z)}{H}\frac{dH}{dz}-\frac{1}{3}\frac{(1+z)}{g}\frac{ dg}{dz}. \tag{52}\]
In the case where \(w\) is constant, we obtain
\[\rho=\rho_{0}\left(\frac{a}{a_{0}}\right)^{-3(1+w)}, \tag{53}\]
that is, the same result found in the standard model.
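The change of variables leading to Eq.(52) can be verified mechanically. The following sympy sketch (ours, not from the paper) rewrites Eq.(50) using \(a=(1+z)^{-1}\) and Eq.(51), and compares it with Eq.(52).

```python
import sympy as sp

# Sketch (not from the paper): check that rewriting Eq. (50) with a = (1+z)^{-1}
# and da = -(1+z)^{-2} dz, i.e. Eq. (51), reproduces Eq. (52).
z = sp.symbols('z', positive=True)
H = sp.Function('H')(z)
g = sp.Function('g')(z)

a = 1 / (1 + z)
dH_da = -(1 + z)**2 * sp.diff(H, z)      # chain rule: d/da = -(1+z)^2 d/dz
dg_da = -(1 + z)**2 * sp.diff(g, z)

w50 = -1 - (a / 3) * (2 / H * dH_da - 1 / g * dg_da)                      # Eq. (50)
w52 = (-1 + sp.Rational(2, 3) * (1 + z) / H * sp.diff(H, z)
          - sp.Rational(1, 3) * (1 + z) / g * sp.diff(g, z))              # Eq. (52)

print(sp.simplify(w50 - w52))            # expected: 0
```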
## 4 Cosmological parameters
The expression that provides the special value of the density for which the Universe becomes spatially flat (\(k=0\)) is obtained, in this context, from the first Friedmann equation, Eq.(27). In this case, the critical density is defined by
\[\rho_{\rm cri}=\frac{3H^{2}}{8\pi Gf}. \tag{54}\]
In order to study how the universe might evolve, we need some idea of what is in it. A more general situation is when you have a mixture of matter and radiation. Thus, we have that the total energy density is written in the form \(\rho=\rho_{m}+\rho_{\gamma}\).
Here, we will express the Hubble parameter in terms of redshift and the density parameter, which is defined by \(\Omega(t)=\frac{\rho}{\rho_{\rm cri}}\). Through Eq.(53) and from the relation \(a=(1+z)^{-1}\) we can express \(\rho_{m}\) and \(\rho_{\gamma}\). For the density of matter, we have \(w=0\). Thus, we obtain
\[\rho_{m}=\rho_{0,m}(1+z)^{3}. \tag{55}\]
For the case of radiation, we have \(w=\frac{1}{3}\). From this, it follows that
\[\rho_{\gamma}=\rho_{0,\gamma}(1+z)^{4}. \tag{56}\]
Thus, the density parameter for each material is
\[\Omega_{0,m}=\frac{\rho_{0,m}}{\rho_{0,{\rm cri}}}\quad\text{and}\quad\Omega_{0,\gamma}=\frac{\rho_{0,\gamma}}{\rho_{0,{\rm cri}}}. \tag{57}\]
Therefore, the equation \(3H^{2}=8\pi G\rho f\) is
\[3H^{2}=8\pi G(\rho_{m}+\rho_{\gamma})f. \tag{58}\]
Substituting equations (55), (56), and (57) into (58), we obtain
\[3H^{2}=8\pi G\rho_{0,{\rm cri}}\left[\Omega_{0,m}(1+z)^{3}+\Omega_{0,\gamma}(1+ z)^{4}\right]f. \tag{59}\]
From Eq.(54), we have that
\[\rho_{0,{\rm cri}}=\frac{3H_{0}^{2}}{8\pi G}, \tag{60}\]
because the value of the function \(f\) today is 1. Therefore, by substituting Eq.(60) into Eq.(59) we obtain
\[H^{2}=H_{0}^{2}\left[\Omega_{0,m}(1+z)^{3}+\Omega_{0,\gamma}(1+z)^{4}\right]f. \tag{61}\]
It is interesting to note that this expression changes over time, since \(H\) and \(f\) also evolve.
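As an illustration of how Eq.(61) can be evaluated, the sketch below (ours, not from the paper) computes \(H^{2}(z)\) for assumed values of \(H_{0}\), \(\Omega_{0,m}\) and \(\Omega_{0,\gamma}\), and for an assumed smooth profile playing the role of the regularized sign function (the paper's \(g(z)\)); the transition redshift and smoothing width are purely illustrative choices.

```python
import numpy as np

# Sketch (ours, not the paper's code). All numbers below are illustrative assumptions:
# H0 and the density parameters are placeholders, and g_of_z is one possible smooth
# regularization of the sign-change function (value ~1 today, 2c-1 at the transition,
# ~-1 beyond it); the transition redshift z_c and the width are arbitrary choices.
H0 = 1.0
Omega_m, Omega_g = 0.3, 1.0e-4
c, z_c, width = 0.35, 1.0e3, 50.0

def g_of_z(z):
    step = 0.5 * (1.0 - np.tanh((z - z_c) / width))        # ~1 below z_c, ~0 above
    bump = (2.0 * c - 1.0) * np.exp(-((z - z_c) / width) ** 2)
    return (2.0 * step - 1.0) + bump                       # ~1, then 2c-1 at z_c, then ~-1

def hubble_sq(z):
    """Eq. (61): H^2 = H0^2 [Omega_m (1+z)^3 + Omega_gamma (1+z)^4] f."""
    return H0**2 * (Omega_m * (1 + z)**3 + Omega_g * (1 + z)**4) * g_of_z(z)

# A negative H^2 at large z reflects the Euclidean (f < 0) regime.
for z in (0.0, 10.0, 1.0e3, 5.0e3):
    print(f"z = {z:7.1f}   g = {g_of_z(z):+.3f}   H^2 = {hubble_sq(z):+.4e}")
```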
## 5 Conclusion
This paper applies convolution in the sense of Colombeau algebras to deal with a function of distributional nature, \(f(t)\), capable of performing the sign change smoothly, so that we can work with its derivative and its powers; with this we construct the equations presented here. In our approach, we choose a way to present the function \(f(t)\) that reinterprets the results from the literature. We take as a starting point the fact that the function changes sign without passing through the origin. By doing so, we avoid problems at \(t=0\) and derive the standard model equations in the context of the altered FLRW metric. With this, we can make explicit the Friedmann equations influenced by \(f(t)\), show that the conservation equation is not influenced by this function, and also show the conditions for the accelerated expansion of the universe as well as the equation of state in this context. Finally, we determine the critical density and the Hubble parameter in terms of the density parameter.
In our work, the results obtained provide a physically more interesting description compared to the works [4, 5], since the transition occurs at \(t>0\): after the Big Bang (\(t=0\)) there is a sufficiently small time interval \(t>0\) in which the Universe was ruled by a Euclidean regime, and, as the Universe evolves, space-time becomes Lorentzian and continues as we know it today.
|
2305.14529 | Topological edge state transfer via topological adiabatic passage | The study of quantum state transfer has led to a variety of research efforts
utilizing quantum simulators. By exploiting the tunability of the qubit
frequency and qubit-qubit coupling, a superconducting qubit chain can simulate
various topological band models. In our study, we demonstrate that a spin-up
state can be transported along a topological qubit chain by modulating the
coupling strengths and the qubit frequencies. We show that the Hilbert space of
the qubit chain can be restricted to the subspace of two edge states in this
process, while the Hamiltonian degenerates into a two-state Landau-Zener (LZ)
model. Furthermore, we prove that the state transfer process in this
topological qubit chain is equivalent to the topological adiabatic passage of
the LZ model. With this analysis, we generalize the state transfer approach
from single-qubit Fock states to two-qubit Bell states. | Chong Wang, Xiu Gu, Shu Chen, Yu-xi Liu | 2023-05-23T21:10:30Z | http://arxiv.org/abs/2305.14529v3 | # Topological edge state transfer via topological adiabatic passage
###### Abstract
The study of quantum state transfer has led to a variety of research efforts utilizing quantum simulators. By exploiting the tunability of the qubit frequency and qubit-qubit coupling, a superconducting qubit chain can simulate various topological band models. In our study, we demonstrate that a spin-up state can be transported along a topological qubit chain by modulating the coupling strengths and the qubit frequencies. We show that the Hilbert space of the qubit chain can be restricted to the subspace of two edge states in this process, while the Hamiltonian degenerates into a two-state Landau-Zener (LZ) model. Furthermore, we prove that the state transfer process in this topological qubit chain is equivalent to the topological adiabatic passage of the LZ model. With this analysis, we generalize the state transfer approach from single-qubit Fock states to two-qubit Bell states.
Footnote †: These authors contributed equally to this work.
## I Introduction
The discovery of topological insulators [1] triggered extensive studies in topological phases of matter. Topological insulators, topological superconductors, and topological semimetals are just a few examples. It is well known that the power of topological protection stems from global geometric properties characterized by topological invariants. Topological protection can result in many potential applications, for example, topological quantum computation [2], in which the non-Abelian states of matter are used to encode and manipulate quantum information in a nonlocal manner. These nonlocal global states are topologically protected, and are more robust against the decoherence of qubit states or local impurities of quantum computational devices.
Topological phenomena were first demonstrated in crystals and other condensed matter systems. Recently, the study of topological physics has been extended to photonic systems, ultracold atoms, and ultracold gases in optical lattices [3; 4; 5]. These systems enhance the possibility of creating and probing new topological phases. Furthermore, inspired by quantum computing, in which the qubits and their couplings can be controlled or tuned, topological physics is studied via quantum computational devices. The reason is twofold. First, some exotic topological states, which are not easy to find in natural systems, may be created and probed by artificially designing and fabricating on-demand quantum computational devices. Second, some topological states, which are experimentally difficult to create in natural systems, may be simulated via quantum simulators [6].
The simplest model exhibiting topological character is the Su-Schrieffer-Heeger (SSH) model [7; 8; 9; 10]. It has been extensively studied by theorists [11; 12; 13; 14] and has attracted interest from different experimental platforms (e.g., in Refs. [15; 16; 17; 18; 19; 20; 21]). For example, in cold atoms, the topological invariant of the one-dimensional band, also known as the Zak phase [22], was measured [18]. Because of the bulk-edge correspondence, the band invariant is associated with the existence of edge states. The edge signal is not easy to resolve from the bulk in the real-space lattice of cold atoms. Recently, in the momentum space of cold atoms, the dynamics of edge states was probed [19]. The quantized transport of particles, known as the Thouless pump [23], was also demonstrated in cold atoms by modulating the on-site potential and coupling strength of the SSH model [20; 21].
Recently, topological physics has been explored through superconducting quantum circuits (or superconducting artificial atoms) [24; 25; 26]. Unlike natural atoms, these circuits can be fabricated with well-tailored characteristic frequencies and other parameters. The exquisite control of superconducting quantum circuits makes it possible to simulate topological band models on a single superconducting qubit. This is achieved by mapping the bulk momentum space of a topological band model onto the parameter space of a spin in an external magnetic field [27]. The Berry phase was first measured in a single superconducting qubit [28; 29; 30; 31; 32]. Via the Berry phase, topological invariants characterizing the band properties were also measured [33; 34; 35]. The space-time inversion symmetric topological semimetal [36] and topological Maxwell metal bands [37] were also simulated in a single superconducting qubit circuit. Experimental efforts are now directed to large scales of superconducting qubits. As an initial step towards realizing the fractional quantum Hall effect, anyonic fractional statistical behavior is emulated in superconducting circuits with four qubits coupled via a quantized microwave field [38]. Also, directional transport of photons was observed on a unit cell formed by three superconducting qubits [39]. In this design, qubits play the role of the lattice sites, whereas the synthetic topological materials are made of photons. There are various interesting theoretical proposals to study topological physics based on superconducting circuits [40; 41; 42; 43; 44; 45; 46; 47].
Here, rather than using microwave photons coupled by superconducting qubits, we propose to simulate topological physics with a chain of coupled superconducting qubits. As a simulator of spin physics, the coupled superconducting qubits are widely studied [48; 49; 50; 51; 52]. For instance, quantum annealing was demonstrated experimentally on an Ising spin chain comprised of eight superconducting flux qubits [50]. Due to the improved controllability and fabrication technique of superconducting circuits, it becomes accessible to fabricate tens of qubits with various types of couplings. The qubit frequency and qubit-qubit coupling strengths can all be tuned in situ, making the whole superconducting qubit chain versatile enough to simulate topological models [53; 10; 54]. Inspired by the studies in other systems (e.g., In Refs. [15; 16; 17; 18; 19; 20; 21]), we here study topological edge states and pumping by constructing the SSH model and Rice-Mele model [9; 10] using gap-tunable flux qubit circuits [55; 56; 57; 58].
The paper is organized as follows. In Section II, we briefly introduce the topological qubit chain constructed of gap-tunable flux qubits. This spin chain model can be mapped to the SSH model or the Rice-Mele model when restricted to the single-excitation subspace. In Section III, we show that the single-qubit edge state can be transported from one end of the chain to the other end by adiabatic pumping. We theoretically analyze this pumping process and propose an optimized pumping protocol. In Section IV, we generalize the state pumping protocol to two-qubit state transfer with a trimer Rice-Mele model. In Section V, we summarize our results and further discuss possible demonstrations of topological physics using superconducting qubit circuits. For completeness, we also give a detailed superconducting circuit analysis for the spin chain in the appendix.
## II Topological qubit chain with superconducting qubits
As schematically shown in Fig. 1(a), we study a superconducting quantum circuit in which \(2N\) identical superconducting qubits are coupled to form a chain with alternating coupling strengths. That is, the coupling strength between the qubit (marked in green) on the odd sites and its right neighbor (marked in orange) is \(a\), while the qubit (marked in orange) on the even sites is coupled to its right neighbor (marked in green) with an amplitude \(b\). It is well known that the coupling constants \(a\) and \(b\) between qubits can be designed to be tunable in superconducting qubit circuits by using, e.g., an additional coupler, a variable qubit frequency, detuning between qubits, or frequency matching [59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70].
In principle, the coupled qubit chain can be constructed with any type of superconducting qubit, e.g., flux qubits [71; 72], transmons [73], xmons [74] and gmons [75; 76]. Compared with the qubits of Refs. [73; 74; 75; 76], the gap-tunable flux qubit circuit [55; 56; 57; 58] has negligibly small population leakage from the first excited state to other upper states when the two lowest energy levels are operated as the qubit near the optimal point. Also, the frequency of the gap-tunable flux qubit can be tuned via the magnetic flux through the \(\alpha\)-loop or the main loop. Moreover, the gap-tunable flux qubit has a better coherence time. Thus, for concreteness of the discussions and as an example, we first assume that each qubit in the chain is implemented by a gap-tunable flux qubit circuit [55; 57; 77], as schematically shown in Fig. 1(b) and further explained in Appendix A. Through the detailed study below for the chain of gap-tunable flux qubit circuits, we will give a comparative summary for chains formed by other types of superconducting qubits in the discussions.
The qubit chain consists of \(N\) unit cells, and each unit cell contains two sublattices labeled as \(A_{n}\) and \(B_{n}\) with \(n=1,2,\cdots,N\). The couplings between adjacent qubits are staggered, denoted as \(a\) and \(b\). Here we relabel each qubit in the chain by one increasing index \(j\) from \(A_{1}\) to \(B_{N}\), then the Hamiltonian of such a qubit chain can be
Figure 1: (a) Schematic diagram of a one-dimensional topological qubit chain. Each unit cell hosts two qubits, labeled as \(A_{n}\) and \(B_{n}\) separately. The coupling strengths between \(A_{n}\) and \(B_{n}\) are staggered, denoted as \(a\) and \(b\). Each qubit can also be labeled by one increasing numerical index \(j\) from left to right. (b) Realization of the qubit chain with gap-tunable flux qubits. Each qubit contains two basic superconducting circuit elements, i.e., the josephson junction and the inductance. Two adjacent qubits are coupled through a tunable coupler (CP). The qubit-qubit coupling can be tuned via an external magnetic flux \(\Phi_{\rm{ext}}\) with a dc control. The signs \(\bigcirc\) denote that the magnetic fluxes are directed outside. We use reduced magnetic fluxes \(f_{\alpha}=\Phi_{\alpha}/\Phi_{0}\), \(f_{\epsilon_{1}}=\Phi_{1}/\Phi_{0}\), and \(f_{\epsilon_{2}}=\Phi_{2}/\Phi_{0}\). Here \(\Phi_{\alpha}\), \(\Phi_{1}\) and \(\Phi_{2}\) are the magnetic fluxes through the three small loops of the qubit. \(\Phi_{0}\) is the flux quanta. The frequency of the gap-tunable flux qubit can be tuned via the magnetic flux through the \(\alpha\)-loop or main loop, thus both the qubit frequencies and the couplings in the qubit chain are tunable.
written as (\(\hbar=1\))
\[H = \sum_{j=1}^{2N}\frac{\omega_{j}}{2}\sigma_{j}^{z}+\sum_{j\in\text{ odd}}^{2N}a(\sigma_{j}^{+}\sigma_{j+1}^{-}+\text{H.c.}) \tag{1}\] \[+ \sum_{j\in\text{even}}^{2N}b(\sigma_{j}^{+}\sigma_{j+1}^{-}+\text{ H.c.}),\]
with \(\omega_{j}\) denoting the frequency of the qubit. \(\sigma_{j}^{+}\) and \(\sigma_{j}^{-}\) are the creation and annihilation operators of the \(j\)th qubit.
The above Hamiltonian reduces to the Rice-Mele model or the SSH model when restricted to the single-excitation subspace of the qubit chain. In most studies, the qubit operators are mapped onto non-interacting fermions through the Jordan-Wigner transformation [78]. However, here we find that the total spin excitation \(\sum_{j=1}^{2N}\sigma_{j}^{z}\) commutes with the Hamiltonian \(H\) in Eq. (1). Thus the total number of excitations of the qubit chain is conserved. In the following analysis, instead of resorting to the nonlocal Jordan-Wigner transformation, we restrict our study to the single-excitation subspace. That is, only one qubit is excited in the qubit chain. This can be done in superconducting quantum circuits due to the strong anharmonicity of the flux qubit. We define the basis in the single-excitation subspace as
\[|e_{j}\rangle=|0,...,1_{j},0...\rangle, \tag{2}\]
where the \(j\)th superconducting qubit is assumed in the spin-up state \(|1_{j}\rangle\), while the other qubits are in the spin-down states \(|0\rangle\). Thus in the subspace with only one excitation, the Hamiltonian \(H\) in Eq. (1) has the tridiagonal form as
\[H_{S}=\begin{bmatrix}\omega_{1}&a&&&\\ a&\omega_{2}&b&&&\\ &b&\omega_{3}&a&&&\\ &&\ddots&\ddots&\ddots&\\ &&&b&\omega_{2N-1}&a\\ &&&a&\omega_{2N}\end{bmatrix}. \tag{3}\]
Here, the subscript \(S\) denotes the single-excitation subspace. If the qubits on odd (even) sites are identical, then the Hamiltonian \(H_{S}\) can be reduced to the Rice-Mele model as
\[H_{\text{RM}} = \sum_{n}\left(\omega_{1}A_{n}^{\dagger}A_{n}+\omega_{2}B_{n}^{ \dagger}B_{n}\right) \tag{4}\] \[+ \sum_{n}\left(aA_{n}^{\dagger}B_{n}+bB_{n}^{\dagger}A_{n+1}+\text {H.c.}\right),\]
where \(A_{n}\) and \(B_{n}\) are the single-excitation annihilation operators of each qubit in the \(n\)th unit cell, which can be expanded as \(A_{n}=|\O\rangle\langle e_{2n-1}|\) and \(B_{n}=|\O\rangle\langle e_{2n}|\). Here \(|\O\rangle\) denotes that all qubits in the chain are in the ground state, i.e., \(|\O\rangle=|0,\ldots,0\rangle\).
Furthermore, if all the qubits are identical, that is, \(\omega_{j}=\omega\) with \(j=1,\cdots,2N\), then the Hamiltonian in Eq. (4) is equivalent to the SSH model [10]. After shifting the zero-energy point to \(\omega\), this single-excitation Hamiltonian is given by
\[H_{\text{SSH}}=\sum_{n}\left(aA_{n}^{\dagger}B_{n}+bB_{n}^{\dagger}A_{n+1}+ \text{H.c.}\right). \tag{5}\]
The SSH model describes a chain of dimers, each hosting two sites A and B. The hopping strength within the unit cell is \(a\), while the intercell hopping amplitude is \(b\). In our qubit chain, as schematically shown in Fig. 1, the odd- (even-) numbered qubits in Eq. (1) correspond to the A (B) particles in Eq. (5). For this standard SSH model, the topological phases are characterized by winding numbers [10]. According to the bulk-edge correspondence [1], two topological edge states are supported when the qubit chain is in the topologically nontrivial phase. In Appendix C, we study the quench dynamics of the SSH chain when the first qubit is initially prepared in the spin-up state. As we can see, in the topologically nontrivial phase the existence of edge states reveals itself as a soliton localized at the very end of the chain, while in the topologically trivial phase the excitation at the first qubit will quickly diffuse into the bulk.
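As a numerical illustration of the bulk-edge correspondence discussed above, the following sketch (ours, not the paper's code; the chain length and coupling values are illustrative assumptions) builds the single-excitation Hamiltonian of Eq. (3) with identical qubit frequencies and alternating couplings, and checks that, for \(b>a\), two near-zero-energy eigenstates appear with their weight concentrated on the end qubits.

```python
import numpy as np

# Sketch: single-excitation Hamiltonian of Eq. (3) with identical qubits (frequencies
# shifted to zero) and alternating couplings a, b, a, b, ...
def h_single_excitation(n_cells, a, b, omega=0.0):
    n = 2 * n_cells
    h = np.diag(np.full(n, omega))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = a if j % 2 == 0 else b
    return h

h = h_single_excitation(n_cells=10, a=0.5, b=1.0)      # b > a: topologically nontrivial
evals, evecs = np.linalg.eigh(h)
idx = np.argsort(np.abs(evals))[:2]                    # the two states closest to zero energy
print("near-zero energies:", evals[idx])
end_weight = evecs[0, idx]**2 + evecs[-1, idx]**2      # weight on the two end qubits
print("weight of these states on the end qubits:", end_weight)
```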
The edge states are topologically protected and robust against disorder; thus it is quite straightforward to utilize these edge states for quantum state transfer. To precisely control the whole process, we need another degree of freedom as a control parameter, i.e., the on-site potentials of the qubits. Below we show how to transfer quantum states in the superconducting qubit chain mapped onto the Rice-Mele model.
## III Topological analysis of edge state transfer
For a topological chain, once an excitation is injected at the edge, it will stay as a soliton. Therefore, such a property of preserving quantum states gives topological matter great potential to store quantum information, which is a basic task of quantum information processing. Another basic task of quantum information processing is robust quantum state transfer. Fortunately, with the topological property of edge states in the topological qubit chain, it is possible to transfer the soliton edge state from the left end to the right end of the chain by adiabatic pumping.
Hereafter, we first present a straightforward protocol for transferring an edge state in the qubit chain. Then, a topological analysis will be given for this quantum state transfer process. Finally, based on the topological analysis, we will give an optimized protocol with better accuracy and robustness.
### Pumping of an edge state
If the on-site potentials of the Hamiltonian in Eq. (1) are staggered as \(u\) and \(-u\), the qubit chain can be mapped to the Rice-Mele model. The pumping can be realized if the staggered potentials and the coupling strengths in the Hamiltonian can be modulated. Such time modulations of the qubit frequencies and coupling strengths can be realized in superconducting qubit circuits, as discussed in Appendix B.
If we only consider the single-excitation states, as discussed in Section II, the time-dependent Hamiltonian with modulated qubit frequencies (see Appendix B) can be reduced to the Hamiltonian of the Rice-Mele model [79; 10] as
\[H_{\rm RM} = \sum_{n}\left[a(t)A_{n}^{\dagger}B_{n}+b(t)A_{n}^{\dagger}B_{n-1} +{\rm H.c.}\right] \tag{6}\] \[+ u(t)\sum_{n}(A_{n}^{\dagger}A_{n}-B_{n}^{\dagger}B_{n}),\]
where \(A_{n}^{\dagger}\) (\(B_{n}^{\dagger}\)) is the particle creation operator on the site A (B) in the \(n\)th cell, \(a(t)\) and \(b(t)\) are the time dependent coupling strengths, \(u(t)\) is the staggered potential. This Rice-Mele Hamiltonian can be continuously deformed along the time-dependent pump sequences given by \(u(t)\), \(a(t)\) and \(b(t)\). In superconducting quantum circuits, this can be done by varying the magnetic fluxes through the couplers and the \(\alpha\) loops of the superconducting qubits.
For example, as shown in Fig. 2, we can demonstrate the topological pumping by simply choosing the coupling strengths \(a(t)\) and \(b(t)\) as well as the on-site potential \(u(t)\) as
\[a(t) = 1-\cos\left(\frac{2\pi t}{T}\right),\] \[b(t) = 1,\] \[u(t) = \sin\left(\frac{2\pi t}{T}\right). \tag{7}\]
where \(T\) is the variation period of these parameters. Figures 2(a) and (b) show the time-dependent coupling strengths and on-site potentials versus time \(t\) during one period. The numerical simulation of the dynamical evolution of the qubit chain, shown in Fig. 2(c), is obtained with an ordinary differential equation solver based on backward differentiation formulas (BDF) [80]. The result shows that the soliton edge state can indeed be transferred from one end of the chain to the other within one pumping cycle. To ensure the adiabatic limit, we set \(T=100\). The chain is initialized in the all-spin-down state. To prepare the left edge state, the first qubit is flipped by a \(\pi\) pulse of the applied magnetic flux through the main loop of the qubit. However, this pumping process cannot be sustained for more than one cycle.
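For illustration, the simulation described above can be reproduced with the following minimal sketch (not the authors' code; it assumes \(\hbar=1\), \(2N=14\) qubits, and the pump sequence of Eq. (7) with \(T=100\)), which integrates the single-excitation Schrödinger equation with SciPy's BDF solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

N_SITES, T = 14, 100.0

def h_rm(t):
    """Single-excitation Rice-Mele Hamiltonian with the pump sequence of Eq. (7)."""
    a = 1.0 - np.cos(2 * np.pi * t / T)    # intracell coupling a(t)
    b = 1.0                                 # intercell coupling b
    u = np.sin(2 * np.pi * t / T)           # staggered on-site potential u(t)
    hops = np.array([a if j % 2 == 0 else b for j in range(N_SITES - 1)])
    onsite = np.array([u if j % 2 == 0 else -u for j in range(N_SITES)])
    return np.diag(onsite) + np.diag(hops, 1) + np.diag(hops, -1)

# i d|psi>/dt = H(t)|psi>, with the excitation injected at the first qubit.
psi0 = np.zeros(N_SITES, dtype=complex)
psi0[0] = 1.0
sol = solve_ivp(lambda t, psi: -1j * h_rm(t) @ psi, (0.0, T), psi0,
                method="BDF", t_eval=np.linspace(0.0, T, 201),
                rtol=1e-8, atol=1e-10)

# <sigma_j^z> = 2|psi_j|^2 - 1 in the single-excitation subspace.
sz = 2.0 * np.abs(sol.y) ** 2 - 1.0
print("final <sigma_z> of the last qubit:", sz[-1, -1])
```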
The instantaneous spectrum of the Hamiltonian in Eq. (6) is plotted as a function of \(t\) in Fig. 3(a). The black-solid lines represent the bulk states and the color-dashed lines represent the edge states. According to the quantum adiabatic theorem, if the adiabatic approximation holds during the pumping cycles, the system will stay in the same eigenstate as the prepared initial state. As the on-site potential increases, the two degenerate edge states separate from each other and develop into two branches. As shown in Fig. 3(b), the state at point b of the upper branch is mostly located on the left end of the chain. Therefore, if the initial state is prepared as the left edge state, it will adiabatically evolve along the upper branch, as shown by the red-dashed line in Fig. 3(a).
The wave functions of six particular points on the upper branch are plotted in Figs. 3(b) to (g). For the first pumping cycle, the result shown in Fig. 2(c) is consistent with the adiabatic limit in Fig. 3. The soliton first spreads as the left edge state with vanishing amplitudes on the even sites. Then it is pushed into the bulk, occupying both even and odd sites. After that, it reappears as the right edge state with vanishing amplitudes on the odd qubits. At the end of the first pumping cycle, as shown in Fig. 3(f), the right edge state refocuses on the right-end qubit. The soliton edge state is thus already transferred from the left end to the right end before the crossing point. As shown in Fig. 3(g), the eigenstate at point g is the same as that at point f. Indeed, after this crossing point, the evolution follows the red-dashed line from point f to point g, which is also verified in Fig. 2(c). Apparently, the evolution of the state around the level crossing does not follow the traditional adiabatic theorem, which will be discussed later in Section III.2.
Through the wave functions at points b to f in Fig. 3, we have shown how a left edge state is adiabatically
Figure 2: (a) The coupling strengths \(a\) and \(b\) versus time \(t\). (b) The on-site potentials \(u\) and \(-u\) versus time \(t\). (c) Time evolution of the particle distributions \(\langle\sigma_{j}^{z}\rangle\) for the qubit chain of \(2N=14\) qubits. The time-dependent pump sequence is defined in Eq. (7), with \(T=100\). The color from dark-red to bright-yellow represents the particle distribution \(\langle\sigma_{j}^{z}\rangle\) varying from \(-1\) to \(1\).
pumped to the right during a pumping cycle. It is seen that the state occupies the left edge and the right edge equally at point d. We denote this as the transition point. Combining Fig. 2 and Fig. 3, we can summarize two principles for achieving the edge state transfer. The first is that, as shown in Fig. 2(b), the on-site potential \(u(t)\) must change sign at the transition point, i.e., \(t=T/2\). The second is that, to ensure adiabaticity, an energy gap must remain open between the two branches during the state transfer process, i.e., the coupling strengths cannot be zero around the transition point, as shown in Fig. 2(a). Based on these two principles, we can easily redesign other time-dependent pump sequences to achieve QST.
In our pumping scheme, only the edge mode is occupied initially, while the lower band is empty. In the cold-atom experiments [20; 21], however, the whole lower band is filled with atoms, while the upper band is empty. During a pumping cycle, each atom in the valence band is moved to the right by a single lattice constant; equivalently, the number of particles pumped through a cross section is one. This is determined by the Chern number of the associated band [10]. By promoting the periodic time \(t\) to a wave number, the one-dimensional adiabatic pump sequence is equivalent to a two-dimensional insulator.
### Topology of the pumping process
When the on-site potentials \(u\) are equal to \(0\), the Rice-Mele model in Eq. (6) reduces to the SSH model. As discussed in Appendix C, for a standard SSH model in the nontrivial topological phase, i.e., \(|a|<|b|\), the two existing topological edge states are denoted as \(|L\rangle\) and \(|R\rangle\). When the coupling coefficients \(a\) and \(b\) are positive real numbers, the two edge states can be expressed as
\[|L\rangle=\Xi\sum_{n}\lambda^{n-1}|e_{2n-1}\rangle \tag{8}\]
and
\[|R\rangle=\Xi\sum_{n}\lambda^{L-n}|e_{2n}\rangle \tag{9}\]
where \(\Xi=\sqrt{\left(1-\lambda^{2}\right)/\left(1-\lambda^{2N}\right)}\) is the normalization factor and \(\lambda=-a/b\) is the ratio of the coupling coefficients. In the nontrivial topological phase, the two edge states \(|L\rangle\) and \(|R\rangle\) hybridize under the Hamiltonian \(H\) by an exponentially small amount, while the couplings between the edge states and the bulk states are exponentially smaller still. With a good approximation using adiabatic elimination of the bulk states [10], we can investigate the state transfer process in the subspace \(\{|L\rangle,|R\rangle\}\). This idea of restricting the Hilbert space of the dynamical process can also be generalized to the Rice-Mele model [81; 82; 83]. For our system described by Eq. (6), the matrix elements of the effective Hamiltonian in this subspace take the forms
\[\langle L|H|L\rangle=-\langle R|H|R\rangle=u, \tag{10}\]
\[\langle L|H|R\rangle=\langle R|H|L\rangle=\Xi^{2}a\lambda^{L-1}. \tag{11}\]
Hence the effective Hamiltonian can be written as
\[H=\left(\begin{array}{cc}u&g\\ g&-u\end{array}\right), \tag{12}\]
where \(g=\Xi^{2}a\lambda^{L-1}\) is the effective coupling strength between the two edge states. Thus, we transform the complicated many-body problem into a two-state quantum dynamics problem.
For a two-state Landau-Zener (LZ) model [84] shown in Eq. (12), we can characterize the adiabatic state transfer processes based on the topology of the eigenenergy surfaces [85]. The eigenenergies of the effective Hamiltonian in Eq. (12) are
\[E_{\pm}=\pm\sqrt{u^{2}+g^{2}}. \tag{13}\]
The two eigenenergy surfaces form a Dirac cone, as shown in Fig. 4(b), and the two energy bands intersect at the critical point \(u=0\) and \(g=0\). There are two basic types of topological passages, depending on whether the adiabatic evolution path goes through the critical point.
To illustrate the differences between these two passages, we particularly choose two paths with the same start point and end point as shown in the solid lines in
Figure 3: (a) Instantaneous spectrum of the Hamiltonian in Eq. (6) with the pumping sequence defined in Eq. (7). The chain consists of \(2N=14\) qubits. When \(t>0\), the on-site potential lifts the degeneracy of the two edge states and splits the degenerate energies into two branches, as shown by the red- and green-dashed lines. Six particular states on the upper branch are chosen as b, c, d, e, f, and g. The corresponding wave functions are shown in (b) to (g).
Fig. 4(a). The green path A is an arc and can be described as
\[u=\alpha\cos\left(\frac{\pi t}{T}-\pi\right),g=\alpha\sin\left(\frac{\pi t}{T}-\pi \right). \tag{14}\]
Due to the quantum adiabatic theorem [86; 87], if the parameters of the quantum system are changed slowly enough, it will remain in its instantaneous eigenstate. As shown in Fig. 4(b), if the initial state is prepared as \(\ket{L}\) on the upper branch, the adiabatic following of the path A can induce the complete state transfer from \(\ket{L}\) to \(\ket{R}\). This adiabatic process is verified with numerical simulation as shown in Fig. 4(c). The red path B is a straight line and can be described as
\[u=\alpha\left(\frac{2t}{T}-1\right),g=0. \tag{15}\]
As shown in Fig. 4(b), the two bands merge at the critical point in the section of \(g=0\), and hence the adiabatic following here should be handled more carefully. Notice that if \(g=0\), the two states \(\ket{L}\) and \(\ket{R}\) are decoupled from each other. Therefore, as the parameter \(u\) changes continuously, the system will remain in the same state. This evolution process is verified with numerical simulation as shown in Fig. 4(d).
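The difference between the two passages can be checked directly in the effective two-level model. The following minimal sketch (with illustrative values \(\alpha=1\), \(T=200\), and \(\hbar=1\); not from the original work) integrates Eq. (12) along the paths of Eqs. (14) and (15), starting from \(|L\rangle\).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, T = 1.0, 200.0

def h_eff(u, g):
    """Effective two-level Hamiltonian of Eq. (12) in the {|L>, |R>} basis."""
    return np.array([[u, g], [g, -u]], dtype=complex)

def path_a(t):  # arc around the critical point, Eq. (14)
    return (alpha * np.cos(np.pi * t / T - np.pi),
            alpha * np.sin(np.pi * t / T - np.pi))

def path_b(t):  # straight line through the critical point, Eq. (15)
    return (alpha * (2.0 * t / T - 1.0), 0.0)

def final_populations(path):
    psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state |L>
    sol = solve_ivp(lambda t, psi: -1j * h_eff(*path(t)) @ psi,
                    (0.0, T), psi0, rtol=1e-10, atol=1e-12)
    return np.abs(sol.y[:, -1]) ** 2                 # (|<L|psi>|^2, |<R|psi>|^2)

print("path A (around critical point):", final_populations(path_a))   # ~ (0, 1)
print("path B (through critical point):", final_populations(path_b))  # ~ (1, 0)
```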
More generally, an arbitrary straight path through the critical point can be described as
\[u=\alpha\left(\frac{2t}{T}-1\right),g=\tan\left(\theta\right)u \tag{16}\]
as shown in the blue-dashed line in Fig. 4(a). The Hamiltonian of this path C can be written as
\[H=\frac{u}{\cos\theta}\left(\begin{array}{cc}\cos\theta&\sin\theta\\ \sin\theta&-\cos\theta\end{array}\right). \tag{17}\]
Eigenstates of the Hamiltonian in Eq. (17) have the form
\[\left[\begin{array}{c}\ket{\uparrow}\\ \ket{\downarrow}\end{array}\right]=\left(\begin{array}{cc}\cos\frac{\theta}{2}&\sin\frac{\theta}{2}\\ -\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right)\left[\begin{array}{c}\ket{L}\\ \ket{R}\end{array}\right]. \tag{18}\]
These two eigenstates are time-independent and the corresponding eigenenergies are \(E=\pm u/\cos\theta\). Therefore, the Hamiltonian in the basis of \(\ket{\uparrow}\) and \(\ket{\downarrow}\) can be written as
\[H=\frac{u}{\cos\theta}\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{19}\]
From this perspective, the two states \(\ket{\uparrow}\) and \(\ket{\downarrow}\) are decoupled from each other. If the initial state is prepared as \(\ket{\uparrow}\) or \(\ket{\downarrow}\), the system will remain in the same state over time. In Fig. 4(b), we show that the state on path C, initially prepared as \(\ket{\uparrow}\), evolves from the upper branch to the lower branch across the critical point. Thus, _the adiabatic passage around the critical point supports the state transfer, while the adiabatic passage through the critical point leaves the system in the same state_.
As for the state transfer proposal in Section III.1, the parameters of the effective Hamiltonian can be derived from Eqs. (7), (10) and (11). The path of this adiabatic passage in parameter space is shown in Fig. 5(a). The whole evolution process can be divided into three stages, denoted by the red-solid, green-dashed, and blue-solid lines, respectively. The solid lines represent the topologically nontrivial region and the dashed line represents the topologically trivial region. As shown in Fig. 5(b), in the first stage, the on-site potentials rise and the system keeps the same state, i.e., \(\ket{L}\rightarrow\ket{L}\). In the second stage, the state evolves from the left edge into the bulk and then reaches the right edge, i.e., \(\ket{L}\rightarrow\mathit{bulk}\rightarrow\ket{R}\). In the third stage, the on-site potentials decrease to \(0\) and the system maintains the same state, i.e., \(\ket{R}\rightarrow\ket{R}\). Therefore, after one pumping cycle, the state is transferred from the left edge to the right edge.
As discussed before, the approximation of adiabatically eliminating the bulk states works only in the topologically nontrivial region. In the topologically trivial region, the couplings between the edge states and the bulk states cannot be ignored, and the dashed lines of the second stage in Fig. 5 are not the real evolution path. Nevertheless, the energy bands in the topologically trivial region are separated from each other, as shown in Fig. 3, and adiabatic following is still feasible in this region. Therefore, after the first stage, the state is confirmed to evolve from the left edge into the bulk. The three-stage description of the state transfer process thus remains applicable.
Figure 4: (a) Three paths shown in the parameter space of on-site potential \(u\) and coupling strength \(g\). (b) The energy spectrum of the two-level system. Different parameter paths of the same start point and end point can lead to different evolution paths. (c) Time evolution of state occupation along path A. (d) Time evolution of state occupation along path B.
### Optimization of the pumping cycle
With the analysis above, it is easy to conclude that the inaccuracy of the pumping does not come from the crossing energy levels, but from the closely spaced energy bands in the topologically trivial region. Hence, in our improved pumping protocol, we adjust the system parameters to obtain larger gaps between the adiabatic passage and the other bulk states compared with Fig. 3.
As shown in Fig. 3, in the first topologically nontrivial region, the energy of the edge state increases with the on-site potential before the edge state joins the bulk. Thus the first clue is to decrease the amplitude of the on-site potential. Here we choose the on-site potential as \(u(t)=0.25\sin\left(2\pi t/T\right)\). As shown in Fig. 6(a), the maximum energy of the edge state indeed decreases in the topologically nontrivial region, but the edge state still joins the bulk afterwards in the topologically trivial region due to the topological phase transition. The second clue is therefore to avoid the topological phase transition so that the edge state can be well isolated from the bulk. Apart from the modified on-site potential \(u\left(t\right)\), we further choose the coupling strength \(a\) as \(a(t)=0.5\left[1-\cos\left(2\pi t/T\right)\right]\), so that the relation \(a<b\) is preserved during the whole pumping cycle and the system stays in the topologically nontrivial region. As shown in Fig. 6(b), the adiabatic passage is well isolated from the other bulk states during the whole quantum state transfer process.
In Fig. 6(c), the numerical simulation of the qubit chain's dynamical evolution is shown. With the parameters above, the initial state prepared on the first site of the chain can be transferred to the last site within one pumping cycle, and this pumping process can last for many cycles with high fidelity. Therefore, we can indeed optimize our pumping protocol using the topological analysis in Section III.2.
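The improvement can be quantified by scanning the instantaneous spectrum along the cycle. The sketch below (an illustration under the same assumptions as before: 14 qubits and \(T=100\)) uses the modified sequence \(u(t)=0.25\sin(2\pi t/T)\), \(a(t)=0.5[1-\cos(2\pi t/T)]\), \(b=1\), and reports the minimum gap between the pumped edge level and the bulk band above it, as in Fig. 6(b).

```python
import numpy as np

N_SITES, T = 14, 100.0

def h_optimized(t):
    a = 0.5 * (1.0 - np.cos(2 * np.pi * t / T))   # a(t) never exceeds b
    b = 1.0
    u = 0.25 * np.sin(2 * np.pi * t / T)
    hops = np.array([a if j % 2 == 0 else b for j in range(N_SITES - 1)])
    onsite = np.array([u if j % 2 == 0 else -u for j in range(N_SITES)])
    return np.diag(onsite) + np.diag(hops, 1) + np.diag(hops, -1)

times = np.linspace(0.0, T, 401)
spectrum = np.array([np.linalg.eigvalsh(h_optimized(t)) for t in times])

# For 14 sites the two edge levels are the 7th and 8th of the sorted spectrum;
# the pumped passage follows the upper one (index 7).
upper_edge = spectrum[:, 7]
bulk_above = spectrum[:, 8]
print("minimum gap to the bulk along the cycle:", np.min(bulk_above - upper_edge))
```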
## IV Two-qubit Bell state pumping in a trimer Rice-Mele model
Above, from the perspective of the traditional adiabatic theorem, we have presented an intuitive way to understand Thouless pumping in the 1D Rice-Mele model. With this understanding, we can readily generalize the state transfer protocol to a trimer Rice-Mele qubit chain.
As shown in Fig. 7, the trimer Rice-Mele model is a periodic qubit chain with a unit cell of three sites. The Hamiltonian of the trimer Rice-Mele model can be written as
\[H_{\rm TRM} = \sum_{n}\left[aA_{n}^{\dagger}B_{n}+bB_{n}^{\dagger}C_{n}+cC_{n}^{\dagger}A_{n+1}+{\rm H.c.}\right] \tag{20}\] \[+ \sum_{n}\left[uA_{n}^{\dagger}A_{n}+vB_{n}^{\dagger}B_{n}+wC_{n}^{\dagger}C_{n}\right],\]
where \(A_{n}^{\dagger}\), \(B_{n}^{\dagger}\) and \(C_{n}^{\dagger}\) are the particle creation operators on the sites A, B and C in the \(n\)th unit cell, \(a\), \(b\) and \(c\) are the coupling strengths between the sites, and \(u\), \(v\) and \(w\) are the staggered potentials on sites A, B and C.
When the on-site potentials are chosen to be \(0\), the trimer Rice-Mele model in Eq. (20) reduces to the extended SSH3 model. As shown in Ref. [88], there are generally two different cases for the topological characterization of this SSH3 model: the mirror-symmetric case (\(a=b\)) and the non-mirror-symmetric case (\(a\neq b\)). For the convenience of discussion, we only consider the mirror-symmetric case hereafter. In this case, the Hamiltonian exhibits a band-gap closing when \(a=b=c\), and the edge states emerge at this gap-closing point in the thermodynamic limit [88].
Figure 5: (a) The path of the adiabatic passage of the effective Hamiltonian in the parametric space. (b) Evolution path of the adiabatic passage in the energy spectrum.
Figure 6: (a) The energy spectrum of the chain with \(u(t)=0.25\sin\left(2\pi t/T\right)\). (b) The energy spectrum of the chain with \(u(t)=0.25\sin\left(2\pi t/T\right)\) and \(a(t)=0.5\left[1-\cos\left(2\pi t/T\right)\right]\). (c) Time evolution of the particle distributions \(\langle\sigma_{j}^{z}\rangle\) for the qubit chain. The color from dark-red to bright-yellow represents the particle distribution \(\langle\sigma_{j}^{z}\rangle\) varying from \(-1\) to \(1\).
In Fig. 7(b), we plot the energy spectrum with respect to the intercell coupling \(c\), which depends on the time \(t\). The coupling strengths are chosen as
\[a=b=1,c\left(t\right)=2\sin\left(\frac{2\pi t}{T}\right). \tag{21}\]
The edge states only exist in the region \(c(t)>a\), where the SSH3 model has nontrivial topology. One can readily see that there are four edge states distributed over two gaps. The edge states in each gap are degenerate when \(c(t)\gg a\). Specifically, the eigenstate distribution at \(c(t)=2\) is plotted in Fig. 7(c). The two pairs of degenerate edge states are clearly shown as red dots and blue dots.
Considering the former Thouless pumping protocol, we can plot another energy spectrum with respect to the intracell couplings, as shown in Fig. 7(d). The coupling strengths are chosen as
\[a\left(t\right)=b\left(t\right)=\sin\left(\frac{2\pi t}{T}\right),c=2 \tag{22}\]
where the intracell couplings are time-dependent and the intercell coupling \(c\) is constant. With the parameters above, the system remains in the topologically nontrivial region, so the edge states are degenerate all along. We can also identify the degenerate edge states from the eigenstate distribution in Fig. 7(e).
In a finite-size SSH3 model, there must be an exponentially small overlap between the left and the right edge states. Thus, the wave functions of the eigenstates in the bulk band gaps must be superpositions of the left and right edge states. In the nontrivial topological phase, the four existing edge states of the SSH3 model can be denoted as \(\left|L_{\pm}\right\rangle\) and \(\left|R_{\pm}\right\rangle\). The localized eigenfunctions in the lower and upper band gaps can be expressed as \(\left(\left|L_{-}\right\rangle\pm\left|R_{-}\right\rangle\right)/\sqrt{2}\) and \(\left(\left|L_{+}\right\rangle\pm\left|R_{+}\right\rangle\right)/\sqrt{2}\), respectively. In Fig. 8, we plot the wave functions of the four hybridized edge states. The pattern of these hybridized wave functions is just the same as in the original SSH model in Appendix C. Therefore, we can easily derive the forms of the four edge states as
\[\left|L_{\pm}\right\rangle=\Xi\sum_{n}\left(\mp\lambda\right)^{n-1}\left(\frac {\left|e_{3n-2}\right\rangle\pm\left|e_{3n-1}\right\rangle}{\sqrt{2}}\right) \tag{23}\]
and
\[\left|R_{\pm}\right\rangle=\Xi\sum_{n}\left(\mp\lambda\right)^{L-n}\left( \frac{\left|e_{3n-1}\right\rangle\pm\left|e_{3n}\right\rangle}{\sqrt{2}}\right) \tag{24}\]
where \(\Xi=\sqrt{\left(1-\lambda^{2}\right)/\left(1-\lambda^{2L}\right)}\) is the normalization factor and \(\lambda=a/c\) is the ratio of the coupling coefficients.
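These in-gap states can be checked numerically. The following minimal sketch (illustrative, with \(N=8\) unit cells, \(a=b=1\), \(c=2\), and zero on-site potentials) diagonalizes the mirror-symmetric SSH3 chain and picks out the two pairs of edge states of Fig. 7(c) by their weight on the end cells.

```python
import numpy as np

def ssh3_hamiltonian(n_cells, a, b, c):
    """Single-excitation Hamiltonian of the SSH3 chain (Eq. (20) with zero potentials)."""
    n_sites = 3 * n_cells
    hops = np.array([(a, b, c)[j % 3] for j in range(n_sites - 1)])
    return np.diag(hops, 1) + np.diag(hops, -1)

H = ssh3_hamiltonian(n_cells=8, a=1.0, b=1.0, c=2.0)
energies, states = np.linalg.eigh(H)

# Edge states carry most of their weight on the first and last unit cells.
end_weight = (np.abs(states[:3]) ** 2).sum(axis=0) + (np.abs(states[-3:]) ** 2).sum(axis=0)
edge_idx = np.sort(np.argsort(end_weight)[-4:])
print("in-gap edge-state energies:", energies[edge_idx])   # two pairs near +-a
```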
Figure 7: (a) Schematic diagram for a trimer qubit chain. The three identical sites in each unit cell are denoted by green, orange, and red circles. \(a\) and \(b\) denote the intracell coupling strengths, while \(c\) denotes the intercell coupling strength. (b) The energy spectrum with respect to the intercell coupling \(c(t)\) (unit cells \(N=8\)). The green-dashed line and red-dashed line represent the edge modes. (c) The eigenstate distribution on the black dashed line in (b). (d) The energy spectrum with respect to the intracell couplings. (e) The eigenstate distribution on the black dashed line in (d).
Figure 8: The wave functions of four hybridized edge states exhibited in Fig. 7(e). (a) and (b) show the hybridized edge states in the lower gap. (c) and (d) show the hybridized edge states in the upper gap.
### The state pumping process in the subspace of edge states
The whole analysis of the trimer Rice-Mele model above is in the mirror symmetric case, i.e. \(a=b\). Therefore the Hamiltonian in Eq. (20) can be rewritten as
\[H_{\rm TRM} =\sum_{n}\left[a\left(A_{n}^{\dagger}B_{n}+B_{n}^{\dagger}C_{n} \right)+cC_{n}^{\dagger}A_{n+1}+{\rm H.c.}\right]\] \[+\sum_{n}\left[uA_{n}^{\dagger}A_{n}+vB_{n}^{\dagger}B_{n}+wC_{n }^{\dagger}C_{n}\right]. \tag{25}\]
Following the analysis in Section III.2, we first rewrite the system Hamiltonian in the subspace \(\{|L_{\pm}\rangle,|R_{\pm}\rangle\}\). As shown in Fig. 7, when \(v\neq 0\), the upper edge states are well isolated from the lower edge states, so we can further divide this subspace into two individual subspaces \(\{|L_{+}\rangle,|R_{+}\rangle\}\) and \(\{|L_{-}\rangle,|R_{-}\rangle\}\). For our trimer Rice-Mele system described by Eq. (20), the matrix elements of the effective Hamiltonian in the edge-state subspace take the forms
\[\langle L_{+}|H|L_{+}\rangle=a+\frac{u+v}{2},\langle R_{+}|H|R_{+}\rangle=a+ \frac{v+w}{2} \tag{26}\]
\[\langle L_{+}|H|R_{+}\rangle=\langle R_{+}|H|L_{+}\rangle=\frac{L\left(a+v \right)+a}{2}\Xi^{2}\left(-\lambda\right)^{L-1} \tag{27}\]
\[\langle L_{-}|H|L_{-}\rangle=-a+\frac{u+v}{2},\langle R_{-}|H|R_{-}\rangle=-a +\frac{v+w}{2} \tag{28}\]
\[\langle L_{-}|H|R_{-}\rangle=\langle R_{-}|H|L_{-}\rangle=\frac{L\left(a+v \right)+a}{2}\Xi^{2}\lambda^{L-1} \tag{29}\]
Thus the effective Hamiltonian in the edge state subspace can be described as
\[H=\left(\begin{array}{cc}H_{+}&\\ &H_{-}\end{array}\right), \tag{30}\]
where \(H_{+}\) and \(H_{-}\) represent the Hamiltonian in the upper subspace \(\{|L_{+}\rangle,|R_{+}\rangle\}\) and the lower subspace \(\{|L_{-}\rangle,|R_{-}\rangle\}\), respectively. Hence \(H_{+}\) and \(H_{-}\) are given by
\[H_{+}=\left(\begin{array}{cc}\frac{u}{2}&g_{+}\\ g_{+}&\frac{w}{2}\end{array}\right)+\left(a+\frac{v}{2}\right)I \tag{31}\]
and
\[H_{-}=\left(\begin{array}{cc}\frac{u}{2}&g_{-}\\ g_{-}&\frac{w}{2}\end{array}\right)-\left(a-\frac{v}{2}\right)I, \tag{32}\]
where \(I\) is the identity matrix, and \(g_{+}\) and \(g_{-}\) are the coupling strengths between the edge states, given by
\[g_{\pm}=\frac{L\left(a+v\right)+a}{2}\Xi^{2}\left(\mp\lambda\right)^{L-1} \tag{33}\]
The effective Hamiltonians \(H_{+}\) and \(H_{-}\) have the same form as in Eq. (12). Thus we can achieve the quantum state transfer via the same adiabatic passages.
### Two-qubit Bell state transfer via the trimer Rice-Mele model
As discussed before, the adiabatic quantum state transfer process needs to begin in the topologically nontrivial region, i.e., \(a\ll c\). In the first stage, the on-site potential of the left edge should be much larger than that of the right edge, i.e., \(u\gg w\). As the system evolves, the on-site potentials \(u\) and \(w\) of the left and right edge states slowly exchange their values. At some point, the two potentials reach the same value, i.e., \(u=w\). To avoid the degeneracy of the edge states, the system should be in the topologically trivial region near this point, i.e., \(a\simeq c\). Afterwards, the potentials of the two edge states completely exchange their values and the system returns to the topologically nontrivial region. Thus, the coefficients of the Hamiltonian along the state transfer process are chosen as
\[a = b=1-0.9\cos\left(\frac{2\pi t}{T}\right),\] \[c = 1,\quad v=2,\] \[u = 1+\cos\left(\frac{\pi t}{T}\right),\quad w=1-\cos\left(\frac{\pi t}{T}\right), \tag{34}\]
where \(T\) is the variation period of these parameters.
Figure 9(a) shows the time-dependent coupling strengths and on-site potentials versus time \(t\) during one period. The initial states are prepared as the two-qubit Bell states \(\left(\left|e_{1}\right\rangle\pm\left|e_{2}\right\rangle\right)/\sqrt{2}\) in this qubit chain.
Figure 9: (a) The coupling strengths \(a,b\) and on-site potentials \(u,w\) versus time \(t\). (b) The energy spectrum of the trimer Rice-Mele chain, which contains \(3L=21\) qubits. (c) Time evolution of the particle distributions \(\langle\sigma_{j}^{z}\rangle\) for the qubit chain. The time-dependent pump sequence is defined in Eq. (34), with \(T=1000\). The color from dark-red to bright-yellow represents the particle distribution \(\langle\sigma_{j}^{z}\rangle\) varying from \(-1\) to \(1\).
As shown in Fig. 9(b), the two selected topological adiabatic passages are denoted by the blue-dashed and red-dashed lines, which can be used to transfer the exchange-symmetric and exchange-antisymmetric Bell states. The numerical simulation of the dynamical evolution of the qubit chain prepared in a Bell state is shown in Fig. 9(c). The result shows that the symmetric Bell state can indeed be transferred from one end of the chain to the other within one pumping cycle. The result for the antisymmetric Bell state is not shown for simplicity, because it is essentially identical. To ensure the adiabatic limit, we set \(T=1000\). As shown in the simulation, this pumping process can last for more than three cycles. Therefore, we can indeed achieve edge state pumping in a trimer Rice-Mele model via topological adiabatic passages.
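The Bell-state pumping can be reproduced with the following minimal sketch (not the authors' code; it assumes \(\hbar=1\), \(3N=21\) qubits, and the parameters of Eq. (34) with \(T=1000\)), which propagates the symmetric Bell state prepared on the first two sites and reads out the population on the last two sites.

```python
import numpy as np
from scipy.integrate import solve_ivp

N_SITES, T = 21, 1000.0

def h_trm(t):
    """Single-excitation trimer Rice-Mele Hamiltonian with the sequence of Eq. (34)."""
    a = 1.0 - 0.9 * np.cos(2 * np.pi * t / T)     # a(t) = b(t)
    c = 1.0
    u = 1.0 + np.cos(np.pi * t / T)               # potential on A sites
    v = 2.0                                       # potential on B sites
    w = 1.0 - np.cos(np.pi * t / T)               # potential on C sites
    hops = np.array([(a, a, c)[j % 3] for j in range(N_SITES - 1)])
    onsite = np.array([(u, v, w)[j % 3] for j in range(N_SITES)])
    return np.diag(onsite) + np.diag(hops, 1) + np.diag(hops, -1)

psi0 = np.zeros(N_SITES, dtype=complex)
psi0[0] = psi0[1] = 1.0 / np.sqrt(2)              # (|e_1> + |e_2>)/sqrt(2)
sol = solve_ivp(lambda t, y: -1j * h_trm(t) @ y, (0.0, T), psi0,
                method="BDF", rtol=1e-8, atol=1e-10)

final = np.abs(sol.y[:, -1]) ** 2
print("population on the last two sites:", final[-2:], "sum:", final[-2:].sum())
```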
## V Discussions and Conclusions
### Experimental feasibility
Let us now further discuss the experimental feasibility. The qubit-qubit coupling in flux qubit circuits can be tuned from 0 to 60 MHz [75, 76]. Therefore, in our proposal, the state transfer process takes about 10 \(\mu s\) for 7 unit cells in the gap-tunable flux qubit circuit. The coherence time of the qubit is about 10-100 \(\mu s\) [89, 90], thus it is quite reasonable to execute the state transfer process with our protocol. Our proposal can also be realized in other superconducting qubit circuits, e.g., transmon qubits, which have long coherence times, tunable coupling, and frequency modulation. The disadvantage is that the anharmonicity of transmon qubits is not very good, so population leakage needs to be considered. In Table 1, we list the advantages and disadvantages of two different types of qubits for realizing our proposal. We note that topological magnon insulator states have been experimentally demonstrated in a superconducting circuit with 5 transmon qubits [91]. Thus it is not very difficult to further study topological physics using large-scale superconducting quantum circuits with current technology.
As for measurements, we can couple the qubit chain dispersively to a microwave resonator. This means that the qubits and the resonator are detuned in frequency so that they do not exchange energy. When a microwave pulse is applied at the bare resonator frequency, the pulse experiences a qubit-state-dependent dispersive shift and accumulates a phase shift [92, 93, 94, 95]. By measuring the phase shift of the reflected or transmitted probe pulse, we can infer the state of the qubit.
We note that our qubit chain can be used to implement pumping based on the one-dimensional Aubry-Andre-Harper (AAH) model [53, 54, 96], which is related to the well-known Hofstadter butterfly problem [97] in two dimensions. In our proposal, the AAH model can be obtained from the Hamiltonian in Eq. (10), in which the frequency \(\omega\) of the \(j\)th qubit needs to be modulated as \(\omega\cos(2\pi j\alpha+t/T)\), where \(\alpha\) is a rational (irrational) number. However, the qubit-qubit coupling strengths need to be made uniform, i.e., \(a=b\). Experimentally, this can be realized by fabricating the chain of coupled superconducting flux qubits with uniform coupling strength; the frequency modulation can be done by applying magnetic fields through the main loop of each qubit. If the coupling strengths of the superconducting qubit chain are tunable, then we need only tune all coupling strengths so that they are equal to each other. We further note that the pumping of an edge state based on the AAH model was realized in quasicrystals [98]. Recently, the Hofstadter butterfly spectrum was observed in a chain of nine coupled gmon qubits [99].
We mention that the qubit chain can also be constructed using circuit QED systems [25, 26], where the qubit-qubit coupling is mediated by the cavity fields. In this case, the cavity fields work as quantum couplers, and the qubit-qubit coupling can be obtained by eliminating the cavity field under the assumption that the qubits and the cavity fields are largely detuned.
### Conclusions
We have proposed to simulate topological phenomena using a gap-tunable superconducting-qubit chain, in which the Hamiltonian is equivalent to the SSH model when the total excitation number of the qubit chain is limited to one. A spin-up excitation injected into the localized edge state is robust against fluctuations, which can be used to store quantum information. We further show that an equivalent of the Rice-Mele model can be realized with time-modulated frequencies and coupling strengths of the qubit chain, and that adiabatic pumping of an edge state can be realized in this time-modulated qubit chain. In our numerical simulations, we take a relatively large number of qubits in the chain, e.g., 7 unit cells. However, we find that the topological phenomena can also be demonstrated in such a chain with an even smaller qubit number, e.g., 4 unit cells. We also find that the localization becomes stronger as the qubit number of the chain increases.
In summary, superconducting quantum circuits can be artificially designed according to the purpose of the experiments. In particular, the qubit frequencies and qubit-qubit couplings can be easily modulated or tuned. This
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline QT & CT & TC & FM & Anh \\ \hline Flux qubit & Microseconds & Yes & Yes & Good \\ \hline Transmon & Several tens of microseconds & Yes & Yes & Not good \\ \hline \end{tabular}
\end{table}
Table 1: Comparisons of the flux and transmon qubits for constructing a qubit chain to realize our proposal. Here, QT, CT, TC, FM and Anh denote Qubit Type, Coherence Time, Tunable Coupling, Frequency modulation, and anharmonicity, respectively.
opens many possibilities to simulate or demonstrate various topological physics of matter on demand with artificial designs. We have demonstrated that a spin-up state prepared at one edge of a topological qubit chain can be transferred to the other edge of the chain. This state transfer process can be interpreted by restricting the Hilbert space of the Hamiltonian to the subspace of the edge states. Thus the Hamiltonian of the chain reduces to a two-state LZ model, and the state transfer process in the long qubit chain can be easily understood through the adiabatic passage in the LZ model. With this understanding, we have designed a new approach to achieve two-qubit Bell state transfer in a trimer Rice-Mele model. Our proposal can also be extended to other tunable quantum systems.
###### Acknowledgements.
Y.X.L. acknowledges the support of the National Basic Research Program of China Grant No. 2014CB921401 and the National Natural Science Foundation of China under Grant No. 91321208. S.C. is supported by the National Key Research and Development Program of China (2016YFA0300600), NSFC under Grants No. 11425419, No. 11374354 and No. 11174360.
## Appendix A Gap-tunable flux qubit
Let us now briefly summarize the main properties of the gap-tunable flux qubit [55; 56; 57], which is a variation of the three-junction flux qubit [71; 72]. It replaces the smaller junction of the three-junction flux qubit with a superconducting quantum interference device (SQUID), which is equivalent to a single junction. The SQUID loop is called the \(\alpha\)-loop. An externally controllable flux \(\Phi_{\alpha}\) applied to the \(\alpha\)-loop can change the Josephson energy of the smaller junction, and the ratio between the larger junctions and the smaller junction is also tunable. This directly results in tunable tunneling between the two potential wells of the flux qubit [71] and in a time-modulated qubit frequency when an ac magnetic flux is applied through the \(\alpha\)-loop. To keep \(\Phi_{\alpha}\) from affecting the bias flux threading the main loop, a gradiometric design is adopted. In such a design, the central current of the three-junction flux qubit is split into two opposite running currents through two small loops. The magnetic fluxes generated by these two currents cancel each other in the main loop, thus guaranteeing independent control over the fluxes in the \(\alpha\)-loop and the main loop. Magnetic fluxes \(\Phi_{1}\) and \(\Phi_{2}\), applied to the two small loops of the qubit, are used to tune the potential-well energy of the qubit. Because both the tunneling and the potential-well energies can be tuned in the gap-tunable flux qubit, we have a fully controllable Hamiltonian. Below, we use the reduced magnetic fluxes \(f_{\alpha}=\Phi_{\alpha}/\Phi_{0}\), \(f_{\epsilon_{1}}=\Phi_{1}/\Phi_{0}\), and \(f_{\epsilon_{2}}=\Phi_{2}/\Phi_{0}\). Here \(\Phi_{0}\) is the flux quantum.
We define the flux difference between the two loop halves of the gradiometer as \(f_{\epsilon}=f_{\epsilon_{1}}-f_{\epsilon_{2}}\). At the optimal point where \(f_{\epsilon}=0\), the low-frequency effect of the environmental magnetic flux reaches its minimum, and the two lowest-energy states of the flux qubit are the supercurrent states \(\pm I_{p}\) circulating in opposite directions in the \(f_{\epsilon}\) loop. In the persistent-current-state basis, i.e., the anticlockwise current state \(|\circlearrowleft\rangle\) and the clockwise current state \(|\circlearrowright\rangle\), the Hamiltonian of the qubit takes the form (detailed in Appendix D)
\[H=\frac{1}{2}(\epsilon\,\sigma_{x}+\delta\,\sigma_{z}), \tag{10}\]
where \(\sigma_{z}\) and \(\sigma_{x}\) are Pauli matrices, and the parameter \(\delta\) is the tunneling energy between the states of the two potential wells, which can be tuned by the reduced magnetic flux \(f_{\alpha}\). By changing \(f_{\alpha}\) and \(f_{\epsilon}\), the energy bias \(\epsilon(f_{\epsilon},f_{\alpha})=2f_{\epsilon}\Phi_{0}I_{p}(f_{\alpha})\) can be tuned to zero, where the qubit is at its optimal point and insensitive to magnetic flux noise to first order. That is, the qubit Hamiltonian in Eq. (10) can be fully controlled by both \(f_{\alpha}\) and \(f_{\epsilon}\). Moreover, in contrast to the three-junction qubit, in which the transition frequency \(\delta\) is fixed at the optimal point, the gap-tunable flux qubit allows us to control the qubit frequency without affecting the bias point; that is, \(\delta\) can be tuned by \(f_{\alpha}\) even at the optimal point with \(f_{\epsilon}=0\). This is demonstrated in detail in the appendix. Moreover, if a time-dependent magnetic flux with frequency \(\omega_{0}\) is applied to the \(\alpha\)-loop, a time-dependent longitudinal coupling [100; 101] Hamiltonian \(\lambda_{z}\sigma_{z}\cos(\omega_{0}t)/2\), with coupling constant \(\lambda_{z}\) between the qubit and the external field, should be included, and the qubit Hamiltonian in Eq. (10) then becomes
\[H^{\prime}=\frac{1}{2}(\epsilon\,\sigma_{x}+\delta\,\sigma_{z})+\frac{1}{2} \lambda_{z}\cos(\omega_{0}t)\sigma_{z}. \tag{11}\]
In the qubit basis corresponding to the excited \(|e\rangle\) and ground \(|g\rangle\) states of the Hamiltonian in Eq. (10), we can derive the longitudinal and the transverse couplings between the qubit and external field from Eq. (11). However, if the qubit works at the optimal point with \(\epsilon=0\)
Figure 10: Schematic diagram for the gap-tunable flux qubit. The magnetic flux threading the \(\alpha\)-loop is denoted by \(\Phi_{\alpha}\). \(\Phi_{1}\) and \(\Phi_{2}\) denote magnetic fluxes through the left and right of the main loop. The signs \(\bigodot\) denote that the magnetic fluxes are directed outside.
then the time-dependent Hamiltonian in Eq. (11) becomes
\[H^{\prime}=\frac{1}{2}\delta\,\sigma_{z}+\frac{1}{2}\lambda_{z}\cos(\omega_{0}t) \sigma_{z}=\frac{1}{2}[\delta+\lambda_{z}\cos(\omega_{0}t)]\sigma_{z}, \tag{11}\]
which has only longitudinal coupling. Here, we note that the coupling constant \(\lambda_{z}\) can be made negative or positive by choosing the initial phase of the time-dependent external magnetic flux. This Hamiltonian can also be interpreted as the qubit frequency being periodically modulated by the external field. This longitudinal coupling can result in the decoupling of the qubit from its environment.
## Appendix B Periodic Superconducting qubit chains
As shown in Fig. 10, there are four current loops in the gap-tunable flux qubit, thus different types of tunable couplings can be created via different loops. For example, the longitudinal coupling \(\sigma_{j}^{z}\sigma_{j+1}^{z}\) between the \(j\)th and \((j+1)\)th qubits in the qubit chain can be realized by inductively coupling them through their \(\alpha\)-loops [102]. However, in this paper, we mainly focus on the transverse coupling \(\sigma_{j}^{x}\sigma_{j+1}^{x}\) between the qubits via \(\epsilon\)-loops. Here, \(\sigma_{j}^{x}\), \(\sigma_{j}^{y}\), and \(\sigma_{j}^{z}\) denote Pauli operators, which are defined in the qubit basis \(|g\rangle_{j}\) and \(|e\rangle_{j}\) of the \(j\)th qubit.
To demonstrate different topological physics with gap-tunable flux qubits, in this section we mainly show how to realize two coupling mechanisms between gap-tunable flux qubits: (i) the qubits are directly coupled to each other via a mutual inductance, with fixed coupling strengths between qubits, while the qubit frequencies can be either modulated or fixed; (ii) the qubits are indirectly coupled to each other via a coupler, so that both the qubit frequencies and the coupling strengths between qubits can be controlled. The first approach might be experimentally challenging, whereas the second one is more popular in recent experiments.
### Qubit chain with fixed coupling strength
As schematically shown in Fig. 11, the qubits in the chain can be directly coupled to each other; that is, \(2N\) identical gap-tunable flux qubits in the chain are directly coupled through their mutual inductance via the \(\epsilon\)-loops. We assume that each qubit is only coupled to its nearest neighbors, and the coupling strength between the \(j\)th and \((j+1)\)th qubits can be obtained as \(J=MI_{pj}I_{p(j+1)}\). Here \(M\) is the mutual inductance between the \(j\)th and \((j+1)\)th qubit loops, and \(I_{pj}\) and \(I_{p(j+1)}\) are the currents circulating in the main loops of the \(j\)th and \((j+1)\)th qubits. Projecting \(J\) onto the eigenstates of the \(j\)th and \((j+1)\)th qubits, i.e., the states \(|g\rangle_{j}\), \(|g\rangle_{j+1}\), \(|e\rangle_{j}\), and \(|e\rangle_{j+1}\), we can obtain the coupled Hamiltonian, which includes the longitudinal coupling term \(\sigma_{j}^{z}\sigma_{j+1}^{z}\) with coefficient \(g_{j,j+1}^{zz}\), the transverse coupling term \(\sigma_{j}^{x}\sigma_{j+1}^{x}\) with coefficient \(g_{j,j+1}^{xx}\), and cross coupling terms, e.g., \(\sigma_{j}^{x}\sigma_{j+1}^{z}\) with coefficient \(g_{j,j+1}^{xz}\). Detailed analysis in Appendix D shows that the interaction via the main loops only results in transverse coupling when the qubits work at the optimal point, while the coefficients \(g_{j,j+1}^{zz}\) and \(g_{j,j+1}^{xz}\) of the longitudinal and cross couplings are zero.
To make the qubits have long coherence, we assume that all gap-tunable qubits in the chain work at the optimal point in the following discussions; then there is only transverse coupling between the qubits. As shown in Appendix D, the coupling coefficient \(g_{j,j+1}^{xx}\) of the transverse coupling between the \(j\)th and \((j+1)\)th qubits is written as \(g_{j,j+1}^{xx}=Mg_{\epsilon,\perp}^{j}g_{\epsilon,\perp}^{j+1}\), with \(g_{\epsilon,\perp}^{j}={}_{j}\langle e|I_{pj}|g\rangle_{j}\) and \(g_{\epsilon,\perp}^{(j+1)}={}_{j+1}\langle e|I_{p(j+1)}|g\rangle_{j+1}\). To create the alternating coupling pattern schematically shown in Fig. 11, the spacings between the qubits need to be varied appropriately in order to alter \(M\). This is experimentally accessible with current technology of superconducting qubit circuits. The qubit chain is fabricated such that \(a\equiv g_{(2m-1),2m}^{xx}\) and \(b\equiv g_{2m,(2m+1)}^{xx}\) with \(m=1,\,2,\cdots,L\). We note that \(b=0\) when \(m=L\). Then, the Hamiltonian of the coupled-qubit chain can be written as
\[H = \sum_{j=1}^{2N}\frac{\omega_{j}}{2}\sigma_{j}^{z}+\sum_{j\in \text{odd}}^{2N}a(\sigma_{j}^{+}\sigma_{j+1}^{-}+\text{H.c.}) \tag{12}\] \[+ \sum_{j\in\text{even}}^{2N}b(\sigma_{j}^{+}\sigma_{j+1}^{-}+ \text{H.c.}),\]
with \(\omega_{j}\) denoting the frequency of the \(j\)th qubit. Because we assume that all qubits are identical, they have the same frequency, that is, \(\omega_{j}=\omega\) with \(j=1,\cdots,2N\). The qubits thus interact resonantly with each other, and we can make the rotating-wave approximation such that the transverse coupling term \(\sigma_{j}^{x}\sigma_{j+1}^{x}\) between the \(j\)th and \((j+1)\)th qubits is approximated as \(\sigma_{j}^{x}\sigma_{j+1}^{x}\approx(\sigma_{j}^{+}\sigma_{j+1}^{-}+\sigma_{j}^{-}\sigma_{j+1}^{+})\). It is clear that the coupling strengths \(a\) and \(b\) are fixed once the circuit is fabricated.
As shown in Fig. 11 and discussed in Appendix A, when the qubit works at the optimal point, the frequency
Figure 11: Schematic diagram for the chain of coupled gap-tunable flux qubits with alternating coupling strengths \(a\) and \(b\) (Top panel). As shown in the lower panel, each qubit is coupled to its nearest-neighbors via the flux \(f_{\epsilon_{i}}\). The different coupling strengths \(a\) and \(b\) are created by varying the spacing between the qubits. The frequency of the qubit can be tuned in situ by the magnetic frustration \(f_{\alpha}\) threading the \(\alpha\)-loop.
modulation can only be done by applying the time-dependent magnetic flux through the \(\alpha\)-loop. However, when the qubits do not work at the optimal point, although the frequency of each qubit and the coupling strength between the qubits can be modulated by a time-dependent magnetic flux applied through the main loop of each qubit [59], the coherence of the qubits becomes worse. Thus, hereafter, we assume that all qubits work at their optimal points. The qubit readout and control can in principle be done as in experiments [55; 56; 57]. In practice, next-nearest-neighbor couplings exist in all superconducting circuits of many qubits with either capacitive or inductive coupling; these couplings can be made negligibly small by carefully designing the circuits, e.g., as in the superconducting flux qubit chain with direct inductive coupling [56].
Above, we constructed a qubit chain with fixed coupling. If the coupling between qubits in the chain can be tuned, then the system can be switched from the topological to the non-topological phase and vice versa. In natural atomic systems, the coupling is not tunable. Superconducting systems, however, provide the ability to tune the coupling between the qubits. Below, we show two ways to tune the couplings between qubits in the chain.
### Qubit chain with tunable couplings
In superconducting qubit circuits, tunable coupling can be realized by many methods. In this section, we show two of them. One is that the qubit coupling can be tuned by modulating the qubit frequencies with longitudinal coupling fields, as discussed in Appendix A. The other is tunable coupling realized by adding an additional coupler between qubits.
#### B.2.1 Tunable coupling realized by a longitudinal coupling field
We now show how to tune the coupling by modulating the qubit frequencies. We assume that all flux qubits in the chain work at their optimal points; thus the qubit frequency can be modulated by applying the magnetic flux through the \(\alpha\)-loop, as shown in Refs. [103; 104]. In this case, the single-qubit Hamiltonian \(\omega_{j}\sigma_{j}^{z}/2\) in the chain Hamiltonian of the previous subsection is replaced by the modulated Hamiltonian of Appendix A, and the chain Hamiltonian can be written as
\[H_{M} = \sum_{j=1}^{2N}\frac{\omega_{j}+u_{j}(t)}{2}\sigma_{j}^{z}+\sum_ {j\in\mathrm{odd}}^{2N}\left[a(\sigma_{j}^{+}\sigma_{j+1}^{-}+\sigma_{j}^{-} \sigma_{j+1}^{+})\right]\] (B2) \[+ \sum_{j\in\mathrm{even}}^{2N}\left[b(\sigma_{j}^{+}\sigma_{j+1} ^{-}+\sigma_{j}^{-}\sigma_{j+1}^{+})\right].\]
with \(u_{j}(t)=\lambda_{j}\cos(\omega_{0j}t)\). Here, \(\lambda_{j}\) is proportional to the strength of the driving field and \(\omega_{0j}\) is the frequency of the driving field. For convenience, we also assume that the initial phases of all driving fields are zero. We now apply a unitary transformation \(U(t)=\bigotimes U_{j}(t)\) to Eq. (B2), with [101]
\[U_{j}(t)=\exp\left[-\frac{i}{2}\left(\omega_{j}t+\frac{\lambda_{j}}{\omega_{0j }}\sin(\omega_{0j}t)\right)\sigma_{j}^{z}\right]\] (B3)
then we find that the term \(u_{j}(t)\) in Eq. (B2) is canceled, and the operators \(\sigma_{j}^{\pm}\) become
\[U_{j}^{\dagger}(t)\sigma_{j}^{\pm}U_{j}(t) = \sigma_{j}^{\pm}\exp\left[\pm i\left(\omega_{j}t+\frac{\lambda_{j}}{\omega_{0j}}\sin(\omega_{0j}t)\right)\right] = \sum_{n=-\infty}^{\infty}i^{n}\exp[\pm i(\omega_{j}+n\omega_{0j})t]\,J_{n}^{j}\left(\pm\alpha_{j}\right) \tag{B4}\]
with \(\alpha_{j}\equiv\lambda_{j}/\omega_{0j}\) and the Bessel functions \(J_{n}^{j}\left(\pm\alpha_{j}\right)\) of the first kind. Here, \(j\) is the index of the \(j\)th qubit.
If all the qubits are identical and the frequencies of the driving fields are the same, that is, \(\omega_{j}=\omega\) and \(\omega_{0j}=\omega_{0}\) with \(j=1,\cdots,2N\), then the Hamiltonian in Eq. (B2) becomes the effective Hamiltonian
\[H_{1}=\sum_{j\in\mathrm{odd}}^{2N}\left[P\sigma_{j}^{+}\sigma_{j+1}^{-}+ \mathrm{H.c.}\right]+\sum_{j\in\mathrm{even}}^{2N}\left[Q\sigma_{j}^{+}\sigma _{j+1}^{-}+\mathrm{H.c.}\right].\] (B5)
with effective coupling strengths
\[P = a\sum_{n=-\infty}^{\infty}(-1)^{n}J_{n}^{j}(\alpha_{j})J_{n}^{(j +1)}(\alpha_{j+1}),\] (B6) \[Q = b\sum_{n=-\infty}^{\infty}(-1)^{n}J_{n}^{j}(\alpha_{j})J_{n}^{(j +1)}(\alpha_{j+1}),\] (B7)
after all high-frequency oscillating terms are neglected. Here, \(j\) takes odd and even numbers for the coefficients \(P\) and \(Q\), respectively. It is clear that the coefficients \(P\) and \(Q\) can be tuned by changing the ratios \(\alpha_{j}\equiv\lambda_{j}/\omega_{0j}\) through the amplitudes of the driving fields when the frequencies \(\omega_{0j}\) are given.
However, in practice, the qubit frequencies are not exactly the same; this provides a more convenient way to tune the coupling strength via frequency matching [59]. If we assume that \(\omega_{j}-\omega_{j-1}=\omega_{0j}\) for odd \(j\) and \(\omega_{j}-\omega_{j-1}=\omega_{0(j-1)}\) for even \(j\), then the effective Hamiltonian can be written as
\[H_{2}=\sum_{j\in\mathrm{odd}}^{2N}\left[A^{\prime}\sigma_{j}^{+}\sigma_{j+1}^{ -}+\mathrm{H.c.}\right]+\sum_{j\in\mathrm{even}}^{2N}\left[B^{\prime}\sigma_{j }^{+}\sigma_{j+1}^{-}+\mathrm{H.c.}\right].\] (B8)
with effective coupling strengths
\[A^{\prime} = iaJ_{0}^{j}(\alpha_{j})J_{1}^{(j+1)}(\alpha_{j+1}),\] (B9) \[B^{\prime} = ibJ_{1}^{j}(\alpha_{j})J_{0}^{(j+1)}(\alpha_{j+1}),\] (B10)
which has been experimentally demonstrated in a qubit chain with five transmon qubits [105] and also with two coupled qubits [104]. Therefore, tunable couplings between qubits can be realized by modulating the qubit frequencies without increasing the complexity of the circuit.
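As a quick illustration of how the drive ratios control these effective couplings (a minimal sketch with arbitrary illustrative parameters, not values taken from the text), the Bessel-function rescaling in Eqs. (B9) and (B10) can be tabulated with SciPy.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n(x)

# Bare couplings (arbitrary illustrative units) and a range of drive ratios
# alpha_j = lambda_j / omega_0j, taken equal on neighboring qubits.
a_bare, b_bare = 0.05, 0.05
alphas = np.linspace(0.0, 3.0, 7)

# |A'| = a * J_0(alpha_j) * J_1(alpha_{j+1});  |B'| = b * J_1(alpha_j) * J_0(alpha_{j+1})
a_eff = a_bare * jv(0, alphas) * jv(1, alphas)
b_eff = b_bare * jv(1, alphas) * jv(0, alphas)

for r, p, q in zip(alphas, a_eff, b_eff):
    print(f"alpha = {r:.1f}:  |A'| = {abs(p):.4f},  |B'| = {abs(q):.4f}")
```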
#### B.2.2 Tunable coupling via couplers
If the qubits work at the optimal point and their frequencies are not modulated, then, as schematically shown in Fig. 12, additional couplers [59, 60, 61, 62, 63, 64] can be used to achieve tunable couplings between the qubits. Various couplers have already been demonstrated in two-qubit circuits [59].

## Appendix C Edge states and quench dynamics of the SSH qubit chain
\(b=1\) marked by a red point in Fig. 13(a). (iii) There is a zero-energy (\(E=0\)) mode lying in the middle of the bulk gap, where \(a<b\). The zero-energy mode has two degenerate states. They are presented in Figs. 13(b) and (c), corresponding to the \(a=0.1\) point marked by a star in Fig. 13(a). The eigenfunctions are localized at the left and right edges, and decay exponentially towards the bulk.
The appearance of the \(E=0\) mode with localized eigenfunctions is the key feature of the topological phase when \(a<b\). The localized eigenfunctions shown in Figs. 13(b) and (c) are the superpositions \(|L\rangle\pm|R\rangle\) of the left and right edge states \(|L\rangle\) and \(|R\rangle\). Here the left edge state is defined as
\[|L\rangle=\sum_{j\in\text{odd}}a_{j}|e_{j}\rangle, \tag{10}\]
where \(j\) is an odd number and \(a_{j}\) is the amplitude on the odd qubits. Similarly, the right edge state is written as
\[|R\rangle=\sum_{j\in\text{even}}b_{j}|e_{j}\rangle, \tag{11}\]
where \(j\) is an even number and \(b_{j}\) is the amplitude on the even qubits. Note that the vanishing amplitudes on the even (odd) sites of the left (right) edge state are a consequence of the chiral symmetry. The decay into the bulk is characterized by [10]
\[|a_{j}|=|a_{1}|\text{exp}\left(-\frac{j-1}{2\xi}\right), \tag{12}\]
where the localization length is \(\xi=(\ln|b|-\ln|a|)^{-1}\). When the ratio \(|b|/|a|\) becomes appreciably large, the wave function is almost entirely confined to the first and last qubits.
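This exponential decay is easy to verify numerically. The sketch below (illustrative, for the 14-qubit chain with \(a=0.1\) and \(b=1\)) reconstructs the left edge state from the two near-zero modes and compares its odd-site amplitudes with the decay law above.

```python
import numpy as np

a, b, n_sites = 0.1, 1.0, 14
hops = np.array([a if j % 2 == 0 else b for j in range(n_sites - 1)])
H = np.diag(hops, 1) + np.diag(hops, -1)
energies, states = np.linalg.eigh(H)

# The two near-zero modes are ~ (|L> +- |R>)/sqrt(2); recombine them into |L>.
zero_idx = np.argsort(np.abs(energies))[:2]
plus = (states[:, zero_idx[0]] + states[:, zero_idx[1]]) / np.sqrt(2)
minus = (states[:, zero_idx[0]] - states[:, zero_idx[1]]) / np.sqrt(2)
left = plus if abs(plus[0]) > abs(minus[0]) else minus

odd_amps = np.abs(left[0::2])
odd_amps /= odd_amps[0]                       # normalize to |a_1|

xi = 1.0 / (np.log(b) - np.log(a))            # localization length
j_odd = np.arange(1, n_sites, 2)              # odd site indices 1, 3, 5, ...
predicted = np.exp(-(j_odd - 1) / (2.0 * xi))
print("numerical :", np.round(odd_amps, 5))
print("predicted :", np.round(predicted, 5))
```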
The particle distributions in the eigenfunctions \(|\psi_{l}\rangle\) of the qubit chain can be measured via the variable \(\sigma_{n}^{z}\) of each qubit, with the measurement result \(\langle\psi_{l}|\sigma_{n}^{z}|\psi_{l}\rangle\). Here, the subscript \(n\) denotes the \(n\)th qubit. We note that in the single-excitation subspace, the Pauli operator \(\sigma_{n}^{z}\) can be expressed as \(\sigma_{n}^{z}=2|e_{n}\rangle\langle e_{n}|-I\), where \(I\) is the identity matrix. Although the qubits are coupled to each other, the excitation stays at the end of the qubit chain, as shown in Figs. 13(e) and (f).
To demonstrate the existence of edge states, we study the quench dynamics after the first qubit is flipped. That is, we assume that all the qubits are initially in the spin-down state, and then a \(\pi\) pulse is applied to flip the first qubit. The dynamics of the qubit chain can be measured by
\[\langle\sigma_{n}^{z}(t)\rangle=\langle e_{1}|e^{iH_{\text{S}}t}\sigma_{n}^{z }e^{-iH_{\text{S}}t}|e_{1}\rangle. \tag{13}\]
with the Hamiltonian given in Eq. (3). Below, we compare the dynamics of a topological SSH chain, described by the Hamiltonian in Eq. (3) when \(a=0.1\) and \(b=1\), with a transverse spin chain, described by the Hamiltonian in Eq. (3) when \(a=b=1\).
To model the small deviations of the qubits resulting from sample fabrication, a random noise \(\eta\) is introduced to the coupling strengths \(a\) and \(b\), as well as to the frequency \(\omega\) in Eq. (3). Here \(\eta\) follows a Gaussian distribution with mean value \(0\) and standard deviation \(0.01\). That is, the fluctuations of the coupling strengths and qubit frequencies are \(10\%\) of the smaller coupling strength \(a\).
For the topological SSH chain with \(a=0.1\) and \(b=1\) in the Hamiltonian of Eq. (3), Fig. 14(a) and (b) show that the excitation remains as a soliton at the first qubit. This can be understood by the evolution of the wave function
\[|\psi(t)\rangle=\sum_{l}e^{-iE_{l}t/\hbar}\langle\psi_{l}|e_{1}\rangle|\psi_{l}\rangle, \tag{14}\]
where \(|\psi_{l}\rangle\) denotes the \(l\)th eigenfunction of the Hamiltonian in Eq. (3) with corresponding eigenenergy \(E_{l}\). We start with the state \(|e_{1}\rangle\) after the excitation is injected at the first qubit. \(|e_{1}\rangle\) has a substantial overlap with the degenerate edge states of eigenenergy \(E=0\). This leads to a stationary state. Conversely, if we inject the excitation into the transverse Ising chain described by the Hamiltonian of Eq. (3) with \(a=b=1\), which has no localized edge states, it quickly diffuses into the bulk. This can be seen from Figs. 14(c) and (d). The excitation at the first qubit quickly expands into the bulk, reaches the end of the qubit chain, and is then reflected back. Similar propagation of such excitations has been demonstrated in Refs. [51; 52].
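The two quench behaviors can be reproduced by propagating the initial state through the spectral decomposition above; the following is a minimal sketch (assuming \(\hbar=1\), 14 qubits, and no disorder).

```python
import numpy as np

def chain(a, b, n_sites=14):
    """Single-excitation chain Hamiltonian with alternating couplings a and b."""
    hops = np.array([a if j % 2 == 0 else b for j in range(n_sites - 1)])
    return np.diag(hops, 1) + np.diag(hops, -1)

def sigma_z_first(H, times):
    """<sigma_1^z(t)> after flipping the first qubit, via spectral decomposition."""
    energies, states = np.linalg.eigh(H)
    # |psi(t)> = sum_l exp(-i E_l t) <psi_l|e_1> |psi_l>
    overlaps = states[0, :]                          # <psi_l|e_1> (real eigenvectors)
    amps = states @ (overlaps[:, None] * np.exp(-1j * np.outer(energies, times)))
    return 2.0 * np.abs(amps[0]) ** 2 - 1.0          # <sigma_1^z> = 2|<e_1|psi>|^2 - 1

times = np.linspace(0.0, 50.0, 201)
# Topological SSH chain: the excitation stays localized on the first qubit.
print("SSH chain (a=0.1, b=1):", sigma_z_first(chain(0.1, 1.0), times)[-1])
# Uniform chain: the excitation diffuses into the bulk.
print("uniform chain (a=b=1):", sigma_z_first(chain(1.0, 1.0), times)[-1])
```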
Figure 13: Spectrum and wave functions of the Hamiltonian (3). The total number of qubits is assumed to be 14. (a) Energy spectrum versus the coupling strength \(a\) with \(b=1\). Degenerate wave functions, corresponding to the zero-energy mode at \(a=0.1\) indicated by a red point in (a), are shown in (b) and (c), respectively. A typical bulk state at the point \(a=1\), marked by a green point in (a), is plotted in (d). The observables \(\langle\sigma_{n}^{z}\rangle\) of the corresponding wave functions in (b), (c) and (d) are shown in (e), (f) and (g). All the navy blocks denote the amplitudes on the odd qubits, while the red blocks represent the even sites.
In summary, due to the alternating coupling pattern \(a<b\) of the whole chain, the soliton is topologically protected and robust against disorder. Because the soliton resides in the gap, extra energy is required to excite it to other states. We have also shown that the random noise added to the parameters in Eq. (10) has no appreciable influence on the soliton state.
Though the soliton appears at the very end of the chain, it can also be created at the interface between the topological phase with \(a<b\) and topologically trivial phase with \(a>b\). For example, as shown in Ref. [107], if a defect is created at the center of a topological SSH chain, then a zero energy mode localizes at the defect. Moreover, the defect can serve as a high-fidelity memory, which is topologically protected. An arbitrary state encoded with the presence and absence of the localized state can allow for perfect state transfer in the spin chain [107; 108].
## Appendix D Hamiltonian derivation of the gap-tunable flux qubit
In this section, the Hamiltonian of inductively coupled gap-tunable flux qubits (Fig. 10) is derived. With our design, the qubit frequencies and the couplings between qubits (e.g., in Eq. (10)) can be tailored and tuned in situ by external fluxes.
### Hamiltonian of a single qubit
A gap-tunable flux qubit [55; 56; 57], as shown in Fig. 15, replaces the \(\alpha\) junction of a three-junction flux qubit [71] with a SQUID. This introduces an externally controllable flux \(f_{\alpha}\) to tune \(\alpha\) in situ, thus changing the gap (qubit frequency). To keep \(f_{\alpha}\) from affecting the biasing flux \(f_{\epsilon}\) threading the main loop, a gradiometric design is adopted. In this 8-shaped design, the central current of the three-junction flux qubit is split into two oppositely running currents. Because the magnetic fluxes generated by these two currents cancel each other in the main loop, independent control over both fluxes \(f_{\alpha}\) and \(f_{\epsilon}\) is ensured. Different types of tunable couplings can also be created via different loops. For example, longitudinal coupling between two qubits (\(\sigma_{l}^{z}\sigma_{l+1}^{z}\)) can be realized by inductively coupling two qubits through their \(f_{\alpha}\)-loops [102]. In this paper, we focus on the transverse coupling (\(\sigma_{l}^{x}\sigma_{l+1}^{x}\)) through the \(f_{\epsilon}\)-loop.
We follow the notation of Refs. [56; 102], as shown in Fig. 15. We assume the phase accumulated along the main trap-loop is \(\theta\). Here \(\beta\) denotes the ratio of the circumference of the \(\alpha\)-loop to that of the main trap-loop. The magnetic frustration threading the corresponding loop is denoted by \(f_{i}\). The phase difference across each junction is \(\varphi_{i}\).
Then the flux quantization conditions for the main
Figure 14: Time evolution of \(\langle\sigma_{j}^{z}\rangle\), after the first qubit is flipped to the spin-up state. (a) and (c) show the time evolution \(\langle\sigma_{1}^{z}\rangle\) of the first qubit. (b) and (d) show the time evolution \(\langle\sigma_{j}^{z}\rangle\) of all the qubits. A random noise with an amplitude of 10% of the coupling strength \(a\) is added to the qubit frequencies and coupling strengths. The topological SSH chain with \(a=0.1\) and \(b=1\) is shown in (a) and (b). The transverse Ising chain with \(a=b=1\) is plotted in (c) and (d). In our plots, the number of qubits in the chain is 14.
Figure 15: Circuit representation of a gap-tunable flux qubit. The sign \(\times\) denotes a Josephson junction. The long arrows denote the current direction. The phase accumulated along the main trap-loop is denoted by \(\theta\). The parameter \(\beta\) denotes the ratio between the circumference of the \(\alpha\)-loop and that of the main trap-loop. Here \(f_{\alpha}=\Phi_{\alpha}/\Phi_{0}\) denotes the reduced magnetic flux threading the \(\alpha\) loop, and \(f_{1}=\Phi_{1}/\Phi_{0}\) and \(f_{2}=\Phi_{2}/\Phi_{0}\) denote the reduced magnetic fluxes through the left and right parts of the main loop, respectively. \(E_{J}\) and \(C_{J}\) denote the Josephson energy and capacitance, respectively. The phase difference across each junction is denoted by \(\varphi_{i}\).
trap-loop, \(\alpha\)-loop, \(f_{\epsilon_{1}}\)-loop, \(f_{\epsilon_{2}}\)-loop are
\[\theta+2\pi(f_{\epsilon_{1}}+f_{\epsilon_{2}}+f_{\alpha})=2\pi N\] \[\varphi_{3}+\varphi_{4}+\beta\theta+2\pi f_{\alpha}=2\pi N_{\alpha}\] \[\frac{1}{2}(1-\beta)\theta-\varphi_{3}-\varphi_{2}-\varphi_{1}+2 \pi f_{\epsilon_{1}}=2\pi N_{1}\] \[\frac{1}{2}(1-\beta)\theta+\varphi_{1}+\varphi_{2}-\varphi_{4}+2 \pi f_{\epsilon_{2}}=2\pi N_{2}, \tag{101}\]
where \(N_{i}\) is the number of trapped fluxoids.
Using the above conditions, \(\varphi_{3}\) and \(\varphi_{4}\) can be expressed in terms of \(\varphi_{1}\) and \(\varphi_{2}\):
\[\varphi_{3}=-\pi[\beta(N-f_{\Sigma})+f_{\alpha}]-(\varphi_{1}+ \varphi_{2})-\pi(n-f_{\epsilon})+\pi N_{\alpha},\] \[\varphi_{4}=-\pi[\beta(N-f_{\Sigma})+f_{\alpha}]+(\varphi_{1}+ \varphi_{2})+\pi(n-f_{\epsilon})+\pi N_{\alpha}, \tag{102}\]
where \(f_{\Sigma}=f_{\epsilon_{1}}+f_{\epsilon_{2}}+f_{\alpha}\), \(f_{\epsilon}=f_{\epsilon_{1}}-f_{\epsilon_{2}}\), \(N=N_{1}+N_{2}+N_{\alpha}\), \(n=N_{1}-N_{2}\). For simplicity, we assume \(N_{\alpha}=0\) in the following analysis.
Following the standard circuit quantization process [109, 110], the charging energy of the capacitor represents the kinetic energy, while the Josephson energy represents the potential energy. Then the Lagrangian of the circuit in terms of \(\varphi_{1}\), \(\varphi_{2}\) is
\[\mathscr{L}(\dot{\varphi}_{i},\varphi_{i})=\left(\frac{\hbar}{2e}\right)^{2}\left[(1+2\alpha)\frac{C}{2}\left(\dot{\varphi}_{1}^{2}+\dot{\varphi}_{2}^{2}\right)+2\alpha C\dot{\varphi}_{1}\dot{\varphi}_{2}\right]-E_{J}\Big{(}2(1+\alpha)-\cos\varphi_{1}-\cos\varphi_{2}-2\alpha\cos\{\pi[\beta(N-f_{\Sigma})+f_{\alpha}]\}\cos[(\varphi_{1}+\varphi_{2})+\pi(n-f_{\epsilon})]\Big{)}. \tag{103}\]
The canonical momentum \(p_{i}\) conjugate to the coordinate \(\varphi_{i}\) is
\[p_{1}=\frac{\partial L}{\partial\dot{\varphi}_{1}}=(\frac{\hbar }{2e})^{2}[(1+2\alpha)C\dot{\varphi}_{1}+2\alpha C\dot{\varphi}_{2}],\] \[p_{2}=\frac{\partial L}{\partial\dot{\varphi}_{2}}=(\frac{\hbar }{2e})^{2}[(1+2\alpha)C\dot{\varphi}_{2}+2\alpha C\dot{\varphi}_{1}]. \tag{104}\]
The Hamiltonian is related to the Lagrangian by Legendre transformation
\[H(p_{i},\varphi_{i})=\sum p_{i}\dot{\varphi}_{i}-\mathscr{L}=\frac{4E_{C}}{(1+4\alpha)}\left[(1+2\alpha)n_{1}^{2}-4\alpha n_{1}n_{2}+(1+2\alpha)n_{2}^{2}\right]+E_{J}\Big{(}2(1+\alpha)-\cos\varphi_{1}-\cos\varphi_{2}-2\alpha\cos\{\pi[\beta(N-f_{\Sigma})+f_{\alpha}]\}\cos[(\varphi_{1}+\varphi_{2})+\pi(n-f_{\epsilon})]\Big{)}, \tag{105}\]
where the charging energy of the junction is defined as \(E_{C}=e^{2}/2C\). We have also introduced the number operator of Cooper pairs on the junction capacitor, \(n_{i}=p_{i}/\hbar\), which can also be written as \(n_{i}=-i\partial/\partial\varphi_{i}\).
The energy levels of the qubit are obtained numerically using the plane-wave expansion \(\Psi(\varphi_{1},\varphi_{2})=\frac{1}{2\pi}\sum_{k,l=-N}^{N}c_{k,l}\exp\{-i(k\varphi_{1}+l\varphi_{2})\}\). Here \(k\) (\(l\)) is an integer, corresponding to a state that has \(k\) (\(l\)) Cooper pairs on junction 1 (2). The charge-state cutoff is set to \(N=15\).
The energy levels as a function of the bias flux \(f_{\epsilon}\) are plotted in Fig. 16 (a). At the optimal working point, where \(f_{\epsilon}=0\), the lowest two energy levels are well separated from the higher excited states. The splitting of these two levels is the qubit frequency \(\omega\). As shown in Fig. 16 (b), \(\omega\) can be tuned by \(f_{\alpha}\).
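The following Python sketch illustrates the charge-basis (plane-wave) diagonalization described above for the Hamiltonian of Eq. (105). All numerical parameter values (\(E_{J}/E_{C}\), \(\alpha\), \(\beta\), \(f_{\alpha}\)), the assumption of a single trapped fluxoid (\(N=n=1\)), the choice \(f_{\epsilon_{1}}=-f_{\epsilon_{2}}\) so that \(f_{\Sigma}=f_{\alpha}\), the reduced charge cutoff, and the sign convention of the charge shift operator are illustrative assumptions rather than values taken from the paper; the script sketches the method and is not a reproduction of Fig. 16.

```python
import numpy as np

# Sketch of the charge-basis diagonalization of Eq. (105).  Parameter values
# are illustrative assumptions; a single fluxoid is assumed trapped in the
# main loop (N = n = 1), f_eps1 = -f_eps2 so that f_Sigma = f_alpha, and the
# charge cutoff is reduced from 15 to 10 to keep the matrices small.
EC, EJ = 1.0, 50.0
alpha, beta = 0.7, 0.02
f_alpha = 0.05
f_Sigma = f_alpha            # assuming f_eps1 = -f_eps2
Nf, nf = 1, 1                # trapped fluxoid numbers N and n
Nc = 10
D = 2 * Nc + 1

nop = np.diag(np.arange(-Nc, Nc + 1.0))   # charge operator for one mode
S = np.diag(np.ones(D - 1), -1)           # charge shift operator (e^{i*phi})
I = np.eye(D)

n1, n2 = np.kron(nop, I), np.kron(I, nop)
S1, S2 = np.kron(S, I), np.kron(I, S)

def H(f_eps):
    kin = 4 * EC / (1 + 4 * alpha) * ((1 + 2 * alpha) * n1 @ n1
          - 4 * alpha * n1 @ n2 + (1 + 2 * alpha) * n2 @ n2)
    cos1 = 0.5 * (S1 + S1.T)
    cos2 = 0.5 * (S2 + S2.T)
    theta = np.pi * (nf - f_eps)          # phase offset in Eq. (105)
    cos12 = 0.5 * (np.exp(1j * theta) * S1 @ S2
                   + np.exp(-1j * theta) * S1.T @ S2.T)
    pref = 2 * alpha * np.cos(np.pi * (beta * (Nf - f_Sigma) + f_alpha))
    pot = EJ * (2 * (1 + alpha) * np.eye(D * D) - cos1 - cos2 - pref * cos12)
    return kin + pot

for f_eps in (0.0, 0.01):
    E = np.linalg.eigvalsh(H(f_eps))
    print(f"f_eps = {f_eps:5.2f}: E1-E0 = {E[1]-E[0]:.4f}, E2-E1 = {E[2]-E[1]:.4f}")
```

Sweeping \(f_{\epsilon}\) and \(f_{\alpha}\) in the same way gives the qualitative level structure discussed around Fig. 16, although the quantitative values depend entirely on the assumed parameters.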
### Qubit-qubit coupling
As shown in Fig. 10, the qubits are coupled inductively via the \(f_{\epsilon}\)-loop. The current \(I_{p,j+1}\) circulating in the \(f_{\epsilon}\) loop of the \((j+1)\)th qubit can induce a change of flux \(\delta f_{j}\) in the \(f_{\epsilon}\) loop of qubit \(j\), giving rise to the coupling term \(J=MI_{pj}I_{p(j+1)}\), where the current \(I_{pj}=\partial\mathscr{H}_{j}/\partial f_{\epsilon}\). Here we assume \(\delta f_{j}\) does not affect the \(f_{\alpha}\)-loop.
As shown in the main text, the transverse coupling strength \(a=g_{\epsilon,\perp}^{j}g_{\epsilon,\perp}^{j+1}\), with
\[g_{\epsilon,\perp}=\langle e|\frac{\partial\mathscr{H}}{\partial f_{\epsilon}}|g\rangle. \tag{107}\]
For simplicity we have dropped the superscript \(j\).
Similarly the longitudinal coupling \(\lambda\sigma_{j}^{z}\sigma_{j+1}^{z}\) is also possible, where \(\lambda=g_{\epsilon,\parallel}^{j}g_{\epsilon,\parallel}^{j+1}\), with
\[g_{\epsilon,\parallel}=\langle+|\frac{\partial\mathscr{H}}{\partial f_{\epsilon }}|-\rangle. \tag{108}\]
Here \(|\pm\rangle=(|e\rangle\pm|g\rangle)/\sqrt{2}\).
Note that because of the gradiometric geometry, the flux in the main trap loop is insensitive to a homogeneous magnetic field. Thus only the asymmetric flux \(f_{\epsilon}=f_{\epsilon_{1}}-f_{\epsilon_{2}}\) contributes to the coupling.
We plot \(g_{\epsilon,\parallel}\) and \(g_{\epsilon,\perp}\) as functions of the bias flux \(f_{\epsilon}\) in Fig. 16 (c) and (d). At the optimal working point \(f_{\epsilon}=0\), \(g_{\epsilon,\perp}\) is largest, while \(g_{\epsilon,\parallel}=0\). This is consistent with the symmetry of the flux qubit: because the potential energy is symmetric about the optimal working point, the loop currents in the ground state, \(I_{0}=\langle g|\frac{\partial\mathscr{H}}{\partial f_{\epsilon}}|g\rangle\), and in the first excited state, \(I_{1}=\langle e|\frac{\partial\mathscr{H}}{\partial f_{\epsilon}}|e\rangle\), are zero. Thus, when biased at the optimal point, the qubit is first-order insensitive to dephasing noise.
To summarize, when the qubits are all biased at the optimal working point, only the transverse coupling is present. Even if there is a residual longitudinal coupling, it will only add fluctuations to the diagonal terms of the matrix in the single-excitation subspace. However, the previous results show that the chain is robust under such fluctuations. To create the alternating coupling pattern of the SSH model, one can vary the qubit spacing, which changes the mutual inductance \(M\). |
2310.07085 | A canonical Hamiltonian formulation of the Navier-Stokes problem | This paper presents a novel Hamiltonian formulation of the isotropic
Navier-Stokes problem based on a minimum-action principle derived from the
principle of least squares. This formulation uses the velocities
$u_{i}(x_{j},t)$ and pressure $p(x_{j},t)$ as the field quantities to be
varied, along with canonically conjugate momenta deduced from the analysis.
From these, a conserved Hamiltonian functional $H^{*}$ satisfying Hamilton's
canonical equations is constructed, and the associated Hamilton-Jacobi equation
is formulated for both compressible and incompressible flows. This
Hamilton-Jacobi equation reduces the problem of finding four separate field
quantities ($u_{i}$,$p$) to that of finding a single scalar functional in those
fields--Hamilton's principal functional $\text{S}^{*}[t,u_{i},p]$. Moreover,
the transformation theory of Hamilton and Jacobi now provides a prescribed
recipe for solving the Navier-Stokes problem: Find $\text{S}^{*}$. If an
analytical expression for $\text{S}^{*}$ can be obtained, it will lead via
canonical transformation to a new set of fields which are simply equal to their
initial values, giving analytical expressions for the original velocity and
pressure fields. Failing that, if one can only show that a complete solution to
this Hamilton-Jacobi equation does or does not exist, that will also resolve
the question of existence of solutions. The method employed here is not
specific to the Navier-Stokes problem or even to classical mechanics, and can
be applied to any traditionally non-Hamiltonian problem. | John W. Sanders, Adam C. DeVoria, Nathan J. Washuta, Gafar A. Elamin, Kevin L. Skenes, Joel C. Berlinghieri | 2023-07-01T01:21:07Z | http://arxiv.org/abs/2310.07085v3 | # A canonical Hamiltonian formulation of the Navier-Stokes problem
## Abstract
This paper presents a novel Hamiltonian formulation of the isotropic Navier-Stokes problem based on a minimum-action principle derived from the principle of least squares. This formulation uses the velocities \(u_{i}(x_{j},t)\) and pressure \(p(x_{j},t)\) as the field quantities to be varied, along with canonically conjugate momenta deduced from the analysis. From these, a conserved Hamiltonian functional \(H^{*}\) satisfying Hamilton's canonical equations is constructed, and the associated Hamilton-Jacobi equation is formulated for both compressible and incompressible flows. This Hamilton-Jacobi equation reduces the problem of finding four separate field quantities (\(u_{i}\),\(p\)) to that of finding a single scalar functional in those fields--Hamilton's functional S\({}^{*}[t,u_{i},p]\). Moreover, the transformation theory of Hamilton and Jacobi now provides a prescribed recipe for solving the Navier-Stokes problem: Find S\({}^{*}\). If an analytical expression for S\({}^{*}\) can be obtained, it will lead via canonical transformation to a new set of fields which are simply equal to their initial values, giving analytical expressions for the original velocity and pressure fields. Failing that, if one can only show that a complete solution to this Hamilton-Jacobi equation does or does not exist, that will also resolve the question of existence of solutions. The method employed here is not specific to the Navier-Stokes problem or even to classical mechanics, and can be applied to any traditionally non-Hamiltonian problem.
## 1 Introduction
Given the title of this paper, it is incumbent on the authors to assure the reader that we do not claim to have done the impossible. A viscous fluid is, after all, a non-Hamiltonian system. There is no action for which Hamilton's principle [1, 2, 3] yields the Navier-Stokes equations [4] in their usual form, and we do not claim otherwise. Remarkably, however, a Hamiltonian formulation can still be found by considering a _mathematically equivalent higher-order problem_, as we will now demonstrate via simple example.
### A motivating example
Consider the first-order initial-value problem
\[\dot{v}=-v,\quad v(0)=1, \tag{1}\]
with unique solution \(v(t)=e^{-t}\). Here \(v(t)\) can be interpreted as the velocity of a lumped mass moving in a viscous medium in one dimension with linear damping. Like the traditional Navier-Stokes equations [4], this too is an intrinsically non-Hamiltonian problem, in that there is no action \(\mathcal{S}\) for which Hamilton's principle (\(\delta\mathcal{S}=0\)) yields the governing equation \(\dot{v}=-v\). And yet, if we simply differentiate both sides of the equation (\(\ddot{v}=-\dot{v}\)), use the original equation to write \(\dot{v}=-v\), and apply the additional initial condition \(\dot{v}(0)=-v(0)=-1\), we arrive at the mathematically equivalent second-order problem
\[\ddot{v}=v,\quad v(0)=1,\quad\dot{v}(0)=-1, \tag{2}\]
which has the same unique solution \(v(t)=e^{-t}\) but which _is_ Hamiltonian--not in the sense that the total mechanical energy is conserved, but in the sense that it has mathematically Hamiltonian structure.
As first observed by Sanders [5, 6, 7, 8, 9], the associated action can be obtained by writing the original equation in standard form (\(\mathcal{R}\equiv\dot{v}+v=0\)), squaring the residual \(\mathcal{R}\), and integrating over time:
\[\mathcal{S}^{*}[v]=\int\text{d}t\left(\frac{1}{2}\mathcal{R}^{2}\right)=\int \text{d}t\left[\frac{1}{2}\left(\dot{v}^{2}+2v\dot{v}+v^{2}\right)\right] \sim\int\text{d}t\left[\frac{1}{2}\left(\dot{v}^{2}+v^{2}\right)\right], \tag{3}\]
where we have used the fact that \(2v\dot{v}=\text{d}(v^{2})/\text{d}t\) is a total time derivative and can therefore be excluded from the action without changing the resulting Euler-Lagrange equation [10]. This is the so-called "time-averaged principle of least squares" [5, 6, 7, 8, 9]: since \(\mathcal{R}=0\) is a local minimum of \(\mathcal{R}^{2}\), it is also a local minimum of \(\int dt(\mathcal{R}^{2})\). Varying \(v\), the first variation of \(\mathcal{S}^{*}\) is
\[\delta\mathcal{S}^{*}=\int\text{d}t\bigg{[}\dot{v}\delta\dot{v}+v\delta v \bigg{]}=\int\text{d}t\bigg{[}(-\ddot{v}+v)\delta v\bigg{]}+\bigg{[}\dot{v} \delta v\bigg{]}_{t_{1}}^{t_{2}}, \tag{4}\]
yielding the second-order equation \(\ddot{v}=v\) and revealing the canonically conjugate "momentum" \(\pi\equiv\dot{v}\). Here and in what follows, we will use the symbol \(\pi\) for canonically conjugate momenta, as is customary in Hamiltonian field theory, in order to avoid later confusion with the pressure field \(p\). Since the mathematical constant \(3.14159...\) does not appear in the present work, there will be no ambiguity.
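The minimum-action characterization in Eq. (3) is easy to probe numerically. The sketch below, a minimal illustration and not part of the formulation, discretizes the un-reduced action \(\int\text{d}t\,\tfrac{1}{2}(\dot{v}+v)^{2}\) on \([0,3]\), holds \(v(0)=1\) fixed, and minimizes over the remaining nodal values; the horizon, grid size, and general-purpose optimizer are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal numerical probe of the least-squares action of Eq. (3):
# discretize int (1/2)(v' + v)^2 dt on [0, T], hold v(0) = 1 fixed, and
# minimize over the remaining nodal values.
T, M = 3.0, 60
t = np.linspace(0.0, T, M + 1)
dt = t[1] - t[0]

def action(v_free):
    v = np.concatenate(([1.0], v_free))            # enforce v(0) = 1
    R = np.diff(v) / dt + 0.5 * (v[1:] + v[:-1])   # residual at midpoints
    return 0.5 * dt * np.sum(R ** 2)

res = minimize(action, np.ones(M), method="BFGS")
v_opt = np.concatenate(([1.0], res.x))
print("max |v - exp(-t)| =", np.max(np.abs(v_opt - np.exp(-t))))
```

The printed deviation from \(e^{-t}\) is limited only by the grid spacing and the optimizer tolerance, consistent with the claim that the actual motion minimizes \(\mathcal{S}^{*}\).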
The corresponding Hamiltonian is obtained via the Legendre transform
\[H^{*}[v,\pi]=\pi\dot{v}-\frac{1}{2}\left(\dot{v}^{2}+v^{2}\right)=\frac{1}{2} \left(\pi^{2}-v^{2}\right). \tag{5}\]
Notably, this Hamiltonian has nothing to do with the total mechanical energy of the system, although it _is_ a conserved quantity. In fact, \(H^{*}=0\) for the actual motion satisfying \(\pi\equiv\dot{v}=-v\). We note in passing that Liouville's theorem is satisfied, as the motion occurs along the line \(\pi=-v\), so that the phase-space volume, being always zero, is conserved. Hamilton's equations
\[\dot{v}=\frac{\partial H^{*}}{\partial\pi},\quad\dot{\pi}=-\frac{\partial H^{ *}}{\partial v} \tag{6}\]
are mathematically equivalent to the second-order problem \(\ddot{v}=v\) and therefore also mathematically equivalent to the original, first-order problem.
The associated Hamilton-Jacobi equation [1, 2, 3, 10, 11, 12, 13] is
\[\frac{1}{2}\left(\frac{\partial\mathsf{S}^{*}}{\partial v}\right)^{2}-\frac{1}{ 2}v^{2}+\frac{\partial\mathsf{S}^{*}}{\partial t}=0, \tag{7}\]
where Hamilton's principal function \(\mathsf{S}^{*}=\mathsf{S}^{*}(v,t)\) serves as the generating function for a canonical transformation to a new coordinate \(\phi\) which is constant and equal to its initial value. Although this is almost identical to the Hamilton-Jacobi equation for the simple harmonic oscillator--the only difference being the sign in front of \((1/2)v^{2}\)--the usual separable solution of the form \(\mathsf{S}^{*}(v,t)=W(v)+T(t)\) does not work, as the reader may check.
Instead, let us use a trial solution of the form
\[\mathsf{S}^{*}(v,t)=F(t)v+\frac{1}{2}v^{2}+f(t), \tag{8}\]
where \(F(t)\) and \(f(t)\) are as yet undetermined functions of \(t\). This trial solution was chosen to cancel the term \((1/2)v^{2}\) from the equation. Substituting our trial solution into the Hamilton-Jacobi equation, we find that
\[\frac{1}{2}[F(t)]^{2}+[F(t)+F^{\prime}(t)]v+f^{\prime}(t)=0. \tag{9}\]
In order for this equation to hold for all \(v\), we must have the following:
\[F(t)+F^{\prime}(t)=0\quad\Rightarrow\quad F(t)=\alpha e^{-t}, \tag{10}\]
where \(\alpha\) is a constant of integration which will be used to transform to the new coordinate, and
\[\frac{1}{2}[F(t)]^{2}+f^{\prime}(t)=0\quad\Rightarrow\quad f(t)=\frac{1}{4} \alpha^{2}e^{-2t}+\gamma, \tag{11}\]
where \(\gamma\) is another constant of integration which is simply additive and can therefore be discarded.
In this way, we have that
\[\mathsf{S}^{*}(v,t;\alpha)=\alpha e^{-t}v+\frac{1}{2}v^{2}+\frac{1}{4}\alpha^ {2}e^{-2t}. \tag{12}\]
With one constant of integration (\(\alpha\)) to match the single degree of freedom (\(v\)), this is a complete solution to the Hamilton-Jacobi equation. The new coordinate \(\phi\) (which is constant and equal to its initial value) is obtained via the canonical transformation
\[\phi=\frac{\partial\mathsf{S}^{*}}{\partial\alpha}=e^{-t}v+\frac{1}{2}\alpha e ^{-2t}. \tag{13}\]
The numerical value of \(\alpha\) is in turn obtained via the canonical relation
\[\pi=\frac{\partial\mathsf{S}^{*}}{\partial v}=\alpha e^{-t}+v, \tag{14}\]
which, evaluated at \(t=0\), gives \(\alpha=-2\) (recall that \(\pi=\dot{v}\), and \(\dot{v}(0)=-v(0)=-1\)). Using the fact that the new coordinate \(\phi\) is equal to its initial value, we have that
\[e^{-t}v-e^{-2t}=v(0)-1=0, \tag{15}\]
giving the correct solution \(v(t)=e^{-t}\).
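The algebra above can also be verified symbolically. The following Python sketch (using sympy) checks that the principal function of Eq. (12) satisfies the Hamilton-Jacobi equation (7), and that with \(\alpha=-2\) the canonical transformation of Eqs. (13)-(15) indeed returns \(v(t)=e^{-t}\).

```python
import sympy as sp

# Symbolic check of the worked example: the principal function of Eq. (12)
# satisfies the Hamilton-Jacobi equation (7), and with alpha = -2 the
# canonical transformation of Eqs. (13)-(15) returns v(t) = exp(-t).
t, v, alpha = sp.symbols("t v alpha")

S = alpha * sp.exp(-t) * v + v**2 / 2 + alpha**2 * sp.exp(-2 * t) / 4  # Eq. (12)

hj = sp.Rational(1, 2) * sp.diff(S, v)**2 - sp.Rational(1, 2) * v**2 + sp.diff(S, t)
print("Hamilton-Jacobi residual:", sp.simplify(hj))      # expect 0

phi = sp.diff(S, alpha)                                   # new coordinate, Eq. (13)
# phi is constant in time; with v(0) = 1 and alpha = -2 its value is 0, Eq. (15)
v_of_t = sp.solve(sp.Eq(phi.subs(alpha, -2), 0), v)[0]
print("v(t) from the canonical transformation:", sp.simplify(v_of_t))   # exp(-t)
```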
In summary, by doubling the order of the governing equation and supplying additional auxiliary conditions, we made a non-Hamiltonian problem into a Hamiltonian one [5, 6, 7, 8, 9]. Furthermore, this simple example demonstrates that the method correctly gives the solution to the original, non-Hamiltonian problem. Indeed, _it would appear that every non-Hamiltonian problem belongs to an equivalence class of problems with the same solution, and within that equivalence class there are Hamiltonian variants_. The remainder of this paper applies that concept to the isotropic Navier-Stokes problem [4].
### The Navier-Stokes problem
The incompressible Navier-Stokes equations [4] are given by
\[\rho\dot{u}_{i}+\rho u_{i,j}u_{j}+p_{,i}-\mu u_{i,jj}-\rho b_{i}=0, \tag{16}\]
\[u_{i,i}=0, \tag{17}\]
where \(\rho\) is the constant and uniform density, \(u_{i}=u_{i}(x_{j},t)\) is the velocity field, \(p=p(x_{j},t)\) is the pressure field, \(b_{i}=b_{i}(x_{j},t)\) is the body force field, subscript Roman indices label Euclidean tensor components (\(i,j=1,2,3\)), the \(x_{j}\) are Eulerian spatial coordinates, \(t\) is time, \(\mu\) is the viscosity, a superscribed dot denotes a _partial_ time derivative (\(\dot{u}_{i}=\partial u_{i}/\partial t\)), a comma in a subscript indicates a spatial gradient (\(p_{,i}=\partial p/\partial x_{i}\)), and we employ the Einstein summation convention on repeated subscript indices. In the case of a uniform gravitational field, \(b_{i}=g_{i}\) coincides with the local acceleration due to gravity; however, in what follows, we make no assumptions about the functional form of \(b_{i}(x_{j},t)\): it is completely arbitrary. There are four unknown field quantities: \(u_{i}(x_{j},t)\) and \(p(x_{j},t)\).
We seek, ultimately, a functional
\[H^{*}=H^{*}[t,u_{i},p,\pi_{i},\pi_{4}], \tag{18}\]
where (\(\pi_{i},\pi_{4}\)) are suitable "momenta" conjugate to the field quantities (\(u_{i},p\)), such that Hamilton's canonical equations
\[\dot{u}_{i} =\frac{\delta H^{*}}{\delta\pi_{i}}, \dot{p} =\frac{\delta H^{*}}{\delta\pi_{4}}, \tag{19}\] \[\dot{\pi}_{i} =-\frac{\delta H^{*}}{\delta u_{i}}, \dot{\pi}_{4} =-\frac{\delta H^{*}}{\delta p}, \tag{20}\]
constitute a mathematically equivalent _second-order_ formulation of the problem, where \(\delta H^{*}/\delta u_{i}\), \(\delta H^{*}/\delta p\), \(\delta H^{*}/\delta\pi_{i}\), and \(\delta H^{*}/\delta\pi_{4}\) are the functional derivatives of \(H^{*}\) with respect to the field quantities and the conjugate momenta. We will find that this is generally possible for a compressible fluid. For an incompressible fluid, the equation \(\dot{p}=\delta H^{*}/\delta\pi_{4}\) will need to be replaced by the incompressibility condition \(u_{i,i}=0\), consistent with the well known result that the pressure usually serves as a Lagrange multiplier for the incompressibility constraint [10].
The remainder of this paper is organized as follows. Section 2 gives a comprehensive overview of the relevant literature to date. Section 3 contains the main results of the present work: a conserved Hamiltonian functional \(H^{*}\) satisfying Hamilton's equations (19) and (20) for the mathematically equivalent second-order problem, along with the accompanying Hamilton-Jacobi equation [1, 2, 3, 10, 11, 12, 13]. Section 4 contains a discussion of the physical interpretation of the second-order formulation. Section 5 presents a brief case study in the form of one-dimensional flow over an infinite, flat plate. Finally, Section 6 concludes the paper with a few closing remarks and an outline of how the present formulation can aid in resolving the question of existence and uniqueness of solutions to the Navier-Stokes problem.
By the end of the paper, we will have achieved precisely what the title promises: a canonical Hamiltonian formulation of the problem, opening new avenues toward resolution of one of the most famous unsolved problems in mathematics.
## 2 Literature review
The field of analytical mechanics, with foundations planted in Hamilton's principle of stationary action [1, 2, 3] or d'Alembert's principle of virtual work [14], has been vital to the development of both classical and quantum physics since the eighteenth century. This approach is versatile and helpful to the physical understanding of the problem in question, and the foundation, structure, and utility of Hamiltonian formalism is well-documented [15, 16, 17, 18, 19, 20]. The supporting mathematics of the calculus of variations as well as symplectic and differential geometry can also be found in many excellent sources [21, 22, 23, 24, 25, 26, 27]. It is therefore no surprise that researchers have been applying analytical formalism to classical fluids dating back to the time of Lagrange [28, 29, 30, 31, 32, 33].
The task of obtaining solutions to the governing equations of fluid flow represents one of the most challenging problems in science and engineering. In most cases, the mathematical formulation is expressed as an initial-boundary-value problem: a set of coupled, nonlinear second order partial differential equations, which are to be solved subject to various initial- and boundary conditions. The degree of complication of the governing equations depends on the type of the fluid. For a viscous fluid where the transport phenomena of friction and thermal conduction are included, the governing equations are called the Navier-Stokes equations [4]. The Navier-Stokes equations are derived by applying fundamental physical principles--conservation of mass, conservation of momentum, and conservation of energy--to a viscous fluid, and the derivation can be found in any fluid mechanics textbook [34, 35, 36]. As far as the present authors are aware, to date there is still no firm answer to the question of whether or not there always exist unique, smooth, nonsingular solutions to the three-dimensional Navier-Stokes equations [37], and this constitutes one of the most famous unsolved problems in mathematics.
The application of analytical mechanics to the field of fluid mechanics [38, 39, 21, 10, 25] has recently seen a resurgence in interest [30, 40, 41] after a long history. Serrin [42], Benjamin [43], and Holm _et al._[44] have all described variational and Hamiltonian formulations of incompressible, inviscid fluid flow. Roberts [45] presented a Hamiltonian dynamics for weakly interacting vortices, obtaining the canonical equations of Hamiltonian dynamics for a set of two well-separated vortex rings by setting up a Hamiltonian to define the set. Olver [46] showed that the Euler equations of inviscid and incompressible fluid flow can be put into Hamiltonian form. Benjamin and Olver [47] investigated the Hamiltonian structure of the water waves problem. They examined the symmetry groups of this problem, finding that Hamiltonian analysis enables the solution of conservative elements of the problem. However, the study also acknowledged that further study is needed to identify the physical significance of the mathematical results. Maddocks and Pego [48] presented a novel Hamiltonian formulation of ideal fluid flow expressed in material coordinates. Their Hamiltonian formulation arises from a general approach for constrained systems that is not restricted to problems in fluid mechanics. Rather, it is widely applicable for obtaining
unconstrained Hamiltonian dynamical systems from Lagrangian field equations that are subject to pointwise constraints. More recently, Arnold [49] also studied the Hamiltonian nature of the ideal Euler equations.
Viscous forces are non-conservative, which presents a fundamental challenge when applying Hamilton's principle to fluid mechanics [37]. The mathematical study of the variational methods as applied to the Navier-Stokes equations is an ongoing endeavor, though obtaining the Navier-Stokes equations from a purely Hamiltonian formulation remains a relatively unexplored area of study [50, 51, 52]. Solutions to the problem of non-conservative forces in an inherently conservative formalism have been attempted by many [53, 54, 55, 56, 57] resulting in several procedures to analyze certain non-conservative cases.
Oseledets [58] attempted to express the Navier-Stokes equations using Hamiltonian formalism. He was able to formalize the incompressible Euler equation but stated that his result is not valid for a compressible fluid. More recent attempts, such as Fukagawa and Fujitani [59] and Jones [60], have enforced dissipation using a non-holonomic constraint on the entropy. Hochgerner [61] attempted to obtain a Hamiltonian interacting particle system that could accurately model fluid dynamics. His research separated the dynamics into slow (deterministic) and fast (stochastic) components to capture fine-scale effects. The study was able to derive the Navier-Stokes equation from a stochastic Hamiltonian system but ignored the stress tensor, was unable to separate configuration and momentum variables, and did not establish energy conservation or dissipation.
Rashad _et al._[62] modeled the incompressible Navier-Stokes equations in so-called "port-Hamiltonian" framework rather than the standard Hamiltonian framework. Their model used vector calculus instead of exterior calculus to minimize the number of operators. While the main goal of this research was increasing the interest of computational researchers in using vector calculus, they also demonstrated that vector calculus can help in the formulation of individual subsystems of Navier-Stokes equations and boundary ports of the model.
Particularly relevant to the present work, Sanders [5, 6, 7, 9] has shown that higher-order dynamics are "intrinsically variational," in the sense that higher-derivative versions of the classical equations of motion can be derived from a minimum action principle even for dissipative systems, thus allowing inherently non-Hamiltonian problems to be treated as though they are Hamiltonian. This discovery has already led to two applications: the direct modal analysis of damped dynamical systems [6] and subsequently a new and more efficient algorithm for computing a damped system's resonant frequencies [8]. Higher-derivative theories had been studied before in the realm of quantum gravity physics [63, 64, 65, 66, 67, 68, 69, 70] but until now they have not been applied to classical fluids. While the Navier-Stokes equations, in their standard form, may be unsuited to Hamiltonian formalism [37, 51, 52, 71], it will be shown here that higher-order dynamics can be used to restate the problem in a form consistent with Hamiltonian and Hamilton-Jacobi formalism.
In conclusion, although the body of research surrounding the Navier-Stokes equations is extensive, it would appear that no canonical Hamiltonian formulation of the Navier-Stokes equations has been found to date. Indeed, that is what the present work aims to achieve.
## 3 Analysis
Although we are primarily interested in the incompressible form of the equations given by (16) and (17), for reasons that will become clear shortly, here we will begin with the compressible form of the equations, with the understanding that we will eventually take the incompressible limit. For the compressible case, the linear momentum balance and continuity equations are given by
\[\mathcal{R}_{i}[t,x_{j},u_{j},p,\rho]\equiv\rho\dot{u}_{i}+\rho u_{i,j}u_{j}+p_{,i }-\mu u_{i,jj}-(\mu+\lambda)u_{j,ji}-\rho b_{i}=0, \tag{21}\]
\[\mathcal{R}_{4}[u_{j},\rho]\equiv\dot{\rho}+\rho_{,i}u_{i}+\rho u_{i,i}=0, \tag{22}\]
where \(\rho=\rho(x_{j},t)\) is the density field (now one of the unknown field quantities along with \(u_{i}\) and \(p\)), and \(\lambda\) is an additional viscosity coefficient which, under Stokes's [4] hypothesis, is related to \(\mu\) as \(\lambda=-2\mu/3\), ensuring that the mechanical pressure agrees with the thermodynamic pressure. Henceforth we will assume that all quantities have been suitably non-dimensionalized. The non-dimensional (constant) viscosities in (21) and (22) may be regarded as inverse Reynolds numbers, and the non-dimensional pressure may be considered to be normalized by the inertial scale \(\rho_{0}U^{2}\), with \(\rho_{0}\) and \(U\) appropriate density and velocity scales.
In general, (21) and (22) would be appended with the energy equation, which introduces additional thermodynamic variables, such as temperature and enthalpy or entropy. Two of the thermodynamic variables are designated as "primary," and equations of state are required to relate the remaining variables to the primary variables. Typically, pressure and temperature are chosen as the primary variables, and the equation of state for the density, for example, is expressed as \(\rho=\rho(p,T)\). The conservation equations along with the equation of state constitute six equations for the six unknowns fields \((u_{i},p,T,\rho)\). Henceforth in the present work, we will take the temperature to be constant, though we intend to consider variations in temperature in future work.
An incompressible flow is one for which the material derivative of the density vanishes, _i.e._, \(\mathrm{d}\rho/\mathrm{d}t=\dot{\rho}+\rho_{,i}u_{i}=0\), and this condition serves as an equation of state. It is usually also assumed, for the sake of simplicity, that the density is both constant and uniform, further reducing the equation of state \(\rho=\rho(p,T)\) to specification of \(\rho=\rho_{0}\) as a system parameter. Consequently, (22) reduces to \(\rho u_{i,i}=0\) and the energy equation is decoupled from the system. Accordingly, there are now only four unknown field quantities \((u_{i},p)\) and the momentum balance and continuity equations suffice for the governing field equations.
We pause here to remark that all four field equations (21), (22) are _first-order_ in time with respect to the field quantities \(u_{i}\) and \(\rho\). This will be important shortly, when we double the order of the equations. It should also be noted, as mentioned previously, that the first-order problem described above is inherently _non-Hamiltonian_, in that there is no action \(\mathcal{S}\) for which Hamilton's principle (\(\delta\mathcal{S}=0\)) yields the first-order field equations. Finally, we note that in the incompressible limit, \(\mathcal{R}_{4}\) becomes independent of \(\dot{\rho}\) and is no longer first-order in time.
### Second-order formulation
Although the first-order formulation of the problem is intrinsically non-Hamiltonian, nevertheless a Hamiltonian for the system may be found by considering a second-order formulation. Following Sanders [9], we observe that the actual motion of the fluid corresponds to the particular fields \((u_{i},p,\rho)\) for which the following action achieves a local minimum:
\[\mathcal{S}^{*}[u_{j},p,\rho]=\int\mathrm{d}^{4}x\left(\frac{1}{2}\mathcal{R }_{i}\mathcal{R}_{i}+\frac{1}{2}\mathcal{R}_{4}\mathcal{R}_{4}\right), \tag{23}\]
where \(\mathrm{d}^{4}x=\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\mathrm{d}t\), and the integral is carried out over both the control volume \(\mathcal{V}\) occupied by the fluid (\(x_{j}\in\mathcal{V}\)) and the time period of interest (\(t\in[t_{1},t_{2}]\)). It must be emphasized that this action contains no new physics. Again, this is simply the principle of least squares averaged over the spacetime occupied by the fluid. The entire physics of the problem is already contained in the residuals (\(\mathcal{R}_{i}\), \(\mathcal{R}_{4}\)).
Without an equation of state relating \(\rho\) to \(p\), the problem is underconstrained with five unknown field quantities and only four dynamical field equations. Anticipating the case of incompressible flow, where the density is constant and the four field quantities are \(u_{i}\) and \(p\), henceforth we will assume an equation of state of the form \(\rho=\hat{\rho}(p)\), with \(\hat{\rho}\) a known function determined either from first principles or empirically. In this way, the density field may be eliminated in favor of the pressure field, and the field equations assume the following form:
\[\mathcal{R}_{i}[t,x_{j},u_{j},p]\equiv\hat{\rho}\dot{u}_{i}+\hat{\rho}u_{i,j}u_ {j}+p_{,i}-\mu u_{i,jj}-(\mu+\lambda)u_{j,ji}-\hat{\rho}b_{i}=0, \tag{24}\]
\[\mathcal{R}_{4}[u_{j},p]\equiv\hat{\rho}^{\prime}\dot{p}+\hat{\rho}^{\prime}p_ {,i}u_{i}+\hat{\rho}u_{i,i}=0, \tag{25}\]
where \(\hat{\rho}^{\prime}=d\hat{\rho}/dp\). We note that, under equilibrium conditions, \(\hat{\rho}^{\prime}\) is related to the speed of sound \(c\) and the bulk modulus \(K\) as \(\hat{\rho}^{\prime}=1/c^{2}=\rho/K\) (for incompressible fluids, \(\hat{\rho}^{\prime}\equiv 0\) and the speed of sound and bulk modulus are both infinite). Having specified \(\hat{\rho}(p)\), \(\mu\), \(\lambda\), and \(b_{i}(x_{j},t)\), and having prescribed appropriate auxiliary conditions (initial and boundary conditions), one seeks the four field quantities \((u_{i},p)\) satisfying the governing field equations and the auxiliary conditions. To recover the case of incompressible flow, we will eventually take \(\hat{\rho}^{\prime}\equiv 0\).
We pause here to note that, even though the residuals (\(\mathcal{R}_{i}\), \(\mathcal{R}_{4}\)) vanish for the actual motion, they are _not_ trivially zero. That is, the residuals _only_ vanish for the _particular_ fields \((u_{i},p)\) which satisfy the first-order field equations (24) and (25); they do not vanish for _every conceivable_\((u_{i},p)\). Thus it is not appropriate to take \(\mathcal{R}_{i}\equiv 0,\mathcal{R}_{4}\equiv 0\). We will return to this point later when we discuss the Hamiltonian formulation of the problem.
For now, we note that the action \(\mathcal{S}^{*}=\mathcal{S}^{*}[u_{i},p]\) defines a Lagrangian
\[L^{*}[t,u_{i},p]=\int\text{d}^{3}x(\mathcal{L}^{*}), \tag{26}\]
where the integral is carried out over the volume \(\mathcal{V}\) only (\(\text{d}^{3}x=\text{d}x_{1}\text{d}x_{2}\text{d}x_{3}\)), with Lagrangian density
\[\mathcal{L}^{*}[t,x_{j},u_{j},p]=\frac{1}{2}\mathcal{R}_{i}\mathcal{R}_{i}+ \frac{1}{2}\mathcal{R}_{4}\mathcal{R}_{4}. \tag{27}\]
Because the residuals (\(\mathcal{R}_{i}\), \(\mathcal{R}_{4}\)) have been non-dimensionalized, the Lagrangian density is also dimensionless. Once again, even though the Lagrangian vanishes for the actual motion, it is not trivially zero, and it is not appropriate to take \(L^{*}\equiv 0\).
As noted above, the actual motion of the fluid corresponds to the particular fields \((u_{i},p)\) for which \(\mathcal{S}^{*}\) achieves a local minimum. To obtain the Euler-Lagrange equations, the conjugate momenta, and the natural auxiliary conditions, we insist that \(\mathcal{S}^{*}\) not vary to first order (\(\delta\mathcal{S}^{*}=0\)) under small variations in the fields \((\delta u_{i},\delta p)\). Evaluating \(\delta\mathcal{S}^{*}\), integrating by parts, and collecting like terms, we find that
\[\delta\mathcal{S}^{*}=\int\text{d}^{4}x\bigg{\{}\bigg{[}-\frac{\partial}{\partial t}\left(\hat{\rho}\mathcal{R}_{i}\right)-\frac{\partial}{\partial x_{j}}\left(\hat{\rho}\mathcal{R}_{i}u_{j}\right)+\hat{\rho}\mathcal{R}_{j}u_{j,i}-\mu\mathcal{R}_{i,jj} \tag{28}\]
\[\qquad\qquad-(\mu+\lambda)\mathcal{R}_{j,ij}+\hat{\rho}^{\prime}\mathcal{R}_{4}p_{,i}-\frac{\partial}{\partial x_{i}}\left(\hat{\rho}\mathcal{R}_{4}\right)\bigg{]}\delta u_{i} \tag{29}\]
\[\qquad+\bigg{[}\hat{\rho}^{\prime}\mathcal{R}_{i}\dot{u}_{i}+\hat{\rho}^{\prime}\mathcal{R}_{i}u_{i,j}u_{j}-\mathcal{R}_{i,i}-\hat{\rho}^{\prime}\mathcal{R}_{i}b_{i}+\hat{\rho}^{\prime\prime}\mathcal{R}_{4}\dot{p} \tag{30}\]
\[\qquad\qquad-\frac{\partial}{\partial t}\left(\hat{\rho}^{\prime}\mathcal{R}_{4}\right)+\hat{\rho}^{\prime\prime}\mathcal{R}_{4}p_{,i}u_{i}-\frac{\partial}{\partial x_{i}}\left(\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i}\right)+\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i,i}\bigg{]}\delta p\bigg{\}} \tag{31}\]
\[\qquad+\int\text{d}^{3}x\bigg{[}\hat{\rho}\mathcal{R}_{i}\delta u_{i}+\hat{\rho}^{\prime}\mathcal{R}_{4}\delta p\bigg{]}_{t_{1}}^{t_{2}} \tag{32}\]
\[\qquad+\int\text{d}t\int\text{d}^{2}x\bigg{\{}\bigg{[}\hat{\rho}\mathcal{R}_{i}u_{j}n_{j}+\mu\mathcal{R}_{i,j}n_{j}+(\mu+\lambda)\mathcal{R}_{j,i}n_{j}+\hat{\rho}\mathcal{R}_{4}n_{i}\bigg{]}\delta u_{i} \tag{33}\]
\[\qquad\qquad+\bigg{[}-\mu\mathcal{R}_{i}n_{j}-(\mu+\lambda)\mathcal{R}_{j}n_{i}\bigg{]}\delta u_{i,j}+\bigg{[}\mathcal{R}_{i}n_{i}+\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i}n_{i}\bigg{]}\delta p\bigg{\}} \tag{34}\]
where the purely volumetric integral (\(\text{d}^{3}x\)) is carried out over \(\mathcal{V}\), and the surface integral (\(\text{d}^{2}x\)) is carried out over the boundary \(\partial\mathcal{V}\). Note that, because we are using Eulerian coordinates \(x_{j}\), the volume element \(\text{d}^{3}x\) is not to be varied.
The Euler-Lagrange equations (which hold for all \(x_{j}\in\mathcal{V}\)) may be read directly from the spacetime (\(\text{d}^{4}x\)) integral:
\[\delta u_{i}:\quad-\frac{\partial}{\partial t}\left(\hat{\rho}\mathcal{R}_{i}\right)-\frac{\partial}{\partial x_{j}}\left(\hat{\rho}\mathcal{R}_{i}u_{j}\right)+\hat{\rho}\mathcal{R}_{j}u_{j,i}-\mu\mathcal{R}_{i,jj}-(\mu+\lambda)\mathcal{R}_{j,ij}+\hat{\rho}^{\prime}\mathcal{R}_{4}p_{,i}-\frac{\partial}{\partial x_{i}}\left(\hat{\rho}\mathcal{R}_{4}\right)=0 \tag{35}\]
\[\delta p:\quad\hat{\rho}^{\prime}\mathcal{R}_{i}\dot{u}_{i}+\hat{\rho}^{\prime}\mathcal{R}_{i}u_{i,j}u_{j}-\mathcal{R}_{i,i}-\hat{\rho}^{\prime}\mathcal{R}_{i}b_{i}+\hat{\rho}^{\prime\prime}\mathcal{R}_{4}\dot{p}-\frac{\partial}{\partial t}\left(\hat{\rho}^{\prime}\mathcal{R}_{4}\right)+\hat{\rho}^{\prime\prime}\mathcal{R}_{4}p_{,i}u_{i}-\frac{\partial}{\partial x_{i}}\left(\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i}\right)+\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i,i}=0 \tag{36}\]
It should be noted that all four Euler-Lagrange equations (35), (36) are _second-order_ in time, as they involve time derivatives of the residuals. By doubling the order of the equations, we have put the problem in Hamiltonian form, consistent with the general result of Sanders [9]. We also note that all four Euler-Lagrange equations of the second-order formulation are automatically satisfied by the solution to the first-order formulation (_i.e._, the actual motion), for which \(\mathcal{R}_{i}=0\) and \(\mathcal{R}_{4}=0\) everywhere and at all times.
Corresponding to each of the four field quantities is a canonically conjugate "momentum" field, which can be read from (32). The momenta conjugate to the velocities \(u_{i}\) are
\[\pi_{i}\equiv\hat{\rho}\mathcal{R}_{i}, \tag{37}\]
and the momentum conjugate to the pressure \(p\) is
\[\pi_{4}\equiv\hat{\rho}^{\prime}\mathcal{R}_{4}. \tag{38}\]
The natural "temporal conditions" are that the conjugate momenta should vanish at the endpoints \(t_{1}\) and \(t_{2}\). When enforced as initial conditions, these guarantee that the solution to the second-order formulation coincides with the solution to the first-order formulation (_i.e._, the actual motion). In other words, the actual motion _is_ the natural evolution of the second-order formulation, and unlike the familiar form of Hamilton's principle, here the variations \((\delta u_{i},\delta p)\) are _not_ taken to vanish at the endpoints. Although the conjugate momenta (\(\pi_{i}\), \(\pi_{4}\)) do not coincide with conventional linear or angular momenta, there is nonetheless a curious mathematical connection between the conjugate momenta and the linear momentum density \(P_{i}=\rho u_{i}\), which we will see presently from the natural boundary conditions.
The natural boundary conditions are read directly from the surface (d\({}^{2}x\)) integral:
\[\delta u_{i}: \hat{\rho}\mathcal{R}_{i}u_{j}n_{j}+\mu\mathcal{R}_{i,j}n_{j}+( \mu+\lambda)\mathcal{R}_{j,i}n_{j}+\hat{\rho}\mathcal{R}_{4}n_{i}=0 \tag{39}\] \[\delta u_{i,j}: -\mu\mathcal{R}_{i}n_{j}-(\mu+\lambda)\mathcal{R}_{j}n_{i}=0\] (40) \[\delta p: \mathcal{R}_{i}n_{i}+\hat{\rho}^{\prime}\mathcal{R}_{4}u_{i}n_{ i}=0 \tag{41}\]
This last condition, (41), establishes a connection between the new conjugate momenta and the conventional linear momenta. Multiplying (41) by \(\hat{\rho}\), and noting that \(\hat{\rho}u_{i}=\rho u_{i}=P_{i}\), we find that
\[(\pi_{i}+\pi_{4}P_{i})n_{i}=0. \tag{42}\]
Evidently, boundary condition (41) states that the flux of the vector \(\Pi_{i}\equiv\pi_{i}+\pi_{4}P_{i}\) through the boundary \(\partial\mathcal{V}\) should vanish. It is interesting that this new vector \(\Pi_{i}\) contains both old and new momenta, with \(\pi_{4}\) "carried" (_i.e._, given direction) by \(P_{i}\). The actual physical meaning of these natural boundary conditions is less clear and may require further investigation.
### Equivalence of the first- and second-order formulations
The first- and second-order formulations are mathematically equivalent, in the sense that imposing _identical auxiliary conditions_ on the two formulations will yield identical solutions \((u_{i},p)\). In other words, with identical auxiliary conditions, \((u_{i},p)\) is a solution to the first-order formulation _if and only if_ the same \((u_{i},p)\) is a solution to the second-order formulation.
The proof is straightforward. Consider the two formulations separately, and impose on the second-order formulation identical auxiliary conditions to those of the first-order formulation. In particular, just like the simple example given in Section 1.1, the second-order formulation requires additional auxiliary conditions over and above those applied to the first-order formulation. These include initial conditions making \(\mathcal{R}_{i}(x_{j},0)=0\) and \(\mathcal{R}_{4}(x_{j},0)=0\) for all \(x_{j}\in\mathcal{V}\cup\partial\mathcal{V}\), along with boundary conditions making \(\mathcal{R}_{i}(x_{j},t)=0\), \(\mathcal{R}_{i,j}(x_{j},t)=0\), and \(\mathcal{R}_{4}(x_{j},t)=0\) for all \(x_{j}\in\partial\mathcal{V}\) and all times \(t\). By supposition, the auxiliary conditions applied to the two formulations are identical, so it suffices to show that \((u_{i},p)\) satisfies the governing field equations (24), (25) of the first-order formulation (\(\mathcal{R}_{i}=0\) and \(\mathcal{R}_{4}=0\)) everywhere in \(\mathcal{V}\) and at all times _if and only if_\((u_{i},p)\) satisfies the Euler-Lagrange equations (35), (36) of the second-order formulation everywhere in \(\mathcal{V}\) and at all times.
Suppose first that \((u_{i},p)\) satisfies the governing field equations (24), (25) of the first-order formulation everywhere in \(\mathcal{V}\) and at all times. Then \(\mathcal{R}_{i}=0\), \(\mathcal{R}_{4}=0\), and \((u_{i},p)\) is a trivial solution to the Euler-Lagrange equations (35), (36) of the second-order formulation.
Conversely, suppose that \((u_{i},p)\) satisfies the Euler-Lagrange equations (35), (36) of the second-order formulation everywhere in \(\mathcal{V}\) and at all times. We note that \((\mathcal{R}_{i}=0,\mathcal{R}_{4}=0)\) constitutes an equilibrium solution of the Euler-Lagrange equations (35), (36). Thus, because the initial conditions are chosen such that \(\mathcal{R}_{i}(x_{j},0)=0\) and \(\mathcal{R}_{4}(x_{j},0)=0\) for all \(x_{j}\in\mathcal{V}\), and because the boundary conditions are chosen such that \(\mathcal{R}_{i}(x_{j},t)=0\) and \(\mathcal{R}_{4}(x_{j},t)=0\) for all \(x_{j}\in\partial\mathcal{V}\) and all times \(t\), then \(\mathcal{R}_{i}\) and \(\mathcal{R}_{4}\) will remain identically zero everywhere in \(\mathcal{V}\) for all future times. Thus, \((u_{i},p)\) satisfies the governing field equations (24), (25) of the first-order formulation everywhere in \(\mathcal{V}\) and at all times. This completes the proof, and we have established that the two formulations are equivalent. \(\square\)
### Hamilton's equations
We are now ready to proceed with the Hamiltonian formulation of the problem. The Lagrangian \(L^{*}\) has a corresponding Hamiltonian
\[H^{*}=\int\text{d}^{3}x(\mathcal{H}^{*}), \tag{43}\]
with the Hamiltonian density \(\mathcal{H}^{*}\) obtained from the Lagrangian density \(\mathcal{L}^{*}\) via the Legendre transform
\[\mathcal{H}^{*}=\pi_{i}\dot{u}_{i}+\pi_{4}\dot{p}-\mathcal{L}^{*}=\pi_{i}\dot {u}_{i}+\pi_{4}\dot{p}-\frac{1}{2}\mathcal{R}_{i}\mathcal{R}_{i}-\frac{1}{2} \mathcal{R}_{4}\mathcal{R}_{4}. \tag{44}\]
Again, this \(H^{*}\) has nothing to do with the total mechanical energy of the system, although it _is_ a conserved quantity, since \(H^{*}=0\) for the actual motion--just as in the example of Section 1.1. In order to write down Hamilton's equations, we must express \(\mathcal{H}^{*}\) in terms of the fields and the conjugate momenta, eliminating \(\dot{u}_{i}\) and \(\dot{p}\).
Observe that \(\mathcal{R}_{i}=\pi_{i}/\hat{\rho}\), and ignoring for the moment the incompressible limit, we may write \(\mathcal{R}_{4}=\pi_{4}/\hat{\rho}^{\prime}\). In this way, using the functional expressions for the residuals given by (24) and (25), we find that
\[\dot{u}_{i}=\frac{1}{(\hat{\rho})^{2}}\pi_{i}-\frac{1}{\hat{\rho}}\bigg{(}\hat {\rho}u_{i,j}u_{j}+p_{,i}-\mu u_{i,jj}-(\mu+\lambda)u_{j,ji}-\hat{\rho}b_{i} \bigg{)}, \tag{45}\]
\[\dot{p}=\frac{1}{(\hat{\rho}^{\prime})^{2}}\pi_{4}-\frac{1}{\hat{\rho}^{ \prime}}\bigg{(}\hat{\rho}^{\prime}p_{,i}u_{i}+\hat{\rho}u_{i,i}\bigg{)}, \tag{46}\]
and
\[\mathcal{H}^{*}[t,x_{i},u_{i},p,\pi_{i},\pi_{4}]= \frac{1}{2}\frac{1}{(\hat{\rho})^{2}}\pi_{i}\pi_{i}-\frac{1}{\hat {\rho}}\bigg{(}\hat{\rho}u_{i,j}u_{j}+p_{,i}-\mu u_{i,jj}-(\mu+\lambda)u_{j, ji}-\hat{\rho}b_{i}\bigg{)}\pi_{i}\] \[+\frac{1}{2}\frac{1}{(\hat{\rho}^{\prime})^{2}}\pi_{4}\pi_{4}- \frac{1}{\hat{\rho}^{\prime}}\bigg{(}\hat{\rho}^{\prime}p_{,i}u_{i}+\hat{\rho }u_{i,i}\bigg{)}\pi_{4}. \tag{47}\]
Hamilton's equations [2, 3] are as follows:
\[\dot{u}_{i} =\frac{\delta H^{*}}{\delta\pi_{i}}, \dot{p} =\frac{\delta H^{*}}{\delta\pi_{4}}, \tag{48}\] \[\dot{\pi}_{i} =-\frac{\delta H^{*}}{\delta u_{i}}, \dot{\pi}_{4} =-\frac{\delta H^{*}}{\delta p}, \tag{49}\]
where \(\delta H^{*}/\delta u_{i}\), \(\delta H^{*}/\delta p\), \(\delta H^{*}/\delta\pi_{i}\), and \(\delta H^{*}/\delta\pi_{4}\) are the functional derivatives of \(H^{*}\) with respect to the field quantities and the conjugate momenta. The latter equations, (49), reproduce the Euler-Lagrange equations (35), (36) of the second-order formulation.
We return now to our previous observation concerning the vanishing of the residuals. While \(H^{*}\) vanishes for the _particular_ fields \((u_{i},p)\) that satisfy the governing field equations (24), (25) of the first-order formulation, it does not vanish for _every conceivable_\((u_{i},p)\). The latter would imply, according to Equations (48), that \(\dot{u}_{i}\equiv 0\) and \(\dot{p}\equiv 0\), which is not generally the case. This observation, and the fact that Equations (49) faithfully reproduce the Euler-Lagrange equations (35), (36) of the second-order formulation, confirm that the Hamiltonian formulation described above is, in fact, a legitimate reformulation of the problem. [In the following section, we develop the Hamilton-Jacobi theory as it relates to the present formulation, the goal being to find a canonical transformation to a new set of fields (\(\phi_{i}\), \(\phi_{4}\)) and conjugate momenta for which the Hamiltonian _does_ vanish identically.]
To obtain the Hamiltonian for incompressible flow, we set \(\hat{\rho}^{\prime}\equiv 0\) from the beginning, in which case \(\mathcal{R}_{4}\) reduces to \(\hat{\rho}u_{i,i}\) and \(\pi_{4}\) vanishes identically, consistent with the fact that \(\mathcal{R}_{4}\) becomes independent of \(\dot{p}\). The Hamiltonian density in turn reduces to
\[\mathcal{H}^{*}=\pi_{i}\dot{u}_{i}-\frac{1}{2}\mathcal{R}_{i} \mathcal{R}_{i}, \tag{50}\]
or, in terms of the conjugate momenta,
\[\mathcal{H}^{*}[t,x_{i},u_{i},p,\pi_{i}]=\frac{1}{2}\frac{1}{\rho ^{2}}\pi_{i}\pi_{i}-\frac{1}{\rho}\bigg{(}\rho u_{i,j}u_{j}+p_{,i}-\mu u_{i, jj}-\rho b_{i}\bigg{)}\pi_{i}, \tag{51}\]
where the density \(\hat{\rho}=\rho\) is a constant and we have used the fact that \(u_{i,i}=0\). Hamilton's equations \(\dot{u}_{i}=\delta H^{*}/\delta\pi_{i}\), \(\dot{\pi}_{i}=-\delta H^{*}/\delta u_{i}\), and \(0\equiv\dot{\pi}_{4}=-\delta H^{*}/\delta p\) still apply, but \(\dot{p}=\delta H^{*}/\delta\pi_{4}\) must be replaced by the constraint that \(u_{i,i}=0\), since for incompressible flow \(\dot{p}\) is not determined by the governing equations. That the incompressibility condition should take the place of the pressure equation \(\dot{p}=\delta H^{*}/\delta\pi_{4}\) is consistent with the well known result that the pressure usually serves as the Lagrange multiplier for the incompressibility constraint [10].
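The statement that \(H^{*}\) vanishes (and is therefore trivially conserved) along the actual motion can be checked numerically on any known exact solution. The sketch below does this for a two-dimensional special case with no body force, using the decaying Taylor-Green vortex as a stand-in for the actual motion; the grid resolution and viscosity are arbitrary, the spatial derivatives are simple periodic central differences, and the residuals are therefore zero only to truncation accuracy. This is purely an illustration of Eqs. (16), (37), and (51), not part of the formulation itself.

```python
import numpy as np

# Check that the residuals R_i of Eq. (16) -- and hence pi_i = rho*R_i of
# Eq. (37) and the Hamiltonian density of Eq. (51) -- vanish (to truncation
# error) for an exact solution: the 2-D Taylor-Green vortex, no body force.
rho, mu = 1.0, 0.01
nu = mu / rho
n = 128
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

def d(f, axis):   # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

def lap(f):       # periodic 5-point Laplacian
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0)
            + np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

t = 0.3
e = np.exp(-2 * nu * t)
u = np.sin(X) * np.cos(Y) * e                        # u_1
v = -np.cos(X) * np.sin(Y) * e                       # u_2
p = 0.25 * rho * (np.cos(2 * X) + np.cos(2 * Y)) * e**2
ut, vt = -2 * nu * u, -2 * nu * v                    # analytic time derivatives

# Bracketed terms of Eq. (51): everything in R_i except rho*du_i/dt
B1 = rho * (u * d(u, 0) + v * d(u, 1)) + d(p, 0) - mu * lap(u)
B2 = rho * (u * d(v, 0) + v * d(v, 1)) + d(p, 1) - mu * lap(v)
R1, R2 = rho * ut + B1, rho * vt + B2                # residuals
pi1, pi2 = rho * R1, rho * R2                        # conjugate momenta, Eq. (37)

Hdens = 0.5 * (pi1**2 + pi2**2) / rho**2 - (B1 * pi1 + B2 * pi2) / rho
print("max |R_i|        :", max(np.abs(R1).max(), np.abs(R2).max()))
print("max |H* density| :", np.abs(Hdens).max())
print("max |grad p| (for scale):", np.abs(d(p, 0)).max())
```

The reported residuals are several orders of magnitude smaller than the individual terms entering them (compare with the printed pressure-gradient scale), and the Hamiltonian density is correspondingly close to zero, as the formulation requires for the actual motion.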
### Hamilton-Jacobi equation
One of the most significant aspects of the Hamiltonian formalism is that it leads to the transformation theory of Hamilton and Jacobi [1, 2, 3, 10, 11, 12, 13], celebrated both for unifying particle mechanics with wave optics [1] and for its relationship to the Schrodinger equation of quantum mechanics [72, 73]. Here we will obtain a Hamilton-Jacobi equation representing the Navier-Stokes problem.
In the context of discrete mechanics, Hamilton's principal function is obtained as the solution to the Hamilton-Jacobi equation, which is in turn defined by the functional form of the Hamiltonian.
Hamilton's principal function provides the generating function for a canonical transformation to a new set of generalized coordinates and conjugate momenta for which the Hamiltonian vanishes identically, in which case Hamilton's equations do, in fact, become trivial. The new coordinates and their conjugate momenta are simply equal to their initial values.
In the present context, the role of Hamilton's principal function is played by a characteristic functional \(\text{S}^{*}=\text{S}^{*}[t,u_{i},p]\) (not to be confused with the action \(\mathcal{S}^{*}\), although they are related; see Appendix A), which is the solution to the following Hamilton-Jacobi equation:
\[H^{*}\left[t,u_{i},p,\frac{\delta\text{S}^{*}}{\delta u_{i}},\frac{\delta \text{S}^{*}}{\delta p}\right]+\frac{\partial\text{S}^{*}}{\partial t}=0, \tag{52}\]
where \(\delta\text{S}^{*}/\delta u_{i}\) and \(\delta\text{S}^{*}/\delta p\) are the functional derivatives of \(\text{S}^{*}\) with respect to the field quantities. Interested readers will find the derivation of (52) in Appendix A. Henceforth we will refer to \(\text{S}^{*}\) as "Hamilton's functional." Substituting for the conjugate momenta in (47), we obtain for the Hamilton-Jacobi equation
\[\int\text{d}^{3}x\left[\frac{1}{2}\frac{1}{(\hat{\rho})^{2}}\frac {\delta\text{S}^{*}}{\delta u_{i}}\frac{\delta\text{S}^{*}}{\delta u_{i}}- \frac{1}{\hat{\rho}}\bigg{(}\hat{\rho}u_{i,j}u_{j}+p_{,i}-\mu u_{i,jj}-(\mu+ \lambda)u_{j,ji}-\hat{\rho}b_{i}\bigg{)}\frac{\delta\text{S}^{*}}{\delta u_{i}}\right.\] \[\left.+\frac{1}{2}\frac{1}{(\hat{\rho}^{\prime})^{2}}\frac{\delta \text{S}^{*}}{\delta p}\frac{\delta\text{S}^{*}}{\delta p}-\frac{1}{\hat{\rho }^{\prime}}\bigg{(}\hat{\rho}^{\prime}p_{,i}u_{i}+\hat{\rho}u_{i,i}\bigg{)} \frac{\delta\text{S}^{*}}{\delta p}\right]+\frac{\partial\text{S}^{*}}{ \partial t}=0. \tag{53}\]
In contrast to the four original field equations--(21) and (22)--the Hamilton-Jacobi equation (53) is a _single_ equation in Hamilton's functional \(\text{S}^{*}\). This constitutes an equivalent formulation of the problem, as a complete and nontrivial solution to (53) is tantamount to an integration of Hamilton's equations (48) and (49) (note that it is not appropriate to take \(\text{S}^{*}\equiv 0\) for the same reason that it is not appropriate to take \(H^{*}\equiv 0\)). In this way, we have reduced the problem of finding four separate field quantities to that of finding a single functional in those field quantities. One need only deduce (or even guess) the general form of \(\text{S}^{*}\) in order to solve the problem. If an analytical expression for \(\text{S}^{*}\) can be obtained, it will lead via canonical transformation to a new set of fields (\(\phi_{i}\), \(\phi_{4}\)) and conjugate momenta which are simply equal to their initial values, giving analytical expressions for the four original fields \((u_{i},p)\).
The case of incompressible flow requires care, as \(\hat{\rho}^{\prime}\equiv 0\) and \(\hat{\rho}^{\prime}\) appears in the denominators of terms in (53). Even so, the Hamiltonian formulation remains well posed in the incompressible limit. Recall that, with \(\hat{\rho}^{\prime}\equiv 0\), \(\mathcal{R}_{4}\) reduces to \(\hat{\rho}u_{i,i}\), \(\pi_{4}\) vanishes identically, and the Hamiltonian density reduces to (51). Substituting for the conjugate momenta \(\pi_{i}\) in (51), the corresponding Hamilton-Jacobi equation is
\[\int\text{d}^{3}x\left[\frac{1}{2}\frac{1}{\rho^{2}}\frac{\delta\text{S}^{*}}{ \delta u_{i}}\frac{\delta\text{S}^{*}}{\delta u_{i}}-\frac{1}{\rho}\bigg{(} \rho u_{i,j}u_{j}+p_{,i}-\mu u_{i,jj}-\rho b_{i}\bigg{)}\frac{\delta\text{S}^ {*}}{\delta u_{i}}\right]+\frac{\partial\text{S}^{*}}{\partial t}=0, \tag{54}\]
with \(\delta\text{S}^{*}/\delta p=0\), since again \(\pi_{4}\) vanishes identically for incompressible flow. This is the form of the Hamilton-Jacobi equation as it relates to the traditional Navier-Stokes problem. In this case, the pressure is determined last of all, and is whatever it needs to be to enforce the incompressibility constraint \(u_{i,i}=0\) (again consistent with the role of pressure as Lagrange multiplier [10]).
Discussion
In this section we provide some qualitative interpretations of the developments of Section 3. More specifically, we investigate the incompressible form (via constant, uniform density) of the Euler-Lagrange equations (35) and (36) when the residuals \(\mathcal{R}_{i}\) and \(\mathcal{R}_{4}\) are substituted.
Our motivation is again the simple example of Section 1 for which the first-order non-Hamiltonian system \(\dot{v}=-v\) was converted to the second-order Hamiltonian system \(\ddot{v}=v\) by (manual) _elimination_ of the non-conservative 'damping' term \(\dot{v}\) (see [6] for a similar result for the damped harmonic oscillator converting from second- to fourth-order dynamics). Sanders [9] showed that the elimination process is 'automated' by the definition of the action in the first integral of (3), which is generalized to the action in (23) for our current continuum dynamics problem containing fields.
First consider the pressure equation (36) and corresponding natural boundary condition (41), which take the following incompressible forms:
\[-\mathcal{R}_{i,i}=0\quad\forall x_{j}\in\mathcal{V},\quad\text{subject to}\quad \mathcal{R}_{i}n_{i}=0\quad\forall x_{j}\in\partial\mathcal{V} \tag{55}\]
This higher-order field equation is simply the divergence of the residual \(\mathcal{R}_{i}\). Upon substituting for \(\mathcal{R}_{i}\) from (21) and subsequently imposing the incompressible continuity condition \(u_{i,i}=\mathcal{R}_{4}/\rho=0\) from (22), we obtain:
\[p_{,ii}=-[\rho u_{j}u_{i,j}]_{,i}+\rho b_{i,i}, \tag{56}\]
which is a Poisson equation for the pressure. The boundary condition is a Neumann type requiring the specification of the normal pressure gradient, \(n_{i}p_{,i}=p_{,n}\equiv f(x_{j},t)\), on the boundary:
\[f(x_{j},t)=-n_{i}\big{[}\rho\dot{u}_{i}+\rho u_{j}u_{i,j}-\mu u_{i,jj}-\rho b _{i}\big{]} \tag{57}\]
Equation (56) and boundary condition (57) evolve the pressure in a manner that ensures the velocity field is solenoidal. This is a well known pressure-velocity based formulation commonly used in the numerical solution of incompressible flows (_e.g._[33, 74]).
Next, we consider the velocity equations (35) which, at present, have a more elusive physical interpretation. Here, we instead begin with the natural boundary conditions (39) and (40), which are due to the \(\delta u_{i}\) and \(\delta u_{i,j}\) variations. The incompressible versions of these equations are:
\[\rho\mathcal{R}_{i}u_{j}n_{j}+\mu\mathcal{R}_{i,j}n_{j}=0\quad\text{and}\quad -\mu\mathcal{R}_{i}n_{j}=0\quad\forall x_{j}\in\partial\mathcal{V} \tag{58}\]
The boundary conditions involving the residual \(\mathcal{R}_{i}\) are those compatible with the first-order Navier-Stokes equations, such as the no-slip and no-penetration conditions. Indeed, if we specify the velocity vector of the _actual motion_ on the boundary, then \(\mathcal{R}_{i}\equiv 0\) there. Note that the pressure of the actual motion on the boundary will be known from the simultaneous solution of (56).
However, the _gradient_ terms \(\mathcal{R}_{i,j}\) will introduce up to third-order spatial derivatives that must be specified. These represent the additional boundary conditions that must accompany the higher-order governing equation, which will be seen shortly to be second-order in time and fourth-order in space. Again, recall the example of Section 1, whereby the system (2) must be appended with a second (initial) condition specifying the (time) derivative of the coordinate \(v(t)\). In the present context, these boundary conditions are ostensibly tantamount to specification of the viscous stress on the boundary by way of velocity gradients.
In general, the conditions at a boundary require two _transition relations_[30, 32] to ultimately describe the momentum transport. Mathematically speaking, these conditions are the jump in velocity (momentum intensity) and the jump in stress (momentum flux). Under ordinary physical circumstances the velocity and stress are assumed to be continuous. However, this is one particular form of the transition relations, and there are familiar examples to which they do not apply. For example, at a liquid-gas interface the stress relation is modified to account for a non-zero jump in the normal stress that is balanced by a force due to surface tension (the tangential stress component usually still taken to be continuous). Similarly, in the event that molecular slip occurs, the typical transition relation gives an expression for the slip velocity (_e.g._[75, 76]). In the case of energy transport, analogous conditions are needed regarding jumps in temperature (intensity) and heat flow (flux), which are recognized as the concept of thermal contact resistance.
We now turn our attention to the Euler-Lagrange equations (35), which upon imposing incompressibility and expanding derivatives of product terms yields:
\[\rho\dot{\mathcal{R}}_{i}+\rho u_{j}\mathcal{R}_{i,j}=\rho\mathcal{R}_{j}u_{ j,i}-\mu\mathcal{R}_{i,jj}\quad\forall x_{j}\in\mathcal{V} \tag{59}\]
The left-hand side is the material derivative of the residual \(\mathcal{R}_{i}\). Our purpose here is to observe which terms from the first-order Navier-Stokes equation are 'eliminated' in the higher-order formulation. Specifically, we are interested in the non-conservative viscous terms; while the body force \(b_{i}\) could also be non-conservative, we will not concern ourselves with this possibility. Direct substitution of \(\mathcal{R}_{i}\) into (59) generates many terms, but it is found that only one is canceled: the viscous Laplacian of the (time derivative of the) velocity, namely \(\mu\dot{u}_{i,jj}\). This term mutually appears from the \(\rho\dot{\mathcal{R}}_{i}\) and \(-\mu\mathcal{R}_{i,jj}\) terms in (59). To maintain notional clarity, we write the residual as:
\[\mathcal{R}_{i}=\rho\dot{u}_{i}-\mu u_{i,kk}+\tilde{\mathcal{R}}_{i} \tag{60}\]
where index \(k\) has been used to avoid confusion with gradient operators in (59) having index \(j\), and \(\tilde{\mathcal{R}}_{i}=\rho u_{k}u_{i,k}-\rho b_{i}\) are the remaining terms in the residual. Substituting the above into the first and last terms of (59), canceling the aforementioned \(\mu\dot{u}_{i,jj}\) term, and then dividing out by the density gives:
\[\rho\ddot{u}_{i}+\dot{\tilde{\mathcal{R}}}_{i}+u_{j}\mathcal{R}_{i,j}= \mathcal{R}_{j}u_{j,i}-\nu\big{[}-\mu u_{i,kkjj}+\tilde{\mathcal{R}}_{i,jj} \big{]} \tag{61}\]
where \(\nu=\mu/\rho\) is the kinematic viscosity (recall that all variables are non-dimensional). We see that this equation is second-order in time and fourth-order in space. Viscous terms still appear in the equation including second- and third-order spatial derivatives. Nevertheless, the technique detailed by Sanders [9] and employed here evidently ensures that (61) has a corresponding Hamiltonian structure.
## 5 Case Study
We can explore how this method can be applied by considering a simplified example with a known field solution. In looking at the variety of cases in which the Navier-Stokes equations have a known analytical solution, the simplest are those involving steady flows. While the Euler-Lagrange equations (35), (36) can be written for these cases, the corresponding Hamilton-Jacobi equation is trivial because for steady flows the fields are already equal to their initial values.
It is therefore worthwhile to examine the simplest unsteady flows, which should result in a nontrivial Hamilton-Jacobi equation. Indeed, there exists a class of flows for which the Navier-Stokes equations take the same simplified form: those in which the flow is incompressible and unidirectional [30]. This class of problems include both of Stokes's flows [77], in which a semi-infinite fluid is influenced by a boundary moving in its own plane. In the first of these cases, the boundary is impulsively started and in the second, the boundary oscillates. We can also include developing flow in a channel or pipe. The only difference between these flows results from initial and boundary conditions, but the Navier-Stokes equations and therefore the present Hamilton-Jacobi equation take the same form.
Here we will examine the case in which there is motion only in the \(x_{1}\) direction, and the velocities take the form \(\{u_{i}\}=\{u_{1}(x_{2},t),0,0\}\). In the absence of a body force, our pressure gradient in the \(x_{1}\) direction is solely a function of time and the pressure gradients in the \(x_{2}\) and \(x_{3}\) directions are zero. There are thus only two unknown field quantities: \(u_{1}(x_{2},t)\) and \(p(x_{1},t)\), where \(p\) is linear in \(x_{1}\). The field equation of primary interest is
\[\mathcal{R}_{1}\equiv\rho\dot{u}_{1}+p_{,1}-\mu u_{1,22}=0, \tag{62}\]
and the remaining field equations are satisfied automatically by the assumed form of the fields. Following the procedure described above, the momenta conjugate to \(u_{1}\) and \(p\) are given by
\[\pi_{1}\equiv\rho\mathcal{R}_{1},\quad\pi_{4}\equiv 0. \tag{63}\]
This results in a Hamiltonian density given by:
\[\mathcal{H}^{*}=\frac{1}{2}\frac{1}{\rho^{2}}\pi_{1}\pi_{1}-\frac{1}{\rho} \bigg{(}p_{,1}-\mu u_{1,22}\bigg{)}\pi_{1}. \tag{64}\]
Hamilton's functional \(\text{S}^{*}=\text{S}^{*}[t,u_{1},p]\) can be expressed as an integral over \(x_{2}\) only, since the other spatial coordinates do not appear and may be integrated out. In this way, we may write the Hamilton-Jacobi equation as follows:
\[\int\text{d}x_{2}\left[\frac{1}{2}\frac{1}{\rho^{2}}\frac{\delta\text{S}^{*}} {\delta u_{1}}\frac{\delta\text{S}^{*}}{\delta u_{1}}-\frac{1}{\rho}\bigg{(}p _{,1}-\mu u_{1,22}\bigg{)}\frac{\delta\text{S}^{*}}{\delta u_{1}}\right]+ \frac{\partial\text{S}^{*}}{\partial t}=0, \tag{65}\]
with \(\delta\text{S}^{*}/\delta p=0\). The solution to (65) would provide a canonical transformation to a new set of coordinates, giving analytical expressions for \((u_{1},p)\). Despite knowing the analytical solution for these fields in this particular example, the present authors have not been able to solve this Hamilton-Jacobi equation. Indeed, it would seem that solving a differential equation containing functional derivatives represents an ongoing field of study [78]. This example therefore appears to be a good place to start for tackling the general problem.
## 6 Conclusion
This paper has presented a novel Hamiltonian formulation of the isotropic Navier-Stokes problem for both compressible and incompressible fluids. This canonical formulation opens several previously unexplored avenues toward a final resolution of the problem, which we briefly describe below.
Perhaps the most obvious route would be to solve the Hamilton-Jacobi equation--either (53) for the compressible case or (54) for the incompressible case--for Hamilton's functional \(\mathrm{S}^{*}[t,u_{i},p]\) directly. If a complete solution for \(\mathrm{S}^{*}\) can be found, it will lead via canonical transformation to a new set of fields which are equal to their initial values, thereby giving analytical expressions for the original velocity and pressure fields. Or, failing that, if one can simply establish based on existing analytical techniques that a complete solution to this Hamilton-Jacobi equation does (or does not) always exist under the usual assumptions, that will also settle the question of existence of solutions.
An alternative strategy might be to investigate the corresponding Lagrangian formulation based on the action \(\mathcal{S}^{*}\) as given by (23). Because the first- and second-order formulations are mathematically equivalent (recall the proof in Section 3.2), \(\mathcal{S}^{*}\) must have as many local minima as there are solutions to the traditional, first-order formulation. Intuitively, it seems as though it ought to be possible to determine--or at least to establish bounds on--the number of critical points an action has based on the form of the Lagrangian [64, 65]. If one can establish that, under the usual assumptions, \(\mathcal{S}^{*}\) always has exactly one local minimum, or else demonstrate that there are cases where it fails to achieve a local minimum, that too will resolve the question of existence and uniqueness.
Finally, it is worth noting that the techniques employed here are by no means specific to the Navier-Stokes problem, nor are they restricted to the field of classical mechanics. The suitably-averaged principle of least squares [5, 6, 7, 8, 9] can be applied to any traditionally non-Hamiltonian dynamical system in order to formulate a mathematically equivalent higher-order Hamiltonian system. It is believed that this fundamental result will also find uses in other branches of pure and applied mathematics.
## Acknowledgments
The authors wish to thank Maggie Sanders for posing thought-provoking questions during the development of this paper.
## Statements and declarations
### Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
### Declaration of interests
The authors report no conflict of interest.
### Data availability
Any data appearing in the present work are available from the corresponding author upon reasonable request.
_Author ORCID_
* J. W. Sanders, [https://orcid.org/0000-0003-3059-3815](https://orcid.org/0000-0003-3059-3815)
* A. C. DeVoria, [https://orcid.org/0000-0001-5615-0807](https://orcid.org/0000-0001-5615-0807)
* N. J. Washuta, [https://orcid.org/0000-0002-4575-0564](https://orcid.org/0000-0002-4575-0564)
* J. C. Berlinghieri, [https://orcid.org/0000-0001-7921-1895](https://orcid.org/0000-0001-7921-1895)
_Author contributions_
Sanders conceived the original idea for the work, wrote the Abstract, Introduction, Analysis, Conclusion, and Appendix, and contributed to the Literature Review. DeVoria wrote the Discussion, contributed to the Analysis and Literature Review, and provided expertise in the area of fluid mechanics. Washuta wrote the Case Study, contributed to the Literature Review, and provided expertise in the area of fluid mechanics. Elamin contributed to the Literature Review and provided expertise in the area of fluid mechanics. Skenes organized and wrote the Literature Review and provided expertise in the area of fluid mechanics. Berlinghieri contributed to the Introduction and Literature Review, and provided expertise in the areas of analytical mechanics and variational calculus. All authors contributed equally in checking the accuracy of the work, discussing the results together as a group on multiple occasions, drawing conclusions, and proofreading the manuscript.
_Ethical guidelines_
Not applicable.
## Appendix A Derivation of the Hamilton-Jacobi equation
In what follows, it is important to distinguish between two sets of solutions: solutions to the second-order Euler-Lagrange equations (35) and (36), and solutions to the first-order field equations (24) and (25). The latter are a subset of the former, but not vice versa.
We define _Hamilton's functional_\(\mathsf{S}^{*}[t,u_{i},p]\) as
\[\mathsf{S}^{*}[t,u_{i},p]\equiv\int_{t_{0}}^{t}\text{d}t\left(\tilde{L}^{*} \right), \tag{100}\]
where \(t\) is the current (variable) time, \(t_{0}\) is an arbitrary initial time, and \(\tilde{L}^{*}\) denotes the Lagrangian (26) evaluated for fields satisfying the _second-order_ Euler-Lagrange equations (35) and (36)--not necessarily the first-order field equations (24) and (25). Crucially, because the fields do not necessarily satisfy the first-order field equations, it is not appropriate to take \(\mathsf{S}^{*}\equiv 0\). We imagine that the time integral has already been carried out, so that the functional \(\mathsf{S}^{*}\) may be regarded as an integral over \(\mathcal{V}\), that is,
\[\mathsf{S}^{*}[t,u_{i},p]=\int\text{d}^{3}x\left(s^{*}\right), \tag{101}\]
where \(s^{*}\) is the Lagrangian density (27) evaluated for fields satisfying the second-order Euler-Lagrange equations (which will be denoted \(\tilde{\mathcal{L}}^{*}\)) integrated over time from \(t_{0}\) to \(t\).
Starting from (A.1) and evaluating the first variation of \(\mathrm{S}^{*}\) as we did the action \(\mathcal{S}^{*}\) in Section 3, we find that
\[\delta\mathrm{S}^{*}=\int\mathrm{d}^{3}x\bigg{[}\pi_{i}\delta u_{i}+\pi_{4} \delta p\bigg{]}_{t_{0}}^{t}+\int\mathrm{d}^{2}x\left(\cdots\right),\] (A.3)
where we have used the fact that the second-order Euler-Lagrange equations (35) and (36) are satisfied by definition. But, by the definition of functional derivatives, we have that
\[\delta\mathrm{S}^{*}=\int\mathrm{d}^{3}x\bigg{[}\frac{\delta\mathrm{S}^{*}}{ \delta u_{i}}\delta u_{i}+\frac{\delta\mathrm{S}^{*}}{\delta p}\delta p\bigg{]} +\int\mathrm{d}^{2}x\left(\cdots\right).\] (A.4)
In this way, we may identify the conjugate momenta with the functional derivatives of \(\mathrm{S}^{*}\):
\[\pi_{i}=\frac{\delta\mathrm{S}^{*}}{\delta u_{i}},\quad\pi_{4}=\frac{\delta \mathrm{S}^{*}}{\delta p}.\] (A.5)
Now, starting from (A.1) and evaluating the time derivative, we find that
\[\frac{\mathrm{d}\mathrm{S}^{*}}{\mathrm{d}t}=\tilde{L}^{*}=\int\mathrm{d}^{3 }x\left(\tilde{\mathcal{L}}^{*}\right).\] (A.6)
But, by the chain rule,
\[\frac{\mathrm{d}\mathrm{S}^{*}}{\mathrm{d}t}=\frac{\partial\mathrm{S}^{*}}{ \partial t}+\int\mathrm{d}^{3}x\bigg{[}\frac{\delta\mathrm{S}^{*}}{\delta u_{i }}\dot{u}_{i}+\frac{\delta\mathrm{S}^{*}}{\delta p}\dot{p}\bigg{]}.\] (A.7)
In this way, we find that
\[\int\mathrm{d}^{3}x\bigg{[}\frac{\delta\mathrm{S}^{*}}{\delta u_{i}}\dot{u}_{ i}+\frac{\delta\mathrm{S}^{*}}{\delta p}\dot{p}-\tilde{\mathcal{L}}^{*} \bigg{]}+\frac{\partial\mathrm{S}^{*}}{\partial t}=0.\] (A.8)
Here the integral is simply the Hamiltonian \(H^{*}\), with the conjugate momenta replaced by the functional derivatives in accordance with (A.5). Hence we arrive at the Hamilton-Jacobi equation
\[H^{*}\left[t,u_{i},p,\frac{\delta\mathrm{S}^{*}}{\delta u_{i}},\frac{\delta \mathrm{S}^{*}}{\delta p}\right]+\frac{\partial\mathrm{S}^{*}}{\partial t}=0,\] (A.9)
as claimed in (52).
|
2305.03055 | The maximum mass and deformation of rotating strange quark stars with
strong magnetic fields | We study the structure and total energy of a strange quark star (SQS) endowed
with a strong magnetic field with different rotational frequencies. The MIT bag
model is used, with the density-dependent bag constant for the equation of
state (EOS). The EOS is computed considering the Landau quantization effect
regarding the strong magnetic fields (up to $5\times10^{17}$ G) in the interior
of the strange quark star. Using the LORENE library, we calculate the
structural parameters of SQS for different setups of magnetic field strengths
and rotational frequencies. In each setup, we perform calculations for $51$
stellar configurations, with specified central enthalpy values. We investigate
the configurations with the maximum gravitational mass of SQS in each setup.
Our models of SQSs are compared in the maximum gravitational mass, binding
energy, compactness, and deformation of the star. We show that the
gravitational mass might exceed $2.3 M_\odot$ in some models, which is
comparable with the mass of the recently detected ``black widow'' pulsar
\emph{PSR J0952-0607} and the mass of \emph{GW190814} detected by the
LIGO/Virgo collaboration. The deformation and maximum gravitational mass of SQS
can be characterized by simple functions that have been fitted to account for
variations in both magnetic field strength and frequency. Rapidly rotating
strange stars have a minimum gravitational mass given by the equatorial
mass-shedding limit. | Fatemeh Kayanikhoo, Mateusz Kapusta, Miljenko Čemeljić | 2023-05-03T18:17:53Z | http://arxiv.org/abs/2305.03055v1 | # The maximum mass and deformation of rotating strange quark stars with strong magnetic fields
###### Abstract
We study the structure and total energy of a strange quark star (SQS) endowed with a strong magnetic field with different rotational frequencies. The MIT bag model is used, with the density-dependent bag constant for the equation of state (EOS). The EOS is computed considering the Landau quantization effect regarding the strong magnetic fields (up to \(5\times 10^{17}\) G) in the interior of the strange quark star. Using the LORENE library, we calculate the structural parameters of SQS for different setups of magnetic field strengths and rotational frequencies. In each setup, we perform calculations for \(51\) stellar configurations, with specified central enthalpy values. We investigate the configurations with the maximum gravitational mass of SQS in each setup. Our models of SQSs are compared in the maximum gravitational mass, binding energy, compactness, and deformation of the star. We show that the gravitational mass might exceed \(2.3M_{\odot}\) in some models, which is comparable with the mass of the recently detected "black widow" pulsar _PSR J0952-0607_ and the mass of _GW190814_ detected by the LIGO/Virgo collaboration. The deformation and maximum gravitational mass of SQS can be characterized by simple functions that have been fitted to account for variations in both magnetic field strength and frequency. Rapidly rotating strange stars have a minimum gravitational mass given by the equatorial mass-shedding limit.
Strange quark star, Equation of state, Magnetic field, MIT bag model, LORENE library
## 1 Introduction
At extremely high densities (\(\geq 10^{15}\ {\rm g/cm^{3}}\)), which might be found in the cores of compact objects like neutron stars, the strong force that holds quarks together within hadrons can weaken to the point that the quarks are no longer confined within these particles. Instead, they form a dense quark matter phase in which strange quarks can exist. The stability of strange quark matter (SQM) regarding the comparable energy per baryon (E/A) with the value of \(E/A\) (\({}^{56}Fe)\cong 930\) MeV confirms that it might be the stable type of matter Bodmer [1971], Terazawa et al. [1977], Witten [1984], Farhi and Jaffe [1984].
Among many suggested kinds of objects containing SQM, a strange quark star (SQS) and a hybrid star are often considered. Strange quark stars are hypothetical compact objects that are made entirely of SQM, while hybrid stars are composed of a quark matter core surrounded by a shell of hadronic matter. The study of these exotic types of compact objects is of great interest, as they can provide insights into the properties of matter at extreme densities and temperatures. The possibility of the existence of SQSs for the first time was studied independently in Alcock et al. (1986) and Haensel et al. (1986). They studied the stability of SQSs and the stellar parameters of these objects compared to the neutron stars. Weber et al. (1997) discussed a quark deconfinement in the core of neutron stars. According to their model, the nuclear matter in the outer layers of the star is composed of protons and neutrons, while the quark matter in the central core is made up of quarks and gluons.
Scenarios for the phase transition of nuclear matter to the quark matter in the core of compact objects depend on the astrophysical situation. One possibility is a binary system containing a neutron star (NS) or a proto-NS, wherein the companion star is overflowing its Roche lobe and experiencing accretion Ouyed et al. (2002), Ouyed and Staff (2013), Ouyed et al. (2015). Another possibility is a decrease in the centrifugal force when an NS spins down. In both those scenarios the associated gravitational implosion may be accompanied by a luminous ejection of the envelope that is called a Quark-Nova and the remnant core might be an SQS Ouyed et al. (2002). Conversion of an NS to an SQS releases the energy in order of \(10^{52}\) ergs Ouyed (2022). The possibility that quark-novae explosions are possible sources for the emission of X-rays, gamma-rays, and fast radio bursts (FRBs) observed in the universe is explored in Ouyed et al. (2020). Another observational signature, proposed to indicate the presence of a strange star, is the recent discovery of an object in the supernova remnant _HESS J1731-347_. Through analysis of the X-ray spectrum and use of distance estimates obtained from Gaia observations, it is possible to estimate the object's mass and radius as \(M_{g}=0.77^{+0.20}_{-0.17}M_{\odot}\) and \(R=10.40^{+0.86}_{-0.78}\) km, respectively Doroshenko et al. (2022). These findings suggest that the observed object may either be the lightest known neutron star, or an SQS characterized by an exotic equation of state.
In order to provide a more realistic model for compact objects one should consider the properties of these objects such as composition, magnetic field strength, and rotation of the object. The equation of state (EOS) of compact objects is an open problem in astrophysics. There are an enormous number of EOS models for compact objects. Some well-known models are MIT bag model Johnson (1975), Hecht (2000) and NJL model Menezes et al. (2009), Ferreira et al. (2020, 2020) and CDDM model for SQS Chakrabarty et al. (1989, 1989), Hou et al. (2015). The choice of the EOS model can have a significant impact on the predicted properties of compact objects, such as mass, radius, and moment of inertia. Comparing these predictions to observational data, such as pulsar timing and gravitational wave observations, can help to constrain the EOS and provide insight into the nature of matter under extreme conditions. LIGO-VIRGO collaborations detected gravitational waves from the compact objects which carry important information about the interior material and shape of compact objects Demorest et al. (2010), Zhao (2015), Abbott et al. (2020).
In the \(\rm P\dot{P}\)-diagram in Fig. 1, shown are the magnetic field, period, and the age of objects Halpern and Gotthelf (2009). The distribution of magnetars is shown with the red crosses in the right top corner of the panel, isolated pulsars are shown with dots and binary pulsars are shown with circled dots. The stronger magnetic field is more spinning down the fast-spinning nascent magnetars so that at present they are rotating slowly. In Fig. 1, it is shown that the surface magnetic field of magnetars is \(\geq 10^{14}\) G with a spin period of \(\sim 10\) s, and isolated pulsars have a surface magnetic field of \(\sim 10^{10}-10^{14}\) G with a spin period of \(\sim 0.1-10\) s. The lowest magnetic fields correspond to the millisecond pulsars in binary systems with a spin period of \(10^{-3}-10^{-1}\) s.
According to the theoretical studies the magnetic field in the core of pulsars and magnetars might reach \(10^{18}\) G Haensel et al. (1986), Lai and Shapiro (1991), Bocquet et al. (1995), Isayev (2014). There are a few studies on the stellar properties of magnetized neutron stars and strange stars Mallick and Schramm (2014), Chatterjee et al. (2015), Mastrano et al. (2015). They studied the stellar parameters such as maximum gravitational mass, radius, and deformation of NS affected by the strong magnetic field. The other considerable property of compact objects which affects the dynamic and configuration of the star is spin. Rotating compact objects are supposed to be stable with larger maximum gravitational masses compared to the non-rotating models. The rapidly rotating SQS is studied by Gondek-Rosinska et al. (2000). They showed that compared to models of neutron stars, the effect of rotation has a more significant impact on the overall parameters of strange stars. The Keplerian frequency of strange stars is studied by Haensel, P. et al. (2009).
In this paper, we study the effect of stellar spin on the parameters of magnetized SQS. We are interested in estimating the highest possible rotational frequency for the magnetized SQS and the effect of stellar spin and magnetic field strength on the shape and dynamical properties of SQS. We investigate different stellar models: non-rotating non-magnetized SQS, non-rotating magnetized SQS, rotating non-magnetized SQS, and rotating magnetized SQS. The strength of the magnetic field is up to \(5\times 10^{17}\) G and the chosen rotational frequencies are \(0\), \(400\), \(800\), and \(1200\) Hz.
In the second section of this paper, we present the EOS model of SQS, and in SS3 we derive the structure equation of the star in axisymmetric space-time. We introduce the numerical code LORENE and give the numerical setup for the SQS
model in SS4. The stellar parameters such as gravitational mass, radius, and stability as well as the total energy, binding energy, and compactness of SQS are investigated in Section SS5.
## 2 Equation of state
We consider an SQS which contains up, down and strange quarks. The mass of the strange quark is \(150\) MeV. The fraction of electrons is low, \(10^{-3}\), so we simplify our model by neglecting their contribution. The EOS is computed in the MIT bag model Johnson (1975); Hecht (2000) with density-dependent bag constant \(\mathcal{B}_{\rm bag}\). In this model, the total energy contains the kinetic energy of quarks that is computed from Fermi relations and the bag constant:
\[\varepsilon_{\rm tot}=\sum_{i,j=\pm}\varepsilon_{i}^{(j)}+\mathcal{B}_{\rm bag}, \tag{1}\]
where \(j=\pm\) is the spin of quarks and \(i\in(1,2,3)\) represents \(up\), \(down\), and \(strange\) quarks. Due to the strong magnetic field interior of the compact object, we rewrite the Fermi relations considering the Landau quantization effect Landau and Lifshitz (1977); Lopes and Menezes (2015); Mukhopadhyay et al. (2017).
The single particle energy density is,
\[\epsilon^{i}=[p_{i}^{2}c^{2}+m_{i}^{2}c^{4}(1+2\nu B_{D})]^{1/2}, \tag{2}\]
where \(p_{i}\) and \(m_{i}\) are the momenta and the mass of quarks, the Landau levels are denoted by \(\nu\) and the dimensionless magnetic field is defined as \(B_{D}=B/B_{C}\), where \(B_{C}=m_{i}^{2}c^{3}/q_{i}\hbar\), with \(q_{i}\) the charge of quark \(i\).
The number density of quarks is obtained as follows,
\[\rho=\sum_{\nu=0}^{\nu_{max}}\frac{2qB}{h^{2}c}g(\nu)p_{F}(\nu) \tag{3}\]
Figure 1: The \(\rm{\dot{P}\dot{P}}\)-diagram of isolated pulsars (dots), binary radio pulsars (circled dots), and magnetars (crosses). Figure adapted from Halpern and Gotthelf (2009).
where \(\nu_{max}\) is the maximum number of Landau levels corresponding to the maximum Fermi energy \(\epsilon_{\rm{Fmax}}\),
\[\nu_{\rm{max}}=\frac{\epsilon_{\rm{Fmax}}^{2}-1}{2m_{i}eB_{D}}, \tag{4}\]
in Eq. 3\(g(\nu)\) and \(p_{F}(\nu)\) are the degeneracy and Fermi momentum of the \(\nu\)-th Landau level. The kinetic energy density of particles is defined as
\[\varepsilon_{i}^{(j)}=\frac{2B_{D}}{(2\pi)^{2}\lambda^{3}}m_{i}c^{2}\sum_{\nu =0}^{\nu_{\rm{max}}}g_{\nu}(1+2\nu B_{D})\eta(x). \tag{5}\]
where
\[\eta(x)=\frac{1}{2}\left[x\sqrt{1+x^{2}}+ln(x\sqrt{1+x^{2}})\right]. \tag{6}\]
with
\[x=\frac{X_{F}^{(j)}}{(1+2\nu B_{D})^{1/2}}, \tag{7}\]
and
\[X_{F}^{(j)}=(\epsilon_{F}^{(j)2}-1-2\nu B_{D})^{1/2} \tag{8}\]
Considering the zero magnetic field, \(\nu_{\rm{max}}\rightarrow\infty\), so that the kinetic energy is simplified to the Fermi relation.
The density-dependent bag constant \(\mathcal{B}_{\rm{bag}}\) is defined with a Gaussian relation,
\[\mathcal{B}_{\rm{bag}}(\rho)=\mathcal{B}_{\infty}+(\mathcal{B}_{0}-\mathcal{ B}_{\infty})e^{-\alpha(\rho/\rho_{0})^{2}}, \tag{9}\]
with \(\alpha=0.17\) and \(\mathcal{B}_{0}=\mathcal{B}_{bag}(0)=400~{}{\rm{MeV}}/{\rm{fm}}^{3}\). We should define \(\mathcal{B}_{\infty}\) in such a way that the bag constant would be compatible with experimental data (CERN-SPS). We determine \(\mathcal{B}_{\infty}=8.99~{}{\rm{MeV}}/{\rm{fm}}^{3}\) by putting the quark energy density equal to the hadronic energy density Heinz and Jacob (2000).
The EOS is given by
\[P(\rho)=\rho\left(\frac{\partial\varepsilon_{tot}}{\partial\rho}\right)- \varepsilon_{tot}. \tag{10}\]
Pressure versus the energy density in the presence of magnetic fields of different strengths in SQM is shown in Fig.2. Our model of EOS indicated zero pressure at the energy density of \(\approx 0.52\times 10^{15}~{}{\rm{gr}}/{\rm{cm}}^{3}\) (\(\approx 290{\rm{MeV}}/{\rm{fm}}^{3}\)), and is continued to the energy density corresponding to the maximum mass of SQS, which is approximately pointed with a red cross on Fig. 2. Although the effect of the magnetic field strength on EOS is negligible, in the following sections we show that the structural parameters and shape of the star change significantly.
A number of studies have investigated different models of EOS for quark stars. To assess the performance of our model, we compared it to the EOS presented by Chatterjee et al. (2015). They provided the linear EOSs in which the maximum mass is inversely proportional to the square root of energy density at zero pressure. Our EOS model is approximately linear at higher densities and the extrapolation of our EOS shows the energy density \(\approx 2\times 10^{14}~{}{\rm{g}}/{\rm{cm}}^{3}\) that is half of the value of the model provided by Chatterjee et al. (2015). The larger maximum gravitational mass is expected from our EOS model. They also confirm that the magnetic field less than \(10^{19}\) G does not significantly impact the stiffness of EOS.
## 3 The equations of stellar structure
In this section, we derive the differential equations of the stellar structure by solving Einstein's equation,
\[R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}=8\pi T^{\mu\nu} \tag{11}\]
where \(R^{\mu\nu}\) is the Ricci tensor, \(R\) is the Ricci scalar, \(g^{\mu\nu}\) is the metric coefficient and \(T^{\mu\nu}\) is the energy-momentum tensor. We choose units with \(G=c=1\).
We start with the energy-momentum tensor of the perfect fluid and the spherically symmetric star. Then, regarding the strong magnetic field in the microscopic properties of SQS, we inspect the energy-momentum tensor coupling with Maxwell energy-momentum tensor.
Tolman-Oppenheimer-Volkov equations (TOV) are derived by solving the Einstein field equations in the spherically symmetric space-time and the perfect fluid's energy-momentum tensor Tolman (1939); Oppenheimer and Volkoff (1939),
\[T^{\mu\nu}=(\varepsilon+P)u^{\mu}u^{\nu}+Pg^{\mu\nu} \tag{12}\]
where \(u^{\mu}\) is the fluid 4-vector, \(\varepsilon\) is the energy density and \(P\) is the pressure of the perfect fluid. We can write
\[\frac{dP}{dr}=-(P+\varepsilon)\frac{m+4\pi r^{3}P}{r(r-2m)} \tag{13}\]
and
\[\frac{dm}{dr}=4\pi r^{2}\varepsilon. \tag{14}\]
In the presence of the magnetic field, the interaction of the electromagnetic field with the matter (magnetization) is considerable, the energy-momentum tensor is given by
\[T^{\mu\nu}=(\varepsilon+P)u^{\mu}u^{\nu}+Pg^{\mu\nu}+\frac{\mathcal{M}}{B} \Big{[}b^{\mu}b^{\nu}-(b\cdot b)(u^{\mu}u^{\nu}+g^{\mu\nu})\Big{]}+\frac{1}{ \mu_{0}}\Big{[}-b^{\mu}b^{\nu}+(b\cdot b)(u^{\mu}u^{\nu}+\frac{1}{2}g^{\mu\nu} )\Big{]}, \tag{15}\]
where the two first terms are the perfect fluid contribution (Eq. 12), the third term is the magnetization contribution and the last term is the pure magnetic field contribution to the energy-momentum tensor. \(B\) is the magnetic field, and \(b^{\mu}\) is the magnetic field 4-vector. \(\mathcal{M}\) represents the interaction of the electromagnetic field with the matter, which is given by the coupling between the electric current \(j^{\phi}\) and the magnetic vector potential,
\[j^{\phi}=\Omega j^{t}+(\varepsilon+P)k_{0}, \tag{16}\]
where \(\Omega\) is rotational velocity of the star and \(k_{0}\) is the current function.
We solve the Einstein field equations within the 3+1 formalism in a stationary, axisymmetric space-time Chatterjee et al. (2015); Franzon (2017). The metric is given by
\[ds^{2}=-N^{2}dt^{2}+A^{2}(dr^{2}+r^{2}d\theta^{2})+\lambda^{2}r^{2}\sin^{2}( \theta)(d\phi-N^{\phi}dt)^{2} \tag{17}\]
where \(N\), \(A\), \(\lambda\), and \(N^{\phi}\) are functions of \((r,\theta)\). By applying 3+1 formalism we obtain a set of four elliptic partial differential equations,
\[\Delta_{3}=4\pi A^{2}(E^{T}+S_{r}^{r}+S_{\theta}^{\theta}+S_{\phi}^{\phi})+ \frac{\lambda^{2}r^{2}\sin^{2}(\theta)}{2N^{2}}\delta N^{\phi}\delta N^{\phi} -\delta\nu\delta(\nu+\beta) \tag{18}\]
Figure 2: Pressure as a function of the energy density of SQM in the presence of magnetic fields of different strengths. The energy density corresponding to the maximum gravitational mass is pointed with the red cross. The curves with four magnetic fields overlap.
\[\Delta_{2}[\alpha+\nu]=8\pi A^{2}S_{\phi}^{\phi}+\frac{3\lambda^{2}r^{2}\sin^{2}( \theta)}{4N^{2}}\delta N^{\phi}\delta N^{\phi}-\delta\nu\delta\nu \tag{19}\]
\[\Delta_{2}[(N\lambda-1)r\sin(\theta)]=8\pi NA^{2}\lambda r\sin(\theta)(S_{r}^{ r}+S_{\theta}^{\theta}) \tag{20}\]
and
\[\left[\Delta_{3}-\frac{1}{r^{2}\sin^{2}(\theta)}\right](N^{\phi}r\sin(\theta) )=-16\pi\frac{NA^{2}}{\lambda^{2}}\frac{J^{\phi}}{r\sin(\theta)}+r\sin(\theta )\delta N^{\phi}\delta(\nu-3\beta), \tag{21}\]
where \(\nu=\ln N\), \(\alpha=\ln A\), \(\beta=\ln\lambda\), and \(J^{\phi}\) is electromagnetic current. In the above equations, \(E^{T}\) and \(S_{j}^{i}\) are total energy and stress, respectively.
Considering a magnetic field pointing in the z-direction, we can rewrite the energy-momentum tensor in a well-known form:
\[T^{\mu\nu}=\mathrm{diag}\left(\varepsilon+\frac{B^{2}}{2\mu_{0}},P-\mathcal{ M}B+\frac{B^{2}}{2\mu_{0}},P-\mathcal{M}B+\frac{B^{2}}{2\mu_{0}},\ \ P-\frac{B^{2}}{2\mu_{0}}\right). \tag{22}\]
In this equation, the magnetization term reduces the total pressure of the system. It is also clear that the magnetic field reduces the parallel pressure, but the perpendicular pressure increases with increasing the magnetic field.
## 4 The numerical method
We solve a set of four elliptic partial differential equations presented in Section SS3 using LORENE library ([http://www.lorene.obspm.fr](http://www.lorene.obspm.fr)) Bonazzola et al. (1998); Chatterjee et al. (2015); Franzon (2017). By employing spectral methods, LORENE provides a more accurate approach to solving partial differential equations than grid-based methods, thereby enabling more precise calculations of the solutions to this system. Space in LORENE is separated into domains and mapped onto the specific coordinate system that can be readjusted in order to tackle non-spherical shapes. Our setup consists of \(3\) domains: two inside and next to the surface of the star and one outside at infinity. We use Et_magnetisation class to calculate hydrostatic configurations for uniformly (not differentially) rotating magnetized stars (the code is located in "_Lorene/Codes/Mag_eos_star_").
To specify the magnetic field, LORENE uses the so-called current function, \(k_{0}\), which describes the amplitude of current inside the star to generate the magnetic field. In our setup, the current function amplitude changed from \(0\) to \(15000\) in the intervals of \(2000\), enabling us to cover vast ranges of central magnetic field values to see how SQS behaves even in the fields up to \(5\times 10^{17}\) G. As we mentioned in the Introduction, we solve the equations for different rotational frequencies. In every series of calculations, we compute the parameters of \(51\) stellar configurations with specified central enthalpy values in the range from \(0.01\) c\({}^{2}\) to \(0.51\) c\({}^{2}\) with the spacing of \(0.01\) c\({}^{2}\).
To expedite the calculations, we utilized a wrapper based on MPI that facilitates the distribution of the computational load across the available threads. By such parallelizing, we significantly enhanced the speed and efficiency of the calculations, resulting in faster processing times and improved performance.
Equilibrium configurations in Newtonian gravity are known to satisfy the virial relation when a polytropic equation of state is assumed. This relation is commonly utilized to verify the accuracy of computations. The 3-dimensional virial identity (GRV3), introduced by Gourgoulhon and Bonazzola (1994), extends the Newtonian virial identity to general relativity. On the other hand, the 2-dimensional virial identity (GRV2) proposed by Bonazzola (1973), generalizes the virial identity for axisymmetric space-times to general asymptotically flat space-times. Our computational results indicate a high level of accuracy which is \(\approx 10^{-5}\) in the non-magnetized non-rotating models and \(\approx 10^{-2}\) in the magnetized fast-rotating model.
## 5 Analysis of stellar properties
In this section, we examine the properties of the star under varying conditions, where the strengths of both the magnetic field and rotational frequency are modified. Through the exploration of different setups, we gain insight into the impact of these factors on the star, allowing for a more comprehensive understanding of its dynamics and properties.
In the following four-panel figures, each panel shows the rotational frequencies (\(0\), \(400\), \(800\), and \(1200\) Hz). The color bar in all plots indicates the strength of the central magnetic field which varies from \(0\) to \(5\times 10^{17}\) G.
Figure 4: Gravitational mass versus circumferential radius with different angular momentums. The color indicates the value of the central magnetic field.
Figure 3: Gravitational mass versus circumferential radius for different rotational frequencies. The color indicates the value of the central magnetic field.
### Gravitational mass and radius
Mass and radius are crucial parameters for the study of compact objects, as they provide an important understanding of the underlying physics and characteristics of these objects. In Fig. 3, we present a plot of the mass as a function of the circumferential radius, \(R_{\rm eirc}\) for each configuration. For relatively small masses (\(M<M_{\odot}\)) and low rotation rates, our model obeys the mass-radius relation \(M\simeq\frac{4}{3}\pi\rho_{0}R^{3}\), characterizing self-bound stars with density \(\rho_{0}\) at zero pressure Haensel et al. (1986). In each panel of Fig. 3, it is shown that the maximum gravitational mass \(M_{g}^{max}\) increases as a function of the magnetic field. \(M_{g}^{max}\) also increases with increasing the rotational frequency. Gourgoulhon et al. (1999) showed that the absolute maximum increase in the mass for rigidly rotating self-bound non-magnetized stars is about \(44\%\), giving \(M_{g}^{max}({\rm rot})\simeq 2.83{\rm M_{\odot}}\). In our model, the maximum considered frequency is \(1200\) Hz, which is significantly smaller than the absolute maximum frequency for the given equation of state approximated by the formula \(f_{\rm max}^{\rm EOS}=1.22(M/M_{\odot})^{1/2}R_{10}^{-3/2}{\rm Hz}\) Haensel et al. (1995), which gives \(f_{\rm max}^{\rm EOS}\simeq 1700\) Hz. In our model, the increase in the maximum mass for \(1200\) Hz is about \(16\%\).
The Keplerian frequency for considered stellar models can be estimated using the approximate formula \(f_{\rm Kep}=1.15(M/M_{\odot})^{1/2}R_{10}^{-3/2}~{}{\rm kHz}=1.2(\frac{\bar{ \rho}}{5.2\,10^{8}})^{1/2}~{}{\rm kHz}\) and is slightly above 1200 Hz Haensel, P. et al. (2009). In the frame corresponding to \(f=1200\) Hz, we see that only massive SQSs may exist as strongly magnetized, fast-rotating objects where the binding energy is in balance with the magnetic and rotation energy of the stars. Additionally, our model indicates that there is a maximum limit for the magnetic field in the fast-rotating model to obtain stable configurations.
For the stability of a rotating compact strange star, the four main constraints must be fulfilled Cook et al. (1994); Gondek-Rosinska et al. (2000): 1) a static constraint, demanding that a solution for rotating compact (NS/SQS) object should converge to the solution for a non-rotating in the limit of zero rotation, 2) a low mass constraint, defining that an NS can not form below a certain mass limit, 3) the Keplerian constraint, by which the maximum rotation rate of a compact object can not exceed the Keplerian frequency, and 4) a stability constraint to quasi-radial perturbations, stating that a rotating compact object should remain stable under perturbations like small changes in its shape or density distribution.
Our model meets these requirements. We provide a function defining the rotating model which, with zero rotation, converges to the non-rotating model. We approximate the equatorial mass-shedding limit. In the previous paragraphs, we discussed that the Keplerian frequency is slightly higher than the highest considered frequency in our study.
In order to meet the fourth constraint of stability for rotating compact stars, it is necessary to investigate the stability of the model against axisymmetric perturbations. This can be done by examining the derivative of the mass with respect to the radius at constant angular momentum \(J\), which is \(\big{(}\frac{dM}{dR}\big{)}_{J}>0\). Specifically, an increase in the stellar radius at a fixed angular momentum should give increased stellar mass. This criterion indicates that the star has the ability to withstand minor deformations and oscillations without undergoing collapse or mass loss.
To find the last stable configuration in each model, we examine the mass-radius plot at constant angular momentum \(J\). An example is illustrated in Figure 4, which displays three frames representing the angular momentum with \(1.13\), \(2.26\), and \(3.30~{}{\rm GM_{\odot}^{2}/c}\). As we discussed, the maximum gravitational mass in each sequence corresponds to the last
Figure 5: Gravitational mass as a function of central energy density with different angular momenta. The color indicates the value of the central magnetic field.
stable configuration in each magnetic field and angular momentum. We show the numerical results of the last stable configurations of SQS of each model in Table 1.
The maximum gravitational masses (\(M_{g}^{max}\)) of models versus the central magnetic field (\(B_{c}\)) are shown in Fig. 7. The \(M_{g}^{max}\) is a function of the central magnetic field \(B_{c}\) and rotational frequency \(f\),
\[\frac{M_{g}^{max}(B_{c},f)}{M_{g}^{max}(0,0)}\simeq(1+bB_{c}^{2})(1+cf^{2}) \tag{23}\]
where b and c are the constant values. In order to improve the accuracy of our fitting, we created a sequence of rotational frequencies ranging from \(0\) to \(1200\) Hz, with a step size of \(200\) Hz. In result, we obtained coefficients of \(c=8.57\times 10^{-8}\) s\({}^{2}\) and \(b=4.31\times 10^{-38}\) G\({}^{-2}\). Note that the accuracy in the fitting function is greater than \(10^{-3}\) in the rotational frequencies less than \(1200\) Hz.
We find a variation in the maximum gravitational mass (\(M_{g}^{max}\)) of non-magnetized strange quark stars (SQS) from \(2.35M_{\odot}\) to \(2.73M_{\odot}\), as the rotational frequency, rises from zero to \(1200\) Hz. In the magnetized non-rotating model \(M_{g}^{max}\) reaches \(2.37M_{\odot}\). However, when rotation and magnetic field are present, \(M_{g}^{max}\) reaches \(2.4M_{\odot}\), \(2.50M_{\odot}\), and \(2.80M_{\odot}\) at rotational frequencies of \(f=400\), \(800\), and \(1200\) Hz, respectively. It is also obvious from Fig. 5 that there is a lower limit for the gravitational mass in each sequence of the fast-rotating stars (\(f=1200\) Hz), the so-called equatorial mass-shedding limit that is discussed by Gondek-Rosinska et al. (2000) for non-magnetized rapidly rotating quark stars. We find that the magnetic field affects the equatorial mass-shedding limit. In the rapidly rotating model with \(f=1200\) Hz, the equatorial mass-shedding changes proportional to the central magnetic field (\(M_{g}^{min}\propto\sqrt[3]{B_{c}}\)), as shown in Fig. 6. We note that accuracy of the minimum mass value presented in this figure cannot be guaranteed due to the computational challenges encountered near the critical points. The numerical error involved in these calculations may have an impact on the fitting function.
The most recent detection of millisecond "black widow" pulsar _PSR J09520607_ estimates the gravitational mass of \(M_{g}\simeq 2.35M_{\odot}\pm 0.17\) and the dipole surface magnetic field of \(B\simeq 6\times 10^{7}\) G, with the period of \(1.4\) s and rotational frequency of \(\simeq 700\) Hz Romani et al. (2022). We compute the maximum gravitational mass of \(M_{g}^{max}=2.35M_{\odot}\) for the non-magnetized non-rotating SQS and \(M_{g}^{max}=2.38M_{\odot}\) for SQS with \(B_{c}\simeq 10^{17}\) G and the rotational frequency
Figure 6: The minimum mass limit for \(f=1200\) Hz as a function of the central magnetic field.
of \(400\) Hz. Also, our model for the pulsar with the rotational frequencies of \(800\) and \(1200\) Hz (\(M_{g}^{max}\geq 2.5M_{\odot}\)) can explain the recent detection of _GW190814_ by the LIGO/Virgo collaboration, where the gravitational mass of the lower mass object in the binary is estimated between \(2.5M_{\odot}\) and \(2.67M_{\odot}\). Compact objects with masses around \(\approx 2M_{\odot}\) have been observed also in _PSR J1614-2230_ (\(M=1.908\pm 0.016M_{\odot}\)) and _PSR J0348+0432_ (\(M=2.01\pm 0.04M_{\odot}\)) Demorest et al. (2010), Zhao (2015), and Chandra X-ray detection of _SGR J1745-2900_, estimates the gravitational mass and radius of this magnetar up to \(2M_{\odot}\) and \(13.7\) km, the surface magnetic field of this source is \(2\times 10^{14}\) G Coti Zelati et al. (2015), de Lima et al. (2020).
The gravitational mass as a function of central energy density for different values of magnetic field and angular momentum is shown in Fig. 5. For a given central energy density, the gravitational mass increases with increasing magnetic field and angular momentum. This behavior can be attributed to the increased pressure and density gradient near the center of the star, which can support a larger mass of material. In particular, as the magnetic field strength increases, the central energy density of the maximum gravitational mass also increases. Similarly, as the angular momentum increases, there is a shift towards higher gravitational masses at a given central energy density.
### Magnetic and rotational deformation
Rotation and magnetic field break the spherical symmetry of stars. The magnetic deformation depends on the magnetic field configuration of the star. We consider the poloidal magnetic field, where \(B_{r}\) and \(B_{\theta}\) are the non-vanishing components of the magnetic field. The magnetic and rotational deformations of SQS are shown in Fig. 8. In this figure, horizontal lines, which are at the same positions in all panels, indicate the polar radius of the non-magnetized configuration in each rotational frequency and show that rigidly rotating stars become more oblate. In each panel, colors indicate the magnetic field and we see that the strength of the magnetic field affects the shape of the star.
We plotted the deformation parameter \(a=R_{eq}/R_{pol}\) (where \(R_{eq}\) is the equatorial radius and \(R_{pol}\) is the polar radius) of the configurations with the maximum gravitational mass as a function of the central magnetic field in different rotational frequencies in Fig. 9. This helps to clarify how the magnetic field and rotational frequency affect the shape of the star. We find that the deformation parameter is a function of magnetic and rotational energy. The fitting function is
\[\frac{a(B_{c},f)}{a(0,0)}\simeq(1+\tilde{b}B_{c}^{2})(1+\tilde{c}f^{2}) \tag{24}\]
Figure 7: Gravitational mass as a function of the central magnetic field at different rotational frequencies.
Figure 8: Deformation of SQS for different rotational frequencies. The color of the ellipses indicates the magnitudes of the central magnetic field. Straight lines, which are at the same positions in all panels, indicate the polar radius of the non-magnetized configuration in each rotational frequency.
where \(\tilde{b}=1.54\times 10^{-37}\) G\({}^{-2}\) and \(\tilde{c}=2.39\times 10^{-7}\) s\({}^{2}\). Similar to Eq. 23, we generated a series of rotational frequencies to improve the precision of the fitting. The accuracy of the fitting is greater than \(10^{-3}\) in frequencies less than \(1200\) Hz.
The deformation parameter \(a\) can be found in the fifth column of Table 1. Based on our findings, we obtain that the maximum deformation parameter \(a=1.55\) corresponds to magnetized spinning SQS with \(B_{c}\simeq 5\times 10^{17}\) G and \(f=1200\) Hz.
### The total energy of SQS
The total energy of the star as measured at infinity is a sum of the internal and external energies: \(E_{tot}=E_{int}+E_{ext}\). The model being considered consist of SQM inside the star. Outside of the star, there is a dipolar magnetic field that the strength decreases with the distance from the star.
To calculate the internal energy \(E_{int}\), we integrate the energy-momentum tensor \(T^{\mu\nu}\) over the volume of SQS using the LORENE code. We compute the magnetic energy outside the star \(E_{ext}\), using the following method.
Given that there is only a magnetic field outside the surface of SQS, we neglect the general relativistic impacts on its external energy. The energy density of the magnetic field can be expressed as \(B^{2}/2\mu_{0}\). Since \(\mathbf{\nabla}\times\mathbf{B}=0\), we can introduce a magnetic potential \(\phi\) such that \(\mathbf{\nabla}\phi=\mathbf{B}\). This allows us to define the external energy of the star as follows:
\[E_{ext}=\int_{V}\frac{1}{2\mu_{0}}\mathbf{\nabla}\phi\cdot\mathbf{\nabla}\phi=\frac{1} {2\mu_{0}}\int_{dV}da\left(\mathbf{n}\cdot\mathbf{\phi}\mathbf{\nabla}\phi\right), \tag{25}\]
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|} \hline \(f\) (Hz) & \(B_{c}\) (\(10^{17}\) G) & \(M_{g}(M_{\odot})\) & \(M_{b}(M_{\odot})\) & \(R_{circ}\) (km) & \(a\) & \(|E_{EB}|/A\) (MeV) \\ \hline \hline & 0 & 2.35 & 2.92 & 11.92 & 1 & 184 \\ & 1.05 & 2.35 & 2.92 & 11.9 & 1.0 & 184 \\ & 2.43 & 2.36 & 2.93 & 11.97 & 1.01 & 184 \\
0 & 3.10 & 2.36 & 2.94 & 12.03 & 1.02 & 184 \\ & 3.84 & 2.36 & 2.94 & 12.03 & 1.03 & 184 \\ & 4.51 & 2.37 & 2.95 & 12.09 & 1.04 & 183 \\ & 5.11 & 2.37 & 2.95 & 12.21 & 1.05 & 183 \\ \hline \hline & 0 & 2.38 & 2.96 & 12.05 & 1.03 & 184 \\ & 1.03 & 2.38 & 2.95 & 12.1 & 1.03 & 184 \\ & 1.74 & 2.38 & 2.96 & 12.1 & 1.03 & 184 \\ & 3.12 & 2.39 & 2.97 & 12.15 & 1.05 & 183 \\ & 3.78 & 2.39 & 2.98 & 12.22 & 1.06 & 183 \\ & 4.5 & 2.40 & 2.98 & 12.22 & 1.07 & 183 \\ & 5.15 & 2.40 & 2.99 & 12.34 & 1.08 & 182 \\ \hline \hline & 0 & 2.48 & 3.07 & 12.54 & 1.11 & 182 \\ & 1.05 & 2.48 & 3.10 & 12.55 & 1.12 & 182 \\
800 & 2.45 & 2.49 & 3.10 & 12.59 & 1.13 & 182 \\ & 3.87 & 2.5 & 3.10 & 12.65 & 1.15 & 182 \\ & 4.6 & 2.51 & 3.11 & 12.70 & 1.16 & 182 \\ & 5.11 & 2.51 & 3.11 & 12.93 & 1.20 & 180 \\ \hline \hline & 0 & 2.73 & 3.38 & 13.08 & 1.36 & 179 \\ & 1.04 & 2.73 & 3.38 & 13.84 & 1.36 & 179 \\
1200 & 2.44 & 2.75 & 3.40 & 13.90 & 1.38 & 179 \\ & 3.83 & 2.78 & 3.43 & 14.10 & 1.43 & 178 \\ & 4.51 & 2.80 & 3.46 & 14.21 & 1.46 & 177 \\ & 4.97 & 2.80 & 3.43 & 14.6 & 1.55 & 171 \\ \hline \end{tabular}
\end{table}
Table 1: Structural parameters of the configuration with the maximum gravitational mass in different models.
where \(V\) represents the domain outside of SQS. According to the Gauss theorem,
\[\mathbf{\nabla}\cdot(\phi\mathbf{\nabla}\phi)=\phi\mathbf{\nabla}^{2}\phi+\mathbf{\nabla}\phi \cdot\mathbf{\nabla}\phi=\mathbf{\nabla}\phi\cdot\mathbf{\nabla}\phi \tag{26}\]
The triple integral can be simplified to a double integral over the surface of the star. By using the axisymmetric coordinates, this double integral can be further reduced to a single integral. As a result, we can consider \(\phi\) as the only effective parameter, which can be assumed to be produced by the magnetic dipole moment \(\mu\):
\[\phi(r,\theta)=\frac{\mu\cos\theta}{4\pi r^{2}}. \tag{27}\]
In this calculation, we used \(33\) points lying on the surface of the star, sampled from LORENE, and used Lagrange interpolation to construct the function describing the stellar surface. The integral over the stellar surface was then calculated using the trapezoid rule.
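To make the quadrature concrete, the following is a minimal numerical sketch of Eq. (25) with the dipole potential of Eq. (27), not the LORENE-based calculation itself: it assumes a purely spherical surface of radius \(R\) (rather than the Lagrange-interpolated stellar surface) so that the trapezoid-rule result can be compared with the closed-form exterior energy of a dipole, \(\mu^{2}/(12\pi\mu_{0}R^{3})\). The dipole moment and radius used below are illustrative placeholders.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability (SI)

def external_dipole_energy(mu_dip, R, n_theta=33):
    """Trapezoid-rule evaluation of Eq. (25) with the dipole potential of Eq. (27),
    assuming a spherical surface r = R.  The outward normal of the exterior domain
    points toward the star, so n . grad(phi) = -d(phi)/dr at r = R."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = mu_dip * np.cos(theta) / (4.0 * np.pi * R**2)          # potential on the surface
    n_dot_grad = mu_dip * np.cos(theta) / (2.0 * np.pi * R**3)   # -(d phi / d r) at r = R
    g = phi * n_dot_grad * R**2 * np.sin(theta)                  # integrand incl. area element
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta))   # trapezoid rule in theta
    return 2.0 * np.pi * integral / (2.0 * MU0)                  # azimuthal factor and 1/(2 mu0)

mu_dip, R = 1.0e30, 12.0e3      # illustrative dipole moment and a ~12 km radius (SI)
print(external_dipole_energy(mu_dip, R),
      mu_dip**2 / (12.0 * np.pi * MU0 * R**3))   # closed-form exterior dipole energy, for comparison
```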
In Table 2, we show the numerical results for the external and total energy for configurations of non-magnetized, non-rotating SQS, and of magnetized rotating SQSs with the highest magnetic field at each rotational frequency. The contribution of the external energy to the total energy is less than \(1\%\). We find that the magnetic field and spin of SQS affect both the gravitational mass and the total energy. As a result, the total energy increases by \(\simeq 4\%\) from the non-rotating to the rotating SQS.
In our models, the total energy of the SQS is approximately \(5\times 10^{47}\) J (or \(10^{54}\) ergs). For comparison, the energy of a Type II supernova is estimated to be up to \(10^{51}\) ergs Walch and Naab (2015), while the energy of a quark-nova is estimated to be approximately \(10^{52}\) ergs Ouyed et al. (2015); Ouyed (2022).
### Binding energy and compactness
An effective and practical way of understanding the microscopic properties of compact objects involves the inverse study of the EOS using observational data. This is possible through investigating the relationship between observable and theoretical parameters. In this section, we study the relationship between the total binding energy \(E_{BE}=(M_{g}-M_{b})c^{2}+E_{ext}\), where \(M_{b}\) denotes the baryon mass of the star, and the compactness parameter \(\beta=M_{b}/R_{\rm circ}\) (in units of \(M_{\odot}\)/km) in the presented models.
Figure 9: Deformation parameter versus the magnetic field in different rotational frequencies for the maximum stable configurations of SQS.
We determined the optimal fit of the binding energy plotted versus the compactness parameter for each rotational frequency and magnetic field. Our analysis indicates that the fitted curves are not significantly influenced by the magnetic field and rotational frequency, as illustrated in Fig. 10. Our results show that there is a linear relation between total binding energy and compactness of SQS with the given EOS model:
\[\frac{E_{BE}}{M_{g}}=-\beta \tag{28}\]
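As a quick numerical spot check of Eq. (28), the snippet below recomputes \(\beta\) and \((M_{g}-M_{b})/M_{g}\) (i.e., \(E_{BE}/M_{g}c^{2}\) with the small \(E_{ext}\) contribution neglected) for a few maximum-mass rows of Table 1; the fit shown in Fig. 10 uses all configurations of each sequence, not only these.

```python
import numpy as np

# Maximum-mass rows from Table 1: (M_g [M_sun], M_b [M_sun], R_circ [km])
rows = np.array([[2.35, 2.92, 11.92],    # f = 0 Hz,    B_c = 0
                 [2.37, 2.95, 12.21],    # f = 0 Hz,    B_c ~ 5.1e17 G
                 [2.48, 3.07, 12.54],    # f = 800 Hz,  B_c = 0
                 [2.73, 3.38, 13.08]])   # f = 1200 Hz, B_c = 0
beta = rows[:, 1] / rows[:, 2]                          # compactness M_b / R_circ (M_sun / km)
ebe_over_mg = (rows[:, 0] - rows[:, 1]) / rows[:, 0]    # ~ E_BE / (M_g c^2), E_ext neglected
print(np.round(beta, 3), np.round(np.abs(ebe_over_mg), 3))
# Each |E_BE| / (M_g c^2) lies within about 0.02 of the corresponding beta, consistent with Eq. (28).
```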
The binding energy values obtained in our study are slightly smaller than those reported in Refs. Lattimer (2019); Drago and Pagliara (2020); Jiang et al. (2019). For instance, the total binding energy of SQS in a confined density-dependent mass model (CDDM) is \(\sim 0.41-0.57\), as shown in Table II of Jiang et al. (2019). In contrast, the maximum value of \(|E_{BE}|/M_{g}\) in our models is \(\sim 0.24\). Notably, our value is consistent with the observational data for _J0737-3039B_, _J1756-2251c_, and _J1829+2456c_ Holgado (2021).
In the last column of Table 1, we show that the total binding energy per baryon number is \(171\leq|E_{BE}|/A\leq 184\) MeV. These values of binding energy validate the stability of our models for the compact object.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(f\) (Hz) & \(B_{c}\) (\(10^{17}\) G) & \(M_{g}\) (\(M_{\odot}\)) & \(E_{ext}\) (\(10^{46}\) J) & \(E_{tot}\) (\(10^{46}\) J) \\ \hline \hline
0 & 0 & 2.35 & 0.0 & 54 \\ & 5 & 2.37 & 0.05 & 55 \\ \hline
400 & 0 & 2.38 & 0.0 & 54 \\ & 5 & 2.40 & 0.05 & 55 \\ \hline
800 & 0 & 2.48 & 0.0 & 54 \\ & 5 & 2.51 & 0.02 & 55 \\ \hline
1200 & 0 & 2.73 & 0.0 & 55 \\ & 5 & 2.80 & 0.05 & 56 \\ \hline \end{tabular}
\end{table}
Table 2: The total and external energy associated with the maximum gravitational mass for both magnetized (with the highest magnetic field) and non-magnetized configurations in each rotating model.
Figure 10: Total binding energy per unit of gravitational mass versus compactness.
## 6 Conclusions
We investigated the impact of magnetic fields and spin on the structural parameters, stability, and energy of SQS. We constructed a model of a non-magnetized non-rotating SQS, as well as several models of magnetized spinning SQS, by varying the central magnetic field and rotational frequency. In each model, we made a sequence of \(51\) configurations by changing the central enthalpy in the range of \(0.01c^{2}\) to \(0.51c^{2}\) with a spacing of \(0.01c^{2}\).
The equation of state of SQS is computed using the density-dependent MIT bag model, which takes into account the Landau quantization effect in the Fermi relations arising from the strong magnetic field in the interior of compact objects. To calculate the structural parameters, we used the \(3+1\) formalism to solve the axisymmetric Einstein field equations, which resulted in four elliptic partial differential equations.
The Et_magnetisation class of the LORENE library was employed to solve the structure equations. We investigated the stability of the configurations and examined how the presence of magnetic fields and spin influences various parameters such as the maximum and minimum gravitational mass, deformation, total energy, binding energy, and compactness.
Our study indicates that the maximum gravitational mass of the SQS is \(\approx 2.35M_{\odot}\) in the non-rotating model and that it increases slightly with the strength of the magnetic field. Furthermore, our analysis of rotating models shows that stable configurations exist at larger gravitational masses. Specifically, our model reaches a maximum gravitational mass of \(2.8M_{\odot}\) for a rotation frequency of \(f=1200\) Hz and a central magnetic field of \(B_{c}\simeq 5\times 10^{17}\) G. We derived a fitting function that relates the maximum gravitational mass to the central magnetic field and rotational frequency (as presented in Section 5.1).
We also found that in the fast-rotating model, there is a minimum limit for the gravitational mass (equatorial mass-shedding limit) which is affected by the magnetic field (shown in the last panel of Fig. 3). We found that the rate of change in the minimum mass limit is proportional to \(\sqrt[4]{B_{c}}\).
It is important to note that the results for the minimum mass limit are affected by the computational challenges in the critical model. Therefore, further study is required to confirm the accuracy of the minimum mass value and to verify the fitting function.
In addition, we studied the magnetic and rotational deformations of SQS. The deformation parameter is defined as the ratio of the equatorial radius to the polar radius. The results indicate that the maximum deformation of SQS is \(1.55\) in the fast-rotating magnetized model with \(f=1200\) Hz and \(B_{c}=5\times 10^{17}\) G. We found that the deformation parameter is a function of magnetic field and rotational frequency (discussed in Section 5.2).
We estimated the total energy of SQS. The total energy of our SQS models is on the order of \(10^{54}\) ergs, a finding that is consistent with prior theoretical investigations and observational data.
Our analysis shows that the binding energy of SQS is a linear function of compactness. In our proposed models, the ratio of the binding energy to the gravitational mass is approximately \(0.24\), which is consistent with the observed values for _J0737-3039B_, _J1756-2251c_, and _J1829+2456c_. Moreover, the binding energy per baryon number of SQS is in the range of \(171-184\) MeV. The compactness of SQS is approximately \(0.25\) across all configurations examined in our study.
## Acknowledgments
This project was funded by the Polish NCN (grant No. 2019/33/B/ST9/01564), and MC's work in Opava was also supported by the ESF project No. CZ.\(02.2.69/0.0/0.0/18\_054/0014696\). We thank Prof. Wlodek Kluzniak and Prof. Leszek Zdunik for advice and discussions, and the LORENE team for the possibility to use the code.
|
2306.14079 | Fighting Uncertainty with Gradients: Offline Reinforcement Learning via
Diffusion Score Matching | Gradient-based methods enable efficient search capabilities in high
dimensions. However, in order to apply them effectively in offline optimization
paradigms such as offline Reinforcement Learning (RL) or Imitation Learning
(IL), we require a more careful consideration of how uncertainty estimation
interplays with first-order methods that attempt to minimize them. We study
smoothed distance to data as an uncertainty metric, and claim that it has two
beneficial properties: (i) it allows gradient-based methods that attempt to
minimize uncertainty to drive iterates to data as smoothing is annealed, and
(ii) it facilitates analysis of model bias with Lipschitz constants. As
distance to data can be expensive to compute online, we consider settings where
we need to amortize this computation. Instead of learning the distance, however, we
propose to learn its gradients directly as an oracle for first-order
optimizers. We show these gradients can be efficiently learned with
score-matching techniques by leveraging the equivalence between distance to
data and data likelihood. Using this insight, we propose Score-Guided Planning
(SGP), a planning algorithm for offline RL that utilizes score-matching to
enable first-order planning in high-dimensional problems, where zeroth-order
methods were unable to scale, and ensembles were unable to overcome local
minima. Website: https://sites.google.com/view/score-guided-planning/home | H. J. Terry Suh, Glen Chou, Hongkai Dai, Lujie Yang, Abhishek Gupta, Russ Tedrake | 2023-06-24T23:40:58Z | http://arxiv.org/abs/2306.14079v2 | # Fighting Uncertainty with Gradients: Offline Reinforcement Learning via Diffusion Score Matching
###### Abstract
Offline optimization paradigms such as offline Reinforcement Learning (RL) or Imitation Learning (IL) allow policy search algorithms to make use of offline data, but require careful incorporation of uncertainty in order to circumvent the challenges of distribution shift. Gradient-based policy search methods are a promising direction due to their effectiveness in high dimensions; however, we require a more careful consideration of how these methods interplay with uncertainty estimation. We claim that in order for an uncertainty metric to be amenable for gradient-based optimization, it must be (i) stably convergent to data when uncertainty is minimized with gradients, and (ii) not prone to underestimation of true uncertainty. We investigate smoothed distance to data as a metric, and show that it not only stably converges to data, but also allows us to analyze model bias with Lipschitz constants. Moreover, we establish an equivalence between smoothed distance to data and data likelihood, which allows us to use score-matching techniques to learn gradients of distance to data. Importantly, we show that offline model-based policy search problems that maximize data likelihood do not require values of likelihood; but rather only the gradient of the log likelihood (the score function). Using this insight, we propose Score-Guided Planning (SGP), a planning algorithm for offline RL that utilizes score-matching to enable first-order planning in high-dimensional problems, where zeroth-order methods were unable to scale, and ensembles were unable to overcome local minima. Website: [https://sites.google.com/view/score-guided-planning/home](https://sites.google.com/view/score-guided-planning/home)
Keywords: Diffusion, Score-Matching, Offline, Model-Based Reinforcement Learning, Imitation Learning, Planning under Uncertainty
## 1 Introduction
Uncertainty minimization is a central problem in offline optimization, which manifests as many different paradigms in robot learning. In offline model-based RL (MBRL [1, 2, 3, 4]), penalization of uncertainty acts as a regularizer against model bias [5, 6, 7] and prevents the optimizer from exploiting model error [8, 6, 9, 10]. In offline model-free RL, it regularizes against overestimation of the Q function [11]. In addition, many imitation learning (IL) algorithms can be viewed as minimizing distribution shift from the demonstrator distribution [12].
Despite the importance of uncertainty, statistical uncertainty quantification remains a difficult problem [13, 14, 8, 6, 15]; Gaussian processes (GPs) [6, 16, 17] rarely scale to high dimensions, and ensembles [8, 4, 18] are prone to underestimating the true uncertainty [14]. As such, previous works have often
taken the more direct approach of staying near the data by maximizing the data likelihood. These methods either minimize distribution shift between the optimized and data distribution for behavior regularization [19, 20, 21, 22], or the occupation measure of the optimized policy and the data distribution [11, 23, 24, 25, 26, 27, 28, 29]. Both of these directions would require the likelihood of the data distribution to be estimated.
A promising direction for estimating the data likelihood is to leverage techniques from likelihood-based generative modeling, such as variational autoencoders (VAE) [30, 31, 32], generative adversarial networks (GAN) [33, 34, 35, 12], and flow-based models [24, 36]. Yet, these prior works have shown that training density models to generate accurate likelihoods can be challenging, especially for high-dimensional data. While gradient-based methods have good scalability properties that make them desirable for tackling offline optimization problems in this high-dimensional regime, the effects of incorrect likelihoods are further exacerbated in this setting, as they lead gradient-based methods into spurious local minima. Thus, we ask in this work: can we design gradient-based offline optimization methods that encourage data likelihood, without explicit generative modeling of likelihoods?
Our key insight is that in order to maximize the data likelihood with gradient-based methods, we do not need access to the likelihood itself. Rather, having access to a first-order oracle (gradients), known as the _score function_, is sufficient. We claim that directly utilizing the score function has two benefits compared to likelihood-based modeling. i) First, recent breakthroughs in score-based modeling [37, 38, 39] show that the score function is considerably easier to estimate with score-matching techniques [39, 38], as it bypasses estimation of the partition function that is required for computation of exact likelihoods [39, 40]. ii) In addition, we show that score matching with annealed perturbations [39] gives gradients that stably drive decision variables to land exactly on data when uncertainty is minimized with gradient-based optimization, a property we term _data stability_. We demonstrate this by showing that the negative log likelihood of the perturbed empirical distribution, whose gradients score-matching estimates, is equivalent to a softened distance to data [41].
Furthermore, we ask: when, and why, would approaches that penalize distance to data surpass the ensemble method of statistical uncertainty quantification [4, 9]? We show that unlike empirical variance among ensembles, we can relate how much smoothed distance to data underestimates true uncertainty with the Lipschitz constant, for which we can use statistical estimation to put confidence bounds [17], or utilize structured domain knowledge [42]. Moreover, we show that ensembles do not necessarily have the data stability property due to statistical noise; therefore, optimizing for ensemble variance can easily lead to local minima away from data in gradient-based optimization.
To put our theory into a practical algorithm, we propose Score-Guided Planning (SGP), a gradient-based algorithm that estimates gradients of the log likelihood with score matching, and solves uncertainty-penalized offline optimization problems that additively combine the cumulative reward and the log likelihood of data _without_ any explicit modeling of likelihood. SGP enables stable uncertainty minimization in high-dimensional problems, enabling offline MBRL to scale even to pixel action-spaces. We validate our theory on empirical examples such as the cart-pole system, the D4RL benchmark [43], a pixel-space single integrator, and a box-pushing task [44] on hardware.
## 2 Preliminaries
Offline Model-Based Optimization.We first introduce a setting of _offline model-based optimization_[45]. In this setting, we aim to find \(x\) that minimizes an objective function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), but are not directly given access to \(f\); instead, we have access to \(x_{i}\sim p(x)\), and their corresponding values \(f(x_{i})\), such that the dataset consists of \(\mathcal{D}=\{(x_{i},f(x_{i}))\}\). Denoting \(\hat{p}(x;\mathcal{D})\) as the empirical distribution corresponding to dataset \(\mathcal{D}\), offline model-based optimization solves
\[\min_{x}f_{\theta^{*}}(x)\quad\text{s.t.}\quad\theta^{*}=\text{ arg}\min_{\theta}\mathbb{E}_{x\sim\hat{p}(x;\mathcal{D})}\left[\|f_{\theta}(x)-f(x)\|^{2} \right]. \tag{1}\]
In words, we minimize a surrogate loss \(f_{\theta}(x)\), where we choose \(\theta\) as the solution to empirical risk minimization of matching \(f\) given the data. Denoting \(x^{*}=\text{arg}\min_{x}f_{\theta^{*}}(x)\), one measure of performance of this procedure is error at optimality, \(\|f(x^{*})-f_{\theta}(x^{*})\|\).
Uncertainty Penalization.The gap \(\|f(x^{*})-f_{\theta}(x^{*})\|\) potentially can be large if \(f_{\theta}\) fails to approximate \(f\) correctly at \(x^{*}\), which is likely if \(x^{*}\) is out-of-distribution (o.o.d.). To remedy this, previous works [8, 45, 46] have proposed adding a loss term that penalizes o.o.d. regions. We denote this penalized objective as
\[\bar{f}_{\theta}(x)\coloneqq f_{\theta}(x)+\beta\mu^{2}(x), \tag{2}\]
where \(\beta\in\mathbb{R}_{\geq 0}\) is some weighting parameter, and \(\mu(x)\) is some notion of uncertainty. Intuitively this restricts the choice of optimal \(x\) to the training distribution used to find \(\theta^{*}\), since these are the values that can be trusted. If the uncertainty metric overestimates the true uncertainty, \(\|f(x)-f_{\theta}(x)\|\leq\mu(x)\), it is possible to bound the error at optimality directly using \(\mu(x^{*})\).
Offline Model-Based RL.While the offline model-based optimization problem described is a one-step problem, the problem of offline model-based RL (MBRL) involves sequential decision making using an offline dataset. In offline MBRL, we are given a dataset \(\mathcal{D}=\{(x_{t},u_{t},x_{t+1})_{i}\}\) where \(x\in\mathbb{R}^{n}\) denotes the state, \(u\in\mathbb{R}^{m}\) denotes action, \(t\) is the time index and \(i\) is the sample index. Again denoting \(\hat{p}(x_{t},u_{t},x_{t+1};\mathcal{D})\) as the empirical distribution corresponding to \(\mathcal{D}\), and introducing \(\mu(x_{t},u_{t})\) as a state-action uncertainty metric [8], uncertainty-penalized offline MBRL solves
\[\max_{x_{1:T},u_{1:T}} \sum_{t=1}^{T}r_{t}(x_{t},u_{t})-\beta\mu^{2}(x_{t},u_{t})\] (3) s.t. \[x_{t+1}=f_{\theta^{*}}(x_{t},u_{t})\ \forall t,\] \[\theta^{*}=\text{arg}\min_{\theta}\mathbb{E}_{(x_{t},u_{t},x_{t+1 })\sim\hat{p}}\left[\|x_{t+1}-f_{\theta}(x_{t},u_{t})\|^{2}\right].\]
In words, we first approximate the transition dynamics of the system from data, then solve the uncertainty-penalized optimal control problem assuming the learned dynamics. Previous works have used ensembles [8, 9], or likelihood-based generative models of data [24] to estimate this uncertainty.
The resulting open-loop planning problem can be solved either with first-order methods [47, 48] or with zeroth-order sampling-based methods [44, 9] such as CEM [49] or MPPI [50]. While insights from stochastic optimization [51] tell us that first-order methods are more favorable in high dimensions and over longer horizons, as the variance of zeroth-order methods scales with the dimension of the decision variables [52], first-order methods require careful consideration of how amenable the uncertainty metric is to gradient-based optimization.
## 3 Distance to Data as a Metric of Uncertainty
In order to find a metric of uncertainty that is amenable to gradient-based offline optimization, we investigate smoothed distance to data as a candidate. We show that this metric has favorable properties that allow us to stably minimize uncertainty back to data using gradients. In addition, it allows us to quantify how much we underestimate true uncertainty by using the Lipschitz constant of the error. All proofs for theorems in this section are included in Appendix A.
### Properties of Distance to Data
We first formally define our proposed metric of uncertainty for offline model-based optimization.
**Definition 3.1** (Distance to Data).: Consider a dataset \(\mathcal{D}=\{x_{i}\}\) and an arbitrary point \(x\in\mathbb{R}^{n}\). The standard squared distance from \(x\) to the set \(\mathcal{D}\) can be written as \(d(x;\mathcal{D})^{2}=\min_{x_{i}\in\mathcal{D}}\frac{1}{2}\|x-x_{i}\|^{2}\). We define smoothed distance to data using a smoothed version of this standard squared distance,
\[d_{\sigma}(x;\mathcal{D})^{2}\coloneqq\mathsf{Softmin}_{\sigma}\,\tfrac{1}{2}\|x-x_{i}\|^{2}+C=-\sigma^{2}\log\left[\sum_{i}\exp\left[-\tfrac{1}{2\sigma^{2}}\|x-x_{i}\|^{2}\right]\right]+C, \tag{4}\]
where \(C\) is some constant to ensure positiveness of \(d_{\sigma}(x;\mathcal{D})^{2}\), and \(\sigma>0\), also known as the _temperature parameter_, controls the degree of smoothing with \(\sigma\to 0\) converging to the true min.
The motivation for introducing smoothing is to make the original non-smooth distance metric more amenable for gradient-based optimization [53, 54]. We now consider benefits of using \(\mu(x)=d_{\sigma}(x;\mathcal{D})\) as our uncertainty metric, and show that as the smoothing level is annealed down, minimizing this distance allows us to converge to the points in the dataset.
**Proposition 1** (Data Stability).: Consider a monotonically decreasing sequence \(\sigma_{k}\) such that \(\sigma_{k}\to 0\), and denote \(x_{k}^{n}\) as the \(n^{th}\) gradient descent iteration of \(\min_{x}d_{\sigma_{k}}(x;\mathcal{D})^{2}\). Then, almost surely with random initialization and appropriate step size, we have
\[\lim_{k\to\infty}\lim_{n\to\infty}x_{k}^{n}\in\mathcal{D}. \tag{5}\]
We note that GPs show similar characteristics empirically [6]; however, empirical variance among ensembles [8] is prone to having local minima away from data due to statistical variations, especially with a small number of ensembles \(M\), which makes them an unreliable metric to use for gradient-based optimization. Next, we show that it is possible to analyze how much softened distance to data underestimates true uncertainty using the Lipschitz constant of the model bias \(L_{e}\).
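The behavior described in Proposition 1 is easy to reproduce numerically. The sketch below, using an arbitrary toy dataset, applies gradient descent to \(d_{\sigma}(x;\mathcal{D})^{2}\) with the analytic gradient \(\sum_{i}w_{i}(x-x_{i})\) (softmax weights \(w_{i}\)) while annealing \(\sigma\); the iterate ends up numerically on a data point. The dataset, schedule, and step size are illustrative choices.

```python
import numpy as np

def grad_d_sigma_sq(x, data, sigma):
    """Analytic gradient of d_sigma(x; D)^2 (Eq. 4): a softmax-weighted mean of (x - x_i)."""
    sq = 0.5 * np.sum((x - data) ** 2, axis=1)
    w = np.exp(-(sq - sq.min()) / sigma**2)   # stabilized softmax weights
    w /= w.sum()
    return (w[:, None] * (x - data)).sum(axis=0)

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 2))             # toy 2-D "dataset"
x = np.array([3.0, -2.0])                   # initialize away from the data
for sigma in [1.0, 0.5, 0.25, 0.1, 0.05]:   # anneal sigma downward
    for _ in range(200):
        x = x - 0.5 * grad_d_sigma_sq(x, data, sigma)
print(np.min(np.linalg.norm(data - x, axis=1)))   # ~0: the iterate lands on a data point
```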
**Proposition 2** (Lipschitz Bounds).: Let \(L_{e}\) be the local Lipschitz constant of the true error (bias) \(e(x)\coloneqq\|f(x)-f_{\theta}(x)\|_{2}\) valid over \(\mathcal{Z}\subseteq\mathcal{X}\), where \(\mathcal{X}\) is the domain of the input \(x\). Then, \(e(x)\) is bounded by
\[e(x)\leq e(x_{c})+\sqrt{2}L_{e}\sqrt{d_{\sigma}(x;\mathcal{D})^{2}+C_{2}}, \tag{6}\]
for all \(x\in\mathcal{Z}\), where \(x_{c}\coloneqq\arg\min_{x_{i}\in\mathcal{D}}\frac{1}{2}\|x-x_{i}\|_{2}^{2}\), i.e., the closest data-point, and \(C_{2}=\sigma^{2}\log N-C\), where \(C\) is defined in (4).
In general, it is difficult to obtain \(L_{e}\) in the absence of more structured knowledge of \(f\). However, it is possible to obtain confidence bounds on \(L_{e}\) using statistical estimation with pairwise finite slopes \(\|e(x_{i})-e(x_{j})\|/\|x_{i}-x_{j}\|\) within the dataset [55, Ch. 3][17]. We believe this offers benefits over ensembles, as characterizing the convergence of neural network weights from randomly initialized points is far more complex to analyze. We compare distance to data to other uncertainty metrics in Figure 1 on a simple 1D offline model-based optimization problem, where we show that ensembles [8, 9] have local minima outside of data and unpredictably underestimate uncertainty due to model bias.
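For illustration, a simple point estimate of \(L_{e}\) from these pairwise slopes can be computed as below; taking the maximum slope generally underestimates the true Lipschitz constant, and the cited estimators instead use the distribution of slopes to construct high-confidence upper bounds. The synthetic error function is only an example.

```python
import numpy as np

def max_pairwise_slope(X, e):
    """Point estimate of L_e from pairwise finite slopes |e(x_i) - e(x_j)| / ||x_i - x_j||."""
    slopes = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d > 0:
                slopes.append(abs(e[i] - e[j]) / d)
    return max(slopes)

# Toy usage with a synthetic error function e(x) = |sin(3x)| sampled at random points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
e = np.abs(np.sin(3 * X[:, 0]))
print(max_pairwise_slope(X, e))   # approaches the true Lipschitz constant 3 as samples grow
```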
### Estimating Gradients of Distance to Data with Score Matching
Although we have shown benefits of smoothed distance to data as an uncertainty metric amenable for gradient-based optimization, it is costly to compute at inference time as we need to iterate through the entire dataset. At the cost of losing guarantees of exact computation in Section 3.1, we consider ways to amortize this computation with function approximation. We first show the equivalence of the
Figure 1: Comparison of different uncertainty metrics. Top row: Visualization of distance to data against GPs and ensembles with \(M=2\). Bottom row: Visualization of the penalty \(\mu(x)^{2}\). All the metrics underestimate uncertainty to varying degrees, but distance to data can be more amenable for analysis; as we increase samples, distance to data will be able to bound the true uncertainty more closely by estimating Lipschitz constants using pairwise slopes in the dataset. In addition, distance to data shows more stability to data while ensembles have local minima outside of data.
smoothed distance-to-data to the negative log likelihood of the _perturbed empirical distribution_[39], which applies randomized smoothing [53, 54] to the empirical distribution \(\hat{p}(x;\mathcal{D})\).
**Definition 3.2** (Perturbed Empirical Distribution).: Consider a dataset \(\mathcal{D}=\{x_{i}\}\) and its corresponding empirical distribution \(\hat{p}(x;\mathcal{D})\). We define \(p_{\sigma}(x;\mathcal{D})\) as the noise-perturbed empirical distribution,
\[p_{\sigma}(x^{\prime};\mathcal{D})\coloneqq\int\hat{p}(x;\mathcal{D})\mathcal{ N}(x^{\prime};x,\sigma^{2}\mathbf{I})dx=\tfrac{1}{N}\sum_{x_{i}\in\mathcal{D}} \mathcal{N}(x^{\prime};x_{i},\sigma^{2}\mathbf{I}). \tag{7}\]
**Proposition 3**.: The negative log-likelihood of the perturbed empirical distribution \(p_{\sigma}(x)\) is equivalent to smoothed distance to data by a factor of \(\sigma^{2}\), up to some constant that does not depend on \(x\),
\[-\sigma^{2}\log p_{\sigma}(x;\mathcal{D})=d_{\sigma}(x;\mathcal{D})^{2}+C(N,n,\sigma). \tag{8}\]
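This equivalence is straightforward to verify numerically: evaluating both sides of Eq. (8) on a toy dataset, their difference is independent of \(x\). The sketch below is illustrative only; the dataset and \(\sigma\) are arbitrary.

```python
import numpy as np

def neg_sigma2_log_p(x, data, sigma):
    """-sigma^2 * log p_sigma(x; D) for the Gaussian-perturbed empirical distribution (Eq. 7)."""
    n = data.shape[1]
    sq = np.sum((x - data) ** 2, axis=1) / (2.0 * sigma**2)
    lse = np.log(np.sum(np.exp(-(sq - sq.min())))) - sq.min()      # stable log-sum-exp
    log_p = -np.log(len(data)) - 0.5 * n * np.log(2.0 * np.pi * sigma**2) + lse
    return -sigma**2 * log_p

def smoothed_dist_sq(x, data, sigma):
    """d_sigma(x; D)^2 from Eq. (4), taking C = 0."""
    sq = 0.5 * np.sum((x - data) ** 2, axis=1)
    m = sq.min()
    return m - sigma**2 * np.log(np.sum(np.exp(-(sq - m) / sigma**2)))

rng = np.random.default_rng(0)
data, sigma = rng.normal(size=(50, 3)), 0.3
xs = rng.normal(size=(5, 3))
diffs = [neg_sigma2_log_p(x, data, sigma) - smoothed_dist_sq(x, data, sigma) for x in xs]
print(np.ptp(diffs))   # ~0: the two quantities differ only by an x-independent constant
```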
This connection to the perturbed empirical distribution allows us to use generative modeling tools that utilize random perturbations of data [56, 39, 38], such as denoising autoencoders. However, computing likelihoods directly has proven to be difficult for high-dimensional data [39]. Additionally, even if such a likelihood-based generative model can give good likelihoods, there are no guarantees on the quality of its gradients, which defeats our purpose of finding a metric that is amenable to gradient-based optimization. Thus, we propose to use approaches that estimate the gradients of the perturbed empirical distribution (score function) directly [39, 38], which have shown promising performance in generative modeling [37] as this bypasses the estimation of the partition function.
Estimating the score function is a process known as _score-matching_. Following [39], we introduce _noise-conditioned score function_\(s(x,\sigma)\coloneqq\nabla_{x}\log p_{\sigma}(x;\mathcal{D})\)[39] and aim to optimize the following objective given some sequence of annealed smoothing parameters \(\sigma_{k}\),
\[\min_{\theta}\sum_{k}\sigma_{k}^{2}\mathbb{E}_{x\sim p_{\sigma_{k}}(x; \mathcal{D})}\left[\|s_{\theta}(x,\sigma_{k})-\nabla_{x}\log p_{\sigma_{k}}(x ;\mathcal{D})\|^{2}\right], \tag{9}\]
which has been shown to be equivalent to the denoising-score-matching loss [56]. Compared to explicitly computing \(\nabla_{x}\log p_{\sigma_{k}}(x;\mathcal{D})\) which would require iterating through the entire dataset, the denoising loss allows us to learn the score function using batches of data.
\[\min_{\theta}\sum_{k}\sigma_{k}^{2}\mathbb{E}_{\begin{subarray}{c}x\sim\hat{ p}(x;\mathcal{D})\\ x^{\prime}\sim\mathcal{N}(x^{\prime};x,\sigma_{k}^{2}\mathbf{I})\end{subarray}} \left[\|s_{\theta}(x^{\prime};\sigma_{k})+\sigma_{k}^{-2}(x^{\prime}-x)\|^{2} \right]. \tag{10}\]
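A minimal PyTorch sketch of this training objective is given below; the network architecture, noise schedule, and stand-in batch of \((x_{t},u_{t})\) data are illustrative choices rather than the configuration used in our experiments. It relies on the identity \(\sigma_{k}^{2}\|s+\sigma_{k}^{-2}(x^{\prime}-x)\|^{2}=\|\sigma_{k}s+\epsilon\|^{2}\) for \(x^{\prime}=x+\sigma_{k}\epsilon\).

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Noise-conditioned score model s_theta(x, sigma); the MLP here is an illustrative choice."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))
    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma], dim=-1))

def dsm_loss(score_net, x, sigmas):
    """Denoising score-matching objective of Eq. (10) for a batch of clean points x."""
    k = torch.randint(len(sigmas), (x.shape[0], 1))   # one noise level per batch element
    sigma = sigmas[k]
    eps = torch.randn_like(x)
    x_pert = x + sigma * eps                          # x' ~ N(x, sigma^2 I)
    return ((sigma * score_net(x_pert, sigma) + eps) ** 2).sum(dim=-1).mean()

sigmas = torch.logspace(0, -2, steps=10)   # annealed noise schedule, 1.0 -> 0.01
model = ScoreNet(dim=3)                    # e.g. dim = state dim + action dim
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_batch = torch.randn(256, 3)              # stand-in for (x_t, u_t) training pairs
opt.zero_grad()
loss = dsm_loss(model, x_batch, sigmas)
loss.backward()
opt.step()
```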
## 4 Planning with Gradients of Data Likelihood
Having illustrated benefits of using smoothed distance to data as an uncertainty metric, we now turn to the sequential decision-making setting of offline MBRL, where we use notation from Section 2.
Offline MBRL with Data Likelihood.Consider a learned dynamics model \(f_{\theta}\) from the dataset \(\mathcal{D}=\{(x_{t},u_{t},x_{t+1})_{i}\}\), as well as the perturbed empirical distribution \(p_{\sigma}(x_{t},u_{t};\mathcal{D})\) of the \((x_{t},u_{t})\) pairs in the dataset \(\mathcal{D}\). Then, given some sequence of rewards \(r_{t}\), we consider the following planning problem of maximizing both the reward and the likelihood of data,
\[\max_{x_{1:T},u_{1:T}} \sum_{t=1}^{T}r_{t}(x_{t},u_{t})+\beta\sigma^{2}\sum_{t=1}^{T} \log p_{\sigma}(x_{t},u_{t};\mathcal{D})\] (11) s.t. \[x_{t+1}=f_{\theta}(x_{t},u_{t})\quad\forall t\in[1,T].\]
Note that \(-\sigma^{2}\log p_{\sigma}(x_{t},u_{t})\) is equivalent to smoothed distance to data \(d_{\sigma}(x_{t},u_{t};\mathcal{D})^{2}\) as an objective, and acts as an uncertainty penalty, preventing the optimizer from exploiting o.o.d. solutions.
Score-Guided Planning (SGP).Given a differentiable \(r_{t}\), we propose a single-shooting algorithm that rolls out \(f_{\theta}\) iteratively and computes the gradient of the cumulative reward with respect to the input trajectory parameters \(u_{1:T}\). We modify this gradient by adding the learned score function projected into the space of decision variables. In particular, the derivative of \(\log p(x_{i},u_{i})\) at time \(i\) with respect to decision variable \(u_{j}\) at time \(j\) can be evaluated through the chain rule,
\[\frac{\partial}{\partial u_{j}}\log p_{\sigma}(x_{i},u_{i})=\frac{\partial}{ \partial x_{i}}\log p_{\sigma}(x_{i},u_{i})\frac{\partial x_{i}}{\partial u_{j} }+\frac{\partial}{\partial u_{i}}\log p_{\sigma}(x_{i},u_{i})\frac{\partial u _{i}}{\partial u_{j}}. \tag{12}\]
Note that \(\nabla_{x_{i}}\log p_{\sigma}(x_{i},u_{i})\) and \(\nabla_{u_{i}}\log p_{\sigma}(x_{i},u_{i})\) can be obtained using the noise-conditioned score estimator \(s_{\theta}(x,u,\sigma)\) in Section 3.2, where we extend the domain to include both \(x\) and \(u\). After computing this gradient with reverse-mode autodiff (Appendix B.1), we call optimizers that accept gradient oracles, such as Adam [48]. We also note that this gradient computation can be extended to feedback policy gradients in Appendix B.4. Finally, we train a noise-conditioned score function for some sequence \(\sigma_{k}\), and anneal the noise-level during optimization in order to ensure accurate score estimation [39], and leverage the result of Proposition 1.
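A compact single-shooting sketch of this procedure is shown below. It is only illustrative (the full implementation is described in Appendix B.1): `f`, `r`, and `score` are user-supplied differentiable callables, the hyperparameters are placeholders, and the annealing of \(\sigma\) across iterations is omitted. Detaching the score values makes reverse-mode autodiff through the rollout reproduce exactly the chain rule of Eq. (12).

```python
import torch

def sgp_plan(f, r, score, x0, T, m, sigma, beta=1.0, iters=200, lr=1e-2):
    """Single-shooting sketch of Eq. (11).  f(x, u) -> x_next, r(x, u) -> scalar reward,
    and score(x, u, sigma) -> (dlogp/dx, dlogp/du) are user-supplied differentiable
    callables; T is the horizon, m the action dimension."""
    u = torch.zeros(T, m, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x, obj = x0, 0.0
        for t in range(T):
            obj = obj + r(x, u[t])
            sx, su = score(x, u[t], sigma)
            # Detaching the score values makes autodiff through the rollout reproduce
            # exactly the chain rule of Eq. (12) for d log p_sigma / d u_j.
            obj = obj + beta * sigma**2 * ((sx.detach() * x).sum() + (su.detach() * u[t]).sum())
            x = f(x, u[t])
        (-obj).backward()   # ascend the penalized return
        opt.step()
    return u.detach()
```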
**Related Methods**. LDM [24] is a closely-related MBRL method that solves a likelihood-constrained version of our problem, and uses flow models [36] to approximate the data likelihood. Many IL methods are also related, as SGP with zero rewards can be used to imitate demonstration data by maximizing data likelihood [12] (Appendix B.5). In particular, AIRL [34] maximizes the rewardless version of our objective using GANs[33]. However, these methods primarily rely on likelihood-based generative models, and do not consider the interplay of generative modeling with gradient-based optimization. Diffuser [25, 57] shares similar methodologies with SGP, and solves a variant of Equation (11) with a quadratic-penalty-based approximation of direct transcription (Appendix B.3), which comes with benefits of numerical stability for long horizons [58, exercise 10.1], and robustness to sparse rewards [59]. Finally, Diffusion Policies [40] attempt to maximize the state-conditioned action sequence distribution, but is a behavior cloning method that does not account for rewards.
## 5 Empirical Results
We now test our proposed algorithm (SGP) empirically and show that it is an effective method for offline optimization that has scalable properties in high-dimensional problems by leveraging gradient-based optimization. All details for the included environments are presented in Appendix C.
We observe that the convergence of CEM is much slower than that of first-order methods (Fig.2.F). However, we surprisingly obtain better asymptotic performance with CEM than with Adam for ensembles; we believe this signifies the presence of local minima in gradient-based minimization of ensemble variance, as opposed to CEM, which has some stochastic smoothing that allows it to escape local minima [52]. Finally, we note that unlike the score function, a direct model of distance to data is considerably more difficult to train, as we need to loop through the entire dataset to compute each training target. As a result, it is costly to train, and we were unable to train it to good performance in a reasonable amount of time.
D4RL Benchmark.To evaluate our method against other methods on a standard benchmark, we use the MuJoCo [60] tasks in the D4RL [43] dataset with three different environments and sources of data. To turn our planner into a controller, we solve the planning problem with some finite horizon, rollout the first optimal action \(u_{1}^{*}\), and recompute the plan in a standard model-predictive control (MPC) [61] fashion. We compare against methods such as Behavior Cloning (BC) [21, 20], MOPO [8], MBOP [62], CQL [22], and Diffuser [25]. Our results demonstrate that our algorithm performs comparably to other state-of-the-art methods. On many of the tasks, we demonstrate better performance compared to MOPO [8] which uses ensembles for uncertainty estimation, while requiring less memory. This empirically supports our proposed benefits of using score matching for gradient-based offline MBRL. In addition, we outperform BC in many tasks, illustrating that we achieve better performance than pure imitation learning by incorporating rewards.
the resulting trajectory exploits the subtleties of the chosen reward function, planning an unrealistic trajectory which leads to poor rewards at runtime. Also, we show that ensembles are unable to stably converge back to the data manifold of images seen during training, instead getting trapped in local minima, planning unrealistic trajectories which translate to poor runtime rollouts on the true system (Fig.3.C). Moreover, due to the high dimensionality of this action space (\(u_{t}\in\mathbb{R}^{32^{2}}\)), we see that zeroth-order methods, such as CEM [49], lead to very low convergence rates (Fig.3.D) due to the large number of decision variables [51, 52]; again, this leads to poor reward seen at runtime.
Box-Pushing with Learned Marker Dynamics.To validate our method on the data-scarce regime of hardware, we prepare a box-pushing task from [44] where we leverage the quasistatic assumption [68] and treat the positions of the motion capture markers as if they are states. An interesting feature of this setup is that the markers implicitly live on a constraint manifold where the distance between each marker is fixed; we ask if using score matching can stabilize the rollouts of prediction to obey this implicit constraint, similar to how diffusion stabilizes back to the data manifold [41]. To test this, we collect about \(100\) demonstrations of the box being pushed to different positions, which lead to \(750\) samples of data tuples \((x_{t},u_{t},x_{t+1})\). We aim to show that i) SGP can enforce implicit constraints within the data, ii) distance to data acts as a successful uncertainty metric in data-scarce offline MBRL, and iii) we can use the reward to change the task from imitation data.
Our results in Fig.4 demonstrate that SGP successfully imposes implicit constraints on the data. Not only does minimization of uncertainty in the absence of rewards result in stabilization to the marker position constraints (Fig.4.B), but the rollouts also become considerably more stable when we use the distance to data penalty (Fig.4.C). In contrast, MBRL with \(\beta=0\) destroys the keypoint structure as dynamics are rolled out, resulting in suboptimal performance (Fig.4.D). Finally, we demonstrate through Fig.4.F,G,H that we can use rewards to show goal-driven behavior to various goals from a single set of demonstration data, which behavior cloning is not capable of.
## 6 Conclusion and Discussion of Limitations
We proposed SGP, which is a _first-order_ offline MBRL planning algorithm that learns gradients of distance to data with score-matching techniques, and solves planning problems that jointly maximize reward and data likelihood. Through empirical experiments, we showed that SGP beats baselines of zeroth-order methods and ensembles, has comparable performance with state-of-the-art offline RL algorithms, and scales to pixel-space action spaces with up to \(15,360\) decision variables for planning.
Figure 4: Visualization of the box-pushing experiment in **A**, where data was collected from 100 demonstrations illustrated in **E**. We show that SGP successfully stabilizes markers back to their implicit constraints (**B**), which allows better predictions (**C**) than vanilla MBRL (**D**). We additionally use the reward to move the goal to three different regions: down (**F**), middle (**G**), and up (**H**), where vanilla MBRL fails and which behavior cloning is not capable of.
We conclude with listing some limitations of our approach. Unlike ensembles, our method by construction discourages extrapolation, which can be a limitation when the networks jointly recover meaningful inductive bias. We also believe that computation is a current bottleneck for realtime application of our MPC, which can take between 1-2 seconds per iteration due to gradient computations. We believe formulating our problem as policy search (Appendix B.4) might lead to considerable amortization of computation at inference time. Our algorithm is additionally subject to the shortcomings of single shooting, such as gradient instability for long horizons [52], or not being able to handle sparse rewards due to myopicness. We plan to further explore a direct transcription version of our problem by leveraging our connection to existing work [25]. Finally, we have not yet investigated the performance of our method in the presence of aleatoric uncertainty.
#### Acknowledgments
We would like to thank Pulkit Agrawal and Max Simchowitz for helpful discussions on the paper.
|
2305.15637 | Morphological Inflection: A Reality Check | Morphological inflection is a popular task in sub-word NLP with both
practical and cognitive applications. For years now, state-of-the-art systems
have reported high, but also highly variable, performance across data sets and
languages. We investigate the causes of this high performance and high
variability; we find several aspects of data set creation and evaluation which
systematically inflate performance and obfuscate differences between languages.
To improve generalizability and reliability of results, we propose new data
sampling and evaluation strategies that better reflect likely use-cases. Using
these new strategies, we make new observations on the generalization abilities
of current inflection systems. | Jordan Kodner, Sarah Payne, Salam Khalifa, Zoey Liu | 2023-05-25T01:27:29Z | http://arxiv.org/abs/2305.15637v1 | # Morphological Inflection: A Reality Check
###### Abstract
Morphological inflection is a popular task in sub-word NLP with both practical and cognitive applications. For years now, state-of-the-art systems have reported high, but also highly variable, performance across data sets and languages. We investigate the causes of this high performance and high variability; we find several aspects of data set creation and evaluation which systematically inflate performance and obfuscate differences between languages. To improve generalizability and reliability of results, we propose new data sampling and evaluation strategies that better reflect likely use-cases. Using these new strategies, we make new observations on the generalization abilities of current inflection systems.
## 1 Introduction
Morphological inflection is a task with wide-reaching applications in NLP, linguistics, and cognitive science. As the reverse of lemmatization, it is a critical part of natural language generation, particularly for languages with elaborate morphological systems (Bender, 2009; Oflazer and Saraclar, 2018). Since morphological inflection is a particular type of well-defined regular string-to-string mapping problem (Roark and Sproat, 2007; Chandlee, 2017), it is also useful for testing the properties of different neural network architectures. Within cognitive science and linguistics, computational models of inflection have a long history in arbitrating between competing theories of morphological representation and acquisition (surveyed in Pinker and Ullman, 2002; Seidenberg and Plaut, 2014), and inflection is often a focus of computational typology (Bjerva and Augenstein, 2018; Elsner et al., 2019).
However, despite the task's popularity, standard evaluation practices have significant weaknesses. We discuss three aspects of these practices which hamper investigators' ability to derive informative conclusions. **(1)** Uniform sampling, which creates unnatural train-test splits, **(2)** Evaluation of single data splits, which yields unstable model rankings, and **(3)** uncontrolled overlaps between train and test data components, which obscure diagnostic information about systems' ability to perform morphological generalizations.
### Practice 1: Uniform Sampling
Training and evaluation sets have been (with some exceptions) sampled uniformly by type from a corpus such as those available in the UniMorph Database (Kirov et al., 2018; McCarthy et al., 2020; Batsuren et al., 2022). While practical to implement for corpora that lack frequency information, uniform sampling is also unrealistic because morphological forms exhibit a highly skewed Zipfian distribution in any large text (Lignos and Yang, 2018). Thus, uniform sampling creates an unnatural bias towards low-frequency types. Since high frequency is correlated with irregularity across many but not all languages (Bybee, 1991; Fratini et al., 2014; Wu et al., 2019), this creates a bias towards more regular and reliable training items.
We provide two alternatives for producing realistic or challenging data sets: **(1)** a frequency-weighted sampling strategy to achieve a more realistic distribution of out-of-vocabulary (OOV) lemmas and inflectional categories and better match practical use-cases or input during child language acquisition, and **(2)** a sampling strategy that explicitly balances OOV lemmas and inflectional categories in order to directly evaluate models' generalization ability along these dimensions.
Figure 1: The four logically possible train-eval overlap types if evaluation data consists of (lemma, feature set) pairs: both, featsOnly, lemmaOnly, neither, as well as featsAttested = both \(\cup\) featsOnly and featsNovel = lemmaOnly \(\cup\) neither.
### Practice 2: Single Data Splits
The current practice in inflection evaluation, employed, for example, in the SIGMORPHON, CoNLL-SIGMORPHON and SIGMORPHON-UniMorph shared tasks in recent years (Cotterell et al., 2016, 2017, 2018; McCarthy et al., 2019; Vylomova et al., 2020; Pimentel et al., 2021; Kodner et al., 2022), examines different models with one particular data set that is considered representative of the language or the inflection task at hand. This data set, and therefore all evaluation, usually consists of one pre-defined train-(dev-)test split.
However, this method is problematic because it implicitly assumes that the results from a single split are informative and generalizable. In reality, this assumption is untenable, particularly when facing severe data limitation (Liu and Prud'hommeaux, 2022), as is the case for the majority of languages in the world (cf. Blasi et al., 2022): In UniMorph 4, for example, data set size varies significantly across languages, with the smallest, Manx (Celtic, IE), containing only one lemma with 14 inflected forms, and the largest, Czech (Slavic, IE) containing approximately 800,000 lemmas with 50.3 million forms. If the performance on a single split is not necessarily representative, then the original model ranking derived from the one particular data split might also not generalize well.
The concerns outlined above were demonstrated in Liu and Prud'hommeaux (2022), which investigated model generalizability in low-resource morphological segmentation. Using data from 11 languages, they provided evidence that: **(1)** there are major differences in the numerical performance and rankings of each evaluated model type when using different splits from the same data set, and **(2)** even within a single split, large performance variability can arise for each model type when it is trained using different random seeds. These findings illustrate that common methods of model evaluation can lead to largely coincidental conclusions. We extend this approach to morphological inflection by applying multiple data splits, and evaluating variability between splits.
### Practice 3: Uncontrolled Overlaps
The typical morphological inflection task paradigm presents (lemma, inflected form, feature set) triples during training and asks a system to predict inflected forms from (lemma, feature set) pairs during evaluation. Note that since the lemma and feature set can be combined independently, it is possible for either lemmas or feature sets that appeared during training to reappear during test without any individual triple violating train-on-test. Test pairs with OOV lemmas or feature sets require a system to generalize along different morphological dimensions. Performance is likely related to the relative rates of OOV lemmas and feature sets in the evaluation split, yet existing sampling strategies generally leave these variables uncontrolled.
We observe that uncontrolled OOV rates vary dramatically between different sampled data splits, and that uncontrolled sampling biases test sets towards "easier" items with in-vocabulary lemmas and feature sets. To remedy this, we argue that performance should be reported independently for items with each lemma/feature set overlap type regardless of sampling strategy. Furthermore, if a project's research goal is to evaluate the generalization ability of a model, lemma/feature set overlap-aware sampling should be used to ensure that a sufficient number of test items of each overlap type are present.
## 2 Defining Overlap
Morphological inflection requires generalization over two primary dimensions: to new lemmas ("_If I have witnessed the 2pl imperfective subjunctive with other verbs, how do I apply that to new verb X?_") and to new inflectional categories ("_If I have seen X inflected in several other categories, how do I create the 2pl imperfect subjunctive of X?_"). Because of the sparsity of morphological inflections in language use (Chan, 2008), both types of generalization are necessary during language acquisition as well as deployment of computational models.
As with many linguistic phenomena, the attestation of inflected forms follows an extremely sparse and skewed long-tailed distribution, as do attested lemmas ranked by the proportions of their potential paradigms that are actually attested (_paradigm saturation_; PS), and inflectional categories ranked by the number of lemmas with which they occur (Chan, 2008). For example, the median PS for Spanish verbs in millions of tokens of child-directed speech is equivalent to _two_ of its three dozen possible forms, and the 2nd person plural imperfect subjunctive only occurs with two lemmas (cf. Lignos and Yang, 2018; Kodner, 2022).
Given the importance of both types of generalization, it is necessary to evaluate both to assess the abilities of a morphological learning model. In the evaluation made popular by the SIGMORPHON shared tasks, models are asked to predict inflected forms given (lemma, feature set) pairs, where feature sets can be seen as corresponding to inflectional categories or paradigm cells. Generalization across lemmas is required when an evaluation pair contains a lemma that was out-of-vocabulary (OOV) in training, and generalization across categories is required when an evaluation pair contains a feature set that was OOV. In all, there are four logically possible licit types of evaluation pairs distinguished by their lemma and feature overlap with training. These are expressed visually in Figure 1 along with two types which are unions of the other types:
**both Overlap:** Both the lemma and feature set of an evaluation pair are attested in the training set (but not together in the same triple).
**lemmaOnly Overlap:** An eval pair's lemma is attested in training, but its feature set is novel.
**featsOnly Overlap:** An eval pair's feature set is attested in training, but its lemma is novel.
**neither Overlap:** An evaluation pair is entirely unattested in training. Both its lemma and features are novel.
**featsAttested:** An eval pair's feature set is attested in training (both \(\cup\) featsOnly).
**featsNovel:** An eval pair's feature set is novel (lemmaOnly \(\cup\) neither).
For a concrete illustration, consider the training and evaluation sets provided in (1)-(2). Each evaluation pair exhibits a different kind of overlap.
1. **Example Training Set**
t0: see   seeing   V;V.PTCP;PRS
t1: sit   sat      V;PST

2. **Example Evaluation Set**
e0: see   V;PST        <-- both
e1: sit   V;NFIN       <-- lemmaOnly
e2: eat   V;PST        <-- featsOnly
e3: run   V;PRS;3;SG   <-- neither
featsAttested = {e0, e2}
featsNovel = {e1, e3}
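The partition can be computed mechanically from the training vocabulary. The short function below (a straightforward sketch, not tooling released with this work) classifies evaluation pairs by overlap type and reproduces the annotations in example (2).

```python
def overlap_type(lemma, feats, train_lemmas, train_featsets):
    """Classify an evaluation (lemma, feature set) pair by its overlap with training."""
    if lemma in train_lemmas and feats in train_featsets:
        return "both"
    if lemma in train_lemmas:
        return "lemmaOnly"
    if feats in train_featsets:
        return "featsOnly"
    return "neither"

train = [("see", "seeing", "V;V.PTCP;PRS"), ("sit", "sat", "V;PST")]   # example (1)
train_lemmas = {lemma for lemma, _, _ in train}
train_featsets = {feats for _, _, feats in train}
evals = [("see", "V;PST"), ("sit", "V;NFIN"), ("eat", "V;PST"), ("run", "V;PRS;3;SG")]  # example (2)
print([overlap_type(l, f, train_lemmas, train_featsets) for l, f in evals])
# -> ['both', 'lemmaOnly', 'featsOnly', 'neither']
```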
Computational work in morphological inflection has generally ignored these dimensions of evaluation. In the shared task, the four overlap types were uncontrolled before 2021, which contained one partial evaluation on featsOnly \(\cup\) neither items. However, recognition of the value of these overlap types has grown recently. Goldman et al. (2022) showed that four models consistently struggle to generalize across lemmas, concluding that test sets should avoid lemma overlap altogether. However, this proposal removes the option to contrast performance on seen and unseen lemmas. Furthermore, they did not control for or evaluate feature overlap, so both vs. lemmaOnly and featsOnly vs. neither also cannot be distinguished. (3) summarizes their partition scheme, which distinguishes two overlap types. We call these lemmaAttested (= both \(\cup\) lemmaOnly) and lemmaNovel (= featsOnly \(\cup\) neither).
3. **Goldman et al. (2022) Partition Types**
e0: sit   V;PST        <-- lemmaAttested
e1: see   V;NFIN       <-- lemmaAttested
e2: eat   V;PST        <-- lemmaNovel
e3: run   V;PRS;3;SG   <-- lemmaNovel
The 2022 SIGMORPHON-UniMorph shared task was the first to report results on all four overlap types (both, featsOnly, lemmaOnly, neither). Every system submitted to the shared task achieved much better performance on in-vocabulary feature sets (both and featsOnly) than OOV feature sets (lemmaOnly or neither). This discrepancy even held for languages for which a model should be able to generalize: highly regular agglutinative morphology for which this type of generalization is often transparent. On the other hand, lemma attestation produced a much smaller discrepancy. Following these observations, we focus our investigation on the four logical overlap types with extra emphasis on the featsAttested vs. featsNovel dichotomy. We address agglutinative languages specifically in Section 5.3
## 3 Data Sources and Preparation
We follow prior literature in providing training and evaluation data in UniMorph's format. Data sets were sampled from UniMorph 4 (Batsuren et al., 2022) and 3 (McCarthy et al., 2020)1 augmented with frequencies from running text corpora. When possible, frequencies were drawn from child-directed speech (CDS) corpora from the CHILDES database [14], since one possible downstream application of the morphological inflection task is contribution to the computational cognitive science of language acquisition. CHILDES lemma and morphological annotations were converted into UniMorph format and intersected with UniMorph to create frequency lists.2
Footnote 2: All data and code is available at [https://github.com/jkodner05/ACL2023_RealityCheck](https://github.com/jkodner05/ACL2023_RealityCheck).
### Languages
Languages were prioritized for typological diversity and accessibility of text corpora. Quantitative summaries of our frequency+UniMorph data sets are provided in Appendix B.
**Arabic (Semitic, AA):** Modern Standard Arabic frequencies were drawn from the diacritized and morphologically annotated Penn Arabic Treebank [13] and intersected with UniMorph 4 ara\(\cup\)ara\(\cup\)ara\(\_\)new. Diacritized text is a requirement because orthographic forms drawn from undiacritized text are massively morphologically ambiguous. The text in the CHILDES Arabic corpora is undiacritized and thus unusable.
**German (Germanic, IE):** German was drawn from the Leo Corpus [1], the only morphologically annotated German corpus in CHILDES, and intersected with UniMorph 3+4. Only nouns and verbs were extracted because annotation for adjectives is inconsistent.
**English (Germanic, IE):** English was included because it is heavily studied despite its relatively sparse morphology. Data was extracted from all morphologically annotated CHILDES English-NA corpora and intersected with UniMorph 3+4.3 Only nouns and verbs were extracted due to inconsistent adjective annotation in both data sources.
Footnote 3: A full list of utilized English and Spanish CHILDES corpora is provided in Appendix A.
**Spanish (Romance, IE):** Spanish exhibits a variety of fusional and agglutinative patterns. Data was extracted from all morphologically annotated Spanish CHILDES corpora intersected with Spanish UniMorph 3+4. Non-Spanish vocabulary was removed by intersecting with UniMorph. Only nouns and verbs were extracted.
**Swahili (Bantu, Niger-Congo):** Swahili morphology is highly regular and agglutinative with very large paradigms. Frequencies were drawn from Swahili Wikipedia dump 20221201 accessed through Huggingface [15] and intersected with UniMorph 4 swc\(\cup\)swc\(\_\)sm. In cases where mapping inflected forms to UniMorph creates ambiguity due to syncretism, frequency was divided evenly across each triple sharing the inflected form. This ensured that the frequencies of inflected forms remain consistent with Wikipedia. Intersecting with UniMorph removed the large amount of non-Swahili vocabulary in the Wikipedia text.
**Turkish (Turkic):** Turkish is also highly regular and agglutinative with very large paradigms. Frequencies were drawn from Turkish Wikipedia dump 20221201 accessed through Huggingface, intersected with UniMorph 4, and processed identically to Swahili.
### Data Splits
We employed three distinct sampling strategies to generate small (400 items) and large (1600) training, small (100) and large (400) fine-tuning, development (500), and test (1000) sets for each language.4 Small training and fine-tuning are subsets of large training and fine-tuning. Each splitting strategy was applied five times with unique random seeds to produce distinct data sets.
Footnote 4: Swahili large train and large fine-tune contain 800 and 200 items respectively due to the limited size of UniMorph.
**Uniform:** Raw UniMorph 3+4 corpora were partitioned uniformly at random. This approach is most similar to that employed by SIGMORPHON shared tasks, except for 2017 and 2022.
**Weighted:** Identical to Uniform except splits were partitioned at random weighted by frequency. Small training+fine-tuning were sampled first, then additional items were sampled to create large training+fine-tuning. Training and fine-tuning sets were then split uniformly at random. Dev+test was next sampled by weight and then were separated uniformly. This frequency-weighted sampling is reminiscent of the 2017 shared task: it strongly biases the small training set towards high-frequency items and dev+test towards low-frequency items. Since most UniMorph forms do not occur in our corpora due to morphological sparsity, most triples had zero weight and were never sampled.
**OverlapAware:** Similar to the 2022 SIGMORPHON shared task. It enforces a maximum proportion of featsAttested pairs in the test set relative to train+fine-tuning: as close to 50% as possible without exceeding it. This ensures that there is ample representation of each overlap type in test. It is adversarial, since featsNovel pairs are expected to be more challenging than featsAttested pairs. This process also tends to increase the proportion of lemmaOnly items in the test set. Only items with non-zero frequency were sampled.
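A simplified greedy sketch of this kind of constrained sampling is given below; the candidate pool, sizes, and tie-breaking are illustrative, and the actual splitting procedure may differ in detail.

```python
import random

def overlap_aware_test(pool, train_featsets, n_test, max_attested=0.5, seed=0):
    """Greedy sketch of one way to cap the featsAttested share of a test set at
    max_attested relative to fixed training feature sets.  `pool` is a list of
    candidate (lemma, feature set) pairs."""
    rng = random.Random(seed)
    attested = [p for p in pool if p[1] in train_featsets]
    novel = [p for p in pool if p[1] not in train_featsets]
    rng.shuffle(attested)
    rng.shuffle(novel)
    test, n_att = [], 0
    while len(test) < n_test and (attested or novel):
        prefer_attested = attested and (n_att + 1) / (len(test) + 1) <= max_attested
        src = attested if prefer_attested else (novel if novel else attested)
        item = src.pop()
        n_att += item[1] in train_featsets
        test.append(item)
    return test
```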
Uniform produces a heavy bias towards lower frequency words. For all languages and splits, the median frequency of sampled items is actually zero: that is, the majority of sampled items were not attested in our corpora. This is a consequence of the extreme sparsity of morphological forms discussed in Section 2. As a consequence, overlap between splits from different seeds is orders of magnitude lower for Uniform than the other strategies. Weighted achieves the expected high-frequency bias in training sets relative to test sets.
Table 1 provides average means and standard deviations for the proportion of featsAttested and featsNovel in test sets relative to small and large train. OverlapAware consistently achieves a roughly 50-50 split with low variability across languages and seeds. The other strategies bias test sets heavily towards featsAttested with high variance across languages and seeds.5
Footnote 5: See Appendix B for breakdowns by language, training size, and overlap partitions.
## 4 Experimental Setup
One non-neural and three neural systems were evaluated. These were chosen based on their availability and performance in recent shared tasks:
**CHR-TRM** (Wu et al., 2021) is a character-level transformer that was used as a baseline in 2021 and 2022. We used the hyper-parameters suggested by the original authors for small training conditions.
**CLUZH-GR** and **CLUZH-B4** (Wehrli et al., 2022) are variants of a character-level transducer which substantially outperformed chr-trm in the 2022 shared task. The results submitted for the shared task are from an elaborate ensemble model optimized for each language. For this work, we evaluate two published variants with consistent hyper-parameters across languages: cluzh-gr with greedy decoding and cluzh-b4 with beam decoding (beam size = 4).
**nonneur** (Cotterell et al., 2017) has been used as a baseline in SIGMORPHON shared tasks since 2017. It heuristically extracts transformations between lemmas and inflected forms and applies a majority classifier conditioned on the associated feature sets. nonneur was trained on combined training and fine-tuning sets so that each architecture was exposed to the same amount of data.
## 5 Results
This section presents our analyses of the results. All evaluations report exact match accuracy. _Overall accuracy_ refers to average accuracy on an entire evaluation set. _Average overall accuracy_ refers to the mean of overall accuracy over all five seeds. See Appendix C for full breakdowns by language and architecture.
### Effect of Training Size
We begin by comparing average overall accuracy for each training size. All reported analyses focus on test, but there were no observable qualitative differences in behavior between dev and test. We summarize the results in Table 2, broken down by overlap partition and sampling strategy. The large training size consistently leads to higher accuracies than small training. Across languages, the average accuracy score difference between the two training sizes is 9.52%. Taking Arabic as an illustrative example, the score difference between the two training sizes ranges from 1.74% to 19.32% depending on model type and splitting strategy, with an average of 12.05%.
| **Test vs S Train** | μ %featsAttested | σ |
|---|---|---|
| Uniform | 80.33% | 19.50% |
| Weighted | 90.44 | 11.13 |
| OverlapAware | 48.81 | 0.98 |

| **Test vs L Train** | μ %featsAttested | σ |
|---|---|---|
| Uniform | 96.17% | 5.55% |
| Weighted | 95.36 | 7.28 |
| OverlapAware | 49.92 | 0.17 |

Table 1: Language-by-language average mean percentage and standard deviation for the proportion of featsAttested in test relative to small and large training. %featsNovel = 100 - %featsAttested.
| **Test vs S Train** | featsAttested | featsNovel |
|---|---|---|
| Uniform | 70.47% | 33.57% |
| Weighted | 79.25 | 22.77 |
| OverlapAware | 79.60 | 31.13 |

| **Test vs L Train** | featsAttested | featsNovel |
|---|---|---|
| Uniform | 80.00% | 55.57% |
| Weighted | 85.94 | 23.74 |
| OverlapAware | 86.22 | 35.51 |

Table 2: Overall accuracy across languages by overlap type in test.
### Effect of Sampling Strategy
We next turn to measuring the effect of sampling strategy on overall accuracy. Figure 2 provides a visualization of accuracy by sampling strategy across seeds, broken down by training size, language, and model type. Using Arabic as an illustration, for large training, Weighted sampling leads to the highest average overall accuracy across model types (77.76%), while OverlapAware sampling yields the lowest (61.06%). Comparing the three sampling strategies for each of the four model types, Weighted consistently results in the highest accuracy, except for cluzh-b4, where Uniform sampling (83.84%) performs slightly better than Weighted (83.82%). We make similar observations for small training: Weighted and OverlapAware result in the highest and the lowest average overall accuracy, respectively, across model types for Arabic (68.82% vs. 47.81%), and Weighted sampling leads to higher accuracy than the other two strategies for every model type other than chr-trm, where the result from Uniform sampling (71.90%) is again slightly higher than that of Weighted (71.60%).
When considering other languages, we also find some variation. Weighted sampling also yields the highest average accuracy scores across model types for Arabic, German, Spanish, and Turkish for both training sizes, except for Spanish under the large training condition with cluzh-gr, where Uniform leads. In contrast, Uniform consistently results in the highest average accuracy on English and Swahili for both training sizes.
Across languages, the average accuracy from Weighted is the highest for both large (83.75%) and small (74.22%) training sizes, followed by Uniform (large: 79.20%, small: 66.16%). OverlapAware always yields the lowest accuracy. These observations align with our expectations about the adversarial nature of OverlapAware, in which the more challenging featsNovel items (Table 2) constitute a much larger proportion of the test set (Table 1).
### Effect of Overlap
We now provide an analysis of accuracy scores by overlap partition. Figure 3 provides a visualization of accuracy by partition across seeds, broken down by training size, language, and model type. Using Arabic again as an illustration, the average accuracy across model types and sampling strategies for large training is much higher for featsAttested (77.70%) than for featsNovel (41.92%); somewhat higher accuracy is achieved for both (79.53%) than for featsOnly (77.28%), and higher accuracy is achieved for lemmaOnly (49.12%) than for neither (41.92%). This ranking is consistent across model types, sampling strategies, and training sizes. Scores from both and featsOnly are also higher than those from lemmaOnly and neither.
These patterns hold across languages. Specifically, we observe two general tendencies. First, the accuracy averaged across model types and sampling strategies is always substantially higher for featsAttested than it is for featsNovel; the average accuracy difference between the two is
Figure 2: Overall accuracy for each language/seed by training size, sampling strategy, and model type.
49.75% for large training and 48.02% for small training. This is reflected in a full breakdown by overlap type: higher accuracy is consistently achieved for both and featsOnly than for neither and lemmaOnly. This large asymmetry corresponds to our expectations regarding the effect of feature overlap on performance.
We provide three sub-analyses to further investigate this asymmetry and compare it with the lemma-based division advocated for by [1]. First, we compute the average accuracy difference between lemmaAttested (both ∪ lemmaOnly) and lemmaNovel (featsOnly ∪ neither). The score difference between lemmaAttested and lemmaNovel is less than 2% averaged across languages for both training sizes, which is an order of magnitude smaller than the difference between featsAttested and featsNovel. This trend is consistent with the results of the 2022 SIGMORPHON shared task, which also found a much greater impact of feature set attestation than lemma attestation.
Second, we measure the correlation between the proportion of featsAttested items (the number of featsAttested items divided by the size of the dev or test set) and overall accuracy (average accuracy on an entire dev or test set), as well as between the proportion of lemmaAttested items and overall accuracy. We used Spearman's \(\rho\), which assesses if there is any monotonic (not necessarily linear) relationship between the two variables.6 If \(\rho\) between an overlap type and overall accuracy is high, it would suggest that the distribution of overlaps is an important driver of performance. lemmaAttested shows little correlation (small: 0.01, large: -0.10). However, we find substantial positive correlations for featsAttested (small: 0.69, large: 0.68).
Footnote 6: \(\rho\) falls in the range [-1,1], where -1 is a perfect negative correlation and 1 is a perfect positive correlation.
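The correlations reported here can be reproduced with SciPy; the two lists below are hypothetical per-split values (one entry per language/seed/strategy combination), not the actual data.

```python
from scipy.stats import spearmanr

# Hypothetical per-split values: proportion of featsAttested items and overall accuracy.
prop_feats_attested = [0.80, 0.92, 0.49, 0.96, 0.95, 0.50]
overall_accuracy    = [0.66, 0.74, 0.52, 0.80, 0.83, 0.58]

rho, p_value = spearmanr(prop_feats_attested, overall_accuracy)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```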
Third, we compute the correlation between the accuracy score of individual partitions and the overall accuracy score on Uniform and Weighted vs. on OverlapAware. This demonstrates to what extent evaluation results based on each overlap partition resemble those captured by the overall accuracy, and how this differs when overlaps are controlled during sampling. If the correlation is small, it suggests that the performance on a particular overlap partition is largely independent of the others and should be evaluated independently.
When overlaps are not explicitly controlled, correlations are particularly strong for featsAttested because this partition makes up a large majority of the test set (Table 3). These partitions are also the ones that tend to show the highest performance, which is then reflected in the overall accuracy. However, for OverlapAware, correlations are higher between overall accuracy and the challenging partitions: featsNovel, lemmaOnly, and neither. They are also higher not only for featsNovel, but also for lemmaAttested and lemmaNovel, even though these overlaps were not explicitly controlled. This demonstrates that OverlapAware sampling better balances individual partitions in its overall accuracy scores and can be expected to produce
Figure 3: Accuracy on OverlapAware splits for each partition/seed by training size, language, and model type. featsAttested = both (green) and featsOnly (gold). featsNovel = lemmaOnly (violet) and neither (red).
a more challenging evaluation. However, all partitions should be evaluated regardless of sampling strategy.
Up to this point, we have considered all languages in the analysis. However, whether or not it is reasonable to expect a system to achieve high accuracy on featsNovel items varies typologically. For languages with highly regular and agglutinative morphologies, such as Swahili and Turkish, each feature in a feature set roughly corresponds to a single affix in a certain order with a limited number of allomorphs. For these languages, this dimension of generalization should often be straightforward. For languages with mixed systems, like Spanish and Arabic, and languages with fusional systems like English, the individual members of a feature set often do not have direct bearing on the inflected form. For these languages, generalization to a novel feature set is sometimes impossible when it cannot be inferred from its component features. The same problem applies to lemmas with erratic stem changes or suppletion.
Thus, if a model type can generalize to novel feature sets, one would expect that the accuracy gap between featsAttested and featsNovel would be lower for Swahili and Turkish than for the other languages. However, the gaps for these languages are actually larger than for German or Arabic. One would also expect the correlation between the proportion of featsAttested items in the data and overall accuracy to be lower for Swahili and Turkish; however, this is not borne out either. These findings, provided in Table 4, reveal that current leading inflection models do not necessarily generalize well to novel feature sets, even in precisely the cases where they should be able to.
### Model Ranking
In this section, we analyze how performance varies across the four model types. We first compare model performance based on the average overall accuracy. Averaged across the six languages, cluzh-b4 ranks among the highest, while nonneur consistently achieves the lowest performance.
```
large: cluzh-b4 (78.32%) > chr-trm (78.07%) > cluzh-gr (76.17%) > nonneur (65.82%)
small: cluzh-b4 (68.58%) > cluzh-gr (67.97%) > chr-trm (64.76%) > nonneur (58.97%)
```
Model rankings for individual languages are much more variable, especially for large training. There is not a single model ranking that holds for every language. While cluzh-b4 yields the best performance for three languages (German, Spanish, and Turkish), chr-trm outperforms other model types for Arabic and Swahili, and nonneur leads to the highest accuracy for English. There is less variation in model rankings for small training; the same model ranking was observed for German, English, and Spanish (nonneur \(>\)cluzh-b4 \(>\)cluzh-gr \(>\)chr-trm). Notably, for each individual language, the model rankings were always inconsistent between the two training sizes.
Several trends emerge in model rankings by overlap partition. First, the model rankings based on the overall accuracy do not hold for the overlap partitions except for Arabic and Swahili large training. Second, within each overlap partition, model rankings are more stable across languages for small train than large. Third, on average, cluzh-b4 outperforms the other model types on partitions with feature overlap whereas chr-trm leads on partitions without feature overlap. These tendencies resonate with our proposal in Section 2: future models of morphological inflection should be evaluated based on alternative metrics in addition to
| Train Size | Language | Avg. Score Difference | featsAttested ~ Accuracy ρ |
|---|---|---|---|
| Small | Arabic | 33.00% | 0.57 |
| | Swahili | 40.04 | 0.63 |
| | German | 40.35 | 0.23 |
| | Turkish | 41.96 | 0.83 |
| | Spanish | 52.60 | 0.75 |
| | English | 74.10 | 0.66 |
| Large | Arabic | 35.79% | 0.44 |
| | German | 36.19 | 0.73 |
| | Swahili | 39.26 | 0.64 |
| | Turkish | 52.14 | 0.59 |
| | Spanish | 61.01 | 0.64 |
| | English | 80.17 | 0.82 |

Table 4: Avg. score difference between featsAttested and featsNovel and correlation between proportion featsAttested and overall accuracy by language/training size, ranked by score difference.
| Overlap Partition | Uncontrolled ρ | Controlled ρ |
|---|---|---|
| featsAttested | 0.97 | 0.45 |
| featsNovel | 0.16 | 0.93 |
| lemmaAttested | 0.84 | 0.88 |
| lemmaNovel | 0.78 | 0.82 |
| both | 0.89 | 0.49 |
| featsOnly | 0.73 | 0.21 |
| lemmaOnly | 0.24 | 0.89 |
| neither | -0.04 | 0.85 |

Table 3: Correlation between average accuracy for each overlap partition and average overall accuracy across the six languages. Uncontrolled = Weighted and Uniform. Controlled = OverlapAware.
overall accuracy. They also reveal different generalization strengths across models.
When comparing performance by sampling strategy, we found less variability in model rankings for each language. For example, with Uniform large training, two model rankings turn out to be the most frequent, each observed in two languages. Among the models, cluzh-b4 and chr-trm achieve the best performance. For small training, one model ranking holds for three out of the six languages (cluzh-b4 \(>\) cluzh-gr \(>\) chr-trm \(>\) nonneur). Considering both training sizes, there are no noticeable differences in terms of the most frequent model ranking across the three sampling strategies. For Uniform and Weighted, the neural systems are always ranked among the highest for both training sizes; yet for OverlapAware with small training, nonneur achieves the highest performance for German, English, and Spanish.
### Variability across Random Seeds
Analysis so far relies on accuracy scores averaged across random seeds. The final component of our analysis investigates how much variation arises due to random data sampling. Given the five random seeds for each combination of language, sampling strategy, overlap partition, and model type, we calculated the _score range_, which is the difference between the lowest and the highest overall accuracy, as well as the standard deviation of the accuracy scores across the seeds, which we refer to as _random seed variability_.
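Both statistics are simple functions of the five per-seed accuracies; a small sketch follows (the accuracy values and the choice of the sample estimator for the standard deviation are assumptions for illustration only).

```python
import statistics

def score_range(accuracies):
    """Difference between the highest and lowest overall accuracy across seeds."""
    return max(accuracies) - min(accuracies)

def random_seed_variability(accuracies):
    """Sample standard deviation of overall accuracy across seeds."""
    return statistics.stdev(accuracies)

per_seed_accuracy = [61.2, 65.8, 58.9, 70.1, 63.4]  # hypothetical values for five seeds
print(score_range(per_seed_accuracy), random_seed_variability(per_seed_accuracy))
```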
We first considered the score range for overall accuracy for each language. For large training, the mean score range spans from 4.41% for Arabic, to 8.38% for English; the mean random seed variability follows the same trend (1.73% to 3.54%). For every language, the score range and random seed variability for the large training size are consistently larger than those derived from small training. In both cases, score ranges are non-negligible.
Next, for each language, we analyze the average score range for each sampling strategy and model type separately. Comparing results from the three sampling strategies in Table 5, OverlapAware sampling consistently yields the highest score range and random seed variability. This indicates that OverlapAware, despite exhibiting the least variability in overlap partition sizes, is also the most variable in terms of model performance. This likely suggests that it is not just feature set attestation in general, but also exactly which feature sets that happen to appear in train vs. test drive performance. Finally, when looking at results for each individual model type, cluzh-gr demonstrates the most variable performance. Its average score range (9.47% for large training, 7.94% for small) and its average random seed variability (4.03% for large training, 3.31% for small) end up being the highest.
## 6 Conclusions
We investigated the roles that sampling strategy, random seeds, and overlap types play in evaluating and analyzing the results of morphological inflection tasks and conclude that common practices leave much to be desired. We argue for frequency-weighted splitting to achieve more realistic train-test distributions and feature/lemma overlap-aware sampling for directly investigating the generalization abilities of different models. The high score range observed for overlap-aware sampling relative to other strategies suggests that which feature sets happen to appear in train vs. test play a major role in the ability of a model to generalize, though future work would need to confirm this.
Regardless of sampling strategy, evaluation items of each overlap type should be used in addition to an overall analysis. The evaluation in this work reveals that all model types under investigation struggle to generalize to unseen feature sets, even for languages where that should be possible, a fact that has been overlooked in prior studies. Finally, results drawn from one data split are unlikely to be representative, so multiple splits should be made with different random seeds and compared, particularly for shared tasks and leader boards where final model rankings matter.
## Limitations
Our suggested approaches have two primary practical limitations: First, Weighted sampling is
| Train Size | Sampling Strategy | Score Range | Random Seed Variability |
|---|---|---|---|
| Small | Uniform | 4.51% | 1.84% |
| | Weighted | 6.33 | 2.57 |
| | OverlapAware | 12.13 | 5.01 |
| Large | Uniform | 3.99% | 1.68% |
| | Weighted | 4.08 | 1.66 |
| | OverlapAware | 13.06 | 5.50 |

Table 5: Average score range and random seed variability across languages for each sampling strategy for both training sizes.
restricted to languages with available running text sources for extracting frequencies. A project on _extremely_ low-resource languages (e.g., Liu et al., 2022) may be restricted to Uniform and OverlapAware sampling. Second, as the number of seeds increases, so do requirements for training time and/or computing power. A shared task, for example, might limit itself to only a few seeds in order to assure on-time submissions. Future work would benefit from a wider selection of model architectures, along with more sampling strategies, and of course a wider sample of typologically diverse languages.
Notably, this work reproduces the effect observed in the SIGMORPHON 2022 shared task (Kodner et al., 2022), which found a substantial performance hit for featsNovel relative to featsAttested, but not lemmaNovel relative to lemmaAttested. However, both this work and the shared task fail to replicate the effect observed in Goldman et al. (2022), which reports a 95% performance hit on lemmaNovel vs. lemmaAttested. This may have something to do with differences in splitting algorithms, unmeasured feature overlap in Goldman et al. (2022), or choice of model architectures.
## Ethics Statement
To the best of our knowledge, all results published in this paper are accurate, and we have represented prior work fairly to the best of our abilities. All data sources are free and publicly available, except for the Penn Arabic Treebank (Maamouri et al., 2004), which is accessible through the LDC.7 No sensitive data was used which could violate individuals' privacy or confidentiality. Authorship and acknowledgements fairly reflect contributions.
Footnote 7: [https://catalog.ldc.upenn.edu/LDC2005T20](https://catalog.ldc.upenn.edu/LDC2005T20)
## Acknowledgements
We thank Charles Yang, Jeffrey Heinz, Mitch Marcus, and the audience at Stony Brook University AT-LaC for their helpful discussion. Experiments were performed on the SeaWulf HPC cluster maintained by RCC and the Institute for Advanced Computational Science (IACS) at Stony Brook University and made possible by National Science Foundation (NSF) grant No. 1531492. The second author gratefully acknowledges funding through the IACS Graduate Research Fellowship and the NSF Graduate Research Fellowship Program under NSF Grant No. 2234683.
|
2308.15813 | Knowledge-grounded Natural Language Recommendation Explanation | Explanations accompanied by a recommendation can assist users in
understanding the decision made by recommendation systems, which in turn
increases a user's confidence and trust in the system. Recently, research has
focused on generating natural language explanations in a human-readable format.
Thus far, the proposed approaches leverage item reviews written by users, which
are often subjective, sparse in language, and unable to account for new items
that have not been purchased or reviewed before. Instead, we aim to generate
fact-grounded recommendation explanations that are objectively described with
item features while implicitly considering a user's preferences, based on the
user's purchase history. To achieve this, we propose a knowledge graph (KG)
approach to natural language explainable recommendation. Our approach draws on
user-item features through a novel collaborative filtering-based KG
representation to produce fact-grounded, personalized explanations, while
jointly learning user-item representations for recommendation scoring.
Experimental results show that our approach consistently outperforms previous
state-of-the-art models on natural language explainable recommendation. | Anthony Colas, Jun Araki, Zhengyu Zhou, Bingqing Wang, Zhe Feng | 2023-08-30T07:36:12Z | http://arxiv.org/abs/2308.15813v1 | # Knowledge-grounded Natural Language Recommendation Explanation
###### Abstract
Explanations accompanied by a recommendation can assist users in understanding the decision made by recommendation systems, which in turn increases a user's confidence and trust in the system. Recently, research has focused on generating natural language explanations in a human-readable format. Thus far, the proposed approaches leverage item reviews written by users, which are often subjective, sparse in language, and unable to account for new items that have not been purchased or reviewed before. Instead, we aim to generate fact-grounded recommendation explanations that are objectively described with item features while implicitly considering a user's preferences, based on the user's purchase history. To achieve this, we propose a knowledge graph (KG) approach to natural language explainable recommendation. Our approach draws on user-item features through a novel collaborative filtering-based KG representation to produce fact-grounded, personalized explanations, while jointly learning user-item representations for recommendation scoring. Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation.
## 1 Introduction
Current approaches to natural language (NL) explainable recommendation focus on generating user reviews Chen et al. (2018); Wang et al. (2018); Li et al. (2020, 2021); Yang et al. (2021). Instead of providing a justification for the item recommendation, the models learn to output language that is commonly found in personal reviews. This reliance on reviews poses three problems: 1) The explanations are not objective, because users typically review items based on their sentiment Wu et al. (2018), 2) Reviews are often sparse, because they describe a user's own experience Asghar (2016), 3) Systems that rely on reviews cannot account for new items which have never been purchased before, nor can they provide justifications for item catalogs which may not have reviews available. Given this, it may be difficult for a user to reason as to why an item was recommended, hindering the user's experience Tintarev and Masthoff (2015). The user may then lose trust in such systems which do not provide objective and accurate explanations.
We propose **KnowRec**, a KG-grounded approach to natural language explainable recommendation which not only personalizes recommendations/explanations with user information, but also draws on facts about a particular item via its corresponding KG to generate objective, specific, and data-driven explanations for the recommended item. For example, given the movie "Paths of Glory", previous work aims to generate explanations such as "it's not the best military movie" and "good performances all around", which are subjective, not specific to a given movie, and relies on data from pre-existing reviews. Instead, by leveraging an item KG such as _<director, Stanley Kubrick>, <conflict, World War 1>_, _<country, France>_, a more objective and precise explanation can be produced such as: "A World War I French colonel defends three soldiers. Directed by Stanley Kubrick." The item features of 'World War I', 'colonel', and 'defends three soldiers' in the explanation objectively describe the movie, while they can implicitly reflect the user's preferences for war movies, based on his/her purchase history.
KnowRec is also more advantageous than prior work in terms of scalability to unpurchased items. Previously, KG-based recommendation systems have effectively addressed the cold-start problem by linking users and items through shared attributes Wang et al. (2019, 2020, 2021). Similarly, there exists a kind of cold-start problem for new items in recommendation explanation methods that rely on
reviews. KnowRec demonstrates KGs can help solve this problem through existing item-level features by adapting KG-to-text (Koncel-Kedziorski et al., 2019; Ke et al., 2021; Colas et al., 2022) elements into explainable recommendation, producing item-level explanations to justify a purchase. The KG-based approach is particularly important for recommendation scenarios in special domains where personal reviews are not available and the review-based approaches are impractical.
Our approach presents several algorithmic novelties. First, inspired by work on KG Recommendation (Wang et al., 2020) and KG-to-Text (Colas et al., 2022), we devise a novel user-item KG lexical representation, viewing the input through a collaborative filtering lens, where users are graphically represented via their previous purchases and connected to a given item KG. Our representation differs from previous work on explainable NL generation, which relies on ID and sparse keyword features. Previous work extracts keywords from reviews to represent the user and item, linearizing all such features to encode and produce an NL explanation (Li et al., 2020, 2022). Next, KnowRec adapts a graph attention encoder for the user-item representation via a new masking scheme. Finally, the encoded KG representation is simultaneously decoded into a textual explanation, while we dissociate the jointly learned user-item representation to compute a user-item similarity for recommendation scoring.
To evaluate our approach, we first devise a method of constructing \((KG,Text)\) pairs from product descriptions as described in Section 5, where we extract entities and relations for the item KGs. We construct two such datasets from publicly available recommendation datasets to evaluate our proposed model for both the explanation and recommendation tasks, and focus on natural language generation (NLG) metrics for the explanation task as in previous work. We adapt and compare previous baseline models for the recommendation explanation task as described in Section 6, where we substantially outperform previous models on explanation while achieving similar recommendation performance as models that rely on user and item ID-based features.
## 2 Related Work
### Explainable Recommendation
Previous works on NL explainable recommendation focus on generating user-provided reviews, where the output is typically short, subjective, and repetitive (Chen et al., 2018; Hou et al., 2019; Wang et al., 2018; Yang et al., 2021; Li et al., 2017, 2020, 2021; Hui et al., 2022). Extractive-based approaches have been proposed to score and select reviews as explanations (Chen et al., 2018; Li et al., 2019). Conversely, generative approaches (Yang et al., 2021; Li et al., 2017, 2020, 2021; Sun et al., 2020; Hui et al., 2022) leverage user/item features to generate new reviews as explanations. Currently, the task is still limited by review data, thus these models cannot adequately handle new items. Unlike previous work, we introduce KGs to the explainable recommendation task to provide objective, information-dense, specific explanations. Our approach can then handle new items which have not been reviewed yet.
Inspired by recent advancements in explainable recommendation models like (Li et al., 2021), we enhance BART (Lewis et al., 2020), renowned for graph-to-text tasks, to incorporate user-item knowledge graphs. This adaptation enables us to generate recommendation scores along with natural language explanations.
### Knowledge Graph Recommendation
Leveraging KGs for recommendation systems has gained increasing attention (Wang et al., 2019, 2020, 2021; Xie et al., 2021; Du et al., 2022). In neighborhood-based methods (Hamilton et al., 2017; Welling and Kipf, 2016; Velickovic et al., 2018), propagation is performed iteratively over the neighborhood information in a KG to update the user-item representation. While recent work has produced explanations via KGs, these works focus on structural explanations such as knowledge graph paths (Ma et al., 2019; Fu et al., 2020; Xian et al., 2019) and rules (Zhu et al., 2021; Chen et al., 2021; Shi et al., 2020), which are not as intuitive for users to understand. We focus on generating NL explanations, which has been shown to be a preferred type of explanation (Zhang et al., 2020). For a fair comparison, we compare to prior work that produces NL explanations. Unlike these works, we aim to generate NL explanations instead of using paths along the KG as explanations.
### Knowledge Graph-to-Text Generation
In KG-to-Text, pre-trained language models such as GPT-2 [11] and BART [12] have seen success in generating fluent and accurate verbalizations of KGs [13, 14, 15, 16]. We devise an encoder for user-item KGs and a decoder for both the generation and recommendation tasks. Specifically, we formulate a novel masking scheme for user-item KGs to structurally encode user and item features, while generating a recommendation score from their latent representations. Thus, our task is two-fold, fusing elements from the Graph-to-Text generation and KG recommendation domains.
## 3 Problem Formulation
Following prior work, we denote \(\mathcal{U}\) as a set of users, \(\mathcal{I}\) as a set of items, and the user-item interaction matrix as \(\mathbf{Y}\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\), where \(y_{uv}=1\) if user \(u\in\mathcal{U}\) and item \(v\in\mathcal{I}\) have interacted. Here, we represent user \(u\) as the user's purchase history \(u=\{v_{ui}\}\), where \(v_{ui}\) denotes the \(i\)-th item purchased by user \(u\) in the past. Next, we define a KG as a multi-relational graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of entity vertices and \(\mathcal{E}\subset\mathcal{V}\times\mathcal{R}\times\mathcal{V}\) is the set of edges connecting entities with a relation from \(\mathcal{R}\). Each item \(v\) has its own KG, \(g_{v}\), comprising an entity set \(\mathcal{V}_{v}\) and a relation set \(\mathcal{R}_{v}\) which contain features of \(v\). We devise a set of item-entity alignments \(\mathcal{A}=\{(v,e)|v\in\mathcal{I},e\in\mathcal{V}\}\), where \((v,e)\) indicates that item \(v\) is aligned with an entity \(e\).
Given a user \(u\) and an item \(v\) represented by its KG \(g_{v}\), the task is to generate an explanation of natural language sentences \(E_{u,v}\) as to why item \(v\) was recommended for the user \(u\). As in previous multi-task explainable recommendation models, KnowRec calculates a rating score \(r_{u,v}\) that measures \(u\)'s preference for \(v\). By jointly training on the recommendation and explanation generation, our model can contextualize the embeddings more adequately with training signals from both tasks.
## 4 Model
Figure 1 illustrates our model with the user-item graph constructed through collaborative filtering signals, an encoder, and inference functions for explanation generation and rating prediction.
### Input
The input of KnowRec comprises a user \(u\) represented by the user's purchase history \(\{v_{ui}\}\) and an item \(v\) represented by its KG \(g_{v}\), as introduced in Section 3. Let \(v_{c}\) denote the item currently considered by the system. The item \(v_{c}\) is aligned with one of the entities through \(\mathcal{A}\) and becomes the center node of \(g_{v}\), as shown in Figure 1.
Because our system leverages a Transformer-based encoder, we first linearize the input into a string. For the user \(u=\{v_{ui}\}\), we initialize it by mapping each purchased item \(v_{ui}\) into tokens of the item's name. For the item \(v\) represented by \(g_{v}\), we decompose \(g_{v}\) into a set of tuples \(\{t_{vj}\}\), where \(t_{vj}=(v_{c},r_{vj},n_{vj})\), \(n_{vj}\in\mathcal{V}_{v}\), and \(r_{vj}\in\mathcal{R}_{v}\). We linearize each tuple \(t_{vj}\) into a sequence of tokens using lexicalized names of the nodes and the relation. We then concatenate all the user tokens and the item tokens to form the full input sequence \(x\). For example, suppose the current item \(v_{c}\) is the book _Harry Potter_, the KG has a single tuple (_Harry Potter_, _author_, _J.K. Rowling_), and the user previously purchased two books _The Lord of the Rings_ and _The Little Prince_. In this case, input sequence \(x=\)_The Lord of the Rings The Little Prince Harry Potter author J.K. Rowling_.
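A minimal sketch of this linearization step, using plain Python data structures; the function and variable names are illustrative, not the paper's actual code.

```python
def linearize_input(purchase_history, item_kg_tuples):
    """Concatenate previously purchased item names with the (head, relation, tail)
    tuples of the current item's KG to form the input token sequence x."""
    user_part = " ".join(purchase_history)
    item_part = " ".join(" ".join(t) for t in item_kg_tuples)
    return f"{user_part} {item_part}"

x = linearize_input(
    ["The Lord of the Rings", "The Little Prince"],
    [("Harry Potter", "author", "J.K. Rowling")],
)
# x == "The Lord of the Rings The Little Prince Harry Potter author J.K. Rowling"
```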
We map the tokens to randomly initialized vectors or pre-trained word embeddings such as those
Figure 1: Illustration of KnowRec. 1) The User’s Item KG Representation Module. 2) The Global and User-Item Graph Attention Encoder. 3) The Output Module for rating prediction and explanation.
in BART (Lewis et al., 2020), obtaining \(\mathbf{X}_{0}=[\dots;\mathbf{V}_{ui};\dots;\mathbf{T}_{vj};\dots]\) where \(\mathbf{V}_{ui}\) and \(\mathbf{T}_{vj}\) are word vector representations of \(v_{ui}\) and \(t_{vj}\), respectively. Unlike previous work on KG recommendation (Wang et al., 2020) where users/items are represented via purchase history and propagated KG information, our system infuses KG components to provide a recommendation and its natural language explanation. Our system also differs from prior studies on explainable recommendation in that while they focus on reviews and thus encode users/items as random vectors with additional review-based sparse token features as auxiliary information (Li et al., 2021), we directly encapsulate KG information into the input representation.
### Encoder
**Collaborative KG Representation**. Because KnowRec outputs a natural language explanation grounded on KG facts, as well as a recommendation score for the user-item pair, we need to construct a user-item-linked KG to represent an input through its corresponding lexical graph feature. To do so, we leverage collaborative signals from \(\mathbf{Y}\), combining \(u\) with \(v\) by linking previously purchased products \(v_{ui}\) to the current item \(v_{c}\) from \(g_{v}\), forming a novel lexical user-item KG. Additionally, we connect all previously purchased items together in order to graphically model collaborative filtering effects for rating prediction, as illustrated in Figure 1. Note that the relations between previously purchased items and the current item do not require a lexical representation for our model. The resulting graph goes through the Transformer architecture, as described below.
**Global Attention**. Transformer architectures have recently been adopted for the personalized explainable recommendation task (Li et al., 2021). We similarly leverage Transformer encoder layers (Vaswani et al., 2017), referred to as Global Attention, to encode the input representation with self-attention as:
\[\mathbf{X}_{l} =\mathrm{Attn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax }\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V}, \tag{1}\] \[\mathbf{Q} =\mathbf{X}_{l-1}\mathbf{W}_{l}^{Q},\mathbf{K}=\mathbf{X}_{l-1} \mathbf{W}_{l}^{K},\] \[\mathbf{V} =\mathbf{X}_{l-1}\mathbf{W}_{l}^{V}\]
where \(\mathbf{X}_{l}\) is the output of the \(l\)-th layer in the encoder, and \(d_{k}\) is a tunable parameter. \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) represent the \(\mathbf{Q}\)uery, \(\mathbf{K}\)ey, and \(\mathbf{V}\)alue vectors, respectively, each of which is calculated with the corresponding parameter matrix \(\mathbf{W}\) in the \(l\)-th layer. Note that the transformer encoder may be initialized via a pre-trained language model.
**User-Item Graph Attention**. We further propose User-Item Graph Attention encoder layers, which compute graph-aware attention via a mask to capture the user-item graph's topological information, which runs in parallel with the Global Attention encoder layers.
We first extract the mask \(\mathbf{M}_{g}\in\mathbb{R}^{m\times m}\) from the user-item linked KG, where \(m\) is the number of relevant KG components, i.e., nodes and edges that are lexically expressed in the KG (edges between \(v_{ui}\) and \(v_{c}\) not included). In \(\mathbf{M}_{g}\), each row/column refers to a KG component. \(M_{ij}=0\) if there is a connection between component \(i\) and \(j\) (e.g., "J.K. Rowling" and "author") and \(-\infty\) otherwise. In addition, we assume all item components, i.e., the previous purchases and the current item, are mutually connected when devising \(\mathbf{M}_{g}\).
For each layer (referred to as the \(l\)-th layer), we then transfer its input \(\mathbf{X}_{l-1}\) into a component-wise representation \(\mathbf{X}_{l-1}^{g}\in\mathbb{R}^{m\times d}\), where \(d\) is the word embedding size. Motivated by Ke et al. (2021), we perform this transfer by employing a pooling layer that averages the vector representations of all the word tokens contained in the corresponding node/edge names per relevant KG component. With the transferred input \(\mathbf{X}_{l-1}^{g}\), we proceed to encode it using User-Item Graph Attention with the graph-topology-sensitive mask as follows:
\[\mathbf{\tilde{X}}_{l}^{g} =\mathrm{Attn}_{M}(\mathbf{Q}^{\prime},\mathbf{K}^{\prime}, \mathbf{V}^{\prime}) \tag{2}\] \[=\mathrm{softmax}\left(\frac{\mathbf{Q}^{\prime}\mathbf{K}^{ \prime\top}}{\sqrt{d_{k}}}+\mathbf{M}_{g}\right)\mathbf{V}^{\prime}.\]
where query \(\mathbf{Q}^{\prime}\), key \(\mathbf{K}^{\prime}\), and value \(\mathbf{V}^{\prime}\) are computed with the transferred input and learnable parameters in the same manner as Equation (1).
Lastly, we combine the outputs of the Global Attention encoder and the User-Item Graph Attention encoder in each layer. As the two outputs have different dimensions, we first expand \(\mathbf{\tilde{X}}_{l}^{g}\) to the same dimension of \(\mathbf{X}_{l}\) through a _gather_ operation, i.e., broadcasting each KG component-wise representation in \(\mathbf{\tilde{X}}_{l}^{g}\) to every encompassing word of the corresponding component and connecting those representations. We then add the expanded \(\mathbf{\tilde{X}}_{l}^{g}\) to \(\mathbf{X}_{l}\) through element-wise addition, generating the \(l\)-th encoding layer's output:
\[\mathbf{\tilde{X}}_{l}=gather(\mathbf{\tilde{X}}_{l}^{g})+\mathbf{X}_{l} \tag{3}\]
Note, in this section, we illustrate the Global Attention encoder, User-Item Attention encoder, and their combination with single-head attention. In practice, we implement both encoders with multi-head attention as in Vaswani et al. (2017).
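The following PyTorch sketch illustrates the masked User-Item Graph Attention of Eq. (2) and the gather/addition step of Eq. (3) under simplifying assumptions (single head, no batching); the component-to-token pooling is represented by a precomputed index vector, and all names and sizes are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def user_item_graph_attention(X_g, M_g, Wq, Wk, Wv):
    """Masked self-attention over the m component-wise vectors (Eq. 2).
    M_g[i, j] = 0 if components i and j are connected, -inf otherwise."""
    Q, K, V = X_g @ Wq, X_g @ Wk, X_g @ Wv
    scores = Q @ K.transpose(-2, -1) / Q.size(-1) ** 0.5 + M_g
    return F.softmax(scores, dim=-1) @ V

# Hypothetical sizes: 4 KG components, 9 word tokens, hidden size 8.
m, n_tok, d = 4, 9, 8
X_g = torch.randn(m, d)                      # component-wise input (after mean pooling)
M_g = torch.zeros(m, m)                      # fully connected here; use -inf for absent edges
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
comp_of_token = torch.tensor([0, 0, 1, 1, 1, 2, 3, 3, 3])  # token -> component index

X_tilde_g = user_item_graph_attention(X_g, M_g, Wq, Wk, Wv)
X_l = torch.randn(n_tok, d)                  # output of the parallel Global Attention layer
X_out = X_tilde_g[comp_of_token] + X_l       # gather to token level, then element-wise add (Eq. 3)
```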
### Rating Prediction
For the rating prediction task, we first separate and isolate user \(u\) and item \(v\) features via masking. Once isolated, we perform a mean pool on all their respective tokens and linearly project \(u\) and \(v\) to perform a dot-product between the two new vector representations as follows:
\[\begin{split}\mathbf{\tilde{x}}_{u}&=pool_{mean}( \mathbf{\tilde{X}}_{L}+\mathbf{m}_{u})\mathbf{W}^{u}\\ \mathbf{\tilde{x}}_{v}&=pool_{mean}(\mathbf{\tilde {X}}_{L}+\mathbf{m}_{v})\mathbf{W}^{v}\\ \hat{r}_{u,v}&=dot(\mathbf{\tilde{x}}_{u},\mathbf{ \tilde{x}}_{v}),\end{split} \tag{4}\]
where \(\mathbf{m}_{u}\) and \(\mathbf{m}_{v}\) are the user and item masks that denote which tokens belong to the user and item, \(\mathbf{W}\)s are learnable parameters, and \(L\) refers to the last layer of the encoder.
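A simplified sketch of the rating head in Eq. (4): the user and item tokens are pooled separately (boolean masks are used below, whereas the paper writes the masks additively), projected, and combined with a dot product. Sizes and weights are hypothetical.

```python
import torch

def rating_score(X_L, user_mask, item_mask, W_u, W_v):
    """Pool user and item token representations, project, and take their dot product (Eq. 4)."""
    x_u = X_L[user_mask].mean(dim=0) @ W_u
    x_v = X_L[item_mask].mean(dim=0) @ W_v
    return torch.dot(x_u, x_v)

X_L = torch.randn(9, 8)                                   # final encoder states for 9 tokens
user_mask = torch.tensor([1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.bool)
item_mask = ~user_mask                                    # remaining tokens belong to the item KG
W_u, W_v = torch.randn(8, 8), torch.randn(8, 8)
r_hat = rating_score(X_L, user_mask, item_mask, W_u, W_v)
```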
### Explanation Generation
Before generating a final output text for our explanation, we pass the representation through a fully connected linear layer as the encoder hidden state and decode the representation into its respective output tokens through an auto-regressive decoder, following previous work Lewis et al. (2020).
### Joint-learning Objective
As previously noted, our system consists of two outputs: a rating prediction score \(\hat{r}_{u,v}\) and natural language explanation \(E_{u,v}\) which justifies the rating by verbalizing the item's corresponding KG. We thus perform multi-task learning to learn both tasks and manually define regularization weights \(\lambda\), as in similar multi-task paradigms, to weight the two tasks. Taking \(\mathcal{L}_{r}\) and \(\mathcal{L}_{e}\) to represent the recommendation and explanation cost functions, respectively, the multi-task cost \(\mathcal{L}\) then becomes:
\[\mathcal{L}=\lambda_{r}\mathcal{L}_{r}+\lambda_{e}\mathcal{L}_{e}, \tag{5}\]
where \(\lambda_{r}\) and \(\lambda_{e}\) denote the rating prediction and explanation regularization weights, respectively.
We define \(\mathcal{L}_{r}\) using Mean Square Error (MSE) in line with conventional item recommendation and review-based explainable systems:
\[\mathcal{L}_{r}=\frac{1}{|\mathcal{U}||\mathcal{I}|}\sum_{u\in\mathcal{U},v\in\mathcal{I}}(r_{u,v}-\hat{r}_{u,v})^{2}, \tag{6}\]
where \(r_{u,v}\) denotes the ground-truth score.
Next, as in other NLG tasks Lewis et al. (2020); Zhang et al. (2020), we incorporate Negative Log-Likelihood (NLL) as the explanation's cost function \(\mathcal{L}_{e}\). Thus, we define \(\mathcal{L}_{e}\) as:
\[\mathcal{L}_{e}=\frac{1}{|\mathcal{U}||\mathcal{I}|}\sum_{u\in\mathcal{U},v\in\mathcal{I}}\frac{1}{|E_{u,v}|}\sum_{t=1}^{|E_{u,v}|}-\log p_{t}^{e_{t}} \tag{7}\]
where \(p_{t}^{e_{t}}\) is the probability of the decoded token \(e_{t}\) at time step \(t\).
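A compact sketch of the multi-task objective in Eqs. (5)-(7), assuming batched predicted ratings, gold ratings, decoder logits, and target token ids; padding positions are ignored in the NLL term, and all names are illustrative.

```python
import torch.nn.functional as F

def joint_loss(pred_ratings, gold_ratings, logits, target_ids, pad_id,
               lambda_r=1.0, lambda_e=1.0):
    """L = lambda_r * MSE over ratings (Eq. 6) + lambda_e * token-level NLL (Eq. 7)."""
    loss_r = F.mse_loss(pred_ratings, gold_ratings)
    loss_e = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # (batch * seq_len, vocab)
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
    return lambda_r * loss_r + lambda_e * loss_e   # Eq. (5)
```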
## 5 Dataset
Although KG-recommendation datasets exist, they do not contain any supervision signals to NL descriptions. Thus, to evaluate our explainable recommendation approach in a KG-aware setting and our KnowRec model, we introduce two new datasets based on the Amazon-Book and Amazon-Movie datasets He and McAuley (2016): (1) Book KG-Exp and (2) Movie KG-Exp.
Recall that our task requires an input KG along with an NL explanation and recommendation score. Because it is more efficient to extract KGs from text, rather than manually annotate each KG with text, we take a description-first approach, automatically extracting KG elements from the corresponding text. Given the currently available data, we leverage item descriptions as a proxy for the NL explanations, while constructing a user-item KG from an item's features and user's purchase history.
We first extract entities from a given item description via DBpedia Spotlight Mendes et al. (2011), a tool that detects mentions of DBpedia Auer et al. (2007) entities from NL text. We then query for each entity's most specific type and use those types as relations that connect the item to its corresponding entities. We construct a user KG via their purchase history, e.g. \([Purchase_{1},Purchase_{2},...Purchase_{n}]\), as a complete graph where each purchase is connected. Finally, we connect all the nodes of the user KG to the item KG, treating each user purchase as a one-hop neighbor of the current item. To ensure the KG-explanation correspondence, we filter out any sentences in the explanation in which no entities were found. To measure objectivity, we calculate the proportion of a given KG's entities that appear in the explanation, called entity coverage (EC) (defined in Appendix B.3). We summarize our dataset statistics in Table 1 and present a more comprehensive comparison in Appendix A.2.
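The entity coverage (EC) statistic referenced above and reported in the result tables can be sketched as a simple string-matching ratio; the exact matching criterion lives in Appendix B.3 of the paper, so the case-insensitive substring check below is an assumption for illustration.

```python
def entity_coverage(kg_entities, explanation):
    """Proportion of the item KG's entities whose surface form appears in the explanation."""
    text = explanation.lower()
    hits = sum(1 for entity in kg_entities if entity.lower() in text)
    return hits / len(kg_entities) if kg_entities else 0.0

ec = entity_coverage(
    ["Jules Verne", "magnetic storm"],
    "A scientist retraces Jules Verne's route after a magnetic storm.",
)  # -> 1.0
```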
## 6 Experiments
### Evaluation Metrics
We assess explainable recommendation following prior work: 1) on the recommendation performance and 2) on the explanation performance. For the explanation generation task, we employ standard natural language generation (NLG) metrics: BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). We measure diversity and the detail-oriented features of the generated sentences using Unique Sentence Ratio (USR) (Li et al., 2020, 2021), and use EC, instead of feature coverage ratio, for coverage due to our non-review-based explanations.
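For instance, USR reduces to the fraction of generated explanations that are unique; a one-function sketch is given below (the exact normalization follows Li et al. (2020), so treat this as an approximation).

```python
def unique_sentence_ratio(generated_explanations):
    """Number of distinct generated sentences divided by the total number generated."""
    return len(set(generated_explanations)) / len(generated_explanations)

usr = unique_sentence_ratio(
    ["a war film by Kubrick", "a war film by Kubrick", "a fantasy novel"]
)  # -> 2/3
```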
### Implementation
We train our newly proposed KnowRec model on two settings of the Book and Movie KG-Exp datasets, a full training set and a few-shot setting, where 1% of the data is used. Because our method provides item-level explanations based on KGs, we split the datasets based on their labeled description/explanation, and as such, we experiment in a setting where items in the test set can be unseen during training. By doing so, we handle a unique case that has not been considered in previous research relying on item reviews. The train/validation/test sets are split into 60/20/20. For KnowRec, we use BART as our pre-trained model, with a Byte-Pair Encoding (BPE) vocabulary (Radford et al., 2019). We compare our approach to available explanation generation baselines, including those that leverage user and item IDs, and those which utilize word-level features. We adapt the baselines to use the KG information and detail them in Appendix B.1. For more details regarding our experimental settings please see Appendix B.2.
## 7 Results and Analysis
### Explanation Results
In Table 2, we evaluate the models' text reproduction performance using BLEU and ROUGE metrics, while also examining their _explainability_ through USR and EC analysis.
For BLEU and ROUGE, KnowRec consistently outperforms all baselines, achieving a BLEU-4 score of 10.71 and ROUGE-L F1 score of 27.71 on Movie KG-Exp and a BLEU-4 score of 12.60 and ROUGE-L F1 score of 28.29 on Book KG-Exp. This suggests that previous baselines, designed for review-level explanation, are inadequate to produce longer and more objective explanations. Specifically, of the baselines, PETER, which utilizes the whole lexical input, adapts best. However, KnowRec makes use of user-item graph encodings, which may lead to better generation of the item KG features mentioned in the ground-truth texts. While PEPLER (Li et al., 2022)'s pretrained approach aids in fluent sentence generation, KnowRec excels in generating contextually relevant words around feature-level terms. Unlike PEPLER, which creates concise reviews based on user-item IDs, KnowRec utilizes graph attention to interconnect related components for comprehensive NL text explanations.
In terms of explainability, KnowRec also generates much more diverse sentences (USR), especially compared to models that do not leverage pre-trained models. Note that while PEPLER has a comparable USR score to KnowRec on the Book KG-Exp dataset, it does not similarly produce high-quality and related sentences according to the NLG metrics. Our results show that while the ground truth is based on item-level features, the generated output is still personalized, as further discussed in Section 7.5. Also note the large gap in EC, which reflects how many of the KG's entity-level features are actually generated in the output text. As our goal is to generate objective and specific explanations, the EC can help real-world users understand what a certain recommended product is about and how it compares to other products. Therefore, it is crucial that explainable models capture these features when producing justifications for recommendations.
### Few-shot Explanation Results
Real-world recommendation systems may face low-resource problems, where only a small amount of training data with few item descriptions is available
| Name | #Users | #Items | #Interactions | KG | #Es | #Rs | #Triples | EC | Desc. | Words/Sample |
|---|---|---|---|---|---|---|---|---|---|---|
| Book KG-Exp | 396,114 | 95,733 | 2,318,107 | Yes | 195,110 | 392 | 745,699 | 71.45 | Yes | 99.96 |
| Movie KG-Exp | 131,375 | 18,107 | 788,957 | Yes | 59,036 | 363 | 146,772 | 71.32 | Yes | 96.35 |

Table 1: Statistics of our Book KG-Exp and Movie KG-Exp benchmark datasets. #Es, #Rs, and Desc. denote the number of entities, the number of relations, and whether the dataset contains parallel descriptions.
but an item database exists. To reflect this practical situation, we also evaluate a few-shot setting where the training data is 1% of its total size. As in previous experiments, we set the user-item size for KnowRec to 5. We show the results of this few-shot experiment in Table 3. KnowRec consistently and significantly outperforms other explainable baselines on both the Book and Movie datasets in terms of text quality, sentence diversity (USR), and entity coverage (EC), showing our approach is effective even in data-scarce scenarios. Like KnowRec, PEPLER also leverages a pre-trained model, namely GPT-2. However, unlike KnowRec, the model does not adapt well to generating item-specific explanations. The second-best model, PETER, fully leverages the KG features in its approach; however, it does not produce diverse sentences. Note that the models that completely rely on user and item IDs fail to produce quality explanations, as shown by their respective BLEU and ROUGE scores, showing the task to be more complex than previous explanation tasks relying on repetitive, short, and already
| Model | Book-All R | Book-All M | Book-Few R | Book-Few M | Movie-All R | Movie-All M | Movie-Few R | Movie-Few M |
|---|---|---|---|---|---|---|---|---|
| PMF | 3.50 | 3.35 | 3.50 | 3.35 | 3.31 | 3.08 | 3.32 | 3.08 |
| SVD++ | 1.03 | 0.80 | 1.01 | 0.64 | 1.20 | **0.79** | 1.25 | 0.98 |
| NRT | 0.98 | 0.74 | 1.07 | 0.73 | 1.17 | 0.93 | 1.23 | 0.97 |
| PETER | 1.01 | 0.79 | 1.03 | 0.82 | 1.24 | 1.03 | 1.24 | 1.00 |
| PEPLER | **0.96** | **0.27** | 1.07 | 0.72 | 1.14 | **0.91** | 1.27 | 0.96 |
| KnowRec | 1.04 | 0.75 | 1.04 | 0.72 | 1.22 | 0.92 | **1.21** | **0.93** |

Table 4: Performance comparison on the recommendation task with respect to RMSE and MAE, denoted as R and M in the table, respectively. Book = Book KG-Exp, Movie = Movie KG-Exp; All = full training setting, Few = few-shot setting.
| Dataset | Model | BLEU-1 | BLEU-4 | USR | R2-F | R2-R | R2-P | RL-F | RL-R | RL-P | EC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Movie KG-Exp | Att2Seq | 8.86 | 0.39 | 0.30 | 2.08 | 1.41 | 8.47 | 8.07 | 11.65 | 9.49 | 0.44 |
| | NRT | 11.76 | 0.57 | 0.03 | 1.50 | 1.40 | 3.25 | 7.20 | 11.70 | 8.05 | 0.98 |
| | Transformer | 8.67 | 0.18 | 0.33 | 1.21 | 0.91 | 6.55 | 6.58 | 9.54 | 9.69 | 0.82 |
| | PETER | 14.66 | 3.99 | 0.55 | 5.07 | 4.26 | 11.66 | 15.06 | 16.67 | 23.03 | 10.58 |
| | PEPLER | 11.68 | 0.13 | 0.46 | 0.56 | 0.63 | 0.54 | 8.90 | 10.92 | 9.53 | 0.78 |
| | KnowRec | **37.02** | **10.71** | **0.83** | **15.49** | **15.12** | **18.15** | **27.71** | **28.71** | **37.10** | **67.97** |
| Book KG-Exp | Att2Seq | 19.51 | 1.85 | 0.43 | 5.08 | 3.76 | 12.15 | 12.98 | 16.55 | 20.89 | 0.86 |
| | NRT | 21.06 | 2.59 | 0.10 | 6.18 | 4.88 | 11.44 | 15.57 | 18.67 | 24.36 | 1.57 |
| | Transformer | 16.90 | 2.01 | 0.12 | 5.68 | 4.23 | 11.94 | 13.66 | 15.57 | 26.87 | 2.08 |
| | PETER | 27.93 | 8.39 | 0.71 | 11.94 | 10.36 | 18.68 | 21.24 | 23.30 | 28.02 | 17.39 |
| | PEPLER | 16.07 | 1.20 | 0.90 | 2.39 | 2.63 | 2.26 | 13.03 | 16.34 | 12.24 | 0.74 |
| | KnowRec | **38.53** | **12.60** | **0.92** | **19.78** | **19.44** | **23.22** | **28.29** | **29.43** | **35.28** | **69.50** |

Table 2: Comparison of neural generation models on the Movie KG-Exp and Book KG-Exp datasets.
| Dataset | Model | BLEU-1 | BLEU-4 | USR | R2-F | R2-R | R2-P | RL-F | RL-R | RL-P | EC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Movie KG-Exp | Att2Seq | 2.63 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2.73 | 4.25 | 2.63 | 0.01 |
| | NRT | 8.78 | 0.32 | 0.01 | 1.84 | 1.08 | 11.73 | 7.12 | 10.17 | 17.97 | 0.07 |
| | Transformer | 12.23 | 0.27 | 0.16 | 1.24 | 1.07 | 3.54 | 6.97 | 9.54 | 12.00 | 1.18 |
| | PETER | 12.28 | 0.68 | 0.36 | 2.33 | 1.45 | 12.49 | 12.00 | 13.18 | 18.03 | 5.44 |
| | PEPLER | 12.58 | 0.41 | 0.01 | 1.26 | 1.44 | 1.18 | 10.73 | 12.63 | 10.38 | 0.11 |
| | KnowRec | **33.89** | **7.53** | **0.87** | **13.41** | **12.60** | **17.67** | **24.48** | **25.63** | **35.66** | **63.92** |
| Book KG-Exp | Att2Seq | 16.58 | 1.53 | 0.22 | 4.68 | 3.10 | 15.58 | 13.30 | 15.28 | 21.32 | 0.26 |
| | NRT | 19.12 | 2.19 | 0.01 | 6.11 | 4.36 | 13.99 | 15.18 | 20.47 | 16.78 | 1.19 |
| | Transformer | 12.69 | 1.22 | 0.08 | 3.60 | 3.16 | 8.65 | 9.77 | 15.64 | 10.58 | 1.57 |
| | PETER | 18.38 | 2.87 | 0.45 | 7.12 | 5.07 | 17.50 | 14.74 | 17.66 | 17.52 | 4.23 |
| | PEPLER | 7.96 | 0.26 | 0.02 | 0.67 | 0.63 | 0.83 | 7.59 | 10.07 | 7.04 | 0.54 |
| | KnowRec | **28.93** | **7.94** | **0.93** | **17.28** | **16.05** | **22.45** | **24.84** | **25.19** | **36.60** | **60.46** |

Table 3: Comparison of neural generation models on the Movie KG-Exp and Book KG-Exp datasets.
\begin{table}
\begin{tabular}{l r r r r|r r r r} \hline \hline & \multicolumn{3}{c|}{BLEU-4\(\uparrow\)} & USR\(\uparrow\) & RL-F\(\uparrow\) & \multicolumn{3}{c}{RMSE\(\downarrow\)} & MAE\(\downarrow\) \\ \hline KnowRec & 7.94 & **0.93** & 24.84 & **1.04** & **0.78** \\ - Recomm. & **8.32** & **0.93** & **24.90** & -
existing user reviews.
### Recommendation Performance
Table 4 shows the recommendation performance on all KG Explanation datasets. We report the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) metrics to evaluate the recommendation task. As shown, all results except PMF are relatively close. PMF significantly underperforms due to the cold-start problem posed by new items. KnowRec achieves performance comparable to other strong baselines, despite being the only model that uses lexical features for the recommendation task, while the other models learn the task through user/item IDs; KnowRec may therefore need more data to learn its recommendation parameters. Additionally, because we learn the recommendation task through lexical features, our model provides an interpretable solution that can be directly compared against the produced NL explanations.
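For reference, both metrics can be computed directly from ground-truth and predicted ratings. The sketch below is illustrative only; the array names are hypothetical and not taken from the KnowRec implementation.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error between ground-truth and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error between ground-truth and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Example with made-up ratings on a 1-5 scale:
# rmse([4, 5, 3], [3.8, 4.2, 3.5]) ≈ 0.56 and mae([4, 5, 3], [3.8, 4.2, 3.5]) = 0.50
```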
### Ablation Study
We perform ablation studies to analyze the effects of the recommendation and user-item graph components on Book KG-Exp, as shown in Table 5. Due to computational resource constraints, we performed the study in the few-shot setting. We first examine the results of KnowRec without the recommendation module in the second row (- _Recomm._). Removing the 'Recomm' component improves performance on the NLG metrics, as the task becomes a single-objective generative task instead of a multi-objective one. We next study the effects of the User-Item Attention encoders on KnowRec's explainability and recommendation performance (- _UIG Att._). Even with a training set of only 1% of the full data, removing this component leads to a slight decrease in the NLG metrics (BLEU and ROUGE) and less diverse sentences (USR). The representation and attention masking on the user-item graph, which connects and encodes the local item information, may therefore give a better representation of the input, which is in turn decoded to produce an explanation; this effect may be more pronounced on larger datasets. Furthermore, from the NLG metric results in Table 5, we can infer that our rating module does not significantly hinder the performance of the generation component of KnowRec.
### Qualitative Analysis
To grasp KnowRec's effectiveness, we analyze explanations from Movie/Book KG-Exp test sets. These explanations are both grammatically smooth and adept at (1) integrating robust item features for factual insights and (2) tailoring personalized content based on diverse user purchase histories (examples in Appendix C, Table 7).
Consider the first two rows of the table, pertaining to the movie _Journey to the Center of the Earth_. We can see two different (but syntactically similar) generated explanations for two different users. In one case, the user has bought mystery and fantasy movies such as _Stitch in Crime_, _Columbo_, and _The Lord of the Rings_, and the output integrates related words such as _investigates_ and _mysterious_ to personalize the explanation. The second case mentions _classic_ and _novel_, possibly because the second user's purchase history involves _Disney_ classics and movies based on novels such as _The Hardy Boys_ and _Old Yeller_. While the input KG does not explicitly state that _Journey to the Center of the Earth_ is a novel, such information may be inferred from the KG's relations and supported through the user's related purchases. In both cases, the output closely matches the ground truth, verbalizing item features from the KG such as _Jules Verne_ and _magnetic storm_, suggesting that our model is robust in describing the explanation content while still implicitly reflecting the user's purchase history.
## 8 Conclusion
We propose KnowRec, a Knowledge-aware model for generating NL explanations and recommendation scores on user-item pairs. To evaluate KnowRec, we devise and release a semi-supervised, large-scale KG-NL recommendation dataset in the book and movie domains. Extensive experiments on both datasets demonstrate the suitability of our model compared to recently proposed explainable recommendation models. We hope that by proposing this KG-guided task, we will open up avenues for research focused on detailed, objective, and specific explanations which can also scale to new items and users, rather than the current review-focused work. In future work, we plan to incorporate user-specific KGs and other pre-trained language models into our model in order to verbalize both user- and item-level feature explanations.
## 9 Limitations
While our approach generates objective, descriptive explanations that implicitly capture personalized aspects of a user's purchase history, our dataset labels are currently limited to item-specific explanations, with the book-related KGs typically containing author-related information and thus being more information-dense than the movie-related KGs. These limitations are due to the currently available datasets, and future work can explore constructing a more personalized user-item KG for explainable recommendation. Furthermore, in our approach, we represent users through their item purchase history. Therefore, while we handle the zero-purchase case for items (items that have not been purchased before), the zero-purchase case for users (users without a purchase history) is outside the scope of our work. In the future, we will extend our approach to user-attributed datasets to handle such cases.
## 10 Ethics Statement
All our experiments are performed over publicly available datasets. We do not use any identifiable information about crowd workers who provide annotations for these datasets. Neither do we perform any additional annotations or human evaluations in this work. We do not foresee any risks using KnowRec if the inputs to our model are designed as per our procedure. However, our models may exhibit unwanted biases that are inherent in pre-trained language models. This aspect is beyond the scope of the current work.
|
2305.12158 | Model-based adaptation for sample efficient transfer in reinforcement
learning control of parameter-varying systems | In this paper, we leverage ideas from model-based control to address the
sample efficiency problem of reinforcement learning (RL) algorithms.
Accelerating learning is an active field of RL highly relevant in the context
of time-varying systems. Traditional transfer learning methods propose to use
prior knowledge of the system behavior to devise a gradual or immediate
data-driven transformation of the control policy obtained through RL. Such
transformation is usually computed by estimating the performance of previous
control policies based on measurements recently collected from the system.
However, such retrospective measures have debatable utility with no guarantees
of positive transfer in most cases. Instead, we propose a model-based
transformation, such that when actions from a control policy are applied to the
target system, a positive transfer is achieved. The transformation can be used
as an initialization for the reinforcement learning process to converge to a
new optimum. We validate the performance of our approach through four benchmark
examples. We demonstrate that our approach is more sample-efficient than
fine-tuning with reinforcement learning alone and achieves comparable
performance to linear-quadratic-regulators and model-predictive control when an
accurate linear model is known in the three cases. If an accurate model is not
known, we empirically show that the proposed approach still guarantees positive
transfer with jump-start improvement. | Ibrahim Ahmed, Marcos Quinones-Grueiro, Gautam Biswas | 2023-05-20T10:11:09Z | http://arxiv.org/abs/2305.12158v1 | Model-based adaptation for sample efficient transfer in reinforcement learning control of parameter-varying systems
###### Abstract
In this paper, we leverage ideas from model-based control to address the sample efficiency problem of reinforcement learning (RL) algorithms. Accelerating learning is an active field of RL highly relevant in the context of time-varying systems. Traditional transfer learning methods propose to use prior knowledge of the system behavior to devise a gradual or immediate data-driven transformation of the control policy obtained through RL. Such transformation is usually computed by estimating the performance of previous control policies based on measurements recently collected from the system. However, such retrospective measures have debatable utility with no guarantees of positive transfer in most cases. Instead, we propose a model-based transformation, such that when actions from a control policy are applied to the target system, a positive transfer is achieved. The transformation can be used as an initialization for the reinforcement learning process to converge to a new optimum. We validate the performance of our approach through four benchmark examples. We demonstrate that our approach is more sample-efficient than fine-tuning with reinforcement learning alone and achieves comparable performance to linear-quadratic-regulators and model-predictive control when an accurate linear model is known in the three cases. If an accurate model is not known, we empirically show that the proposed approach still guarantees positive transfer with jump-start improvement.
## I Introduction
Transfer learning in control seeks to reuse knowledge gained from past source tasks to speed up or improve performance on related, target tasks. This is an especially attractive proposition in applications where time or training samples are scarce. For example, consider data-driven fault-tolerant control, as surveyed by [1], where a controller needs to adapt quickly and sufficiently well to a changed environment.
However, the questions underlying transfer are: _what knowledge to transfer, and how do we transfer it well?_ Poorly related tasks may cause negative transfer when conventional transfer learning methods are applied, especially if the relationship between tasks is not considered in the transfer process [2, 3]. Fundamentally, the performance of transfer learning depends on the relationships between tasks. Tasks that are similar will have similar control policies and will therefore need less time and data for their policies to adapt to each other.
Thus far, a substantial body of work has addressed transfer in the data and algorithm space. Task similarity measures have been developed from the statistical properties of measurements across tasks. Stochastic algorithms have been proposed (sections III-A, III-B) that leverage model architectures, machine learning hyperparameters, and control policy parameter update rules to achieve faster, jump-start, or asymptotically higher performance improvement on the target task, all the while being agnostic to the underlying process dynamics.
This work makes contributions in a related direction. We address transfer in the space of process dynamics. While our approach is applicable to a broad class of dynamical systems, we demonstrate strong theoretical results for systems with linear, time-invariant dynamics. By modeling the relationships between source and target task dynamics as linear transformations, we develop a transformation of the source policy to transfer on the target task. We demonstrate conditions where the transformation will produce behavior optimal in the least squares sense. Generally, for approximately identified target tasks or locally optimal source policies, the transformation may be used as a policy initialization to get a jump-start improvement, prior to further optimization.
The following section describes the transfer learning problem for this work in the context of reinforcement learning. Following that, in section III, we review different approaches toward transfer. Section IV motivates cases for invariance of optimal policies and cases for the derivation of policy transforms. Finally, experiments are done on dynamical systems using stochastic and classical control approaches to demonstrate these concepts.
## II The transfer learning control problem
In this work, we discuss the transfer of control across systems modeled as Markov Decision Processes (MDPs). An MDP control problem is a task \(T\in\mathcal{T}\), characterized by the process dynamics \(P:X\times U\to X\), which map the current state \(x_{i}\) to the next state \(x_{i+1}\), given an input \(u_{i+1}\). Each state transition is assigned a reward based on the state and the action taken to reach it, \(r:X\times U\to\mathbb{R}\). An episode is a sequence of interactions until a terminal or goal state is reached. The objective for \(T\) is to derive a policy \(\pi_{T}:X\to U\), such that each action picked maximizes expected future returns \(\mathbb{E}[G(x_{i})]\), where \(G(x_{i})=\sum_{j=i}\gamma^{j-i}r(x_{j},\pi_{T}(x_{j}))\) is the discounted sum of rewards starting at \(x_{i}\) and following some policy. The discount factor \(\gamma\in(0,1]\) prioritizes the immediacy of feedback. Expected returns under an optimal policy are known as its value, \(V(x)=\max_{u}\mathbb{E}[G(x)]\). Then, \(\pi_{T}(x)\leftarrow\operatorname*{arg\,max}_{u}V(x)\).
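To make the return definition concrete, the following sketch rolls out a policy in a generic deterministic MDP and accumulates the discounted sum of rewards. The dynamics, reward, and policy callables are placeholders, not the paper's implementation.

```python
def discounted_return(P, r, pi, x0, gamma=0.99, horizon=200):
    """Roll out policy pi from state x0 and accumulate the discounted return G(x0).

    P  : callable (x, u) -> next state   (process dynamics, placeholder)
    r  : callable (x, u) -> reward       (placeholder)
    pi : callable (x) -> action          (placeholder)
    """
    x, G = x0, 0.0
    for j in range(horizon):
        u = pi(x)
        G += (gamma ** j) * r(x, u)
        x = P(x, u)
    return G
```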
The transfer problem is summarized as follows. A task \(T\) consists of the process dynamics \(P\) and the reward function \(r\). Given a target task \(T_{t}\) and a population of source tasks \(\mathcal{T}_{s}\in\mathcal{T}\), find a source task \(T_{s}\in\mathcal{T}_{s}\) and a transfer mechanism, such that the performance of its policy fine-tuned on \(T_{t}\) provides an optimal solution for \(T_{t}\). In other words,
\[T_{s}:\max_{T\in\mathcal{T}_{s}}G(x\sim T_{t}\mid\pi_{T\to T_{t}}) \tag{1}\]
\(T_{s}\) and \(T_{t}\) may differ in process dynamics and reward, for example, due to a fault or a changed control objective. Both affect the evaluated returns \(G\). The objective in equation 1 can only be achieved retroactively, once candidate source tasks have been evaluated on the target task. However, exploring with a transfer of each source task may violate time and safety constraints in specific applications. Therefore, the challenge is to _preemptively_ select a favorable source task and transfer mechanism. For this work, we assume homogeneous transfer: the state and action spaces of all tasks are identical. Practically, this applies to cases where a single MDP's state transitions are disturbed due to faults, degradation, etc. For the remainder of this work, subscripts \(*_{s},*_{t}\) refer to source and target parameters, respectively.
The goal, ultimately, is to achieve high returns on the target task. This may be done by either picking a source task liable to transfer well, or by tweaking the transfer process such that the source policy converges swiftly to an optimum on the target task. Prior related work has addressed both of these approaches.
## III Related Work
This section reviews prior work done in this and related fields. The aforementioned goal of transfer learning for control overlaps with multi-task learning [4], reward shaping [5], and few-shot learning [6]. The body of research is divided into two broad categories: optimizing transfer learning informed by task similarity, and optimizing transfer once a source task is picked.
### _Selecting Similar Tasks for Transfer_
A multitude of similarity measures have been used in transfer learning, looking at task similarity in classification and regression problems. Work by [7] builds on top of Model-Agnostic Meta-Learning (MAML) [8] to develop task-similarity-aware MAML. They represent task similarity as the Euclidean distance between parameters of models trained on sampled tasks. Clusters are made for similar tasks, and the cluster closest to the target task is used during meta-initialization. [9] propose a general transfer approach, where task similarity is used to regularize the transferred model. [10] formalize the intuition that similar tasks have similar performances, and use the performance gap as a regularizer for transfer learning, with the Euclidean distance between model coefficients used for this. Alternatively, [11] hypothesize that, given samples, two (classification) tasks are similar if the accuracy on both the source and target tasks is high. [12] propose a reconstruction classifier, which attempts to reconstruct samples from the target task using samples from the source task, the assumption being that samples from \(T_{t}\cup\mathcal{T}_{s}\) lie on a subspace and can be modeled as combinations of each other: the sparser the coefficients for reconstruction, and the lower the error, the higher the similarity. In contrast, [13] look at the utility of explicitly including task similarity measures in the learning process, using the \(\mathcal{H}\) divergence and Wasserstein distance between probability distributions in adversarial multi-task learning for classification tasks. Recently, [14] use physics-guided models to develop an initial reinforcement-learned control policy under inaccurate rewards, which in turn is transferred to the target process; the initialization of the source policy reduces the sample size needed to adapt to the target task.
### _Optimization of the Learning Process_
Another class of approaches has addressed the learning process itself, once a source task is selected. Such methods are not restricted to transfer learning, but are applicable to learning algorithms in general. Relevant work in this area touches on meta-learning [15], neural architecture design [16, 17], and hyperparameter tuning [18, 19].
In recent work in classical control, [20] propose the design of a transferrable controller via system identification. They stimulate the target process with tailored exploratory actions from the same initial state to identify relationships with the source process. The source policy can then be transformed to the identified target process.
## IV Policy Transfer via Transformation
In this section, we derive a policy transformation for a category of tasks such that the source policy is as close to optimal in the target task in the least squares sense, assuming the same reward function.
The optimal policy \(\pi\) - the control law - is derived to reach the optimal set of states _and_ to use optimal actions, since the reward is a function of both state and action. However, if the reward is only a function of performance (\(x\)), the optimal policy depends only on the states traversed, not on the effort (\(u\)) it took to traverse them.
\[r:X\rightarrow\mathbb{R}\implies\nabla_{u}r=0\] \[\pi:\arg\max_{\pi}\mathbb{E}_{x_{i}\sim X}\sum_{j=i}^{\infty} \gamma^{j-i}r\left(P(x_{i},\pi(x_{i}))\right)\] \[\nabla_{P}V(x)=0 \tag{2}\]
That is, the optimal states remain the same. The same set of states optimal under \(T_{s}\) are optimal under \(T_{t}\). This comes with one caveat: \(P_{t}\) must be controllable for the same set of states ([21]), meaning the dynamics in \(P_{t}\) allow those optimal states to be reached by some actions. Therefore, if \(\pi_{t}\) can bring about the same state change in \(P_{t}\) as \(\pi_{s}\) does in \(P_{s}\), \(\pi_{t}\) can be guaranteed to be optimal.
Reward being a function of state only is a strong assumption. However, it holds in many cases, for instance, when the cost of actions is dwarfed by the incentive to drive the state variables to a certain point. In real-world scenarios such as vehicle navigation, battery or fuel levels may be included in the state vector as a proxy for action magnitudes.
To that end, sequential MDPs are framed as deterministic, time-invariant dynamical processes from a control-theory perspective. A process \(P\) has a state \(x\in\mathbb{R}^{n}\) and inputs \(u\in\mathbb{R}^{m}\). The process itself is characterized by the rate of change of its state, \(\dot{x}=F_{\dot{x}}:X\times U\to X\). For systems with linear dynamics, \(F_{\dot{x}}=Ax+Bu\), where \(A\in\mathbb{R}^{n\times n},B\in\mathbb{R}^{n\times m}\) are constant matrices: \(A\) is the response to the internal state, and \(B\) is the response to external inputs.
\[\dot{x}_{i} =F_{\dot{x}}(x_{i},u_{i+1})\] \[x_{i+1} =P(x_{i},u_{i+1}\mid F_{\dot{x}})\] \[=x_{i}+\int_{t=i}^{t=i+1}\dot{x}_{i}\partial t\] \[\approx x_{i}+\delta t\cdot\dot{x}_{i} \tag{3}\]
Where \(\delta t\) is a discrete sampling interval. For linear systems, \(P\) can be approximated as a linear transformation,
\[\dot{x}_{i} =Ax_{i}+Bu_{i+1}\] \[x_{i+1} \approx P\cdot[x_{i},u_{i+1}]^{T}\] \[=[I+\delta tA,\delta tB][x_{i},u_{i+1}]^{T} \tag{4}\]
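A minimal sketch of this forward-Euler discretization is given below; the function names are illustrative, and the code assumes only NumPy.

```python
import numpy as np

def discretize_linear(A, B, dt):
    """Forward-Euler discretization of x_dot = A x + B u (Eqs. 3-4):
    x_{i+1} ≈ (I + dt*A) x_i + (dt*B) u_{i+1}.
    Returns the stacked matrix P = [I + dt*A, dt*B]."""
    n = A.shape[0]
    return np.hstack([np.eye(n) + dt * A, dt * B])

def step(P, x, u):
    """One discrete-time step x_{i+1} = P [x_i; u_{i+1}]."""
    return P @ np.concatenate([x, u])
```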
We assume \(F_{A}:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}^{n\times n},F_{B}:\mathbb{ R}^{n\times m}\rightarrow\mathbb{R}^{n\times m}\) are transformations of \(A_{s},B_{s}\) under a fault, and \(\pi_{s}\) is the control policy of the source task. The control policy \(\pi_{s}\) already has optimized inputs to change states to optimal positions. Since the optimal states remain invariant, we want a policy \(\pi_{t}\) such that the change of state under the target process is the same as the source process.
\[\dot{x}_{s} =A_{s}x+B_{s}u_{s}\] \[\dot{x}_{t} =F_{A}A_{s}x+F_{B}B_{s}u_{t}\] If \(\dot{x}_{t}=\dot{x}_{s}\), then \[u_{t}=(F_{B}B_{s})^{-1}(I-F_{A})A_{s}x+(F_{B}B_{s})^{-1}B_{s}u_{s} \tag{5}\] \[u_{t}=\big{(}(F_{B}B_{s})^{-1}(I-F_{A})A_{s}+(F_{B}B_{s})^{-1}B_{s}\pi_{s}\big{)}\,x \tag{6}\]
Equation 5 represents a multiplicative transformation of the source policy by \((F_{B}B_{s})^{-1}B_{s}\), representing a change in the input's effect on state dynamics. And an additive correction by \((F_{B}B_{s})^{-1}(I-F_{A})A_{s}\), representing the changed internal dynamics of the system. Equation 6 factors out \(u_{s}\leftarrow\pi_{s}x\) to get an equivalent representation for \(u_{t}\). Figure 1 depicts how these transformations can be appended in series and parallel, respectively to an existing policy function \(\pi_{s}\).
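A sketch of the transformation in Eq. (5) is given below, using a Moore-Penrose pseudo-inverse for \((F_{B}B_{s})^{-1}\) as discussed in the following paragraphs; the function name is illustrative, not the authors' implementation.

```python
import numpy as np

def transform_action(x, u_s, A_s, B_s, F_A, F_B):
    """Map a source action u_s to a target action u_t (Eq. 5) so that the target
    dynamics F_A A_s x + F_B B_s u_t reproduce the source state derivative."""
    FB_Bs_pinv = np.linalg.pinv(F_B @ B_s)   # (F_B B_s)^{-1}, pseudo-inverse if non-square
    additive = FB_Bs_pinv @ (np.eye(A_s.shape[0]) - F_A) @ A_s @ x
    multiplicative = FB_Bs_pinv @ B_s @ u_s
    return additive + multiplicative
```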
The transformation will change the policy's response to each state. The range of new actions may be different from that of the source policy. If the target policy's range is a subset of the source's range, or if actions are unconstrained in the process, this will not be notable. For constrained actions, actions can be clipped. For linear systems, this will not affect their eventual dynamics. That is, a sequence of actions scaled down by a factor of \(k\in\mathbb{R}^{+}\) over \(k\) intervals will result in the same change in state as the unscaled action applied for one interval.
The transformation of state dynamics, \(F_{A}\) is not required to be invertible. The only functions needing inversion are the action dynamics \(B_{s}\) and their transformation \(F_{B}\). In cases where a perfect inverse does not exist, the Moore-Penrose Pseudo-inverse may be used as the closest approximation in the least squares sense. The approximation error of the pseudo-inverse may be a measure of the suitability of the transformed policy.
\(A_{s},B_{s},F_{A},F_{B}\) are learned through system identification strategies. For the specific case where process dynamics \(P(x,u\mid A,B)\) and their transformations \(F_{A},F_{B}\) can be linearized in the region of interest, they can be learned from measured data by solving a least squares problem:
\[X =[x_{0},x_{1},...],X^{+}=[x_{1},x_{2},...],U=[u_{1},u_{2},...]\] \[P =\big{[}X^{+}\cdot[X;U]^{T}\big{]}\cdot\big{(}[X;U]\cdot[X;U]^{T} \big{)}^{-1} \tag{7}\]
Then, \(P\) can be decomposed and solved for \(A,B\) as in equation 4. If \(A_{s},A_{t}\) are known, then \(F_{A}=A_{t}\cdot A_{s}^{-1}\). Similarly, for \(F_{B},B_{s},B_{t}\). Since \(B\) is not necessarily square, \(F_{B}=(B_{t}\cdot B_{s}^{T})\cdot(B_{s}\cdot B_{s}^{T})^{-1}\). The whole approach is outlined in algorithm 1.
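The sketch below implements the least-squares identification of Eq. (7) and the transform estimates given above; it is not a verbatim reproduction of Algorithm 1, and the function names are mine.

```python
import numpy as np

def identify_linear(X, X_next, U, dt):
    """Least-squares identification (Eq. 7).
    X, X_next : (n, N) arrays of states x_0..x_{N-1} and x_1..x_N
    U         : (m, N) array of inputs u_1..u_N
    Returns continuous-time estimates A, B recovered from P = [I + dt*A, dt*B]."""
    Z = np.vstack([X, U])                                # [X; U]
    P = (X_next @ Z.T) @ np.linalg.pinv(Z @ Z.T)
    n = X.shape[0]
    A = (P[:, :n] - np.eye(n)) / dt
    B = P[:, n:] / dt
    return A, B

def estimate_transforms(A_s, B_s, A_t, B_t):
    """F_A = A_t A_s^{-1} and F_B = (B_t B_s^T)(B_s B_s^T)^{-1}, as in the text;
    pseudo-inverses guard against singular products."""
    F_A = A_t @ np.linalg.pinv(A_s)
    F_B = (B_t @ B_s.T) @ np.linalg.pinv(B_s @ B_s.T)
    return F_A, F_B
```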
For non-linear transformations of the linear system matrices \(A,B\), non-linear basis functions such as neural networks can be used to approximate the target transformation. Since the learned transformation must be inverted, a monotonic constraint should be put on such models (by constraining hidden layer weights to be positive, for example). Alternatively, probabilistic models which learn a posterior distribution of a variable given the output of a function [22] can be used, from which the likely inverse of the function can be sampled.
Fig. 1: A schematic of the policy transformation. The nominal action \(u_{s}\) is transformed as \(u_{t}\) to bring about the same change in state in the target system that was considered optimal by \(\pi_{s}\) in the source system. The source policy can be any control algorithm, e.g., LQR, MPC, or RL.
In summary, if reward is a function of only the state, and if optimal states in \(P_{s}\) are reachable in \(P_{t}\), then optimal change in state remains invariant. Thus \(\pi_{t}\) can be derived as a transformation of \(\pi_{s}\) in terms of \(P_{s}\) and \(P_{t}\) to bring about the same change in state to optimize the target task.
The worst-case computational cost of our approach is lower than that of approaches that evaluate states and actions _anew_ on the target task, such as RL and MPC. Our approach relies on system identification to transform \(\pi_{s}\), which is already known. As an illustration, we consider deterministic, continuous MDPs, assuming unique actions lead to unique states. Identifying an arbitrary system \(P\) requires sampling each state transition once to learn the mapping \(x_{i+1}\leftarrow P(x_{i},u_{i+1})\). The complexity of identification is then the size of the space \(X\in\mathbb{R}^{n}\times U\in\mathbb{R}^{m}\). The Bellman equation ([23]), which is foundational to RL and MPC approaches, traverses all possible state _trajectories_ to evaluate states, where state transitions may be traversed multiple times, thus giving a higher computational complexity. Therefore, learning the target dynamics to transform the source policy is computationally cheaper than evaluating states in the target task. Even when a source RL policy is re-used and fine-tuned on the target task, it may yet need to sample every state trajectory in the worst case. The source policy has learned to drive towards valuable states under \(P_{s}\); it needs to re-sample the transformed trajectories under \(P_{t}\) to reevaluate each state. The number of trajectories needing revision will measure, and depend on, the similarity between \(P_{s}\) and \(P_{t}\).
## V Experiments
This approach is demonstrated using linear and non-linear systems. We demonstrate our case with Linear Quadratic Regulator (LQR) ([24]), Model-Predictive Control (MPC), and RL. For RL, the Proximal Policy Optimization (PPO) algorithm is applied. The reward \(r\) and cost \(c\), where \(r=-c\), are specified as quadratic functions of state \(x\) with weights \(Q\), where \(Q\) is a diagonal matrix. The function minimizes cost and maximizes reward around \(x=\hat{0}\), which is the desired state vector. However, the system can be driven to some other point \(x_{0}\) by substituting \(x\gets x-x_{0}\) without a loss in generality. For the sake of comparison with LQR, we introduce a small action weight \(R=10^{-5}\) in reward, which otherwise does not affect other approaches. Similarly, to accommodate LQR, the optimization assumes unconstrained actions. However, during testing, the actions are clipped to \([-1,1]\) for each time interval.
\[c=-r=x^{T}Qx+u^{T}Ru \tag{8}\]
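For the LQR baseline, a state-feedback gain for the known linear dynamics can be obtained from the continuous-time algebraic Riccati equation. The sketch below is generic, assuming SciPy is available, and is not the authors' implementation; Q and R are the weights from Eq. (8).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR for x_dot = A x + B u with cost integral of x'Qx + u'Ru
    (Eq. 8). Returns K such that the control law is u = -K x."""
    S = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ S)
```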
A simple one-dimensional temperature (\(x\)) regulation system is first used as a test bed, where positive and negative actions control a heating or cooling element (\(u\)). System parameters are set as \(a=-0.1,b=1,Q=I\). Faults represent a change in conductivity \(a\), and a reversal in action polarity \(b\), such that the nominal action of increasing heat will now cool the system.
\[x,u,a,b,F_{A},F_{B}\in\mathbb{R}\] \[\dot{x}_{temp}=ax+bu \tag{9}\]
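As a concrete illustration of Eqs. (5) and (9), the sketch below simulates the scalar temperature system under a hypothetical fault (a conductivity change, \(F_{A}>0\), and a polarity reversal, \(F_{B}<0\)) and applies the transformed policy. The fault magnitudes and the source gain \(K\) are made-up values for illustration only.

```python
# Nominal (source) parameters from Eq. (9) and an illustrative fault.
a_s, b_s = -0.1, 1.0            # source dynamics: x_dot = a*x + b*u
F_A, F_B = 1.5, -1.0            # hypothetical fault: conductivity change, polarity reversal
a_t, b_t = F_A * a_s, F_B * b_s

K = 0.5                          # placeholder stabilizing source gain, u_s = -K*x
dt, x_s, x_t = 0.01, 5.0, 5.0    # both systems start at the same temperature offset

for _ in range(1000):
    u_s = -K * x_s
    # Scalar version of Eq. (5): u_t reproduces the nominal state derivative on the target.
    u_t = ((1 - F_A) * a_s * x_t + b_s * (-K * x_t)) / (F_B * b_s)
    x_s += dt * (a_s * x_s + b_s * u_s)
    x_t += dt * (a_t * x_t + b_t * u_t)

print(x_s, x_t)  # the transformed policy keeps the faulty system on the nominal trajectory
```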
A higher dimensional, but still linear, spring-mass system is described in equation 10, with \(Q=I\), and actions \(u\) as forces. The dynamics are governed by the mass \(m=1\), spring constant \(k=10\), and dynamic friction \(k_{f}=0.2\).
\[x\in\mathbb{R}^{2},\ u\in\mathbb{R},\ k,k_{f},m\in\mathbb{R}^{+},F_{A},F_{B}\in\mathbb{R}^{2\times 2}\] \[\dot{x}_{spring}=\begin{bmatrix}0&1\\ -\frac{k}{m}&-\frac{k_{f}}{m}\end{bmatrix}x+\begin{bmatrix}0\\ \frac{1}{m}\end{bmatrix}u \tag{10}\]
A yet more complex non-linear, continuous system is a pendulum in equation 11, with \(Q=I\), and actions \(u\) as torques. The dynamics are governed by the mass \(m=0.1\), pendulum length \(l=1\), gravitational acceleration \(g=10\), and dynamic friction \(k_{f}=0.02\).
\[x\in\mathbb{R}^{2},\ u\in\mathbb{R},\ g,l,k_{f},m\in\mathbb{R}^{ +},F_{A},F_{B}\in\mathbb{R}^{2\times 2}\] \[\dot{x}_{pendulum}=\begin{bmatrix}0&1\\ -\frac{g\sin(\cdot)}{l}&-\frac{k_{f}}{ml^{2}}\end{bmatrix}x+\begin{bmatrix}0 \\ \frac{1}{ml^{2}}\end{bmatrix}u \tag{11}\]
Finally, a cartpole system is used to demonstrate a more complex non-linear case. The state vector \(x\) comprises the angle of the pole \(x_{1}\), its angular velocity \(x_{2}\), the position of the cart \(x_{3}\), and the cart's velocity \(x_{4}\). Actions \(u\) are forces applied to the cart. The system is parametrized by the cart and pole masses, \(m_{c}=0.5,m_{p}=0.1\), the length of the pole \(l=1\), gravitational acceleration \(g=10\), and the coefficient of friction \(k_{f}=0.01\). The state weights for the reward are \(Q=[[1,0,0,0],[0,0.1,0,0],[0,0,10^{-5},0],[0,0,0,0.1]],R=10^{-5}\). For the reinforcement learning controller, each time step the pole is upright gains a constant reward of \(1\). The state equations are factored into terms that are functions of \(x\) and \(u\), and \(F_{A},F_{B}\in\mathbb{R}^{4\times 4}\) are applied as disturbances.
\[\dot{x_{1}} =x_{2}\] \[\dot{x_{2}} =\frac{g}{l}\sin x_{1}-\frac{k_{f}x_{2}}{ml^{2}}+\frac{\dot{x_{4} }\cos x_{1}}{l}\] \[\dot{x_{3}} =x_{4}\] \[\dot{x_{4}} =\frac{m_{p}\sin x_{1}\left(g\cos x_{1}-{l\dot{x_{1}}}^{2}\right) +u(t)}{m_{c}+m_{p}-m_{p}\cos^{2}x_{1}} \tag{12}\]
Experiments are carried out by obtaining a source policy \(\pi_{s}\) on the nominal process \(P_{s}\). Then, a fault, denoted by a parametric change in the equations of state, is introduced. The change is characterized by \(F_{A}\), a positive definite matrix, and \(F_{B}\), a negative definite matrix. A buffer of measurements \(\mathcal{D}_{t}\) is collected to estimate the target process \(P_{t}\). The transformed
policy \(\pi_{t}\) is derived from \(\pi_{s}\) by approximating both \(P_{s}\) and \(P_{t}\) as linear systems about the buffer.
We evaluate _jumpstart improvement_, _asymptotic improvement_, and _time to threshold_. These metrics describe short and long term advantages of our approach and its computational complexity. Jumpstart improvement is the immediate difference in rewards when a policy interacts with a new task. Asymptotic improvement is the limit of accumulated rewards as the policy continues to learn. And time to threshold is the time taken for accumulated rewards to reach an acceptable level of performance.
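These metrics can be computed directly from per-episode reward curves; the sketch below is illustrative, and the final-window size for the asymptotic estimate is an arbitrary choice, not taken from the paper.

```python
import numpy as np

def transfer_metrics(rewards_scratch, rewards_transfer, threshold, window=10):
    """Jumpstart, asymptotic improvement, and time to threshold from reward curves.

    rewards_scratch  : per-episode rewards when learning the target task from scratch
    rewards_transfer : per-episode rewards when starting from the transformed policy
    threshold        : acceptable level of episodic reward
    """
    rewards_scratch = np.asarray(rewards_scratch, float)
    rewards_transfer = np.asarray(rewards_transfer, float)
    jumpstart = rewards_transfer[0] - rewards_scratch[0]
    asymptotic = rewards_transfer[-window:].mean() - rewards_scratch[-window:].mean()
    above = np.nonzero(rewards_transfer >= threshold)[0]
    time_to_threshold = int(above[0]) if above.size else None
    return jumpstart, asymptotic, time_to_threshold
```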
Two sets of experiments are carried out. First, LQR and MPC are used with our approach. They are model-based, deterministic, and have complete knowledge of the task dynamics and the relationship to the target task. Thus, variables such as hyperparameter selection, system identification, and policy function formulation in RL are removed, to make the results of this work more apparent. \(F_{A},F_{B}\) are provided instead of estimated. The results are tabulated in table I. As both tables Ia and Ib show, the episodic rewards from \(\pi_{t}\) (our transformation) are within a standard deviation of, if not better than, the benchmarks \(\pi_{lqr}\) and \(\pi_{mpc}\).
In the second set of experiments, RL is applied (table II). For RL, \(\pi_{s}\) is obtained by running the PPO algorithm until the episodic rewards converge. The policy transformation applied to \(\pi_{s}\) can be further fine-tuned via gradient descent. We denote the transformed policy as \(\pi_{t}\), the source policy fine-tuned on \(P_{t}\) as \(\pi_{s}^{*}\), the policy fine-tuned with transformation parameters as \(\pi_{t}^{+}\), and the one excluding those parameters as \(\pi_{t}^{-}\). In the first subset of experiments, \(F_{A},F_{B}\) are known _a priori_ (table IIa). Then, the same experiments are repeated, but the transformations \(F_{A},F_{B}\) have to be learned from measured data (table IIb). For both cases, \(\mathcal{D}_{s},\mathcal{D}_{t}\) use at most five episodes, amounting to no more than \(2,500\) interactions with the system. This contrasts notably with the number of time steps taken by RL to converge to a policy.
Figure 2 shows the effect the fault has on accumulated rewards, and how our policy transformation causes a jumpstart improvement as the RL policy tunes control after the system has changed. A parametric fault is introduced once RL has converged to a policy on \(P_{s}\). There is an abrupt fall in rewards on \(P_{t}\). Using RL iteratively to learn on \(P_{t}\) is slow, and sometimes unable to recover at all. However, the policy transformation leads to a jumpstart improvement in rewards. For simple linear systems such as temperature, the transformation is instantly optimal. For non-linear systems like cartpole, there is a smaller jumpstart improvement, along with a faster time to convergence. For both sets of experiments involving RL, the results show that the transformed source policy, derived from the identified target system, is able to achieve comparable, if not better, performance than the source policy fine-tuned directly on the target task.
## VI Discussion and Conclusion
The results demonstrate several key points. Our approach gets a jump-start improvement in performance after a parametric fault. Second, during RL, if the rewards have not already converged, they reach the convergence threshold faster (figure 2). Third, when knowledge of the transformation and dynamics is available, the source policy's transformation gives results similar to LQR and MPC trained on \(P_{t}\) (tables I, IIa). However, unlike MPC, an optimization problem does not need to be solved recurrently when applying control, reducing computational cost. Finally, when knowledge of the transformation and dynamics is not available, an approximate transformation using measured samples and fine-tuning using RL gives similar, albeit marginally worse, results (table IIb).
Therefore, our approach may lend itself as an initialization strategy for data-driven controllers to mitigate sample inefficiency. After the adaptation step, the controller can proceed with further reinforcement learning to fine-tune parameters.
We looked at transforming control policies by reasoning about task dynamics as a means of adaptive control, instead of the statistical properties of parameters as in machine learning. Our main contribution was the transformation of a nominal control policy that leverages system identification. It is applicable to a host of control algorithms, and tasks where the objective function is agnostic to actions. The transformation is such that a source policy would transfer positively on the target process with a higher sample efficiency than reinforcement learning.
There are several interesting avenues of future research. First, using the error in the policy transformation, which may involve pseudo-inverses, as a measure of, and guarantee on, the quality of transfer. Second, extending the transformation to a broader class of MDPs with non-linear disturbances. Third, using parameter estimation, by relying on fault identification instead of system identification, to further reduce the data samples needed to adapt to a new task.
|
2306.13716 | Nonlocal phase modulation of multimode, continuous-variable twin beams | We investigate experimentally the nonlocal phase modulation of
multiple-frequency-mode, continuous-variable entangled twin beams. We use a
pair of electro-optical phase modulators to modulate the entangled probe and
conjugate light beams produced by four-wave mixing in hot Rb vapor. A single
phase modulator in either one of the twin beams reduces the two-mode squeezing
signal, and we find that the modulations interfere nonlocally to modify the
beam correlations. The nonlocal modulation of the beams can produce quantum
correlations among frequency modes of the multimode fields. | Zhifan Zhou, Luıs E. E. de Araujo, Matt DiMario, B. E. Anderson, Jie Zhao, Kevin M. Jones, Paul D. Lett | 2023-06-23T18:01:03Z | http://arxiv.org/abs/2306.13716v1 | # Nonlocal phase modulation of multimode, continuous-variable twin beams
###### Abstract
We investigate experimentally the nonlocal phase modulation of multiple-frequency-mode, continuous-variable entangled twin beams. We use a pair of electro-optical phase modulators to modulate the entangled probe and conjugate light beams produced by four-wave mixing in hot Rb vapor. A single phase modulator in either one of the twin beams reduces the two-mode squeezing signal, and we find that the modulations interfere nonlocally to modify the beam correlations. The nonlocal modulation of the beams can produce quantum correlations among frequency modes of the multimode fields.
then
\[\begin{split}\langle\overline{X_{-}^{2}}\rangle=&(G^{2}+g^{2})(1-\eta)+\eta-\\ & 2gG(1-\eta)J_{0}\left(m\sqrt{2+2\cos\phi}\right).\end{split} \tag{2}\]
If both EOMs are turned off (\(m=0\)), and there are no losses (\(\eta=0\)), then \(\langle\overline{X_{-}^{2}}\rangle=(G-g)^{2}\). Then for any \(G^{2}>1\), \(\langle\overline{X_{-}^{2}}\rangle<1\), so that the noise power is below shot noise and squeezing is observed. In deriving Eq. (2), we made no assumptions regarding the spatial separation of the probe and conjugate field modes. Equation (2) implies that the maximum squeezing between the two fields is obtained when the two phase modulators are off. Turning the modulators on will, in general, reduce the degree of squeezing. For a high enough modulation index, squeezing may be eliminated as the quadrature noise will exceed the shot noise.
Three cases are particularly of interest: the EOMs are driven (in-phase) with a relative phase of \(\phi=0^{\circ}\) and (out-of-phase) with \(\phi=180^{\circ}\) and \(\phi=120^{\circ}\). When \(\phi=180^{\circ}\), Eq. (2) clearly shows that the modulation imparted to one of the twin beams cancels the modulation experienced by the other twin beam; and two-mode squeezing is recovered at the same level as obtained with the EOMs off. Comparing Eqs. (1) and (2), we see that, when the two EOMs are driven in phase, they produce the same amount of squeezing as only one modulator operating at twice the modulation index (\(m_{\rm p}=0\) and \(m_{\rm c}=2m\), or vice-versa). And for \(\phi=120^{\circ}\), Eq. (2) predicts that the two EOMs should behave with respect to the two-mode squeezing signal as a single modulator driven at a modulation index of \(m\). More generally, the effect of two phase modulators on the joint quadrature noise is similar to that of a single modulator operating at an effective modulation index of \(\sqrt{m_{\rm p}^{2}+m_{\rm c}^{2}+2m_{\rm p}m_{\rm c}\cos\phi}\). In other words, with respect to the two-mode squeezing signal, the modulators act cumulatively to determine the effective modulation index, analogously to the DV case [26; 27]. This cumulative effect is also nonlocal. That is, it is independent of the distance between the EOMs.
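The three cases can be checked numerically by evaluating Eq. (2). The sketch below assumes the standard parametric-gain relation \(G^{2}-g^{2}=1\) (not restated in this excerpt) and uses illustrative values for the gain, loss, and modulation index.

```python
import numpy as np
from scipy.special import j0

def joint_quadrature_noise(G, eta, m, phi):
    """Noise power of the joint quadrature from Eq. (2); phi in radians."""
    g = np.sqrt(G**2 - 1.0)          # assumes G^2 - g^2 = 1
    return (G**2 + g**2) * (1 - eta) + eta \
        - 2 * g * G * (1 - eta) * j0(m * np.sqrt(2 + 2 * np.cos(phi)))

G, eta, m = np.sqrt(3.0), 0.05, 0.1 * np.pi   # illustrative: intensity gain of 3, 5% loss
for phi_deg in (0, 120, 180):
    print(phi_deg, joint_quadrature_noise(G, eta, m, np.radians(phi_deg)))
# phi = 180 deg recovers the EOM-off level; phi = 0 deg matches a single EOM at index 2m.
```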
Figure 1 shows the experimental setup and a level diagram of the double-lambda 4WM scheme. The setup is similar to the one described in [19]. A 12 mm-long Rb vapor cell is heated to 123 \({}^{\circ}\)C. A single pump beam with 700 mW of power and 650 \(\mu\)m beam diameter is detuned 0.8 GHz to the blue of the \({}^{85}\)Rb \(D_{1}\) line and is split equally into two beams: one beam goes through the vapor cell along with a probe seed beam to generate the local oscillator (LO) beams, and the other beam generates the two-mode squeezed vacuum states, as in [19]. The LO probe seed is derived from the pump beam by double passing a small portion of the pump through a 1.5 GHz acousto-optic modulator. Pump and probe intersect inside the cell at an angle of 7 mrad. A double-lambda 4WM process uses the \(\chi^{(3)}\) nonlinearity of the Rb vapor to convert two pump photons into one probe photon and one conjugate photon. (The squeezed signal beams are vacuum seeded.) The probe beam experiences a typical gain of 3. The non-degenerate probe and conjugate beams are at the same angle on opposite sides of the pump and in a two-mode squeezed state. Probe and conjugate beams pass through identical EOMs driven with a 200 kHz sine wave. The EOMs are driven synchronously by separate outputs of the same function generator, and their relative phase can be adjusted by the function generator. The modulated beams are sent separately to two balanced homodyne detectors, one for the probe and another for the conjugate.
In the homodyne detectors, the probe and conjugate beams are mixed with LO beams on 50/50 beamsplitters with fringe visibilities \(>97\,\%\). The relative phases \(\theta_{p,c}\) between the LOs and the probe/conjugate fields are adjusted by mirrors mounted on piezoelectric transducers (PZT) in order to select the quadrature to be detected in each beam. The outputs of the homodyne detectors are directly measured with matched photodiodes with quantum efficiencies \(>95\,\%\). The path lengths from the vapor cell to the optical detectors for the probe and conjugate beams are approximately matched. Due to the different group velocities of the probe and conjugate beams in the atomic vapor [31], the two fields are optically delayed by amounts that differ by approximately 10 ns. To compensate for this delay, we add an electronic delay line after detection by adjusting the cable lengths. The photocurrents are amplified and then measured with a 1 GHz digital sampling oscilloscope. The measured time traces are digitally post-processed to determine the power spectra and generalized quadrature noise powers discussed below.
The homodyne signal as a function of the local oscillator phase gives a generalized quadrature, \(\hat{X}_{i}\cos\theta_{i}+\hat{P}_{i}\sin\theta_{i}\), where \(i=p,c\) labels the probe and conjugate beams.
Figure 1: Experimental setup and energy level diagram (inset) of the 4WM process in \({}^{85}\)Rb. The pump beam (P) generates twin probe (Pr) and conjugate beams (C) in a two-mode squeezed state. BS are 50/50 nonpolarizing beamsplitters; EOMs are electro-optical phase modulators; PZT is a piezoelectric transducer; DSO is a 1 GHz digital sampling oscilloscope; and LO are local oscillator fields for the homodyne detection schemes.
If we subtract the homodyne signals, we measure the noise power of the joint quadrature \(\hat{X}_{\theta}=\hat{X}_{\theta_{\mathrm{p}}}-\hat{X}_{\theta_{\mathrm{c}}}\), where \(\theta=\theta_{\mathrm{p}}+\theta_{\mathrm{c}}\). A typical noise spectrum as a function of the phase \(\theta\) is shown in Fig. 2. Squeezing is observed when \(\theta=0\) (point I in the figure). A frequency-dependent squeezing spectrum is observed by locking the LO phase \(\theta\) to point I using a noise-locking technique [32]. (Locking the phase to point III gives us the quadrature \(\hat{P}_{\theta}=\hat{P}_{\theta_{\mathrm{p}}}+\hat{P}_{\theta_{\mathrm{c}}}\).) Because the temporal modulation imparted by the EOMs to the beams may disturb the locking signal, we pulse the driving signal from the function generator to the EOMs at 40 Hz. The driving pulses are square pulses with a width of 12.5 ms. The signals from the probe and conjugate homodyne detectors consisted of \(10^{6}\) values, sampled over 10 ms, captured during the time the EOMs were on. The measurement window is intentionally smaller than the pulse width in order to avoid possible transient edge effects in the data. We used Welch's method [33] to obtain the final power spectrum of the measured noise.
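A minimal sketch of this post-processing step, assuming SciPy's implementation of Welch's method; the sampling rate follows from \(10^{6}\) samples over 10 ms, while the trace and segment length below are placeholders.

```python
import numpy as np
from scipy.signal import welch

fs = 1e8                              # 10^6 samples / 10 ms -> 100 MS/s
x_minus = np.random.randn(1_000_000)  # placeholder for the measured joint-quadrature trace
freqs, psd = welch(x_minus, fs=fs, nperseg=8192)  # averaged periodogram (Welch's method)
```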
Typical two mode squeezing spectra (\(\langle\overline{X_{-}^{2}}\rangle\) vs. frequency) taken with the LO phases locked at point I are shown in Fig. 3. When the EOMs are off (Fig. 3a), the squeezing spectrum extends over a bandwidth of approximately 15 MHz. Turning one EOM on (on either the probe or conjugate beam) with \(m=0.1\pi\) reduces squeezing at all frequencies. At twice the modulation index, squeezing is eliminated, with the noise well above the shot noise. When both EOMs are on with the same modulation index, the effect on the entangled signal depends on their relative phase (Fig. 3b). When the EOMs are driven in phase, they act together to reduce the squeezing signal, producing a spectrum similar to that of a single EOM with twice the modulation index acting on only one of the beams (Fig. 3a). When the EOMs are driven \(180^{\circ}\) out of phase, their effect on the two-mode squeezing cancels. With the EOMs driven at a relative phase of \(120^{\circ}\), the squeezing spectrum is similar to that seen with only one EOM on. These results are in agreement with the predictions of our model. They are the CV analog of the nonlocal modulation effect reported in Ref. [27] in the DV regime.
Full characterization of the two-mode squeezed states requires determining the covariance matrix \(C\) of the fields. In the ordered basis \((\hat{X}_{\mathrm{p}},\hat{X}_{\mathrm{c}},\hat{P}_{\mathrm{p}},\hat{P}_{ \mathrm{c}})\):
\[C=\begin{bmatrix}C_{XX}&C_{XP}\\ (C_{XP})^{T}&C_{PP}\end{bmatrix}, \tag{3}\]
where \(C_{XX}\), \(C_{XP}\), and \(C_{PP}\) are \(2\times 2\) matrices. The covariance matrix of the two-mode squeezed state is symmetric. \(C_{XX}\) and \(C_{PP}\) are associated with the amplitude \(X\!X\) and phase \(PP\) joint quadratures of the twin beams, respectively, while \(C_{XP}\) is the mutual correlation matrix between their \(X\) and \(P\) quadratures. When the EOMs are off, \(C_{XP}=0\), so the covariance matrix is block diagonal. Turning on the phase modulators couples the \(X\) and \(P\) quadratures of the fields, and \(C_{XP}\neq 0\).
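For orientation, the sketch below builds the covariance matrix of an ideal, lossless, unmodulated two-mode squeezed vacuum in this ordered basis, using the textbook form with the vacuum variance normalized to 1 (a convention assumed here, not stated in the text); it makes the block structure, and the vanishing of \(C_{XP}\) with the EOMs off, explicit.

```python
import numpy as np

def tms_covariance(r):
    """Covariance of an ideal two-mode squeezed vacuum in the basis (X_p, X_c, P_p, P_c).
    X quadratures are correlated, P quadratures anti-correlated, and C_XP = 0."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    C_XX = np.array([[c,  s], [ s, c]])
    C_PP = np.array([[c, -s], [-s, c]])
    C_XP = np.zeros((2, 2))
    return np.block([[C_XX, C_XP], [C_XP.T, C_PP]])

# var(X_p - X_c) = 2*exp(-2r) < 2 (the shot-noise level), i.e., two-mode squeezing.
```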
We can gain further insight into the characteristics of the nonlocal modulation of the EOMs by measuring the \(X\!P\) quadrature of the covariance matrix for the twin beams. For that, we lock the joint quadrature phase to \(\theta=\pi/4\) (point II in Fig. 2). In this way, we measure different quadratures (\(X\) of the probe and \(P\) of the conjugate) of the two beams. The locking scheme and how we recover the \(X\!P\) covariance matrix from the acquired time traces are detailed in [30].
Figure 2: Noise power of the sum (dashed blue line) and difference (solid black line) of the quadratures measured by the homodyne detectors as the phase \(\theta\) is varied. In both cases, the noise is analysed at a frequency of 1 MHz. By locking the phase to points I (\(\theta=0\)), II (\(\theta=\pi/4\)) or III (\(\theta=\pi/2\)), we can measure the joint quadratures \(X\!X\), \(X\!P\) or \(PP\), respectively, of the twin beams.
Figure 3: (a) Squeezing spectra obtained with both EOMs turned off (blue line) and with one EOM on, but at different modulation indexes: \(m\) (green line) and \(2m\) (black line); (b) Squeezing spectra for both EOMs running, at a modulation index \(m\), in phase (\(\phi=0^{\circ}\)) and out of phase (\(\phi=180^{\circ}\) and \(\phi=120^{\circ}\)). In all cases, \(m=0.1\pi\). Electronic noise was not subtracted, and thus the shot noise (red line) does not appear to be independent of frequency here [30].
Figure 4a shows the measured \(X\!P\) covariances. When the beams are not modulated, their \(X\) and \(P\) quadratures are not coupled. We also do not observe any correlations when the EOMs are operated at a 180\({}^{\circ}\) phase difference, due to the nonlocal phase modulation shown in Fig. 3(b). However, at a 0\({}^{\circ}\) phase difference, the \(X\) and \(P\) quadratures of the probe and conjugate beams are coupled, and positive correlations can be seen. The double diagonal structure in the covariance matrix corresponds to the frequency sidebands introduced by the EOMs and demonstrates the multimode nature of the phase-modulated joint field quadratures. A single EOM, driven at twice the modulation index, produces correlations similar to those produced by two in-phase EOMs. The \(X\!P\) measurements are phase sensitive: not only is the relative phase of the EOMs important, but the phase of the data windows with respect to the EOM drive also matters. In all cases, while changing the driving phase of the conjugate EOM, we kept the phase of the probe EOM fixed at 0\({}^{\circ}\).
In all results presented so far, both EOMs were placed in the path of the twin beams. This configuration is of interest for quantum information processing applications, such as the production of cluster states or quantum key distribution. An equivalent measurement can be made with the EOM in a local oscillator beam. This does not create an entangled state, but the measurement result is the same as if it did. Recent proposals pointed out that multi-mode homodyne detection can realize compact Gaussian quantum computation by selecting appropriate LO measurement choices [34, 35, 36], which includes digital post-processing. In Fig. 4(b), we show the results obtained when both EOMs are placed in the LO paths. It is clear that the effect of the EOMs on the measured beam correlations is the same as the one observed with the EOMs placed in the beams, except for a change of sign in the correlations. Placing one EOM in the probe beam and the other EOM in the LO of the conjugate beam causes a different effect on the measured correlations, as shown in Fig. 4(c). In this case, the EOMs cancel each other when they are in phase and couple the \(X\) and \(P\) quadratures when they are out of phase.
One can view the arrangement in Fig. 1 as a "truncated" version of the SU(1,1) interferometer [37] that requires homodyne detection to read out the phase. This configuration can be seen as a pair of interferometers, each comprising a (very noisy) signal beam plus an LO beam. These interferometers will, however, have quantum-correlated signals, and any phase shift written onto one of the beams can be detected at a sub-shot-noise level in the difference signals. When one views the interferometers independently, it is clear that it does not matter whether the phase shift is written onto the "signal" beam or onto the LO beam - either way, the homodyne output contains the signal. Because of the geometry, however, a similar phase shift written on the LO will appear as a phase shift of the opposite sign to one written onto the signal beam. If the signal and the local oscillator have the same phase shifts, the detector will not see it.
In conclusion, we have studied the effects of electro-optical phase modulation on two-mode squeezing of multi-frequency-mode, continuous variable twin beams. The probe and conjugate modulations interfere nonlocally to modify the beam correlations, which are controlled by adjusting the relative driving phase of the modulators. We found that the modulators acted cumulatively to determine the effective modulation index. We believe that our setup is a potential platform for further experimental studies on cluster state generation, quantum erasing, and quantum sensing. The ability to manipulate twin beam correlations via nonlocal phase modulation has important implications for those fields. For example, positioning the EOM in the local oscillator should allow the implementation of compressed sensing for quantum system characterization, thereby measuring the appropriate frequency mode combinations, rather than mixing the modes directly. Positioning the EOMs in the local oscillators can also bring an experimental advantage, since it avoids introducing additional losses in the signal beams, allowing for larger squeezing signals. A recent proposal for generating hypercubic cluster
Figure 4: Measured \(X_{p}P_{c}\) covariance blocks for (a) both EOMs in the twin beams, (b) both EOMs in the local oscillator beams and (c) one EOM in the probe beam and the other EOM in the conjugate local oscillator beam. When both EOMs are on, their modulation index is \(m=0.1\pi\). The double diagonal structure of the correlations (magnified inset) corresponds to the first-order frequency sidebands due to the periodic modulation of the beams by the EOMs. For each square, the horizontal and vertical axes correspond to 200 kHz frequency bins spanning the range 200 kHz to 10 MHz [30].
states [21] suggests using an EOM to couple different frequency qumodes of two-mode entangled beams and is a natural next step for the present experiments. We have found that in generating entangled states, it is completely equivalent to use a single EOM in one of the twin beams or to have an EOM in each beam. A complex waveform may be required, but the nonlocal nature of the modulation allows all of the modulation to take place in one beam. Because the bandwidth of our squeezed light is limited, we are able to digitize the measurements across the entire spectrum. This would allow one to implement the direct approach to measurement-based computing suggested in [34; 35; 36].
This work was supported by the Air Force Office of Scientific Research (FA9550-16-1-0423). Luis E. E. de Araujo acknowledges the financial support of grant #2019/24743-9, Sao Paulo Research Foundation (FAPESP). We acknowledge Alessandro Restelli for the help with the lock circuits. This research was performed while Matthew DiMario held a National Research Council Research Associateship at NIST.
|
2302.10930 | Active galactic nuclei with high-resolution X-ray spectroscopy | The imminent launch of XRISM will usher in an era of high-resolution X-ray
spectroscopy. For active galactic nuclei (AGN) this is an exciting epoch that
is full of massive potential for uncovering the ins and outs of supermassive
black hole accretion. In this work, we review AGN research topics that are
certain to advance in the coming years with XRISM and prognosticate the
possibilities with Athena and Arcus. Specifically, our discussion focuses on:
(i) the relatively slow moving ionised winds known as warm absorbers and
obscurers; (ii) the iron emitting from different regions of the inner and outer
disc, broad line region, and torus; and (iii) the ultrafast outflows that may
be the key to understanding AGN feedback. | Luigi C. Gallo, Jon M. Miller, Elisa Costantini | 2023-02-21T19:00:01Z | http://arxiv.org/abs/2302.10930v2 | # Active galactic nuclei with high-resolution X-ray spectroscopy
###### Abstract
The imminent launch of XRISM will usher in an era of high-resolution X-ray spectroscopy. For active galactic nuclei (AGN) this is an exciting epoch that is full of massive potential for uncovering the ins and outs of supermassive black hole accretion. In this work, we review AGN research topics that are certain to advance in the coming years with XRISM and prognosticate the possibilities with Athena and Arcus. Specifically, our discussion focuses on: (i) the relatively slow moving ionised winds known as warm absorbers and obscurers; (ii) the iron emitting from different regions of the inner and outer disc, broad line region, and torus; and (iii) the ultrafast outflows that may be the key to understanding AGN feedback.
## 1 Introduction
Active galactic nuclei (AGN) are unlike any other class of astronomical object. They cannot be described by a single, dominating process. Instead, AGN radiate energy over the entire electromagnetic spectrum, and are the sites of pair-production, cosmic rays, and gravitational waves. Radiation is created through multiple processes, |
2303.11250 | The continuum of metastable conical states of monoaxial chiral
helimagnets | At low temperature and zero applied magnetic field, besides the equilibrium
helical state, monoaxial chiral helimagnets have a continuum of helical states
differing by the wave number of the modulation. The wave number of these states
in units of the equilibrium state wave number is denoted here by p, and
accordingly the corresponding states are called the p-states. In this work we
study in detail the metastability of the p-states. The application of an
external magnetic field in the direction of the chiral axis has a double
effect: on one hand, it introduces a conical deformation of the p-states, and
on the other hand it destabilizes some of them, shrinking the range of p in
which the p-states are metastable. If a polarized current is applied along the
chiral axis, the p-states reach a steady moving state with a constant velocity
proportional to the current intensity. Besides this dynamical effect, the
polarized current also induces a conical deformation and reduces the range of
stability of the p-states. The stability diagram in the plane applied field -
applied current intensity has interesting features that, among other things,
permit the manipulation of p-states by a combination of applied fields and
currents. These features can be exploited to devise processes to switch between
p-states. In particular there are p-states with negative p, opening the
possibility to helicity switching. The theoretical feasibility of such
processes, crucial from the point of view of applications, is shown by
micromagnetic simulations. Analogous $p$-states exist in cubic chiral
helimagnets and therefore similar effects are expected in those systems. | V. Laliena, S. A. Osorio, D. Bazo, S. Bustingorry, J. Campo | 2023-03-20T16:34:54Z | http://arxiv.org/abs/2303.11250v1 | # The continuum of metastable conical states of monoaxial chiral helimagnets
###### Abstract
At low temperature and zero applied magnetic field, besides the equilibrium helical state, monoaxial chiral helimagnets have a continuum of helical states differing by the wave number of the modulation. The wave number of these states in units of the equilibrium state wave number is denoted here by \(p\), and accordingly the corresponding states are called the \(p\)-states. In this work we study in detail the metastability of the \(p\)-states. The application of an external magnetic field in the direction of the chiral axis has a double effect: on one hand, it introduces a conical deformation of the \(p\)-states, and on the other hand it destabilizes some of them, shrinking the range of \(p\) in which the \(p\)-states are metastable. If a polarized current is applied along the chiral axis, the \(p\)-states reach a steady moving state with a constant velocity proportional to the current intensity. Besides this dynamical effect, the polarized current also induces a conical deformation and reduces the range of stability of the \(p\)-states. The stability diagram in the plane applied field - applied current intensity has interesting features that, among other things, permit the manipulation of \(p\)-states by a combination of applied fields and currents. These features can be exploited to devise processes to switch between \(p\)-states. In particular there are \(p\)-states with negative \(p\), opening the possibility to helicity switching. The theoretical feasibility of such processes, crucial from the point of view of applications, is shown by micromagnetic simulations. Analogous \(p\)-states exist in cubic chiral helimagnets and therefore similar effects are expected in those systems.
## I Introduction
Noncollinear magnetic textures such as magnetic helices, domain walls, vortices, or skyrmions are very promising for spintronic applications due to the possibility to control them using different external stimuli, like magnetic fields or polarized electric currents [1; 2; 3; 4; 5; 6]. To be useful, these magnetic textures have to be (meta)stable in some part of the relevant parameter space. Noncollinear magnetic textures appear, in particular, as equilibrium states at low temperature in chiral magnets, which are characterized by the presence of a sizable Dzyaloshinskii-Moriya interaction (DMI). The most studied systems of this kind are cubic chiral helimagnets and films with interfacial DMI, which host skyrmions and skyrmion lattices [5; 6; 7; 8; 9]. Monoaxial chiral helimagnets, in which the DMI propagates only along a single direction (the chiral axis), have received comparatively less attention. Besides the archetypal CrNb\({}_{3}\)S\({}_{6}\), other known monoaxial chiral helimagnets are MnNb\({}_{3}\)S\({}_{6}\), CrTa\({}_{3}\)S\({}_{6}\), CuB\({}_{2}\)O\({}_{4}\), CuCsCl\({}_{3}\), Yb(Ni\({}_{1-x}\)Cu\({}_{x}\))\({}_{3}\)Al\({}_{9}\), and Ba\({}_{2}\)CuGe\({}_{2}\)O\({}_{7}\)[10; 11; 12; 13; 14; 15; 16; 17; 18].
Not surprisingly, monoaxial chiral helimagnets also possess a strong uniaxial magnetic anisotropy (UMA) along the chiral axis, which is of easy-plane type in CrNb\({}_{3}\)S\({}_{6}\). The competition of the exchange interaction, the DMI, the UMA and the applied field determines the equilibrium state at low enough temperature, where thermal fluctuations are only a minor effect. At zero external field the equilibrium state is a magnetic helix with wave vector along the chiral axis and wave number determined by the competition of the exchange interaction and the DMI. If a low enough external field is applied in a direction perpendicular to the chiral axis the equilibrium state is a Chiral Soliton Lattice (CSL)[19; 20; 21; 22]. If instead the magnetic field is applied in the direction of the chiral axis, the equilibrium state is a conical state [23; 24; 25; 26; 27]. These two magnetic textures, the CSL and the conical state, are of different nature: the CSL is solitonic while the conical state is helical. If the field direction is neither perpendicular nor parallel, the equilibrium state is a one-dimensional modulated texture which connects smoothly the two limiting cases as the direction of the magnetic field is varied from perpendicular to parallel to the chiral axis [26]. In all cases, for sufficiently large magnetic fields the equilibrium state is the forced ferromagnetic state (FFM), which has a uniform magnetization pointing in the direction of the external field. The different nature of the CSL and the conical states is manifested in the transition to the FFM state: in the former case it is of nucleation type and in the latter of instability type, in de Gennes's terminology [28]. These two different kinds of phase boundaries are separated by tricritical points in the temperature - applied field phase diagram [27; 29]. The phase diagram of monoaxial chiral helimagnets and the nature of the phase boundaries in the temperature-applied magnetic field space, determined experimentally by several groups [24; 30; 31; 32; 33], agree well with these theoretical predictions.
It was shown in [34] that, at low temperature, besides the conical equilibrium state a continuum of conical states differing by the wave number and the magnetization component along the chiral axis are local minima of the energy functional of monoaxial chiral helimagnets with external magnetic field applied along the chiral axis. Similar local minima of the energy are present in cubic chiral helimagnets [35]. These conical states, called here \(p\)-states, include states with helicity opposite to the helicity of the equilibrium state, which, although energetically disfavoured by the DMI, remain as metastable states in some range of the applied magnetic field. It is also remarkable that among this continuum of metastable states there are some which are ferromagnetic, with the uniform magnetization pointing in a direction determined by the competition between the UMA and the applied field.
In this work we analyze in detail the properties of these \(p\)-states of monoaxial chiral helimagnets, clarifying their role as metastable states and studying their behavior under the action of polarized electric currents. One conclusion of this analysis is the possibility of switching between metastable conical states with different wave vectors, including the possibility of helicity reversal. This is clearly of great interest for applications. Indeed, it has been argued recently that controlled switching among magnetic states with opposite helicity might be used for memory applications [36].
## II A continuum of conical states
Consider a monoaxial chiral helimagnet, such as CrNb\({}_{3}\)S\({}_{6}\), with chiral axis along \(\mathbf{z}\) (we shall use \(\mathbf{x},\mathbf{y},\mathbf{z}\) as the orthonormal vector triad in space). At low enough temperature the local magnetization is given by \(M_{\rm S}\mathbf{n}\), where \(\mathbf{n}\) is a unit vector field that describes the magnetization direction at each point of the material and the constant \(M_{\rm S}\) is the saturation magnetization. The magnetic energy is given by the functional \(E[\mathbf{n}]=\int d^{3}r\,e(\mathbf{r})\), with
\[e(\mathbf{r})=A\sum_{i}(\partial_{i}\mathbf{n})^{2}-D\mathbf{z}\cdot(\mathbf{n}\times\partial _{z}\mathbf{n})-K(\mathbf{z}\cdot\mathbf{n})^{2}-M_{\rm S}B\mathbf{z}\cdot\mathbf{n}. \tag{1}\]
In the above equation the index \(i\) runs over \(\{x,y,z\}\), \(A\), \(D\), and \(K\) stand for the exchange stiffness constant, and the DMI and UMA strength constants, respectively, and \(B\mathbf{z}\) is the applied magnetic field. We consider \(K<0\) to have an easy plane perpendicular to \(\mathbf{z}\). The DMI acts only along the \(\mathbf{z}\) axis, defining the chiral axis (notice that the external field is applied along the chiral axis). The sign of \(D\) is reversed if we reverse the direction of the \(\mathbf{z}\) axis, so that, with no loss of generality, we take \(D>0\). It is convenient to introduce the parameters
\[q_{0}=\frac{D}{2A},\quad\kappa=\frac{4AK}{D^{2}},\quad h=\frac{2AM_{\rm S}}{D^ {2}}B. \tag{2}\]
Notice that \(q_{0}\) has the dimensions of inverse length while \(\kappa\) and \(h\) are dimensionless. We do not include explicitly in the energy the magnetostatic energy, whose effect in an infinite system in which the magnetization depends only on \(z\) (as it is in this work) is completely absorbed in the UMA [37].
The dynamics of \(\mathbf{n}\) obeys the Landau-Lifschitz-Gilbert (LLG) equation
\[\partial_{t}\mathbf{n}=\gamma\mathbf{B}_{\rm eff}\times\mathbf{n}+\alpha\mathbf{n}\times \partial_{t}\mathbf{n}+\mathbf{\tau}, \tag{3}\]
where \(\alpha\) and \(\gamma\) are the Gilbert damping parameter and the gyromagnetic constant, respectively, and \(\mathbf{\tau}\) stands for some applied nonconservative torque not included in the energy (1). The effective field acting on \(\mathbf{n}\) is given by
\[\mathbf{B}_{\rm eff}\!=\!\frac{2A}{M_{\rm S}}\!\Big{(}\nabla^{2}\mathbf{n}-2q_{0}\mathbf{z }\times\partial_{z}\mathbf{n}+q_{0}^{2}\kappa(\mathbf{z}\cdot\mathbf{n})\mathbf{z}+q_{0}^{2}h \mathbf{z}\Big{)}. \tag{4}\]
In absence of external torque \(\mathbf{\tau}\) the equilibrium states are solutions of the static equation \(\mathbf{B}_{\rm eff}=\lambda\mathbf{n}\), where \(\lambda\) is a Lagrange multiplier enforcing the constraint \(\mathbf{n}^{2}=1\). For \(h\geq h_{\rm c}\), where \(h_{\rm c}=1-\kappa>1\) is the critical field, the equilibrium state is the homogeneous FFM state, with the magnetization pointing along the \(\mathbf{z}\) direction: \(\mathbf{n}=\mathbf{z}\). For \(h<h_{\rm c}\) the static equation admits solutions which are modulated states with the form of a conical helix propagating along the chiral axis. With the parametrization
\[\mathbf{n}=\sin\theta\cos\varphi\,\mathbf{x}+\sin\theta\sin\varphi\,\mathbf{y}+\cos\theta \,\mathbf{z}, \tag{5}\]
these modulated states are given by [38]
\[\cos\theta_{p}=\frac{h}{h_{\rm c}-(p-1)^{2}},\qquad\varphi_{p}(z)=pq_{0}z, \tag{6}\]
Figure 1: Energy density for \(h=0\) and \(h=2<h_{\rm c}\), as a function of \(p=q/q_{0}\). The minimum value always corresponds to the equilibrium state with \(p=1\). States indicated with continuous lines are stable against localized deformations, while dashed lines indicate unstable states, as shown in Sec. IV.1. The grey regions indicate the gap in \(p\) values where there are no states satisfying \(|\cos\theta_{p}|\leq 1\).
where \(p\) is the wave number in units of \(q_{0}\). It is convenient to label these states by \(p\), writing \(\theta_{p}\), \(\varphi_{p}\) and \(\mathbf{n}_{p}\). For the sake of brevity, these states will be referred to as the \(p\)-states, i.e. a \(p\)-state is a conical state with wave number \(q=pq_{0}\). Since \(|\cos\theta_{p}|\leq 1\), the range of \(p\) is limited to
\[1-\sqrt{h_{\rm c}-|h|}\leq p\leq 1+\sqrt{h_{\rm c}-|h|}. \tag{7}\]
Notice that \(h_{\rm c}>1\) because we consider easy-plane anisotropy. Hence, for \(|h|\) small enough \(p\) can be negative. These \(p\)-states have helicity against the DMI. In a range of \(h\) there is also a state with \(p=0\), which is a ferromagnetic state with the magnetization component along the chiral axis given by \(n_{z}=h/(h_{\rm c}-1)\). We shall comment on these rather unexpected states in Sec. IV.3.
The energy density of the \(p\)-states is given by
\[e(p)=Aq_{0}^{2}\left[(p-1)^{2}-1-\frac{h^{2}}{h_{\rm c}-(p-1)^{2}}\right]. \tag{8}\]
The minimum of the energy corresponds to \(p=1\) for all \(|h|\leq h_{\rm c}\), which means that the equilibrium states are those with \(p=1\) (wave number \(q_{0}\)). It is shown in Sec. IV.1 that for \(|h|<h_{\rm c}\) there exists a range of \(p\) around \(p=1\) in which \(p\)-states are metastable. This implies that these \(p\)-states are local minima of the energy functional and, therefore, small perturbations around them are damped as they evolve according to the LLG equation.
The energy density of the \(p\)-states as a function of \(p\) is displayed in Fig. 1 for \(h=0\) and \(h=2<h_{\rm c}\), where \(h_{\rm c}=6\) approximately corresponds to CrNb\({}_{3}\)S\({}_{6}\). The state with minimum energy corresponds always to \(p=1\) (red dots). The metastable \(p\)-states are located in a finite range around \(p=1\) signaled by the continuous lines. Outside this range the \(p\)-states are unstable (dashed lines). Notice that for \(h\neq 0\) there is a gap in \(p\) values for which there are no states satisfying \(|\cos\theta_{p}|\leq 1\).
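As a quick numerical illustration of Eqs. (7) and (8) (a minimal Python sketch, not part of the original analysis; the value \(h_{\rm c}=6\) is the one used for Fig. 1), one can tabulate \(e(p)\) over the allowed range of \(p\) and confirm that the minimum always sits at \(p=1\):

```python
import numpy as np

def e_reduced(p, h, hc):
    """Energy density of a p-state in units of A*q0**2, Eq. (8)."""
    return (p - 1.0)**2 - 1.0 - h**2 / (hc - (p - 1.0)**2)

hc = 6.0  # approximately the CrNb3S6 value used for Fig. 1
for h in (0.0, 2.0):
    # existence range of the p-states, Eq. (7)
    p_lo, p_hi = 1.0 - np.sqrt(hc - abs(h)), 1.0 + np.sqrt(hc - abs(h))
    p = np.linspace(p_lo + 1e-6, p_hi - 1e-6, 20001)
    e = e_reduced(p, h, hc)
    p_star = p[np.argmin(e)]
    print(f"h = {h}: p in [{p_lo:.3f}, {p_hi:.3f}], minimum of e(p) at p = {p_star:.3f}")
    assert abs(p_star - 1.0) < 1e-3   # the equilibrium state is always p = 1
```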
It is remarkable that, in spite of what the form of Fig. 1 may suggest, states with \(p\neq 1\) are metastable since the value of \(p\) cannot be changed by small perturbations. Indeed, consider a small variation \(\delta p\) of \(p\). A straightforward computation shows that for \(\delta p\to 0\)
\[\mathbf{n}_{p+\delta p}-\mathbf{n}_{p}\sim 2\sin\theta_{p}\sin\left(\frac{\delta pq_{0 }z}{2}\right)\mathbf{u}(z), \tag{9}\]
where
\[\mathbf{u}(z)=-\sin\big{(}(p+\delta p/2)q_{0}z\big{)}\mathbf{x}+\cos\big{(}(p+\delta p /2)q_{0}z\big{)}\mathbf{y} \tag{10}\]
is a unit vector. This means that a small change of \(p\) cannot be considered a small perturbation of \(\mathbf{n}_{p}\), since \(|\mathbf{n}_{p+\delta p}-\mathbf{n}_{p}|\) is not small for \(\delta pq_{0}z\) close to \(\pi\). This may be clearer in a bounded system, of length \(R\), with periodic boundary conditions: then the minimum \(\delta p\) is \(2\pi/q_{0}R\) and \(0\leq z\leq R\), so that for this minimum \(\delta p\) we have \(\delta pq_{0}z=\pi\) if \(z=R/2\). Summarizing, the situation is the following: 1) the metastability of a state is related to the behaviour of its energy under small perturbations; 2) a change of \(p\), however small, is not a small perturbation of the \(p\)-state; 3) it is incorrect to infer from Fig. 1 that the \(p\)-states are not metastable.
The discussion of the previous paragraph implies that although the energy densities of the \(p\)-states corresponding to \(p\) and \(p+\delta p\) are close, if the stability ellipses of \(p\) and \(p+\delta p\) enclose the point \((h,\Gamma=0)\) these \(p\)-states are separated by energy barriers in the whole configuration space of \(\mathbf{n}\), for this value of \(h\). The lifetime of the metastable \(p\)-states depends on the height of these energy barriers. This is a question that cannot be tackled with the methods used in this work. In any case, we expect the lifetime to increase with decreasing temperature, a question that deserves further study.
## III Steady motion of the conical states under the action of a polarized current
In this section we study the response of the \(p\)-states to a polarized electric current applied along the chiral axis. If the current density is \(\mathbf{j}=-j\mathbf{z}\), the magnetic torque delivered by the current is given by
\[\mathbf{\tau}=-jb_{j}\big{(}\partial_{z}\mathbf{n}-\beta\mathbf{n}\times\partial_{z}\mathbf{ n}\big{)}, \tag{11}\]
with \(b_{j}=P\mu_{\rm B}/(|e|M_{\rm s})\), where \(P\) is the polarization degree of the current, \(e\) is the electron charge, and \(\mu_{B}\) is the Bohr magneton [39]. The first term is the reactive (adiabatic) torque and the second term the dissipative (non-adiabatic) torque, whose strength is controlled by the nonadiabaticity coefficient \(\beta\)[40].
We start by seeking steady solutions of the LLG equation (3) which have the form of a state that moves rigidly with constant velocity, \(v\), along the \(\mathbf{z}\) direction. The general steady solution is characterized by two functions, \(\theta(w)\) and \(\varphi(w)\), of the variable \(w=q_{0}(z-vt)\). Inserting this _ansatz_ in the LLG equations we obtain the steady motion equations, which can be cast into the form
\[\theta^{\prime\prime}-(\varphi^{\prime}-1)^{2}\sin\theta\cos\theta +(h_{\rm c}\cos\theta-h)\sin\theta+\] \[\Omega\theta^{\prime}-\Gamma\sin\theta\varphi^{\prime}=0, \tag{12}\] \[\sin\theta\varphi^{\prime\prime}+2\cos\theta\theta^{\prime}(\varphi^ {\prime}-1)+\Gamma\theta^{\prime}+\Omega\sin\theta\varphi^{\prime}=0, \tag{13}\]
with the primes standing for derivatives with respect to \(w\) and
\[\Omega =\frac{\alpha q_{0}}{\omega_{0}}\left(v-\frac{\beta}{\alpha}b_{j} j\right), \tag{14}\] \[\Gamma =\frac{q_{0}}{\omega_{0}}\left(v-b_{j}j\right), \tag{15}\]
where the quantity \(\omega_{0}=2\gamma q_{0}^{2}A/M_{\rm S}\) has the dimensions of a frequency. Notice that the spin transfer torque, the Gilbert damping, the nonadiabaticity coefficient, and the steady velocity enter the equations of steady motion only through the parameters \(\Omega\) and \(\Gamma\).
The solutions of Eqs. (12) and (13) with constant \(\theta=\theta_{p}\) and \(\varphi^{\prime}=p\) correspond to steady moving \(p\)-states. In this case Eq. (12) is satisfied if
\[\cos\theta_{p}=\frac{h+p\Gamma}{h_{\rm c}-(p-1)^{2}}. \tag{16}\]
This steady moving \(p\)-state exists only if \(|h+p\Gamma|\leq h_{\rm c}-(p-1)^{2}\). The stability of these solutions is analyzed in Sec. IV.1.
To have a solution with constant \(\theta=\theta_{p}\) and \(\varphi^{\prime}=p\), Eq. (13) requires \(\Omega=0\), which provides the relation between the steady velocity and the intensity of the applied current,
\[v=\frac{\beta}{\alpha}b_{j}j, \tag{17}\]
and thus \(\Gamma\) becomes proportional to the current density,
\[\Gamma=\frac{(\beta-\alpha)q_{0}}{\alpha\omega_{0}}b_{j}j. \tag{18}\]
We see that the steady state velocity increases linearly with the current density, with a mobility \(m=(\beta/\alpha)b_{j}\) which is independent of the system parameters \(\kappa\) and \(h\). The same behavior occurs for domain walls [40], for \(360^{o}\) domain walls [41; 42], and for the isolated solitons and the chiral soliton lattice of monoaxial chiral helimagnets [43; 44; 45]. Therefore this relation between steady velocity and applied current density seems to be a universal feature of the response of one dimensional magnetic modulated states to polarized currents.
Equation (17) implies that \(v=0\) if \(\beta=0\), so that the steady moving solution is actually static if there is no dissipative torque. In this case, after applying the current the system reaches a different equilibrium state, a static \(p\)-state with cone angle given by equation (16). Notice also that the case \(\beta=\alpha\) is special, since then \(\Omega=0\) and \(\Gamma=0\), and therefore Eqs. (12) and (13) are independent of the applied current. This implies that in this case the \(p\)-state is rigidly dragged by the current, with velocity \(v=b_{j}j\), keeping the cone angle equal to its static value.
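To give a feeling for the magnitudes involved in Eqs. (17) and (18), here is a minimal numerical sketch (not from the original work; the current density \(j\), the polarization \(P\), and the values \(\alpha=0.01\), \(\beta=0.02\) are assumptions, \(A\), \(D\) and \(M_{\rm S}\) are the CrNb\({}_{3}\)S\({}_{6}\)-like values used later in Sec. V, and \(\gamma\) is taken as the free-electron gyromagnetic ratio):

```python
import numpy as np

mu_B, e = 9.274e-24, 1.602e-19       # Bohr magneton [J/T], elementary charge [C]
gamma = 1.76e11                      # gyromagnetic ratio [1/(T s)] (assumed free-electron value)

A, D, M_S = 1.42e-12, 369e-6, 129e3  # exchange [J/m], DMI [J/m^2], saturation magnetization [A/m]
alpha, beta, P = 0.01, 0.02, 1.0     # damping, non-adiabaticity, current polarization (assumed)
j = 1e12                             # applied current density [A/m^2] (assumed)

q0 = D / (2 * A)                     # Eq. (2)
omega0 = 2 * gamma * q0**2 * A / M_S
b_j = P * mu_B / (e * M_S)           # spin-transfer coefficient [m^3/C]

v = (beta / alpha) * b_j * j                               # steady velocity, Eq. (17)
Gamma = (beta - alpha) * q0 * b_j * j / (alpha * omega0)   # Eq. (18)
print(f"v = {v:.0f} m/s, Gamma = {Gamma:.2f}")
```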
## IV Stability of the magnetic states
In this section we analyze the stability of magnetic states against small perturbations. The section is divided into three subsections: one dealing with the stability of the \(p\)-states, another one devoted to the stability of the FFM state, and the last one in which we discuss the main features of the stability diagram.
### Stability of the conical states
We analyze here the stability of the generic steady moving \(p\)-state obtained for given \(h\) and \(\Gamma\). The static \(p\)-states discussed in Sec. II are the particular cases \(\Gamma=0\) of this general analysis. Here a \(p\)-state is a steady moving state if \(\Gamma\neq 0\) and a static state if \(\Gamma=0\).
Let \(\mathbf{n}_{p}\) be the (unitary) magnetization field of the steady moving \(p\)-state, with \(\theta_{p}\) described by (16) and \(\varphi_{p}=pq_{0}(z-vt)\), with \(v\) given by (17). A small perturbation of \(\mathbf{n}_{p}\) is given by two fields, \(\xi_{1}\) and \(\xi_{2}\), which depend on the three coordinates \(x\), \(y\), \(z\), and on time \(t\), so that, for small enough \(\xi_{1}\) and \(\xi_{2}\), the perturbed magnetization is given by
\[\mathbf{n}=\sqrt{1-\xi_{1}^{2}-\xi_{2}^{2}}\,\mathbf{n}_{p}+\xi_{1}\,\mathbf{e}_{1}+\xi_{2}\,\mathbf{e}_{2}, \tag{19}\]
where \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{n}_{p}\}\) form a right-handed orthonormal triad. We take
\[\mathbf{e}_{1} =\cos\theta_{p}\cos\varphi_{p}\,\mathbf{x}+\cos\theta_{p}\sin\varphi_ {p}\,\mathbf{y}-\sin\theta_{p}\,\mathbf{z}, \tag{20}\] \[\mathbf{e}_{2} =-\sin\varphi_{p}\,\mathbf{x}+\cos\varphi_{p}\,\mathbf{y}. \tag{21}\]
We require that, for fixed \(t\), the fields \(\xi_{1}\) and \(\xi_{2}\) be square integrable functions of \((x,y,z)\), to ensure that the energy of the perturbation is finite.
The perturbed magnetization has to be a solution of the LLG equation. Inserting Eq. (19) into (3) we obtain the equations for the dynamics of the perturbation \(\xi=(\xi_{1},\xi_{2})^{T}\). Expanding in powers of \(\xi_{1}\) and \(\xi_{2}\), we have to linear order
\[\partial_{t}\xi=\frac{\omega_{0}}{(1+\alpha^{2})q_{0}^{2}}\,\mathcal{D}\xi, \tag{22}\]
where \(\mathcal{D}\) is a \(2\times 2\) operator matrix whose matrix elements are the linear differential operators
\[\mathcal{D}_{11} =\alpha(\nabla^{2}-a)+\big{[}\Delta-(1+\alpha\beta)b\big{]} \partial_{z}, \tag{23}\] \[\mathcal{D}_{12} =\nabla^{2}-\big{[}\alpha\Delta+(\beta-\alpha)b\big{]}\partial_{z},\] (24) \[\mathcal{D}_{21} =-\nabla^{2}+a+\big{[}\alpha\Delta+(\beta-\alpha)b\big{]} \partial_{z},\] (25) \[\mathcal{D}_{22} =\alpha\nabla^{2}+\big{[}\Delta-(1+\alpha\beta)b\big{]}\partial_{ z}, \tag{26}\]
with
\[a=q_{0}^{2}\,\big{(}h_{\rm c}-(p-1)^{2}\big{)}\sin^{2}\theta_{p}, \tag{27}\]
\[\Delta=2q_{0}(p-1)\cos\theta_{p},\qquad b=\frac{q_{0}\alpha}{\beta-\alpha}\Gamma, \tag{28}\]
where we assumed \(\alpha\neq\beta\). The case \(\alpha=\beta\) is special, as we said before, since then \(\Gamma=0\) for any value of the applied current. In this case \(b=q_{0}^{2}b_{j}j/\omega_{0}\).
Stability requires that the spectrum of \(\mathcal{D}\) lies in the complex half-plane with non positive real part. Since \(\mathcal{D}_{ij}\) are linear differential operators with constant coefficients, the spectrum of \(\mathcal{D}\) can be readily obtained by Fourier transform. Details on the calculations leading to the stability conditions are given in Appendix A. Here we collect the conclusions. A necessary condition for stability is \(a\geq 0\), which gives the following bounds for the \(p\) values of stable \(p\)-states:
\[1-\sqrt{h_{\rm c}}\leq p\leq 1+\sqrt{h_{\rm c}}. \tag{29}\]
It is shown in Appendix A that, having \(p\) within these bounds, the \(p\)-state is stable only in the region of the \((h,\Gamma)\) plane enclosed by the ellipse of equation
\[A(p)\Gamma^{2}+2B(p)\Gamma h+C(p)h^{2}=D(p), \tag{30}\]
where the functions \(A(p)\), \(B(p)\), \(C(p)\), and \(D(p)\) are independent of \(h\) and \(\Gamma\), and are given in Appendix A. The stability ellipses of the \(p\)-states are centered at \((0,0)\) and have the principal axes rotated with respect to the coordinate axes. The amount of rotation depends on \(p\).
The stability of the static \(p\)-states discussed in Sec. II is obtained by setting \(\Gamma=0\) in this general approach. Thus, the static \(p\)-state is stable in the range of \(h\) determined by the intersection of its stability ellipse with the \(\Gamma=0\) axis.
Figure 2 displays the stability ellipses for several values of \(p\) in the \((h,\Gamma)\) plane, for \(h_{\rm c}=6\), which approximately corresponds to CrNb\({}_{3}\)S\({}_{6}\). For each \(p\) value, the \(p\)-state is metastable for \((h,\Gamma)\) inside the corresponding ellipse, and unstable outside it.
The region of the \((h,\Gamma)\) plane in which there exists some stable steady moving \(p\)-state is bounded by the envelope of the one-parametric family of ellipses (parametrized by \(p\)) given by Eq. (30). The envelope can be readily found and it has four branches given by
\[\begin{split}\Gamma&=2\sigma(h)\left[1\pm\sqrt{|h|-(h_{\rm c}-1)}\right],\\ \Gamma&=-h\pm 2\sqrt{h_{\rm c}}\left[\sqrt{h_{\rm c}(h_{\rm c}-1)\pm\sqrt{h_{\rm c}}\,h}-h_{\rm c}\right],\end{split} \tag{31}\]
where \(\sigma(h)\) is the sign function: \(\sigma(h)=1\) if \(h\geq 0\) and \(\sigma(h)=-1\) if \(h<0\). The first line of Eq. (31) describes the left and right branches, while the second line, with the same sign taken in both places, describes the upper and lower branches. The parametric equations of the envelope are given at the end of Appendix A. We call the region enclosed by this envelope the _stability region of conical states_. No modulated state is stable outside this region.
The four branches of the envelope in the case \(h_{\rm c}=6\) are shown in red in Fig. 2. Along each branch \(p\) changes continuously within its bounds, from \(1-\sqrt{h_{\rm c}}\) to \(1+\sqrt{h_{\rm c}}\). Each ellipse, determined by a given value of \(p\), is tangent to the envelope at four points, one for each branch. These four points, which depend on \(p\), define the four pairs of functions shown in Fig. 3. The red points in Figs. 2 and 3 correspond to \(p=2\). The detailed features of the stability diagram will be further discussed in Sec. IV.3.
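The closed-form branches in Eq. (31) can be checked against the parametric representation of the envelope given at the end of Appendix A. A minimal numerical sketch of this consistency check (with \(h_{\rm c}=6\), as in Fig. 2) is:

```python
import numpy as np

hc = 6.0
u = np.linspace(-np.sqrt(hc), np.sqrt(hc), 1001)   # u = p - 1 over the allowed range

# parametric right and upper branches of the envelope (Appendix A)
h_right, G_right = u**2 + 2*u + hc, -2*u
h_upper, G_upper = -(u**2 + 2*hc*u + hc)/np.sqrt(hc), (u**2 + hc)/np.sqrt(hc)

# closed-form branches, Eq. (31)
def gamma_left_right(h, sign):
    return 2*np.sign(h)*(1 + sign*np.sqrt(np.abs(h) - (hc - 1)))

def gamma_upper_branch(h):
    return -h - 2*np.sqrt(hc)*(np.sqrt(hc*(hc - 1) - np.sqrt(hc)*h) - hc)

# on the right branch, the relevant root of the closed form changes at u = -1
sign = np.where(u >= -1.0, -1.0, 1.0)
print(np.allclose(gamma_left_right(h_right, sign), G_right))  # True
print(np.allclose(gamma_upper_branch(h_upper), G_upper))      # True
```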
### Stability of the forced ferromagnetic state
The FFM state is the magnetic texture with a uniform magnetization aligned with the applied field, which in our case points to the direction of the chiral axis \(z\). Hence, the FFM is given by \(\mathbf{n}=\mathbf{z}\) if \(h\geq 0\) and \(\mathbf{n}=-\mathbf{z}\) if \(h<0\). It is the equilibrium state if \(|h|>h_{\rm c}\). The FFM state is insensitive to an applied current, since the torque (11) vanishes for uniform magnetization. However, it is destabilized by a sufficiently intense current. In this section we discuss the stability diagram of the FFM state in the applied field - applied current plane.
The perturbed FFM state has the form
\[\mathbf{n}=\sqrt{1-\xi_{1}^{2}-\xi_{2}^{2}}\,\sigma(h)\,\mathbf{z}+\xi_{1}\,\sigma(h) \,\mathbf{x}+\xi_{2}\,\mathbf{y}. \tag{32}\]
where \(\sigma(h)=1\) if \(h\geq 0\) and \(\sigma(h)=-1\) if \(h<0\). The dynamics of the perturbation is obtained by inserting the above expression in the LLG equation (3). Again, the linearized LLG equation is given by a linear differential operator with constant coefficients whose spectrum is obtained by Fourier transform. The stability of the FFM state requires that the spectrum lies in the complex plane with non positive real part. The details of the calculations are given in Appendix B. The stability condition leads to the inequality
\[\Gamma^{2}-4\sigma(h)\Gamma+4(h_{\rm c}-|h|)\leq 0, \tag{33}\]
where \(\Gamma\) is related to the current intensity, \(j\), by Eq. (18). The FFM state is stable in the region of the \((h,\Gamma)\) plane in which the above inequality holds.
Inequality (33) holds if and only if the two roots in \(\Gamma\) of the left hand side of the inequality are real, and \(\Gamma\) is between the two roots. Then \(|h|>h_{\rm c}-1\) and
\[2\big{(}1-\zeta(h)\big{)}\leq\sigma(h)\Gamma\leq 2\big{(}1+\zeta(h)\big{)}, \tag{34}\]
with \(\zeta(h)=\sqrt{1+|h|-h_{\rm c}}\). The above inequalities determine the region of stability of the FFM state in the \((h,\Gamma)\) plane, which is displayed in Fig. 2 for \(h_{\rm c}=6\).
It is remarkable that the boundary of the stability region of the FFM state coincides exactly with the left and right branches of the boundary of the stability region of conical states. This means that modulated states and the FFM state do not coexist in any region of the \((h,\Gamma)\) plane.
Figure 2: Stability diagram in the plane \((h,\Gamma)\) for \(h_{\rm c}=6\). Steady moving \(p\)-states are stable inside the region bounded by the red line. The FFM states are only stable within shaded regions, which are unbounded. Red dots correspond to the stability limit of states with \(p=2\) lying on the boundary of the stability diagram, the same as those shown in Fig. 3.
### Outstanding features of the stability diagram
There are some characteristics in the stability diagram which have interesting consequences from both a theoretical and an applied point of view. Below we enumerate these remarkable features and some of their consequences. Of special relevance is the discussion of point 3 below.
_1. Destabilization of \(p\)-states_. At each point \((h,\Gamma)\) within the stability region of conical states the stable \(p\)-states are those whose stability ellipse encloses the point. Since all stability ellipses are centered at the origin in the \((h,\Gamma)\) plane, the only stable \(p\)-states are those which are metastable at \(h=0\) and \(\Gamma=0\), that is, which are metastable in absence of applied field and current. They are precisely those with \(p\) in the range (29). The application of a field and/or a current does not stabilize any other \(p\)-state, but it destabilizes some of them. As the point \((h,\Gamma)\) moves away from the origin, it crosses some ellipses, and the corresponding \(p\)-states become unstable. Outside the stability region of conical states, whose boundary is given by eqs. (31), no \(p\)-state is stable.
_2. Range of stable \(p\)-states_. One conclusion of the discussion of point 1 above is that at each point \((h,\Gamma)\) the stable \(p\)-states have \(p\) in a certain range \(p_{\rm min}(h,\Gamma)\leq p\leq p_{\rm max}(h,\Gamma)\). These two values, \(p_{\rm min}(h,\Gamma)\) and \(p_{\rm max}(h,\Gamma)\), are given by the two real roots of
\[A(p)\Gamma^{2}+2B(p)h\Gamma+C(p)h^{2}-D(p)=0, \tag{35}\]
that lie within the bounds given by Eq. (29). These two values, \(p_{\rm min}\) and \(p_{\rm max}\), approach each other as \((h,\Gamma)\) attains the stability boundary of conical states. Therefore, the closer \((h,\Gamma)\) is to this stability boundary, the narrower the range of \(p\) values of stable \(p\)-states. A numerical sketch of this root finding is given at the end of this subsection.
_3. Manipulating conical states_. The discussion of point 1 above suggests a method to switch between metastable conical states with different wave number. For instance, suppose we start at \(h=0\) and \(\Gamma=0\) with some metastable \(p\)-state, say with \(p\approx 1\). If we apply a field and a polarized current such that \((h,\Gamma)\) corresponds to a point close to one of the red points of Fig. 2, the initial \(p\)-state becomes unstable and it will evolve to one of the \(p\)-states within the stability range at \((h,\Gamma)\). Since this point is close to one of the red points of Fig. 2, where the stability range is narrow, the final \(p\)-state will have \(p\approx 2\). Since this state is metastable also for \(h=0\) and \(\Gamma=0\), it will remain as the field and the current are switched off. Hence, the consequence of this process is to switch the \(p\)-state from \(p\approx 1\) to \(p\approx 2\). In Sec. V we will show using numerical simulations that these processes are feasible.
Therefore, a given \(p\)-state can be selected with high precision by approaching the appropriate point of the stability boundary. The values of \(h\) and \(\Gamma\) appropriate to select a conical state with wave number \(q\approx pq_{0}\) are those represented in Fig. 3.
_4. Helicity switching_. Since for the easy-plane anisotropy considered in this work \(h_{c}>1\), we have that \(1-\sqrt{h_{c}}<0\). This means that there are \(p<0\) within the stability range (29), and the corresponding \(p\)-states are stable within their stability ellipse. These \(p\)-states with \(p<0\) are conical states with helicity against the DMI. The metastability of these states opens the possibility of helicity switching in monoaxial chiral helimagnets through the action of a polarized current, by means of the process described in point 3 above.
_5. Ferromagnetic states_. For the same reason \(p=0\) is within the bounds (29). Hence, the \(p\)-state with \(p=0\) is metastable within its stability ellipse (the black ellipse in Fig. 2). These states are ferromagnetic, with a uniform magnetization which has a component \(n_{z}=\cos\theta_{p}=h/(h_{\rm c}-1)\) along the chiral axis. The magnetization component lying on the plane perpendicular to the chiral axis is undetermined, which means that these ferromagnetic states are highly degenerate. This degeneracy is tantamount to the translational degeneracy of the conical \(p\)-states. Notice that these ferromagnetic states are different from the FFM state obtained for a sufficiently large magnetic field, in which the magnetization is aligned with the field. Instead, they would be the equilibrium states
Figure 3: The values of (a) \(h\) and (b) \(\Gamma\) as a function of \(p\) along the different branches of the envelope of the stability region defined by Eqs. (31). For a given value of \(p\) the four branches (left, right, upper, lower) are presented, corresponding to those shown in Fig. 2. Red dots correspond to those shown in Fig. 2. Square symbols indicate \((h,\Gamma)\) points for which states with \(p=0\) and \(p=2\) are obtained in Sec. V.
in absence of DMI, which survive as metastable states when the DMI is present.
_6. Supercritical modulated states_. The fact that the stability region of the FFM state is convex (see Fig. 2) implies that there are steady moving \(p\)-states for supercritical applied fields (\(|h|>h_{\rm c}\)). This means that if we start with the FFM state with \(h\) in an appropriate range, such that \(|h|>h_{\rm c}\), and apply a polarized current of appropriate intensity, the FFM state will be destabilized and will evolve to attain a steady moving \(p\)-state: a modulation will be created by the polarized current at a supercritical field. If the current is switched off, the FFM state will be recovered. One process of this kind is illustrated by micromagnetic simulations in Sec. V.
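The root finding mentioned in points 2 and 3 above is straightforward to carry out numerically. The following minimal sketch (not part of the original work) scans Eq. (35), with the polynomials \(A(p)\), \(B(p)\), \(C(p)\), \(D(p)\) of Appendix A and \(h_{\rm c}=6\), and returns the interval \([p_{\rm min},p_{\rm max}]\) of stable \(p\)-states at a given \((h,\Gamma)\):

```python
import numpy as np

hc = 6.0

def ellipse_lhs(p, h, G, hc=hc):
    """A(p)*G**2 + 2*B(p)*G*h + C(p)*h**2 - D(p); <= 0 means the p-state is stable at (h, G)."""
    u = p - 1.0
    A = 2*u**3 + 3*(hc + 1)*u**2 + 6*hc*u + hc*(hc + 1)
    B = u**3 + 3*u**2 + 3*hc*u + hc
    C = hc + 3*u**2
    D = (hc - u**2)**3
    return A*G**2 + 2*B*G*h + C*h**2 - D

def stable_p_range(h, G, hc=hc, n=200001):
    """Interval [p_min, p_max] of stable p-states at (h, G), or None if no p-state is stable."""
    p = np.linspace(1 - np.sqrt(hc), 1 + np.sqrt(hc), n)
    ok = ellipse_lhs(p, h, G) <= 0.0
    return (p[ok][0], p[ok][-1]) if ok.any() else None

print(stable_p_range(0.0, 0.0))   # the full range (1 - sqrt(6), 1 + sqrt(6))
print(stable_p_range(2.0, 0.0))   # a narrower interval, symmetric around p = 1
print(stable_p_range(-8.9, 2.0))  # a narrow interval around p = 2, close to the envelope
```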
## V Manipulation of the conical states
The peculiarities of the stability diagram suggest a method to manipulate the conical states described in Sec. IV.3, point 3. In this section we illustrate that these ideas are sound by solving the LLG equation with appropriate initial conditions and time dependent applied field and current, by means of micromagnetic simulations.
The micromagnetic numerical simulations are performed using the MuMax3 code [46; 47] in which a monoaxial DMI was implemented [43] to model the system given by Eqs. (1) and (11), with material parameters appropriate for CrNb\({}_{3}\)S\({}_{6}\)[43]: \(A=1.42\)\(\mathrm{pJ/m}\), \(D=369\)\(\mu\)J/m\({}^{2}\), \(K=-124\)\(\mathrm{kJ/m^{3}}\), and \(M_{\mathrm{S}}=129\)\(\mathrm{kA/m}\) (see Osorio _et al._[48] for further details). With these parameters the equilibrium pitch of the helical state is \(L_{0}=2\pi/q_{0}\approx 50\) nm and the critical magnetic field \(h_{\rm c}\) corresponds to \(B_{c}\approx 2300\) mT. The simulations are performed for a one-dimensional system of linear size \(R=500\) nm, with a mesh size \(\Delta R=1\) nm, and periodic boundary conditions. We set \(\alpha=0.01\) and \(\beta=0.02\). Notice that in a finite system with periodic boundary condition only a discrete number of \(2\pi\) rotations can be attained. We denote by \(Q\) the winding number (see Ref. [48] for the definition of \(Q\)), and \(Q_{0}=R/L_{0}=10\) is the equilibrium winding number, which corresponds to the \(p\)-state with \(p=1\). Hence \(p=L_{0}/L=Q/Q_{0}\) takes only discrete values with step size \(1/Q_{0}=1/10\).
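The dimensionless numbers quoted above follow directly from Eq. (2) and the listed material parameters; as a quick consistency check (a minimal sketch, not part of the simulation code):

```python
import numpy as np

# CrNb3S6-like parameters quoted in the text
A, D, K, M_S = 1.42e-12, 369e-6, -124e3, 129e3   # [J/m], [J/m^2], [J/m^3], [A/m]

q0 = D / (2 * A)                  # Eq. (2)
kappa = 4 * A * K / D**2
hc = 1 - kappa                    # reduced critical field
L0 = 2 * np.pi / q0               # equilibrium pitch
Bc = hc * D**2 / (2 * A * M_S)    # field at which h = h_c, inverting Eq. (2)

print(f"L0 = {L0*1e9:.0f} nm, kappa = {kappa:.2f}, hc = {hc:.2f}, Bc = {Bc*1e3:.0f} mT")
# -> L0 close to 50 nm, hc close to 6, and Bc close to 2300 mT, as quoted in the text
```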
Let us start by showing the stability diagram corresponding to the equilibrium \(p\)-state (\(p=1\)) obtained from numerical simulations. Given a point (\(h,\Gamma\)) of the stability diagram, the initial state is either the \(p\)-state with \(p=1\) if \(|h+\Gamma|\leq h_{\rm c}\) (this is the existence condition of the \(p=1\) state), or the FFM state, otherwise. A perturbation of small intensity, \(M_{\mathrm{S}}/10\), and random orientation is added to the magnetization of the initial state. Then, the current is applied for \(20\) ns. Figure 4 displays the stability region and the corresponding stability ellipse (see Fig. 2). The stability limits of the FFM are also shown in the figure. The stability boundaries are in good agreement with the analysis in Sec. IV.2.
In order to show how the system can be manipulated to obtain different targeted \(p\)-states we use the following
Figure 4: Stability region for the equilibrium \(p=1\) state (\(q=q_{0}\)) for applied magnetic field \(B\) and current \(j\), as obtained by numerical simulations. The regions where the FFM states are stable are also shown. The dashed lines correspond to the stability limits.
Figure 5: Evolution of the system when a simultaneous (\(h_{a},\Gamma_{a}\)) square pulse is applied during \(20\) ns: the red and blue curves represent the net magnetization along \(z\) and the winding number, respectively. Initially the system is at (\(h=0,\Gamma=0\)) (yellow point in the inset) and is characterized by a winding number \(Q_{0}=10\). The values of (\(h_{a},\Gamma_{a}\)) (red points in the inset) are in one semi-axis of the shown ellipses corresponding to positive \(p=2\) in (a) and negative \(p=-0.5\) in (b).
protocol. The system is initialized at \((h=0,\Gamma=0)\) in a state with a winding number \(Q_{0}=10\), which corresponds to the equilibrium state with \(p=1\). A small random perturbation is added to the three components of the magnetization, so that the initial state is actually a perturbed \(p\)-state. Then a simultaneous square pulse of magnetic field and polarized current with \((h=h_{a},\Gamma=\Gamma_{a})\) is applied during 20 ns; afterwards the system is allowed to relax to some metastable state at \((h=0,\Gamma=0)\). The evolution of the system can be followed by monitoring the time evolution of the magnetization along the chiral axis, which accounts for the conical distortion, and the winding number \(Q\propto p\). A minimal pulse duration is necessary to destabilize the initial state. We expect the final state to depend on the initial perturbation and the shape of the pulse, but not on the length of the pulse, since once a new state has been reached it is metastably retained.
Figure 5 shows how different \(p\)-states can be stabilized following the main diagonals of the ellipses. In Fig. 5(a) results are presented for \((h_{a}<0,\Gamma_{a}>0)\), while results with \((h_{a}>0,\Gamma_{a}>0)\) are shown in Fig. 5(b). In both cases, \((h_{a},\Gamma_{a})\) lies outside the stability region of the \(Q_{0}\) state. In Fig. 5(a) a final state with \(Q=24\) (\(p=2.4\)) is obtained, showing that a combined pulse of magnetic field and current can be used to modify the winding number (\(p\)-state) in the system. In (b), the values of \((h_{a}>0,\Gamma_{a}>0)\) are such that the only stable \(p\)-states are those with \(p<0\). Numerical results show that in this case a helicity switching can be forced with the final state metastably retained against the DMI favored rotation direction. This shows how the magnetic configuration can be destabilized in favor of new \(p\)-states by pushing the system outside the ellipse corresponding to the initial state.
In order to select a \(p\)-state with a targeted \(p\) value, \(h_{a}\) and \(\Gamma_{a}\) should be chosen at the stability boundaries, such that \(p_{\rm min}\leq p\leq p_{\rm max}\) with \(p_{\rm min}\) and \(p_{\rm max}\) very close to each other. Figure 6 presents results using \((h_{a},\Gamma_{a})\) close to the stability boundaries and such that (a) \(p=2\) and (b) \(p=0\) are expected (see square symbols in Fig. 3). Numerical results show that these targeted \(p\)-states can be readily selected. In Fig. 6(a) \(Q=22\) is obtained, which corresponds to \(p=2.2\), close to the targeted \(p=2\) state. Since we use \((h_{a},\Gamma_{a})\) exactly at the point where the ellipse for \(p=2\) touches the stability boundary, the \(p=2\) state is at its stability limit and states very close to \(Q=20\) can be stabilized, in this case \(Q=22\). In Fig. 6(b), after fluctuating around \(Q=\pm 1\), the final ferromagnetic state with \(Q=0\) is obtained. Note that this state is initially (when \(h=h_{a}\)) oriented along a random direction within the cone with \(n_{z}=\cos\theta_{p}=h_{a}/(h_{c}-1)\), depending on the initial perturbation of the system. When \(h=0\) the obtained ferromagnetic state is contained in the easy-plane (\(xy\)) defined by the magnetic anisotropy. This ferromagnetic state can be metastably retained, as opposed to the FFM state.
For \(\Gamma=0\), going beyond \(h_{\rm c}\) erases the \(p\)-state and the FFM state is stabilized. It is important to note that in this case, for a field value \(h>h_{\rm c}\), an applied current can be used to stabilize a \(p\)-state, as shown in Fig. 7. The system is initialized with \(Q_{0}=10\) at \((h=0,\Gamma=0)\) and then set in an FFM state using \((h=h_{a}>h_{\rm c},\Gamma=0)\). Then, applying \(\Gamma=\Gamma_{a}\) inside the stability region, a state with \(Q=21\) is obtained. This state remains when going back to \((h=0,\Gamma=0)\). That is, some \(p\)-states can be created by means of a two-step process: first, the existing \(p\)-state is erased by applying a field higher than \(h_{\rm c}\), and afterwards the FFM state is destabilized by applying an appropriate current. The system evolves to some steady moving stable \(p\)-state which remains after the field and the current are switched off.
The numerical results shown in this section illustrate how the stability diagram obtained in Sec. IV.1 can be used to manipulate the conical states.
## VI Conclusions
Let us summarize the findings reported in this work. Besides the equilibrium state, at low temperature and
Figure 6: Evolution of the system when a simultaneous \((h_{a},\Gamma_{a})\) square pulse is applied during 20 ns: the red and blue curves represent the net magnetization along \(z\) and the winding number, respectively. Initially the system is at \((h=0,\Gamma=0)\) (yellow point in the inset) and is characterized by a winding number \(Q_{0}=10\). The values of \((h_{a},\Gamma_{a})\) (red points in the inset) are exactly at the point where the ellipses for (a) \(p=2\) and (b) \(p=0\) touch the upper branch of the stability boundary, as shown in the insets.
zero applied field, monoaxial chiral helimagnets have a continuum of helical states differing by the wave number of the modulation [38], which can be written as \(pq_{0}\), where \(q_{0}\) is the wave number of the equilibrium state and \(p\) is a dimensionless number. These states are called here the \(p\)-states. For an infinite system, their energy is a continuous function of \(p\) which is minimized by the equilibrium state, corresponding to \(p=1\). These states are local minima of the energy for \(p\) in a neighborhood of \(p=1\)[38]. We argued here (Sec. II) that, in spite of what the energy _versus_ \(p\) curve may suggest (Fig. 1), the \(p\)-states are metastable in that range.
The application of a magnetic field parallel to the chiral axis has two effects: first, it introduces a conical deformation of the \(p\)-states; and second, it shrinks the interval of metastability. For applied fields of strength higher than the critical field no \(p\)-state is stable, and the equilibrium state is the FFM state.
Analogously, the application of a polarized current along the chiral axis has three effects on the \(p\)-states: first, they reach a steady moving state with a velocity proportional to the intensity of the applied current; second, they suffer a conical deformation similar to that introduced by the application of an external field in the direction of the chiral axis; and third, some \(p\)-states are destabilized, and therefore the stability interval shrinks.
The most remarkable feature of the stability diagram of \(p\)-states in the applied magnetic field - applied current intensity plane (Sec. IV.3) is that for each \(p\) in the stability range at zero field there are points in the stability diagram at which the interval of stability is very narrow and contains that \(p\). This fact allows us to devise processes to select a given \(p\)-state. For instance, if we start with some metastable \(p\)-state at zero current and apply an appropriate magnetic field and current, we end with a steady moving \(p\)-state with wave number within a narrow interval around the targeted \(p\). This new \(p\)-state is metastable at zero field and zero current and therefore it remains after the field and the current are switched off. The feasibility of these processes, which is extremely important from the point of view of applications, is shown by micromagnetic simulations (Sec. V).
Switching between \(p\)-states opens the possibility of their application in spintronic devices. In particular, there are metastable \(p\)-states with negative \(p\), and therefore helicity switching is possible in monoaxial chiral helimagnets. It has been argued that controlled switching among magnetic states with opposite helicity might be used for memory applications [36]. Current induced helicity switching has been discussed before in a non-chiral monoaxial helimagnet [49] and in isolated skyrmions in frustrated magnetic systems [50]. In both cases magnetic textures of pure exchange origin were studied, while we report here helicity switching in a monoaxial _chiral_ helimagnet, where chirality is due to the presence of DMI.
The \(p\)-states exist also in cubic chiral magnets [35]. In that case they are characterized not only by the wave number, but also by the orientation of the wave vector. The dynamics and stability of the \(p\)-states of cubic chiral helimagnets under the action of a polarized current have been recently studied by Masell _et al._[51; 52] in the zero applied field case. These authors showed that, under the action of the current, the \(p\)-states reach a steady motion state with a velocity proportional to the current density and, at the same time, they suffer a uniform conical deformation with an angle determined by the current. This behaviour is the same found here for monoaxial chiral helimagnets. Masell _et al._ found also a critical current which destabilizes the \(p\)-states. Their analysis of the longitudinal Fourier modes, whose wave vector is parallel to the \(p\)-state wave vector, gives a destabilizing current which exactly coincides with the result reported here for monoaxial helimagnets, in the particular case \(h=0\), \(h_{\rm c}=1\), as it must be since by ignoring the transverse fluctuations the cubic chiral helimagnet becomes the monoaxial chiral helimagnet without single-ion anisotropy (UMA). In addition, they found that the \(p\)-states are destabilized by any current, however small, applied perpendicularly to the wave vector of the \(p\)-state. This means that the \(p\)-states tend to propagate along the direction of the applied current. The situation becomes more interesting if there is also an applied field, since in this case the propagation direction of the \(p\)-state tends to be aligned with the field. Thus, the interplay between the magnitude and relative orientation of the applied field, the applied current, and the \(p\)-state wave vector promises a complex and rich stability diagram of \(p\)-states in cubic chiral helimagnets.
The essential question of the lifetime of metastable \(p\)-states cannot be addressed with the methods of this work. The \(p\)-states are separated in the magnetic configuration
Figure 7: Switching to a metastable state at zero field using independent field and current pulses. First, a field pulse of intensity \(h>h_{\rm c}\) is used to drive the system to the FFM state (yellow point in the inset). Then, at \(t=0\), a current pulse is applied using a negative \(\Gamma\) value (red dot in the inset) to stabilize the \(p\)-state. After 20 ns the system is allowed to relax to a metastable state (the obtained \(p\)-state) at \((h=0,\Gamma=0)\). The red and blue curves represent the net magnetization along \(z\) and the winding number, respectively.
space by energy barriers (they are local minima of the energy functional) and their lifetime depends on the height of such barriers. Thus, it is clear that the lifetime will increase by decreasing the temperature and, therefore, the presence of metastable \(p\)-states will be more easily detected at low temperature.
The above discussion on lifetimes is related to the experimental signals of the \(p\)-states. To address these questions it is necessary a careful analysis of the experimental data at low temperature to seek for anomalies attributable to \(p\)-states. We have already remarked that \(p\)-states exist also in cubic chiral helimagnets [35]. In these systems the continuum of \(p\)-states is richer than in monoaxial chiral helimagnets since, besides the wave number, the \(p\)-states differ also in the orientation of their wave vectors. The low temperature anomalies reported recently for the cubic chiral helimagnet MnSi [53] may be due to the presence of metastable \(p\)-states.
###### Acknowledgements.
Grant Number PGC2023XM4 funded by MCIN/AEI/10.13039/501100011033 supported this work. Grants OTR02223-SpINS from CSIC/MICIN and DGA/M4 from Diputacion General de Aragon (Spain) are also acknowledged. This work was also supported by the Grant No. PICT 2017-0906 from the Agencia Nacional de Promocion Cientifica y Tecnologica, Argentina.
## Appendix A Stability of the conical states
A necessary condition for the stability of a \(p\)-state is that the spectrum of the \(2\times 2\) matrix operator \(\mathcal{D}\), whose matrix elements are the linear differential operators given by Eqs. (23)-(26) lies in the complex half-plane with non positive real part. The \(p\) dependence is hidden in the parameters \(a\), \(\Delta\), and \(b\) given by Eqs. (27) and (28). Since the coefficients of those linear operators are constants, the spectrum is easily obtained by Fourier transform.
Denoting the wave vector of the Fourier modes by \(\mathbf{k}\), and setting \(k=|\mathbf{k}|\), the spectrum is given by two complex functions of \(\mathbf{k}\), denoted by \(\lambda_{\pm}(\mathbf{k})\), given by
\[\begin{split}\lambda_{\pm}(\mathbf{k})=&-\frac{\alpha} {2}(2k^{2}+a)\\ &+\mathrm{i}\big{(}\Delta-(1+\alpha\beta)b\big{)}k_{z}\pm\sqrt{a_ {r}+\mathrm{i}\,a_{i}}.\end{split} \tag{38}\]
where
\[a_{r}=\frac{\alpha^{2}a^{2}}{4}-k^{2}(k^{2}+a)+\big{(}\alpha \Delta+(\beta-\alpha)b\big{)}^{2}k_{z}^{2}, \tag{39}\] \[a_{i}=-\big{(}\alpha\Delta+(\beta-\alpha)b\big{)}(2k^{2}+a)k_{z}. \tag{40}\]
We only need the real parts, which are given by
\[\mathrm{Re}\,\lambda_{\pm}(\mathbf{k})=-\frac{\alpha}{2}(2k^{2}+a)\pm\sqrt{\frac{ \sqrt{a_{r}^{2}+a_{i}^{2}}+a_{r}}{2}}, \tag{41}\]
Since \(\mathrm{Re}\lambda_{+}(\mathbf{k})\geq\mathrm{Re}\lambda_{-}(\mathbf{k})\), stability requires \(\mathrm{Re}\lambda_{+}(\mathbf{k})\leq 0\), which, with simple algebraic manipulation, it can be shown to be equivalent to
\[\big{(}\alpha\Delta+(\beta-\alpha)b\big{)}^{2}k_{z}^{2}\leq\alpha^{2}k^{2}(k^ {2}+a). \tag{42}\]
Since the left hand side of this inequality is non negative, and since it must hold for any \(k\), and in particular for \(k\to 0\), we get \(a\geq 0\). This relation sets the bounds for the \(p\) values of stable \(p\)-states given by the inequalities (29).
Since \(a\geq 0\), the right hand side of (42) increases with \(k_{x}^{2}+k_{y}^{2}\), and therefore the inequality is satisfied if and only if it is satisfied for \(k_{x}^{2}+k_{y}^{2}=0\). Thus we set \(k^{2}=k_{z}^{2}\), and then we have
\[k_{z}^{2}\left(k_{z}^{2}+a-(\Delta+q_{0}\Gamma)^{2}\right)\geq 0, \tag{43}\]
Noticing that with \(a\geq 0\) inequality (42) holds for all \(\mathbf{k}\) if and only if it holds for \(\mathbf{k}=k_{z}\mathbf{z}\), the stability condition reduces to \((\Delta+q_{0}\Gamma)^{2}\leq a\). To arrive at this inequality, Eq. (18) was used. Substituting the expressions for \(a\), \(\Delta\) and \(\cos\theta_{p}\) in this inequality we get the following expression for the stability condition:
\[A(p)\Gamma^{2}+2B(p)\Gamma h+C(p)h^{2}\leq D(p), \tag{44}\]
where
\[A(p)=2(p-1)^{3}+3(h_{\mathrm{c}}+1)(p-1)^{2}+6h_{\mathrm{c}}(p-1)+h_{\mathrm{c}}(h_{\mathrm{c}}+1), \tag{45}\]
\[B(p)=(p-1)^{3}+3(p-1)^{2}+3h_{\mathrm{c}}(p-1)+h_{\mathrm{c}}, \tag{46}\]
\[C(p)=h_{\mathrm{c}}+3(p-1)^{2}, \tag{47}\]
\[D(p)=\big(h_{\mathrm{c}}-(p-1)^{2}\big)^{3}. \tag{48}\]
Notice that \(D(p)\geq 0\) for \(p\) satisfying the bounds (29). The inequality (44) determines a region in the \((h,\Gamma)\) plane limited by a conic section. The discriminant of the left hand side of (44) is
\[B(p)^{2}-A(p)C(p)=-D(p)\leq 0. \tag{49}\]
Therefore, the conic section is actually an ellipse centered at \((0,0)\) with the principal axes rotated with respect to the coordinate axes. The amount of rotation depends on \(p\). The steady moving \(p\)-state is stable within the region of the \((h,\Gamma)\) plane enclosed by the corresponding ellipse. The stability of the static \(p\)-states discussed in Sec. II is obtained as a particular case of this general approach, setting \(\Gamma=0\). The static \(p\)-state is thus stable in the range of \(h\) determined by the intersection of its stability ellipse with the \(\Gamma=0\) axis.
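As an illustration, the quadratic-form condition (44) with the coefficients (45)-(48) can be evaluated directly to decide whether a given working point \((h,\Gamma)\) lies inside the stability ellipse of a \(p\)-state. The short sketch below assumes an arbitrary value of \(h_{\rm c}\) for demonstration only.

```python
# h_c is an assumed reduced critical field, used only for demonstration.
h_c = 1.0

def stability_coeffs(p):
    """Coefficients A, B, C, D of Eqs. (45)-(48) as functions of p."""
    q = p - 1.0
    A = 2*q**3 + 3*(h_c + 1)*q**2 + 6*h_c*q + h_c*(h_c + 1)
    B = q**3 + 3*q**2 + 3*h_c*q + h_c
    C = h_c + 3*q**2
    D = (h_c - q**2)**3
    return A, B, C, D

def p_state_stable(p, h, Gamma):
    """Inequality (44): True when (h, Gamma) lies inside the stability ellipse of the p-state."""
    A, B, C, D = stability_coeffs(p)
    return A*Gamma**2 + 2*B*Gamma*h + C*h**2 <= D

print(p_state_stable(1.0, 0.2, 0.1))   # p = 1 (helical-like state) at a small field and current
```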
The region of the \((h,\Gamma)\) plane in which there exists some stable steady moving \(p\)-state is bounded by the envelope of the one-parametric family of ellipses given by Eq. (30). The envelope can be readily found and it has four branches determined by the parametric equations
\[\left\{\begin{array}{l}h=-\big{[}(p-1)^{2}+2(p-1)+h_{\rm c}\big{]}\\ \Gamma=2(p-1)\end{array}\right. \tag{43}\]
\[\left\{\begin{array}{l}h=(p-1)^{2}+2(p-1)+h_{\rm c}\\ \Gamma=-2(p-1)\end{array}\right. \tag{44}\]
\[\left\{\begin{array}{l}h=-\left[(p-1)^{2}+2h_{\rm c}(p-1)+h_{\rm c}\right]/ \sqrt{h_{\rm c}}\\ \Gamma=\left[(p-1)^{2}+h_{\rm c}\right]/\sqrt{h_{\rm c}}\end{array}\right. \tag{45}\]
\[\left\{\begin{array}{l}h=\left[(p-1)^{2}+2h_{\rm c}(p-1)+h_{\rm c}\right]/ \sqrt{h_{\rm c}}\\ \Gamma=-\left[(p-1)^{2}+h_{\rm c}\right]/\sqrt{h_{\rm c}}\end{array}\right. \tag{46}\]
with \(p\) in the range given by Eq. (29).
The parameter \(p\) can be eliminated in each of these four pairs of equations and then the equations of the envelope in the form (31) are obtained. This envelope bounds the region of the \((h,\Gamma)\) plane where some (steady moving) \(p\)-state is stable, which in Sec. IV.1 is called the _stability region of conical states_.
## Appendix B Stability of the FFM
To linear order, the dynamics of perturbations, \(\xi\), of the FFM state obey Eq. (22), in this case with
\[\mathcal{D}_{11}=\alpha(\nabla^{2}-a)-\big{(}2q_{0}+(1+\alpha \beta)b\big{)}\partial_{z}, \tag{47}\] \[\mathcal{D}_{12}=\nabla^{2}-a+\big{(}\alpha 2q_{0}-(\beta-\alpha)b \big{)}\partial_{z},\] (48) \[\mathcal{D}_{21}=-\nabla^{2}+a-\big{(}\alpha 2q_{0}-(\beta- \alpha)b\big{)}\partial_{z},\] (49) \[\mathcal{D}_{22}=\alpha(\nabla^{2}-a)-\big{(}2q_{0}+(1+\alpha \beta)b\big{)}\partial_{z}, \tag{50}\]
where now
\[a=q_{0}^{2}(|h|+\kappa),\quad b=\sigma(h)\frac{q_{0}^{2}b_{j}j}{\omega_{0}}=q_ {0}\frac{\alpha}{\beta-\alpha}\sigma(h)\Gamma, \tag{51}\]
with \(\sigma(h)=1\) if \(h\geq 0\) and \(\sigma(h)=-1\) if \(h<0\).
If the FFM is stable the spectrum of \(\mathcal{D}\) lies in the complex half-plane with non-positive real part. Again, the spectrum of \(\mathcal{D}\) is easily obtained by Fourier transform. If, as before, \(\mathbf{k}\) is the wave vector of the Fourier mode, the spectrum is given by the complex functions \(\lambda_{\pm}(\mathbf{k})\), whose real parts are
\[\operatorname{Re}\lambda_{\pm}(\mathbf{k})=-\alpha(k^{2}+a)\pm\big{(}\alpha 2q_{0}-( \beta-\alpha)b\big{)}k_{z}. \tag{52}\]
Now, \(\operatorname{Re}\lambda_{\pm}(\mathbf{k})\leq 0\) if and only if
\[\alpha k_{z}^{2}\pm\big{(}\alpha 2q_{0}-(\beta-\alpha)b\big{)}k_{z}+\alpha a \geq 0, \tag{53}\]
for all real \(k_{z}\). This means that the two roots in \(k_{z}\) of the left-hand side of the above inequality must be either complex or equal; that is, the discriminant of that quadratic polynomial in \(k_{z}\) must be non-positive:
\[\big{(}\alpha 2q_{0}-(\beta-\alpha)b\big{)}^{2}-4\alpha^{2}a\leq 0. \tag{54}\]
Inserting the values of \(a\) and \(b\) given by equation (51) and defining \(\Gamma\) by Eq. (18) we obtain
\[\Gamma^{2}-4\sigma(h)\Gamma+4(h_{\rm c}-|h|)\leq 0. \tag{55}\]
For this inequality to have a non-empty solution, the two roots in \(\Gamma\) of its left-hand side must be real, and the inequality then holds for \(\Gamma\) lying between the two roots. This gives the condition \(|h|>h_{\rm c}-1\) and, if it holds, the two roots are given by
\[\Gamma_{\pm}=\sigma(h)2\big{(}1\pm\zeta(h)\big{)}, \tag{56}\]
with \(\zeta(h)=\sqrt{1+|h|-h_{\rm c}}\). In this way we obtain that the stability region of the FFM state in the \((h,\Gamma)\) plane is determined by the inequalities (34). It is remarkable that the boundary of the stability region of the FFM state coincides exactly with two of the branches of the boundary of the stability region of conical states. As stressed at the end of Sec. IV.2, this means that conical states never coexist with the FFM state.
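For completeness, the FFM stability window in \(\Gamma\) at fixed \(h\), Eqs. (55)-(56), can be evaluated as in the short sketch below; the value of \(h_{\rm c}\) is again an illustrative assumption.

```python
import numpy as np

h_c = 1.0   # assumed reduced critical field

def ffm_gamma_window(h):
    """Stability window of the FFM in Gamma at field h, from Eqs. (55)-(56); None if empty."""
    if abs(h) <= h_c - 1:                 # real roots require |h| > h_c - 1
        return None
    sigma = 1.0 if h >= 0 else -1.0
    zeta = np.sqrt(1 + abs(h) - h_c)
    roots = (sigma*2*(1 - zeta), sigma*2*(1 + zeta))
    return tuple(sorted(roots))           # FFM stable for Gamma between the two roots

for h in (0.5, 1.5, -1.5):
    print(h, ffm_gamma_window(h))
```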
|
2310.11368 | VECHR: A Dataset for Explainable and Robust Classification of
Vulnerability Type in the European Court of Human Rights | Recognizing vulnerability is crucial for understanding and implementing
targeted support to empower individuals in need. This is especially important
at the European Court of Human Rights (ECtHR), where the court adapts
Convention standards to meet actual individual needs and thus ensures effective
human rights protection. However, the concept of vulnerability remains elusive
at the ECtHR and no prior NLP research has dealt with it. To enable future
research in this area, we present VECHR, a novel expert-annotated multi-label
dataset comprising of vulnerability type classification and explanation
rationale. We benchmark the performance of state-of-the-art models on VECHR
from both prediction and explainability perspectives. Our results demonstrate
the challenging nature of the task with lower prediction performance and
limited agreement between models and experts. Further, we analyze the
robustness of these models in dealing with out-of-domain (OOD) data and observe
overall limited performance. Our dataset poses unique challenges offering
significant room for improvement regarding performance, explainability, and
robustness. | Shanshan Xu, Leon Staufer, T. Y. S. S Santosh, Oana Ichim, Corina Heri, Matthias Grabmair | 2023-10-17T16:05:52Z | http://arxiv.org/abs/2310.11368v4 | VECHR: A Dataset for Explainable and Robust Classification of Vulnerability Type in the European Court of Human Rights
###### Abstract
Recognizing vulnerability is crucial for understanding and implementing targeted support to empower individuals in need. This is especially important at the European Court of Human Rights (ECtHR), where the court adapts convention standards to meet actual individual needs and thus to ensure effective human rights protection. However, the concept of vulnerability remains elusive at the ECtHR and no prior NLP research has dealt with it. To enable future work in this area, we present VECHR, a novel expert-annotated multi-label dataset comprising vulnerability type classification and explanation rationale. We benchmark the performance of state-of-the-art models on VECHR from both the prediction and explainability perspectives. Our results demonstrate the challenging nature of the task with lower prediction performance and limited agreement between models and experts. We analyze the robustness of these models in dealing with out-of-domain (OOD) data and observe limited overall performance. Our dataset poses unique challenges offering significant room for improvement regarding performance, explainability, and robustness.
## 1 Introduction
Vulnerability encompasses a state of susceptibility to harm, or exploitation, particularly among individuals or groups who face a higher likelihood of experiencing adverse outcomes due to various factors such as age, health, disability, or marginalized social position [1, 16]. While it is impossible to eliminate vulnerability, society has the capacity to mitigate its impact. The European Court of Human Rights (ECtHR) interprets the European Convention of Human Rights (ECHR) to address the specific contextual needs of individuals and provide effective protection. This is achieved through various means, such as displaying flexibility in admissibility issues, and shifting the burden of proof [1].
However, the concept of vulnerability remains elusive within the ECtHR. While legal scholars have explored vulnerability as a component of legal reasoning [13], empirical work in this area remains scarce and predominantly relies on laborious manual processes. To address this challenge, NLP can offer valuable tools to assist experts in efficiently classifying and analyzing textual data. Besides high classification performance, the true utility of NLP in the legal field is its ability to identify relevant aspects related to vulnerability in court cases. These aspects can be extracted, grouped into patterns, and used to inform both litigation strategy and legal policy. Even so, a significant obstacle to progress in this area is the lack of appropriate datasets. To bridge these research gaps, we present the dataset VECHR1, which comprises cases dealing with allegations of Article 3 "Prohibition of torture" and is obtained from a legal expert's empirical study2. Our proposed task is to identify which type of vulnerability (if any) is involved in a given ECHR case.

Figure 1: Distribution changes of vulnerability types.
As model explainability is crucial for establishing trust, we extend the dataset with VECHR\({}_{\text{explain}}\), a token-level explanation dataset annotated by domain experts on a subset of VECHR. Its fine-grained token-level design mitigates performance overestimation of explainability when evaluated at the coarse paragraph level, as shown in previous works (Chalkidis et al., 2021; Santosh et al., 2022; Xu et al., 2023). Further, the understanding and application of vulnerability in court proceedings change over time, reflecting societal shifts and expanding to encompass a wider range of types (Fig 1(a)). The volume of cases also fluctuates significantly in response to social and political events (Fig 1(b)). To evaluate the model's robustness against distribution shifts, we further collect and annotate an additional out-of-domain (OOD) test set from cases involving non-Article 3 allegations, called VECHR\({}_{\text{challenge}}\).
We present comprehensive benchmark results using state-of-the-art (SOTA) models, revealing limited performance in vulnerability type classification in VECHR. We assess the models' alignment with expert explanations in VECHR\({}_{\text{explain}}\), and observe limited agreement. Experiment results on VECHR\({}_{\text{challenge}}\) indicate that, although incorporating descriptions of the vulnerability types helps to improve the models' robustness, the performance remains low overall due to the challenges posed by the distribution shift. Our experiments underscore the difficulty of vulnerability classification in ECHR, and highlight a need for further investigation into improving model accuracy, explainability, and robustness.
## 2 Vulnerability Typology in ECHR
The inescapable and universal nature of vulnerability, as posited by Fineman (2016), underscores its significance in legal reasoning. For instance, the European Union has acknowledged the concept by establishing a definition for vulnerable individuals (Dir, 2013). However, it remains undefined within the context of ECHR. To facilitate an examination of vulnerability and its application within the ECHR, it is crucial to establish a typology recognized by the Court. Several scholars have endeavored to effectively categorize vulnerability for this purpose (Timmer, 2016; Limante, 2022). One notable study is conducted by Heri (2021), which provides a systematic and comprehensive examination of the concept of vulnerability under ECHR Article 3. Heri proposes a complete typology encompassing eight types: _dependency_, _state control_, _victimization_, _migration_, _discrimination_, _reproductive health_, _unpopular views_ and _intersections_ thereof. Tab 1 gives a description for each type.
## 3 Data Collection and Annotations
### Data Source and Collection Process
**VECHR** consists of 788 cases under Article 3, which were collected based on Heri's study of the Court's case law references of vulnerability. See App B for details on Heri's case sampling methodology and our post-processing procedures. We divided the dataset chronologically into three subsets: training (-05/2015, 590 cases), validation (05/2015-09/2016, 90 cases) and test (09/2016-02/2019, 108 cases).
| Vulnerable Type | Description |
| --- | --- |
| Dependency | Including that of minors, the elderly, and those with physical, psychosocial and cognitive disabilities (i.e. mental illness and intellectual disability) |
| State Control | Including that of detainees, military conscripts, and persons in state institutions |
| Victimisation | Due to victimisation, including by domestic and sexual abuse, other violations, or because of a feeling of vulnerability |
| Migration | In the migration context, applies to detention and expulsion of asylum-seekers |
| Discrimination | Due to discrimination and marginalisation, which covers ethnic, political and religious minorities, LGBTQI people, and those living with HIV/AIDS |
| Reproductive Health | Due to pregnancy or situations of precarious reproductive health |
| Unpopular Views | Due to the espousal of unpopular views |
| Intersection | Intersecting vulnerabilities |

Table 1: Description of each vulnerability type. For more details, see App A.

**VECHR\({}_{\text{explain}}\)**: We selected 40 cases (20 each) from the val and test splits for the explanation dataset. Within each split, our sampling procedure involved two steps. First, we ensured coverage of all seven types by sampling one case for each type. Subsequently, we randomly selected an additional 13 cases to supplement the initial selection.
**VECHR\({}_{\text{challenge}}\)**: To test the model's ability to generalize across distribution shifts, we extend VECHR by collecting and annotating additional cases _not_ related to Article 3. Following Heri's method, we used the regular expression "vulne*" to retrieve all relevant English documents from the ECHR's public database HUDOC3 and excluded cases related to Article 3. We restricted the collection to the time span from 09/2016 (corresponding to the start time of the test set) to 07/2022. In cases where multiple documents existed for a given case, we selected only the most recent document, resulting in a dataset consisting of 282 judgments. VECHR\({}_{\text{challenge}}\) can be regarded as an out-of-domain (OOD) topical scenario. The in-domain train/val/test of VECHR are all from the same text topic cluster of Article 3. The OOD VECHR\({}_{\text{challenge}}\) consists of non-Article 3 cases from different topic clusters (e.g. Article 10: freedom of expression), which involve different legal concepts and language usage.4
Footnote 3: [https://hudoc.echr.coe.int](https://hudoc.echr.coe.int)
Footnote 4: For example, the Court recognizes the vulnerability of an elderly woman and provides her with protection under Article 3 (prohibition of torture) rather than Article 10 (freedom of expression)
### Vulnerability Type Annotation
We follow the typology and methodology presented by Heri (2021). She considered cases as "vulnerable-related" only when "vulnerability had effectively been employed by the Court in its reasoning". These cases are further coded according to the trait or situation (vulnerable type) giving rise to the vulnerability. In situations where the Court considered that multiple traits contributed to the vulnerability, she coded the case once for each relevant category. The resulting dataset comprises 7 labels5. Cases in which vulnerability was used only in its common definition, e.g. "financially vulnerable", were regarded as 'non-vulnerable' and were labelled with none of the 7 types. See App C for more details of the definition of "vulnerable-related".
Footnote 5: See App D for our justification for excluding the type “intersectionality”.
For cases under Article 3, we adopted the labelling provided by Heri's protocol. For VECHR\({}_{\text{challenge}}\), we asked two expert annotators6 to label the cases following Heri's methodology7. Each annotator annotated 141 cases.
Footnote 6: See App E for annotators’ background and expertise.
Footnote 7: For the reason of why Heri confined her study to Article 3 and why the typology also applies to cases under other articles, please refer to App F.
**Inter-Annotator Agreement** To ensure consistency with Heri's methodology, we conducted a two-round pilot study before proceeding with the annotation of the challenge set (details in App G). In each round, two annotators independently labelled 20 randomly selected cases under Article 3, and we compared their annotations with Heri's labels. The inter-annotator agreement was calculated using Fleiss Kappa, and we observed an increase from 0.39 in the first round to 0.64 in the second round, indicating substantial agreement across seven labels and three annotators.
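For reference, agreement figures of this kind can be reproduced with a few lines of Python; the sketch below is not the annotation pipeline used here, and the rating matrix is toy data in which each case receives a single type id per rater (the actual task is multi-label).

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = cases, columns = raters (e.g. Heri's coding + two annotators); entries = type id (0-7)
ratings = np.array([
    [1, 1, 1],
    [2, 2, 3],
    [0, 0, 0],
    [4, 4, 4],
    [5, 3, 5],
])
table, _ = aggregate_raters(ratings)    # cases x categories count matrix
print("Fleiss' kappa:", round(fleiss_kappa(table), 3))
```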
### Explanation Annotation Process
The explanation annotation process was done using the GLOSS annotation tool (Savelka and Ashley, 2018); see App H for details. Based on the case facts, the annotators were instructed to identify relevant text segments that indicate the involvement of a specific vulnerability type in the Court's reasoning. The annotators were permitted to highlight the same text span as an explanation for multiple vulnerable types.
## 4 Dataset Analysis
Tab 2 presents the key statistics of our dataset. VECHR comprises a total of 1,070 documents, with an average of 4,765 tokens per case (\(\sigma=4167\)). 788 and 282 cases fall under the Article 3 and non-Article 3 partitions, respectively. Among all, 530 documents are considered "non-vulnerable", meaning they are not labelled as any of the seven vulnerable types. In the vulnerable-related cases, the average number of labels assigned per document is 1.54.

| Split | \(\#C\) | \(T/C\) | \(P/C\) | \(\#C_{-V}\) | \(L/C\) | \(L/C_{V}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Train | 590 | 5140 | 83 | 325 | 0.70 | 1.55 |
| Validation | 90 | 5077 | 77 | 27 | 1.41 | 2.02 |
| Test | 108 | 3992 | 57 | 34 | 1.13 | 1.65 |
| Challenge | 282 | 4176 | 51 | 144 | 0.61 | 1.24 |
| Total | 1070 | 4765 | 72 | 530 | 0.78 | 1.54 |

Table 2: Dataset statistics for each split, with number of cases (\(C\)), number of non-vulnerable cases (\(C_{-V}\)), mean tokens (\(T\)) per case, mean paragraphs (\(P\)) per case, mean labels (\(L\)) per case, and mean labels per case when only considering positive vulnerability cases (\(C_{V}\)).
We observe a strong label distribution imbalance within the dataset. The label "state control" dominates, accounting for 33% of the cases, while the least common label, "reproductive health", is present in only 3% of the cases. For more detailed statistics of our dataset, including details regarding the label imbalances in Tab 6, please refer to App I.
## 5 Experiments
### Vulnerability Type Classification
**Task:** Our objective is to predict the set of specific vulnerability type(s) considered by the Court based on the factual text of a case.
**Models:** We finetune pre-trained models _BERT_Devlin et al. (2019), _CaselawBERT_Zheng et al. (2021), _LegalBERT_Chalkidis et al. (2020): on our dataset with a multi-label classification head, truncating the input to the maximum of 512 tokens.
We also finetune the _Longformer_ model (Beltagy et al., 2020) on our dataset, which allows for processing up to 4,096 tokens using a sparse-attention mechanism that scales linearly instead of quadratically.
We further employ a _hierarchical_ variant of pretrained LegalBERT to deal with the long input limitation. We use a greedy input packing strategy where we merge multiple paragraphs8 into one packet until it reaches the maximum of 512 tokens. We independently encode each packet of the input text using the pretrained model and obtain representations (\(h_{[CLS]}\)) for each packet. Then we apply a non-pretrained transformer encoder to make the packet representations context-aware. Finally, we apply max-pooling on the context-aware packet representations to obtain the final representation of the case facts, which is then passed through a classification layer. Fig 2(a) illustrates the detailed architecture of the hierarchical model.
Footnote 8: Details and statistics on paragraphs are reported in App I.
For details on all models' configuration and training, please refer to App J.
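The greedy input packing and hierarchical encoding described above can be sketched as follows. This is an illustrative simplification rather than our implementation: `n_tokens` stands for any sub-word tokenizer's length function, and the downstream hierarchical encoder is summarized in comments.

```python
from typing import Callable, List

def greedy_pack(paragraphs: List[str], n_tokens: Callable[[str], int], max_len: int = 512) -> List[str]:
    """Merge consecutive paragraphs into packets of at most max_len sub-word tokens."""
    packets, current, current_len = [], [], 0
    for para in paragraphs:
        t = n_tokens(para)
        if current and current_len + t > max_len:
            packets.append(" ".join(current))
            current, current_len = [], 0
        current.append(para)
        current_len += t
    if current:
        packets.append(" ".join(current))
    return packets

# Downstream hierarchical encoding (PyTorch-style pseudocode):
#   cls_vecs = stack([frozen_legalbert(p).cls for p in packets])   # (num_packets, d)
#   ctx_vecs = packet_transformer(cls_vecs)                        # context-aware packet vectors
#   doc_vec  = ctx_vecs.max(dim=0).values                          # max-pool over packets
#   logits   = multi_label_head(doc_vec)                           # 7 vulnerability types
```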
**Evaluation Metrics:** We report micro-F1 (mic-F1) and macro-F1 (mac-F1) scores for 7+1 labels, where 7 labels correspond to the 7 vulnerability types under consideration and an additional label is augmented during evaluation to indicate non-vulnerable.
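The augmented-label evaluation can be sketched as below (a hedged example with toy predictions, not our evaluation code): the 8th column is set to 1 whenever none of the 7 types is active, and micro/macro F1 are then computed over all 8 labels.

```python
import numpy as np
from sklearn.metrics import f1_score

def augment_non_vulnerable(y: np.ndarray) -> np.ndarray:
    """Append an 8th label that is active when none of the 7 types applies."""
    none = (y.sum(axis=1) == 0).astype(int)[:, None]
    return np.hstack([y, none])

y_true = np.array([[1, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0],
                   [0, 1, 1, 0, 0, 0, 0]])
y_pred = np.array([[1, 0, 0, 0, 0, 0, 0],
                   [0, 0, 1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0, 0]])

yt, yp = augment_non_vulnerable(y_true), augment_non_vulnerable(y_pred)
print("mic-F1:", f1_score(yt, yp, average="micro", zero_division=0))
print("mac-F1:", f1_score(yt, yp, average="macro", zero_division=0))
```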
**Results:** Tab 3 reports the classification performance. We observe that legal-specific pre-training improved the performance over general pre-training. However, BERT models still face the input limitation constraint. Both Longformer and Hierarchical models improved compared to the truncated variants and are comparable to each other. Overall, we see low performance across models, highlighting the challenging nature of the task.

| Model | Classification mac-F1 | Classification mic-F1 | Explanation Kappa |
| --- | --- | --- | --- |
| random | 19.02 | 25.07 | \(-0.11\pm 0.02\) |
| BERT | 24.31 | 41.78 | \(0.02\pm 0.06\) |
| CaselawBERT | 27.31 | 45.16 | \(0.04\pm 0.08\) |
| LegalBERT | 27.34 | 42.47 | \(0.04\pm 0.07\) |
| Longformer | 31.49 | 46.21 | \(0.11\pm 0.11\) |
| Hierarchical | 31.46 | 45.32 | \(0.10\pm 0.08\) |

Table 3: Classification and explanation results. We report F1 scores for classification performance and Kappa scores with standard error for explanation agreement.

Figure 2: Visualization of the Hierarchical and Concept-aware Hierarchical Model architectures.
### Vulnerability Type Explanation
We use Integrated Gradient (IG) Sundararajan et al. (2017) to obtain token-level importance from the model with respect to each vulnerable type under consideration. We max pool over sub-words to convert token-level IG scores into word-level scores, followed by a threshold-based binarization. Tab 3 reports explainability performance expressed as the average of Cohen's \(\kappa\) between the models' focus and the experts' annotations for the test instances. We observe that the low explainability scores among different models reflect their trend in classification scores and also echo the challenging nature of the task.
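A sketch of the post-processing used to compare model and expert rationales is shown below. The attribution scores themselves are assumed to come from an Integrated Gradients implementation (e.g., Captum); the snippet only illustrates the sub-word-to-word max-pooling, thresholding, and Cohen's \(\kappa\) computation on toy values.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def word_scores(subword_scores, word_ids):
    """Max-pool absolute attribution scores from sub-words to words."""
    out = np.zeros(max(word_ids) + 1)
    for s, w in zip(subword_scores, word_ids):
        out[w] = max(out[w], abs(s))
    return out

ig = np.array([0.02, 0.40, 0.35, 0.01, 0.10])     # toy sub-word IG attributions
word_ids = [0, 1, 1, 2, 3]                        # sub-word -> word alignment
model_rationale = (word_scores(ig, word_ids) > 0.2).astype(int)
expert_rationale = np.array([0, 1, 0, 0])         # toy expert annotation
print("Cohen's kappa:", cohen_kappa_score(model_rationale, expert_rationale))
```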
### Robustness to Distributional Shifts
We assess the robustness of models to distributional shift using the VECHR\({}_{\text{challenge}}\) and present the performance in Tab 4. Notably, we observe a drop in macro-F1 score on VECHR\({}_{\text{challenge}}\) compared to the test set. We attribute this to the models relying on suboptimal information about vulnerability types, which is primarily derived from the factual content rather than a true understanding of the underlying concept. To address this limitation, we propose a **Concept-aware Hierarchical** model that considers both the case facts and the description of vulnerability type to determine if the facts align with the specified vulnerability type9, inspired by Tyss et al. 2023a. We employ a greedy packing strategy as described earlier and use a hierarchical model to obtain the context-aware packet representations for each packet in the facts and concept description separately. Subsequently, we apply scaled-dot-product cross attention between the packet vectors of the facts (as Query) and concepts (as Keys and Values), generating the concept-aware representation of the facts section packets. A transformer layer is used to capture the contextual information of the updated packet vectors. Then we obtain the concept-aware representation of the case facts via max pooling and pass it through a classification layer to obtain the binary label. Fig 2b illustrates the detailed architecture of the concept-aware model. For more details, see App K.
Footnote 9: We cast the multi-label task into a binary classification setup by pairing the text with each vulnerability type. These binary labels are transformed into a multi-label vector for performance evaluation, to produce a fair comparison to multi-label models on the same metric.
The concept-aware model exhibits increased robustness to distributional shift and shows an improvement on the challenge set, owing to the incorporation of the vulnerability type descriptions. Overall, our results show promise for the feasibility of the task yet indicate room for improvement.
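The cross-attention step at the core of the concept-aware model can be sketched as follows (single document, no batching or multi-head machinery; the hidden size 768 is an assumption for illustration).

```python
import math
import torch

def concept_cross_attention(fact_vecs: torch.Tensor, concept_vecs: torch.Tensor) -> torch.Tensor:
    """fact_vecs: (P_f, d) packet vectors of the facts; concept_vecs: (P_c, d) of the type description."""
    d = fact_vecs.size(-1)
    attn = torch.softmax(fact_vecs @ concept_vecs.T / math.sqrt(d), dim=-1)   # (P_f, P_c)
    return attn @ concept_vecs                                                # concept-aware fact packets

facts, concept = torch.randn(6, 768), torch.randn(3, 768)
updated = concept_cross_attention(facts, concept)
print(updated.shape)   # a transformer layer then re-contextualizes these vectors;
                       # max-pooling yields the document vector for the binary classifier
```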
## 6 Conclusion
We present VECHR, an ECHR dataset consisting of 1,070 cases for vulnerability type classification and 40 cases for token-level explanation. We also release a set of baseline results, revealing the challenges of achieving accuracy, explainability, and robustness in vulnerability classification. We hope that VECHR and the associated tasks will provide a challenging and useful resource for Legal NLP researchers to advance research on the analysis of vulnerability within ECHR jurisprudence, ultimately contributing to effective human rights protection.
| Model | mac-F1 | mic-F1 |
| --- | --- | --- |
| random | 12.75 | 14.61 |
| BERT | 20.51 | 43.48 |
| CaselawBERT | 24.55 | 57.51 |
| LegalBERT | 22.60 | 50.77 |
| Longformer | 25.24 | 55.71 |
| Hierarchical | 26.43 | 58.46 |
| Concept-aware Hierarchical | **33.11** | 49.62 |

Table 4: Results on the challenge dataset (VECHR\({}_{\text{challenge}}\)).

## Limitations

In our task, the length and complexity of the legal text require annotators with a deep understanding of ECHR jurisprudence to identify vulnerability types. As a result, acquiring a large amount of annotation through crowdsourcing is not feasible, leading to limited-sized datasets. Additionally, the high workload restricts us to collecting only one annotation per case. There is a growing body of work in mainstream NLP that highlights the presence of irreconcilable Human Label Variation (Plank, 2022; Basile et al., 2021) in subjective tasks, such as natural language inference (Pavlick and Kwiatkowski, 2019) and toxic language detection (Sap et al., 2022). Future work should address this limitation and strive to incorporate multiple annotations to capture a fuller, potentially multi-faceted view of the concept of vulnerability.
This limitation is particularly pronounced because of the self-referential wording of the ECHR (Fikfak, 2021). As the court uses similar phrases in cases against the same respondent state or alleging the same violation, the model may learn that these are particularly relevant, even though this does not represent the legal reality. In this regard, it is questionable whether cases of the ECHR can be considered "natural language". Moreover, the wording of case documents is likely to be influenced by the decision or judgement of the Court. This is because the documents are composed by court staff after the verdict. Awareness of the case's conclusion could potentially impact the way its facts are presented, leading to the removal of irrelevant information or the highlighting of facts that were discovered during an investigation and are pertinent to the result (Medvedeva et al., 2020). Instead, one could base the analysis on the so-called "communicated cases", which are often published years before the case is judged. However, these come with their own limitations and only represent the facts as characterized by the applicant and not the respondent state. There are also significantly fewer communicated cases than decisions and judgements.
One of the main challenges when working with corpora in the legal domain is their extensive length. To overcome this issue, we employ hierarchical models, which have a limitation in that tokens across long distances cannot directly interact with each other. This limitation of hierarchical models remains relatively unexplored, although there are some preliminary studies available (e.g., see Chalkidis et al., 2022). Additionally, we choose to freeze the weights in the LegalBERT sentence encoder. This is intended to conserve computational resources and reduce the model's vulnerability to superficial cues.
## Ethics Statement
Ethical considerations are of particular importance because the dataset deals with vulnerability and thus with people in need of special protection. In general, particular attention needs to be paid to ethics in the legal context to ensure the values of equal treatment, justification and explanation of outcomes and freedom from bias are upheld (Surden, 2019).
The assessment of the ethical implications of the dataset is based on the Data Statements by Bender and Friedman (2018). Through this, we aim to establish transparency and a more profound understanding of limitations and biases. The curation is limited to the Article 3 documents in English. The speaker and annotator demographic are legally trained scholars, proficient in the English language. "Speaker" here refers to the authors of the case documents, which are staff of the Court, rather than applicants. We do not believe that the labelling of vulnerable applicants is harmful because it is done from a legally theoretical perspective, intending to support applicants. The underlying data is based exclusively on the publicly available datasets of ECHR documents available on HUDOC10. The documents are not anonymized and contain the real names of the individuals involved. We do not consider the dataset to be harmful, given that the judgments are already publicly available.
Footnote 10: [https://hudoc.echr.coe.int](https://hudoc.echr.coe.int)
We are conscious that, by adapting pre-trained encoders, our models inherit any biases they contain. The results we observed do not substantially relate to such encoded bias. Nonetheless, attention should be paid to how models on vulnerability are employed practically.
In light of the aforementioned limitations and the high stakes in a human rights court, we have evaluated the potential for misuse of the vulnerability classification models. Medvedeva et al. (2020) mention the possibility of reverse engineering the model to better prepare applications or defences. This approach is, however, only applicable in a fully automated system using a model with high accuracy towards an anticipated decision outcome. As this is not the case for the models presented, we assume the risk of circumventing legal reasoning to be low. On the contrary, we believe employing a high recall vulnerability model could aid applicants and strengthen their legal reasoning. In a scholarly setting focused on vulnerability research, we do not think the model can be used in a detrimental way. Our research group is strongly committed to research on legal NLP models as a means to derive insight from legal data for purposes of increasing transparency, accountability, and explainability of data-driven systems in the legal domain.
There was no significant environmental impact, as we performed no pre-training on large datasets. Computational resources were used for fine-tuning
and training the models, as well as assessing the dataset. Based on partial logging of computational hours and idle time, we estimate an upper bound for the carbon footprint of 30 kg \(\mathrm{CO}_{2}\) equivalents. This is an insignificant environmental impact.
|
2302.06405 | An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of
Binary Neural Networks | Binary Neural Networks (BNNs) are increasingly preferred over full-precision
Convolutional Neural Networks(CNNs) to reduce the memory and computational
requirements of inference processing with minimal accuracy drop. BNNs convert
CNN model parameters to 1-bit precision, allowing inference of BNNs to be
processed with simple XNOR and bitcount operations. This makes BNNs amenable to
hardware acceleration. Several photonic integrated circuits (PICs) based BNN
accelerators have been proposed. Although these accelerators provide remarkably
higher throughput and energy efficiency than their electronic counterparts, the
utilized XNOR and bitcount circuits in these accelerators need to be further
enhanced to improve their area, energy efficiency, and throughput. This paper
aims to fulfill this need. For that, we invent a single-MRR-based optical XNOR
gate (OXG). Moreover, we present a novel design of bitcount circuit which we
refer to as Photo-Charge Accumulator (PCA). We employ multiple OXGs in a
cascaded manner using dense wavelength division multiplexing (DWDM) and connect
them to the PCA, to forge a novel Optical XNOR-Bitcount based Binary Neural
Network Accelerator (OXBNN). Our evaluation for the inference of four modern
BNNs indicates that OXBNN provides improvements of up to 62x and 7.6x in
frames-per-second (FPS) and FPS/W (energy efficiency), respectively, on
geometric mean over two PIC-based BNN accelerators from prior work. We
developed a transaction-level, event-driven python-based simulator for
evaluation of accelerators (https://github.com/uky-UCAT/B_ONN_SIM). | Sairam Sri Vatsavai, Venkata Sai Praneeth Karempudi, Ishan Thakkar | 2023-02-03T20:56:01Z | http://arxiv.org/abs/2302.06405v2 | # An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks
###### Abstract
Binary Neural Networks (BNNs) are increasingly preferred over full-precision Convolutional Neural Networks (CNNs) to reduce the memory and computational requirements of inference processing with minimal accuracy drop. BNNs convert CNN model parameters to 1-bit precision, allowing inference of BNNs to be processed with simple XNOR and bitcount operations. This makes BNNs amenable to hardware acceleration. Several photonic integrated circuits (PICs) based BNN accelerators have been proposed. Although these accelerators provide remarkably higher throughput and energy efficiency than their electronic counterparts, the utilized XNOR and bitcount circuits in these accelerators need to be further enhanced to improve their area, energy efficiency, and throughput. This paper aims to fulfill this need. For that, we invent a single-MRR-based optical XNOR gate (OXG). Moreover, we present a novel design of bitcount circuit which we refer to as Photo-Charge Accumulator (PCA). We employ multiple OXGs in a cascaded manner using dense wavelength division multiplexing (DWDM) and connect them to the PCA, to forge a novel Optical XNOR-Bitcount based Binary Neural Network Accelerator (OXBNN). Our evaluation for the inference of four modern BNNs indicates that OXBNN provides improvements of up to 62\(\times\) and 7.6\(\times\) in frames-per-second (FPS) and FPS/W (energy efficiency), respectively, on geometric mean over two PIC-based BNN accelerators from prior work. We developed a transaction-level, event-driven python-based simulator for evaluation of accelerators ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM))1.
Footnote 1: To Appear at IEEE ISQED 2023
## I Introduction
Convolutional Neural Networks (CNNs) have revolutionized the implementation of various artificial intelligence tasks, such as image recognition, language translation, and autonomous driving [1, 2], due to their high inference accuracy. However, the heavy computation and storage requirements of CNNs still limit their application in practice. Therefore, to improve the speed and efficiency of CNN inference, model compression techniques such as quantization are widely employed [3, 4, 5]. Quantization techniques create compact CNNs compared to their floating-point counterparts by representing the weights/inputs of CNNs with lower precision. The extreme end of the quantization is binarization, i.e., a 1-bit quantization, that allows only two possible values for both inputs and weights, either -1(0) or +1.
Binarization replaces the heavy floating-point vector-dot-product operations (which constitute convolution operations in CNNs) with simple bit-wise XNOR and bitcount operations [6]. Since bit-wise XNOR and bitcount are lightweight operations, binarized CNNs, referred to as binary neural networks (BNNs), provide efficient hardware implementations. Among the BNN hardware implementations from prior works, the silicon-photonic accelerators have shown great promise to provide unparalleled parallelism, ultra-low latency, and high energy efficiency [7, 8]. Prior work [7] utilizes microdisks to realize XNOR-Bitcount processing cores (XPCs) that process the input and weight vectors, whereas [8] uses Microring Resonators (MRRs) in its XPCs to perform XNOR-Bitcount operations. However, these prior works face two shortcomings. First, they use at least two MRRs or microdisks to achieve 1-bit XNOR operation, which increases their area and energy consumption. Second, because of the limited scalability of their XNOR and bitcount circuits, they are forced to decompose the input and weight vectors into a large number of smaller slices before processing them. This generates a large number of partial sums (_psums_). Accumulating such a large number of _psums_ to obtain the final result, using a _psum_ reduction network, can incur a very high latency overhead.
To address these shortcomings, this paper presents a novel Optical XNOR-Bitcount based Binary Neural Network Accelerator (OXBNN). OXBNN employs a novel design of optical XNOR gates (OXGs). Our OXG uses a single MRR to perform a 1-bit XNOR operation, thereby reducing the area and energy consumption compared to prior works. Moreover, OXBNN employs a novel bitcount circuit, referred to as Photo-Charge Accumulator (PCA), which inherently supports the accumulation of a very high number of _psums_, thereby eliminating the need of using external _psum_ reduction networks, to consequently reduce the overall latency and energy consumption of BNN processing.
Our key contributions in this paper are summarized below.
* We present our invented, novel BNN accelerator called OXBNN, which employs an array of single-MRR-based optical XNOR gates (OXGs) and highly scalable bitcount circuits called Photo-Charge Accumulators (PCAs);
* We perform detailed modeling and characterization of our invented OXGs and PCAs using photonics foundry-validated, commercial-grade, photonic-electronic design automation tools (Section III);
* We perform a scalability analysis for our OXBNN and describe a pertinent mapping scheme (Section IV);
* We implement and evaluate OXBNN at the system-level with our in-house simulator ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM)), and compare its performance with two well-known photonic BNN accelerators from prior works, for the inferences of four state-of-the-art BNNs (Section V).
## II Preliminaries
### _Binary Neural Networks (BNNs)_
BNNs are specific types of CNNs that employ quantization techniques [9] to quantize the weights and inputs to 1-bit values, reducing the storage requirements and computational effort for improved energy efficiency of model inference. With binary quantization, the weights and inputs can only assume two possible values, either -1 or 1 [6, 10]. In general, the _sign_ function is the most widely used binary quantization function (Q):
\[Q(x)=sign(x)=\begin{cases}+1,&x\geq 0\\ -1,&x<0\end{cases} \tag{1}\]
Like for CNNs [11], a convolution operation for BNNs is also typically decomposed into multiple vector-dot-product (VDP) operations. Each VDP operation of a BNN occurs between two vectors, the individual elements of which are first binarized using Eq. (1). Then, the VDP operation between a binarized weight vector \(W\) and a binarized input vector \(I\) can be realized in two steps, in this given order: (i) element-wise (i.e., bit-wise) XNOR of \(I\) and \(W\) that produces an XNOR vector; (ii) bitcount of the XNOR vector. This VDP operation is captured in Eq. 2.
\[z=W\odot I=\sum_{i=1}^{S}W_{i}\odot I_{i} \tag{2}\]
Here, \(W_{i}\) and \(I_{i}\), respectively, are the individual bit-elements at index i of the binarized vectors \(W\) and \(I\) of size \(S\) each; \(\odot\) denotes the VDP operation (XNOR operation) between binarized vectors \(I\) and \(W\) (bit-elements \(W_{i}\) and \(I_{i}\)); \(\sum\) represents the bitcount operation.
**Using {0,1} instead of {-1,1}:** If binary value set {-1,1} is used, obtaining the activation values for the next BNN layer after a convolution operation requires \(sign(z)\) for each bitcount result \(z\). On the other hand, if binary value set {0,1} is used, obtaining the activation values for the next BNN layer after a convolution operation requires \(compare(z,0.5\times z_{max})=(z>0.5\times z_{max})\;?\;1:0\) for each bitcount result \(z\), where \(z_{max}\) is the size of the binarized vectors \(I\) and \(W\).
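The equivalence described above can be illustrated with a short numerical example (a toy sketch, independent of any hardware): the binary dot product of Eq. (2) is computed as XNOR followed by a bitcount, the activation follows from the \(compare\) threshold, and the result is checked against the \(\{-1,1\}\) encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 9
I = rng.integers(0, 2, S)                       # binarized input vector, values in {0,1}
W = rng.integers(0, 2, S)                       # binarized weight vector, values in {0,1}

xnor = np.logical_not(np.logical_xor(I, W)).astype(int)
z = xnor.sum()                                  # bitcount, Eq. (2)
activation = 1 if z > 0.5 * S else 0            # compare(z, 0.5 * z_max)

# Consistency with the {-1,1} encoding: sum_i W_i * I_i = 2*z - S.
Ipm, Wpm = 2 * I - 1, 2 * W - 1
assert Ipm @ Wpm == 2 * z - S

print(I, W, xnor, z, activation)
```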
### _Processing of BNNs on Hardware_
Fig. 1(a) illustrates the convolution between a 3\(\times\)3 weight channel and a 5\(\times\)5 input channel. During the convolution, based on the stride parameter, the weight channel slides over the input channel and performs inner products with multiple input channel windows (e.g., four input channel windows are shown in Fig. 1(a) with red, blue, yellow, and green borders), generating one output value per input channel window. From Fig. 1(b), to perform one such inner product (i.e., corresponding to the input channel window highlighted in green in Fig 1(a)), the input channel window and weight channel are flattened into input and weight vectors of size _S_=9 each. Then, a bitwise XNOR circuit, with a total of _N_=_S_=9 XNOR gates, is employed to generate an XNOR vector. A bitcount circuit then counts the bits in the XNOR vector to evaluate the corresponding inner product output. However, the hardware size \(N\) often differs from \(S\). For example, in Fig. 1(c), _S_=9 and _N_=5. In this case, both the input and weight vectors (_S_=9 each) are decomposed into two slices each: Slice 1 with _S_=5 and Slice 2 with _S_=4. These slices are then mapped onto two bitwise XNOR circuits with _N_=5 each, as shown in Fig. 1(c), to consequently produce two XNOR vector slices. Applying bitcount on these XNOR vector slices generates two partial sums (_psums_), i.e., \(psum^{1}\) and \(psum^{2}\). \(psum^{1}\) and \(psum^{2}\) are then sent to a _psum_ reduction network to generate the corresponding inner product output. The addition of the _psums_ by the _psum_ reduction network incurs additional latency and energy overheads while processing BNNs.
### _Related Work on Optical BNN Accelerators_
To accelerate CNN inferences with low latency and low energy consumption, prior works proposed various accelerators
Fig. 1: (a) Illustration of a convolution between a weight and input channel in a Binary Neural Network. Bit-wise XNOR and bitcount operations between a flattened weight vector and input vector, (b) when _S_=_N_=9, and (c) when _N_=5, _S_=9; each input and weight vector of _S_=9 is split into two slices (Slice 1 with _S_=5 and Slice 2 with _S_=4). Binary value set {-1,1} is used in this example.
based on photonic integrated circuits (PICs) (e.g., [12, 13, 14]). These accelerators can be classified as incoherent (e.g., [11, 12, 13]) or coherent (e.g., [15, 16]). Because of the inherent advantages of incoherent accelerators [13, 17], the BNN-specific incoherent accelerators [8] and [7] were reported. _These optical BNN accelerators from prior works employ binary value set {0,1}_. [8] proposes broadcast and weight styled [18] XNOR-Bitcount circuits, which use heterogeneous MRRs to mitigate fabrication process variations. In contrast, the microdisk-based accelerator [7] proposes an all-optical XNOR-Bitcount circuit that uses optical XNOR gates, optical analog-to-digital converters (ADCs), and PCM-based racetrack memory to enable processing at a very high datarate. However, both [8] and [7] require at least two MRRs or microdisks to perform a 1-bit XNOR operation (in [7], one additional MRR/microdisk is required to modulate the optically applied input operand). Therefore, their XNOR circuits occupy high area and consume high energy. In addition, the bitcount circuits of these prior works can evaluate only one _psum_ at a time by counting the bits of one XNOR vector slice at a time. Therefore, these circuits have to store the individual _psum_s temporarily in memory. Once sufficient _psum_s are collected, they can be sent to a _psum_ reduction network to produce the final result. Thus, the bitcount circuits from prior works incur high memory footprint for storing _psum_s, and high latency and energy for processing _psums_. Our OXBNN accelerator addresses these shortcomings of prior works.
## III Our Proposed OXBNN Architecture
### _Overview_
The main processing unit of our OXBNN architecture is an XNOR-Bitcount Processing Core (XPC), which is illustrated in Fig. 2. Our XPC has an array of total \(N\) single-wavelength laser diodes (LDs), with each LD sourcing optical power of \(P_{\lambda i}^{in}\) amount at a distinct wavelength \(\lambda_{i}\). The total power from all \(N\) LDs (at wavelengths \(\lambda_{1}\) to \(\lambda_{N}\)) multiplex into a single photonic waveguide through wavelength division multiplexing (WDM). The optical power containing all these \(N\) wavelengths is split into \(M\) input waveguides, each of which connects to an XNOR-Bitcount Processing Element (XPE) (Fig. 2). An XPC contains a total of \(M\) XPEs.
### _XNOR-Bitcount Processing Element (XPE)_
From Fig. 2, an XPE in our OXBNN architecture contains two parts: _(i)_ an array of a total of \(N\) Optical XNOR Gates (OXGs) that generates an XNOR vector (or an XNOR vector slice) containing \(N\) optical bits, and _(ii)_ our invented Photo-Charge Accumulator (PCA) that performs bitcount on the generated XNOR vector (or XNOR vector slice). The value \(N\) here, which is equal to the number of wavelengths and number of OXGs per XPE, is referred to as the size of the XPE.
#### III-B1 Array of Optical XNOR Gates (OXGs)
In an XPE, an array of a total of \(N\) OXGs couples to an input waveguide as shown in Fig. 2. Each OXG operates upon a unique wavelength \(\lambda_{i}\) traversing the input waveguide. Each OXG in the array electrically receives two binary operands (i.e., input bit \(i_{1}^{N}\) and weight bit \(\text{w}_{1}^{N}\)) from its corresponding drivers (not shown in the figure). The array of OXGs performs a bit-wise logical XNOR between an _N_-bit input vector slice \(I_{1}\) = \(\{i_{1}^{1},i_{1}^{2},..,i_{1}^{N}\}\) and an _N_-bit weight vector slice \(W_{1}\) = \(\{w_{1}^{1},w_{1}^{2},..,w_{1}^{N}\}\) to produce a resultant _N_-bit XNOR vector slice. Each OXG in the array produces one bit of the resultant XNOR vector slice, and it imprints this bit on its corresponding \(\lambda_{i}\) (by modulating the optical transmission at \(\lambda_{i}\)) to be consequently guided to
Fig. 2: Schematic of an XNOR-Bitcount Processing Core (XPC) of our OXBNN accelerator. Our OXBNN employs binary value set {0,1}.
the bitcount circuit (i.e., PCA) via the output waveguide. As a result, the PCA receives the \(N\) individual optical bits of the _N_-bit XNOR vector slice concurrently on \(N\) distinct wavelengths. The PCA performs bitcount on these optical bits, as explained later. This entire processing step, from the bit-parallel application of the binary input and weight vector slices at the electrical input terminals of the array of \(N\) OXGs to the generation of the bitcount result by the PCA, takes very low latency because of the light-speed operation of the XPE. _We refer to this processing step mapped on an XPE as a_ PASS _and the corresponding latency as_ \(\tau\)_. Thus, our XPE can produce one bitcount result for one XNOR vector slice in every single PASS with_ \(\tau\) _latency. Since_ \(\tau\) _can be very low (as low as 20 ps), our XPE can achieve very high processing throughput by completing one PASS every_ \(\tau\) _period. For that, multiple input and weight vector slices_ \(\{I_{1},I_{2},..,I_{\alpha}\}\) _and_ \(\{W_{1},W_{2},..,W_{\alpha}\}\) _can be applied to the array of OXGs of an XPE in a serial manner at the predefined data rate (DR) of_ \(\frac{1}{\tau}\)_. The design and operation of an OXG and PCA are explained next._
_Design of an Optical XNOR Gate (OXG): The design of our invented Optical XNOR Gate (OXG) is illustrated in Fig. 3(a). It is an add-drop microring resonator (MRR), which has two operand terminals (realized as embedded PN-junctions) that can take two operand bits_ \(i\) _and_ \(w\) _as inputs for a predefined time-width (usually a little less than the_ \(\tau\) _period). Fig. 3(b) shows the passbands of the MRR for different operand inputs and temperature conditions. The MRR's temperature can be increased using the integrated microheater (Fig. 3(a)), to consequently tune its operand-independent resonance from its fabrication-defined initial position_ \(\eta\) _to its programmed position_ \(\kappa\) _(blue passband; Fig. 3(b)), relative to the input optical wavelength position_ \(\lambda_{in}\)_. For each bit combination at the operand terminals ((_i_,_w_) = (0,1), (1,0), or (1,1)), the MRR's resonance passband electro-refractively moves to an operand-driven position (red and magenta passbands in Fig. 3(b)). Based on the MRR resonance passband's programmed position_ \(\kappa\) _relative to_ \(\lambda_{in}\)_, the through-port transmission (T(_\(\lambda_{in}\)_)) of the MRR provides bit-wise logical XNOR operation between the input bits_ \(i\) _and_ \(w\)_._
_To validate the operation of our OXG, we performed the transient analysis, as shown in Fig. 3(c). For that, we modelled and simulated our OXG using the foundry-validated tools from Ansys/Lumerical's DEVICE, CHARGE, and INTERCONNECT suites [19]. Fig. 3(c) shows two input bit-streams \(I\) = \(\{i_{1}^{1},i_{2}^{1},..,i_{8}^{1}\}\) and \(W\) = \(\{w_{1}^{1},w_{2}^{1},..,w_{8}^{1}\}\) applied to the two PN junctions of our OXG at a DR = 10 GS/s. By looking at the output optical trace T(\(\lambda_{in}\)) in Fig. 3 (c), we can say T(\(\lambda_{in}\)) = \(\{i_{1}^{1}\odot w_{1}^{1},..,i_{8}^{1}\odot w_{8}^{1}\}\), which validates the functionality of our OXG as a logical XNOR gate. From our validation, our OXG has a full passband width at half maximum (FWHM) of 0.35 nm and it can operate at DR of up to 50 GS/s. Our XNOR gate consumes energy of 0.032nJ with an area footprint of 0.011mm\({}^{2}\)._
#### III-B2 Photo-Charge Accumulator (PCA)
_From Section III-A, the XNOR vector bits generated by an array of OXGs are guided to a PCA circuit, where a bitcount is performed on the XNOR vector bits to generate an output result. Our PCA circuit employs a photodetector and two time integrating receiver (TIR) circuits_ [20] _(one of the TIR1 and TIR2 circuits remains redundant, enabled by the demux and mux; Fig 4). The photodetector generates a current pulse for each optical logic '1' incident upon it. The amplitude of a current pulse generated for an optical logic '0' remains under the noise limit; therefore, a logic '0' remains statistically undetected. The current pulse generated by an optical logic '1' accumulates a certain statistically significant amount of charge on the capacitor of the active TIR circuit (e.g., the circuit with C1 capacitor); as a result, the TIR circuit outputs a detectable analog voltage level [20]. Hence, when more optical '1's are incident upon the photodetector, the total accumulated charge on the active capacitor (e.g., C1), and thus, the accrued output analog voltage level, grows proportionally to the total number of optical '1's that are incident [20]. This is because a current source (a sequence of current pulses) can charge a capacitor linearly following this equation: \(\delta V\)=\(\frac{i\delta t}{C}\), where \(i\) is an incident current pulse, \(\delta t\) is the time-width of the current pulse, \(C\) is the capacitance, and \(\delta V\) is the accrued voltage. The final analog voltage accrued at the TIR output, thus, represents the bitcount result (accumulation result) of the incident optical '1's. However, the number of '1's that can be accumulated in such a manner might be limited, as the output of the TIR circuit (Fig. 4) might saturate. Once the output of a TIR circuit saturates, the ongoing accumulation phase ends and the bitcount result (i.e., the final TIR output voltage) is passed through a comparator to generate the activation value for the next BNN layer (as explained in Section II-A). After one accumulation phase, a discharge of the active capacitor (e.g., C1) is needed to prepare the circuit for the next accumulation phase. While capacitor C1 is discharging, the redundant TIR2 circuit with capacitor C2 mitigates the discharge latency by allowing a continuation of a concurrent bitcount.
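The charge-accumulation behaviour of the PCA can be captured by a simple behavioural model, shown below. All component values (photocurrent per '1', pulse width, capacitance, TIR gain) are placeholders chosen only to make the linear-charging relation \(\delta V = i\,\delta t/C\) concrete; they are not the characterized device parameters.

```python
# Behavioural sketch of the PCA bitcount (values are illustrative assumptions).
C = 10e-12        # integration capacitance, 10 pF
i_one = 50e-6     # assumed photocurrent pulse for an optical '1' (A)
dt = 20e-12       # pulse width for one PASS at 50 GS/s (s)
gain = 50         # assumed TIR gain
v_range = 5.0     # TIR dynamic range (V)

dv_per_one = gain * i_one * dt / C              # output-voltage step per accumulated '1'
capacity = int(round(v_range / dv_per_one))     # gamma: max number of '1's before saturation

def bitcount(bits):
    """Accumulate optical '1's; raise when the TIR output would saturate."""
    ones = sum(bits)
    if ones > capacity:
        raise RuntimeError("TIR saturated: end accumulation phase and discharge the capacitor")
    return ones, ones * dv_per_one              # (digital count, analog TIR voltage)

print(capacity, bitcount([1, 0, 1, 1, 0, 1]))
```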
Fig. 3: (a) Schematic of our Optical XNOR Gate (OXG). (b) Spectral operation of OXG. (c) Transient analysis of OXG.
## IV Scalability Analysis and Mapping
### _Scalability of XNOR-Bitcount Processing Cores (XPCs)_
To determine the achievable size \(N\) for our XPC, we adopt scalability analysis equations (Eq. 3, Eq. 4, and Eq. 5) from [21] and [17]. Table I reports the definitions of the parameters and their values used in these equations. We considered Free Spectral Range (FSR=50nm) [21], FWHM=0.35nm (refer Section III-B), and inter-wavelength gap of 0.7nm. For these spectral conditions, we observed minimal crosstalk power penalty for the OXGs operating at DR=50GS/s (\(<\)1 dB penalty [22, 23, 24], which is accounted for as part of parameter \(IL_{penalty}\) in the equations (Table I)). Since the XPC of our OXBNN accelerator processes binarized vectors, it requires the bit precision of _B_=1-bit in the equations. We consider _M=N_ and first solve Eq. 3 and Eq. 4 for a set of DRs={3, 5, 10, 20, 30, 40, 50} GS/s, to find a corresponding set of \(P_{PD-opt}\). Then, we solve Eq. 5 for \(N\) with the obtained set of \(P_{PD-opt}\) values across the set of _DR_s. Table II reports the achievable \(N\) for our XPC across various _DR_s. As evident, the supported \(N\) value decreases from _N=66_ at 3 GS/s to _N=19_ at 50 GS/s. This achievable \(N\) value defines the feasible number of OXGs per XPE; thus, this \(N\) also defines the maximum size of the XNOR vector slice that can be generated in our XPC. Because we consider FSR of 50nm and inter-wavelength gap of 0.7nm, we verify that the maximum _N_=66 can be supported within the FSR (i.e., _N_=66\(<\)(FSR/0.7nm)).
\[B=\frac{1}{6.02}\left[20\log_{10}\!\left(\frac{R_{s}\times P_{PD-opt}}{\beta\sqrt{DR/\sqrt{2}}}\right)-1.76\right] \tag{3}\]
\[\beta=\sqrt{2q(R_{s}P_{PD-opt}+I_{d})+\frac{4kT}{R_{L}}+R_{s}^{2}P_{PD-opt}^{2 }RIN} \tag{4}\]
\[\begin{split} P_{Laser}=\frac{10^{\frac{\eta_{WG}(dB)|N(d_{OX}) +d_{elemental}}{10}}M}{\eta_{SMF}\eta_{EC}\eta_{WPE}IL_{i/p-OX}}\times\frac{P_{ PD-opt}}{IL_{penalty}}\\ \times\frac{1}{(OBL_{OXG})^{N-1}(EL_{splitter})^{log2 M}}\end{split} \tag{5}\]
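A hedged numerical sketch of how Eqs. (3)-(4) are used is given below: for each datarate we solve for the detector power \(P_{PD-opt}\) that yields \(B=1\) bit. The responsivity of 1.2 A/W matches the value quoted in the PCA analysis below; all other constants are placeholder assumptions standing in for the Table I values, so the printed numbers are illustrative only. Eq. (5) would then translate \(P_{PD-opt}\) into the supported \(N\).

```python
import numpy as np
from scipy.optimize import brentq

q, kB = 1.602e-19, 1.381e-23
# Responsivity (A/W) from the text; dark current, temperature, load resistance,
# and RIN (-140 dB/Hz) are placeholder assumptions.
Rs, Id, T, RL, RIN = 1.2, 35e-9, 300.0, 50.0, 10**(-140/10)

def bit_precision(P, DR):
    """Eqs. (3)-(4): achievable bit precision B for detector power P at datarate DR."""
    beta = np.sqrt(2*q*(Rs*P + Id) + 4*kB*T/RL + (Rs*P)**2 * RIN)
    return (20*np.log10(Rs*P / (beta*np.sqrt(DR/np.sqrt(2)))) - 1.76) / 6.02

for DR in (3e9, 10e9, 50e9):
    P_opt = brentq(lambda P: bit_precision(P, DR) - 1.0, 1e-9, 1e-1)
    print(f"DR = {DR/1e9:>4.0f} GS/s  ->  P_PD-opt ~ {P_opt*1e6:.2f} uW")
```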
Analysis of PCA's Accumulation Capacity: We modeled the photodetector (PD) of our PCA circuit using the INTERCONNECT tool from Ansys/Lumerical [19] for PD responsivity = 1.2 A/W across different \(P_{PD-opt}\) values corresponding to the \(N\) values in Table II. We extracted the current pulse values generated by the photodetector for the incident optical '1's and '0's corresponding to each \(P_{PD-opt}\). We then imported these values in our MultiSim [25] based model of the PCA with C1=C2=10pF [20], and the TIR gain=50. For these parameters, we simulated the analog output voltage at the PCA's TIR for different bitcount results (i.e., different values of the total number of accumulated '1's). From this analysis, we observed that the maximum number of '1's that can be accumulated by our PCA is limited by the available operating dynamic range of the TIR of our PCA. We considered the TIR's operating dynamic range to be 5V (0V to 5V) and evaluated our PCA's accumulation capacity \(\gamma\), which we define as the maximum number of '1's that can be accumulated by the PCA within the TIR's operating dynamic range. Our evaluated \(\gamma\) values, for each pair of \(N\) and corresponding \(P_{PD-opt}\), are reported in Table II. Since our PCA can accumulate a total of \(N\) bits and since each XNOR vector slice in our XPC has a total of \(N\) bits, our PCA can accumulate a total of \(\alpha\) XNOR vector slices, where \(\alpha\) = \(\frac{\gamma}{N}\). Table II also reports the values of \(\alpha\). As evident, the \(\gamma\) and \(\alpha\) values for our PCA can be very large, which provides several substantial benefits as discussed in Section IV-C.
### _Mapping Convolutions on an XPC_
As described in Section II-A, for processing a BNN convolution on hardware, both the weight and input channels are flattened into binarized vectors. For mapping of a binary
Fig. 4: Photo-Charge Accumulator (PCA) Circuit. \(V_{REF}\) is the threshold required in the \(compare()\) function discussed in Section II-A. Typically, \(V_{REF}\) = 2.5V because we consider the dynamic range of the TIR to be 5V.
convolution on an XPC (or XPE), these binarized input and weight vectors are represented as matrices. For instance, the input matrix \(\mathbb{I}\)_(H, S)_ has \(H\) rows corresponding to \(H\) binarized input vectors of size \(S\) each. Similarly, the weight matrix \(\mathbb{W}\)_(H, S)_ can also be defined. These matrices \(\mathbb{W}\)_(H, S)_ and \(\mathbb{I}\)_(H, S)_ are mapped onto an XPC containing a total of \(M\) XPEs of size \(N\) each. Depending on the relation between \(S\) and \(N\), two cases drive the selection of the appropriate mapping. These cases and their corresponding mappings are illustrated in Fig. 5, _for M=2_, _H=2_, _N=9_, and two distinct values of \(S\). These cases are explained below:
**Case 1, S=15, S\(>\)N, Fig. 5(a) and 5(b)**: Matrices \(\mathbb{I}\) and \(\mathbb{W}\) consist of two vectors each, \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\), respectively. To make the size _S=15_ of these vectors \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\) amenable to the XPE size _N_=9, each of these vectors is split into two slices to yield a set of input vector slices \(\{I_{1}^{1}\), \(I_{1}^{2}\), \(I_{2}^{1}\), \(I_{2}^{2}\)\(\}\) and a set of weight vector slices \(\{W_{1}^{1}\), \(W_{1}^{2}\), \(W_{2}^{1}\), \(W_{2}^{2}\)\(\}\). Since _M=2_ is less than the total number of vector slices (i.e., \(H\times\lceil S/N\rceil=4\)), multiple passes are required to complete the processing of these vector slices. Mappings of these vector slices differ between our PCA and the bitcount circuit from prior works [8] and [7], as discussed next.
**Mapping for the bitcount circuit from [8] and [7] (Fig. 5(a))**: Since M=2, there are two XPEs, namely XPE 1 and XPE 2. During PASS 1 of these XPEs (the definition of a PASS is given in Section III-B), we map \(\{I_{1}^{1}\), \(W_{1}^{1}\}\) onto XPE 1, and \(\{I_{1}^{2}\), \(W_{1}^{2}\}\) onto XPE 2. XPE 1 generates the corresponding XNOR vector, which is accumulated using the bitcount circuit to produce _psum_\(I_{1}^{1}\!\odot\!W_{1}^{1}\). Similarly, XPE 2 generates _psum_\(I_{1}^{2}\!\odot\!W_{1}^{2}\). The generated _psums_ are reduced (further accumulated) at the _psum_ reduction network, to produce Final Result 1. Similarly, during PASS 2, vector slices \(\{I_{2}^{1}\), \(I_{2}^{2}\), \(W_{2}^{1}\), \(W_{2}^{2}\)\(\}\) are mapped to generate corresponding _psums_, which are then sent to the _psum_ reduction network to produce Final Result 2. Thus, for the bitcount circuits from prior works, there is a need for employing a _psum_ reduction network, which leads to a high latency overhead.
**Mapping for our OXBNN with PCAs, Fig. 5(b)**: Our OXBNN maps all the slices of a particular vector to the same XPE. During PASS 1, OXBNN maps \(\{I_{1}^{1}\), \(W_{1}^{1}\}\) to XPE 1, and \(\{I_{2}^{1}\), \(W_{2}^{1}\}\) to XPE 2. XPE 1 charges its PCA's capacitor to generate an analog voltage level that represents _psum_\(I_{1}^{1}\!\odot\!W_{1}^{1}\), whereas XPE 2 charges its PCA's capacitor to generate an analog voltage level that represents _psum_\(I_{2}^{1}\!\odot\!W_{2}^{1}\). Because a PCA can accumulate a total of \(\alpha\) vector slices (Section III-B2), the PCAs of XPE 1 and XPE 2 can be made to hold the charge and analog voltage accrued during PASS 1. Then, during PASS 2, XPE 1 and XPE 2 can further grow these held analog voltage levels by the amounts proportional to \(I_{1}^{2}\!\odot\!W_{1}^{2}\) and \(I_{2}^{2}\!\odot\!W_{2}^{2}\), respectively. Thus, at the end of PASS 2, the total accrued analog voltage on the PCA of XPE 1 (XPE 2) would be proportional to \(I_{1}^{1}\!\odot\!W_{1}^{1}\) + \(I_{1}^{2}\!\odot\!W_{1}^{2}\) (\(I_{2}^{1}\!\odot\!W_{2}^{1}\) + \(I_{2}^{2}\!\odot\!W_{2}^{2}\)). Thus, the PCAs of our OXBNN can accumulate multiple _psums_ (a total of \(\alpha\)_psums_) inherently. This eliminates the need to employ _psum_ reduction networks, thereby yielding substantial benefits, as further explained in Section IV-C.
**Case 2, S=9, S\(\leq\)N, Fig. 5(c)**: The size _S=9_ of the vectors \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\) matches the XPE size _N_=9. Thus, in a single pass (PASS 1), our OXBNN maps \(\{I_{1}\), \(W_{1}\}\) to XPE 1, and \(\{I_{2}\), \(W_{2}\}\) to XPE 2. XPE 1 and XPE 2 produce Final Result 1 and Final Result 2 corresponding to \(I_{1}\!\odot\!W_{1}\) and \(I_{2}\!\odot\!W_{2}\), respectively. In this case, the mapping is identical for our PCA and the bitcount circuits from prior work.
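The two mapping policies of Fig. 5 can be summarized in a few lines of scheduling logic. The sketch below is our own illustration for the simple case of \(H\leq M\); the function and variable names are ours and do not correspond to the simulator.

```python
# Our own sketch (illustrative names) of the two slice-to-XPE mapping policies of Fig. 5,
# assuming H <= M so that every vector can own one XPE.
def slice_vec(v, n):
    """Split a binarized vector of size S into ceil(S/N) slices of at most N bits."""
    return [v[i:i + n] for i in range(0, len(v), n)]

def map_oxbnn(inputs, weights, n):
    """Fig. 5(b): all slices of vector h go to XPE h; its PCA accumulates psums across passes."""
    return {(p, h): (i_sl, w_sl)
            for h, (iv, wv) in enumerate(zip(inputs, weights))
            for p, (i_sl, w_sl) in enumerate(zip(slice_vec(iv, n), slice_vec(wv, n)))}

def map_prior(inputs, weights, n, m):
    """Fig. 5(a): slices of one vector are spread over XPEs per pass; psums need reduction."""
    slices = [(i_sl, w_sl)
              for iv, wv in zip(inputs, weights)
              for i_sl, w_sl in zip(slice_vec(iv, n), slice_vec(wv, n))]
    return {(k // m, k % m): s for k, s in enumerate(slices)}

I = [[1, 0, 1] * 5, [0, 1, 1] * 5]   # H=2 binarized input vectors, S=15
W = [[1, 1, 0] * 5, [0, 0, 1] * 5]   # H=2 binarized weight vectors, S=15
print(sorted(map_oxbnn(I, W, 9)))     # (pass, xpe) keys: (0,0),(0,1),(1,0),(1,1)
print(sorted(map_prior(I, W, 9, 2)))  # same keys, but the slices of I1/W1 fill pass 0
```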
### _Latency and Energy Benefits of PCA_
Our PCA provides manifold benefits in terms of both the latency and energy consumption. The latency benefits accrue because our PCA eliminates the need of employing _psum_ reduction networks to temporarily store and accumulate _psums_.
Fig. 5: Example mappings and related operation of our XPC for various cases of the \(S\) and \(N\) values. A comparison of our PCA with the bitcount circuit from prior works is also illustrated.
From Section IV and Table II, our PCA can achieve \(\gamma\)=8503 and \(\alpha\)=447 at \(DR\)=50 GS/s, which means that our PCA, before it saturates, can accumulate a total of \(\gamma\)=8503 '1's across a total of \(\alpha\)=447 XNOR vector slices. As a result, if we operate the OXGs of our OXBNN at \(DR\)=50 GS/s, our PCA can inherently accumulate (perform bitcount on) any XNOR vector whose size \(S\) is less than \(\gamma\)=8503. Since the maximum XNOR vector size is observed to be \(S\)=4608 across all major modern CNNs (e.g., ResNet18, ResNet50, DenseNet121, VGG16, VGG19, GoogleNet, Inception_V3, EfficientNet_B7, NASNetMobile, MobileNet_V2, and ShuffleNet) [26], our PCA eliminates the need to employ dedicated _psum_ reduction networks in our OXBNN accelerator.
## V Evaluation
### _System-Level Implementation of OXBNN._
Fig. 6 illustrates the system-level implementation of our OXBNN accelerator. It consists of global memory that stores BNN parameters and a pre-processing and mapping unit. It has a mesh network of tiles. Each tile contains 4 XPCs interconnected (via H-tree) with an output buffer as well as pooling units.
### _Simulation Setup_
For evaluation, we model our OXBNN accelerator from Fig. 6 using our custom-developed, transaction-level, event-driven python-based simulator ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM)). We simulated the inference of four BNNs (batch size=1): VGG-small [9], ResNet18 [27], MobileNet_V2 [28], and ShuffleNet_V2 [29]. We binarized all the weights and inputs using the LQ-Nets technique [9]. We evaluate frames-per-second (FPS) and FPS/W (energy efficiency).
We compared our OXBNN with ROBIN [8] and LIGHTBULB [7]. ROBIN and LIGHTBULB operate at different DRs; therefore, we consider two variants of our OXBNN: (1) OXBNN_5 with DR=5GS/s (matching with ROBIN) and _N=53_ (Table II), (2) OXBNN_50 with DR=50GS/s (matching with LIGHTBULB) and _N=19_ (Table II). We consider two variants of ROBIN: ROBIN Energy-Optimized (ROBIN_EO) and ROBIN Performance-Optimized (ROBIN_PO) [8]. For fair comparison, we perform area proportionate analysis, wherein we altered the XPE count for each photonic BNN accelerator across all of the accelerator's XPCs to match with the area of OXBNN_5 having 100 XPEs. Accordingly, the scaled XPE counts of OXBNN_50 (_N=19_), ROBIN_PO (_N=50_), ROBIN_EO (_N=10_), and LIGHTBULB (_N=16_) are 1123, 183, 916, and 1139, respectively. Table III gives the parameters used for our evaluation.
### _Evaluation Results_
Fig. 7(a) compares FPS values (log scale). OXBNN_50 achieves 62\(\times\), 8\(\times\), and 7\(\times\) better FPS than ROBIN_EO, ROBIN_PO, and LIGHTBULB, respectively, on gmean across the BNNs. Similarly, OXBNN_5 also outperforms ROBIN_EO, ROBIN_PO, and LIGHTBULB by 54\(\times\), 7\(\times\), and 16\(\times\), respectively, on gmean across the BNNs. In terms of energy efficiency (FPS/W; Fig. 7(b)), our accelerator OXBNN_50 also outperforms ROBIN_EO, ROBIN_PO, and LIGHTBULB by 4.9\(\times\), 5.5\(\times\), and 1.5\(\times\), respectively, on gmean across the BNNs. The energy benefits of OXBNN_5 and OXBNN_50 are due to the novel OXGs. Due to their single-MRR design, these OXGs consume less energy and static power, compared to the OXGs (containing at least two MRRs or microdisks per OXG) from ROBIN and LIGHTBULB. Moreover, the elimination of the dedicated _psum_ reduction network (Section IV-C) also eliminates related high energy consumption. Thus, these benefits collectively render better FPS/W for OXBNN_5 and OXBNN_50.
Fig. 7: (a) FPS (log scale) (b) FPS/W for OXBNN versus ROBIN and LIGHTBULB accelerators.
Fig. 6: System-level overview of our OXBNN accelerator.
## VI Conclusions
In this paper, we present a single-MRR-based optical XNOR gate (OXG) and a novel bitcount circuit Photo-Charge Accumulator (PCA). We employ OXGs and PCAs to forge a novel accelerator, called OXBNN, to process the inferences of BNNs. We performed a comprehensive analysis to show the throughput and energy efficiency advantages of OXBNN. Our evaluation results show that OXBNN provides improvements of up to 62\(\times\) and 7.6\(\times\) in throughput (FPS) and energy efficiency (FPS/W), respectively, on geometric mean over two state-of-the-art photonic BNN accelerators from prior works.
## Acknowledgments
We thank the anonymous reviewers whose valuable feedback helped us improve this paper. We would also like to acknowledge the National Science Foundation (NSF) as this research was supported by NSF under grant CNS-2139167.
|
2301.10737 | Distributed Control of Partial Differential Equations Using
Convolutional Reinforcement Learning | We present a convolutional framework which significantly reduces the
complexity and thus, the computational effort for distributed reinforcement
learning control of dynamical systems governed by partial differential
equations (PDEs). Exploiting translational invariances, the high-dimensional
distributed control problem can be transformed into a multi-agent control
problem with many identical, uncoupled agents. Furthermore, using the fact that
information is transported with finite velocity in many cases, the dimension of
the agents' environment can be drastically reduced using a convolution
operation over the state space of the PDE. In this setting, the complexity can
be flexibly adjusted via the kernel width or by using a stride greater than
one. Moreover, scaling from smaller to larger systems -- or the transfer
between different domains -- becomes a straightforward task requiring little
effort. We demonstrate the performance of the proposed framework using several
PDE examples with increasing complexity, where stabilization is achieved by
training a low-dimensional deep deterministic policy gradient agent using
minimal computing resources. | Sebastian Peitz, Jan Stenner, Vikas Chidananda, Oliver Wallscheid, Steven L. Brunton, Kunihiko Taira | 2023-01-25T17:55:30Z | http://arxiv.org/abs/2301.10737v2 | # Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning
###### Abstract
We present a convolutional framework which significantly reduces the complexity and thus, the computational effort for distributed reinforcement learning control of dynamical systems governed by partial differential equations (PDEs). Exploiting translational invariances, the high-dimensional distributed control problem can be transformed into a multi-agent control problem with many identical, uncoupled agents. Furthermore, using the fact that information is transported with finite velocity in many cases, the dimension of the agents' environment can be drastically reduced using a convolution operation over the state space of the PDE. In this setting, the complexity can be flexibly adjusted via the kernel width or by using a stride greater than one. Moreover, scaling from smaller to larger systems - or the transfer between different domains - becomes a straightforward task requiring little effort. We demonstrate the performance of the proposed framework using several PDE examples with increasing complexity, where stabilization is achieved by training a low-dimensional deep deterministic policy gradient agent using minimal computing resources.
## 1 Introduction
Distributed control of dynamical systems governed by partial differential equations (PDEs) is a challenging task - both from a control theoretical as well as a computational point of view - with numerous important applications, such as the control of chemical processes [1], turbulence control [2], and robotic systems [3]. Due to the dependency on both space and time, these control problems exhibit a very large number of degrees of freedom (DOF) for both the system state as well as the control input, which in particular renders real-time control difficult. Even offline computations can quickly become prohibitively expensive, and in terms of machine learning control, the large number of DOFs calls for an extremely large number of trainable parameters and substantial requirements regarding the training data, rendering learning expensive in terms of data and computation [4].
A plethora of data-driven and machine-learning approaches for PDE-constrained control have been proposed in recent years, most prominently in the field of fluid mechanics [5]. Among these, _reinforcement learning_ (RL) plays an increasingly important role [6, 7, 8, 9, 10]. However, the complexity of the dynamics and the very large number of parameters call for additional measures, by means of exploiting patterns within the dynamics or explicitly including system knowledge. In the first case, one can derive data-driven surrogate models of lower dimension using, e.g., the _Proper Orthogonal Decomposition_[11], _Dynamic Mode Decomposition_[12, 13, 14, 15] or models based on _deep neural networks_[16, 17]. In the latter case, we may make use of the governing equations in the form of _Physics-Informed Neural Networks_[18], or reduce the complexity by exploiting symmetries such as translational or rotational invariances. This can be useful both in the control setting (see, for instance, [19] for _model predictive control_ of ordinary differential equations with symmetries) as well as for prediction, see [20] for chaotic PDEs. In the latter paper, the authors also exploit the fact that mass and energy are transported with finite
velocity, which allows them to replace a global surrogate model by a set of identical, locally coupled agents. Similar considerations also allow for the construction of network models for the analysis of complex systems such as turbulent flows [21].
In the RL literature, multi-agent approaches have until now mainly been used for distributed control of interconnected systems [22, 23, 24], mostly focusing on individual agents and their interaction and less on training and data efficiency. The case of identical agents was addressed in [6] and [25] in terms of enhancing turbulence modeling for wall-bounded flows via RL, but not for control.
In this paper, we address the distributed control problem by a multi-agent RL approach of uncoupled, identical agents (see Fig. 1) with the details presented in Section 3. The locality and identity of the agents is ensured by applying a convolution operation to the PDE state in order to obtain a spatially confined environment state. The translational invariance then ensures that every agent is statistically identical and can choose its actions based on local information only. The advantages are
* a massive reduction of the agent's state and action dimensions, which allows for much smaller agents,
* a strong increase in the available training data, as all agents share the same parameters and training data set,
* the transfer of trained agents to various spatial domains in a simple plug-and-play manner.
We observe in several examples (Section 4) that we can train a stabilizing feedback controller using very limited computational resources. For instance, we can learn a stabilizing policy for the Kuramoto-Sivashinsky equation on a large domain of size \(L=500\) with \(P=200\) actuators within less than \(20\) minutes on a consumer grade laptop.
## 2 Reinforcement learning
We here only give a very brief overview of reinforcement learning (RL); a much more detailed introduction can be found in, e.g., [26]. The standard reinforcement learning setup consists of an _agent_ interacting with an _environment_ (stochastic or deterministic) in discrete time steps. At each time step \(k\), the agent receives an _observation_\(\tilde{y}_{k}\in\mathcal{Y}\), takes an _action_\(\tilde{u}_{k}\in\mathcal{U}\) and receives a _reward_\(R_{k}\in\mathbb{R}\). The environment may be only partially observed, and the observation may also consist of delay coordinates.
The agent's behavior is defined by a _policy_\(\pi\), which is a mapping from the state space to a probability distribution over the actions \(\pi:\mathcal{Y}\to P(\mathcal{U})\). The environment dynamics is described by a _Markov decision process (MDP)_ with a state space \(\mathcal{Y}\), action space \(\mathcal{U}\subseteq\mathbb{R}^{m}\), an initial state distribution \(p(\tilde{y}_{0})\), transition dynamics \(p(\tilde{y}_{k+1}|\tilde{y}_{k},\tilde{u}_{k})\) and a reward function \(r(\tilde{y}_{k},\tilde{u}_{k})\). The _return_ from a state, given a policy \(\pi\), is defined as the sum of discounted future rewards
\[G=\mathbb{E}_{\pi}\left[\sum_{i=k}^{p}\gamma^{i-k}R_{i}\right],\]
where \(\gamma\in[0,1]\) is the discount factor. The _goal_ in reinforcement learning is now to learn the policy that maximizes the expected return, which is intricately related to Bellman's principle of optimality and the identification of the associated value function.
As the central goal of this paper is to introduce a novel learning architecture which significantly simplifies the training by exploiting physical properties of the system, we will not focus on a specialized RL architecture. Instead, the method we use to train the individual agents is the well-known _Deep Deterministic Policy Gradient (DDPG)_ [27] in its standard version (i.e., the implementation in the _Julia_ package _ReinforcementLearning.jl_). DDPG is a model-free _actor-critic_ algorithm based on the deterministic policy gradient [28] that can operate over continuous action spaces. Actor-critic refers to a family of algorithms in which two networks are trained: the actor approximates the policy and decides which action to take, whereas the critic assesses the quality of the action taken, i.e., it approximates the value function. Both actor and critic are approximated using deep feedforward neural networks. In addition, a second set of _target networks_ is used to compute stable learning targets; these target networks are updated periodically (or via slow tracking) from the former set of networks, which are continuously trained on transitions sampled from a replay buffer.
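For concreteness, the following is a minimal sketch of one DDPG update step. It is written in PyTorch purely for illustration (the experiments in this paper use the standard implementation in the Julia package _ReinforcementLearning.jl_); the network sizes, learning rates, and the tracking rate \(\tau\) are placeholder values.

```python
# Illustrative DDPG update step (PyTorch); all dimensions and hyperparameters are placeholders.
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 3, 1, 0.99, 0.005

def mlp(inp, out, hidden=16, squash=False):
    layers = [nn.Linear(inp, hidden), nn.ReLU(), nn.Linear(hidden, out)]
    if squash:
        layers.append(nn.Tanh())           # actions bounded in [-1, 1]
    return nn.Sequential(*layers)

actor, critic = mlp(obs_dim, act_dim, squash=True), mlp(obs_dim + act_dim, 1)
actor_t, critic_t = mlp(obs_dim, act_dim, squash=True), mlp(obs_dim + act_dim, 1)
actor_t.load_state_dict(actor.state_dict()); critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), 1e-3)
opt_c = torch.optim.Adam(critic.parameters(), 1e-3)

def ddpg_update(s, a, r, s2):
    # critic: regress Q(s, a) onto the bootstrapped target computed with the target networks
    with torch.no_grad():
        y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=-1))
    q_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=-1)), y)
    opt_c.zero_grad(); q_loss.backward(); opt_c.step()
    # actor: deterministic policy gradient, i.e. maximize Q(s, pi(s))
    pi_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    opt_a.zero_grad(); pi_loss.backward(); opt_a.step()
    # slowly track the trained networks with the target networks
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)

batch = 32
ddpg_update(torch.randn(batch, obs_dim), torch.rand(batch, act_dim) * 2 - 1,
            torch.randn(batch, 1), torch.randn(batch, obs_dim))
```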
## 3 Convolutional reinforcement learning
Our objective is to solve a distributed control problem governed by a partial differential equation (PDE). The state \(y:\Omega\times[0,T]\rightarrow\mathbb{R}^{n}\) is a function of time \(t\in[0,T]\) and space \(x\in\Omega\), and the dynamics is described by a nonlinear partial differential operator \(\mathcal{N}\), i.e.,
\[\frac{\partial y}{\partial t}=\mathcal{N}(y,u),\]
Figure 1: The convolutional reinforcement learning framework. The coupling in the network model on the right refers only to shared state information; the decision making process is entirely decoupled.
with \(u:\Omega\times[0,T]\to\mathbb{R}^{m}\) being the _spatially distributed, time-dependent control input_. Throughout the paper, we assume that the dynamics are invariant under translation in most cases, which is the case for all systems where the position \(x\) does not explicitly appear in the PDE or in an inhomogeneous source term.
In order to deal with the discrete-time nature of RL, we directly introduce a partial discretization in time with constant time step \(\Delta t=t_{k+1}-t_{k}\), \(k=0,1,\ldots,p\), i.e.,
\[\Phi(y_{k},u_{k})=y_{k}+\int_{t_{k}}^{t_{k+1}}\mathcal{N}(y(\cdot,t),u_{k})\, \mathrm{d}t=y_{k+1},\]
where \(u(\cdot,t)=u_{k}\) is constant over the interval \([t_{k},t_{k+1})\). Using the above considerations, the control task can be formalized in an optimal control problem of the following form:
\[\min_{u}J(y,u) =\min_{u}\sum_{k=0}^{p}\ell\left(y_{k},u_{k}\right) \tag{1}\] \[\mathrm{s.t.}\quad y_{k+1} =\Phi(y_{k},u_{k}),\qquad k=0,1,2,\ldots,p-1,\]
where \(J\) is the objective functional over the time horizon \(T=p\Delta t\), and \(\ell\) is the _stage cost_, e.g., a tracking term (with regularization, including penalties on the control cost)
\[\ell\left(y_{k},u_{k}\right) =\left\|y_{k}-y_{k}^{\mathsf{ref}}\right\|_{L^{2}}^{2}+\lambda \left\|u_{k}\right\|_{L^{2}}^{2}\] \[=\int_{\Omega}\left(y_{k}(x)-y_{k}^{\mathsf{ref}}(x)\right)^{2} +\lambda\left(u_{k}(x)\right)^{2}\,\mathrm{d}x.\]
In terms of RL, the stage cost \(\ell\) can be seen as the negative reward. Note that at each discrete point in time \(k\Delta t\), the corresponding \(u_{k}\) is still a function of space.
### Transformation into a multi-agent control problem
We now introduce a convolution operator \(\psi\) into the objective functional \(\ell\), which introduces a new spatial variable \(c\) (e.g., the center of a Gaussian; for a detailed introduction to convolutional neural networks (CNNs), see [29]):
\[\hat{\ell}\left(y_{k}(c),u_{k}(c)\right) =\int_{\Omega}\left[\psi(x-c)\left(y_{k}(x)-y_{k}^{\mathsf{ref}}( x)\right)\right]^{2}\] \[+\lambda\left[\psi(x-c)u_{k}(x)\right]^{2}\,\mathrm{d}x. \tag{2}\]
**Remark 1**.: _The convolution operation in (2) can be realized in many ways in practice, see Fig. 2 for a few examples in the case of one-dimensional spatial domains (higher-dimensional cases are analogous). For instance, we can use Gaussians, but also discontinuous kernels which are simpler to use for spatially discretized systems. In the discretized setting, the kernel cells may contain the values of individual grid nodes, but also pooled values such as spatial averages (see Fig. 1). Finally, we may also consider additional knowledge (such as dominant directions) by introducing non-symmetric kernels._
We can now follow the standard approach for CNNs and define discrete convolution kernels with multiple cells (such as the one indicated in Fig. 1) that move over the state space with a fixed stride. This yields a set of \(M\) convolutions \(\psi\) with centers \(c_{1},\ldots,c_{M}\), which allows us to split the integral over the domain \(\Omega\) in the objective functional \(\ell\) of problem (1). If the kernels are indicator functions (as illustrated in Fig. 2 in blue and red) with disjoint support, then we obtain
\[\int_{\Omega}1\,\mathrm{d}x=\sum_{i=1}^{M}\int_{\Omega}\psi(x-c_{i})\,\mathrm{d}x\qquad\Rightarrow\qquad\sum_{i=1}^{M}\hat{\ell}\left(y,u\right)=\ell\left(y,u\right).\]
If the individual convolution operators overlap - e.g., when using a 3-dimensional convolution kernel with stride 1 - then the summation yields the value of \(\ell\) multiplied by a scalar.
Applying the convolution for each kernel center results in \(n\times M\) (convolved) state variables and \(m\times M\) optimization parameters at each time instance \(k\),
\[\tilde{y}_{k,i} =\int_{\Omega}\psi(x-c_{i})y_{k}(x)\,\mathrm{d}x,\quad i=1, \ldots,M\] \[\text{and}\quad\tilde{u}_{k,i} =\int_{\Omega}\psi(x-c_{i})u_{k}(x)\,\mathrm{d}x,\quad i=1, \ldots,M.\]
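As a small illustration (our own sketch, not part of the paper's code base), the convolved observations can be computed by numerical quadrature of the kernel-weighted state; the grid resolution, kernel parameters, and the toy state below are arbitrary choices.

```python
# Sketch of the convolution-based observation: each location i sees only the kernel-weighted
# average of the PDE state around its centre c_i (Gaussian or indicator kernels as in Fig. 2).
import numpy as np

def convolved_state(y, x, centers, kernel):
    """y: PDE state sampled on grid x; returns one scalar observation per kernel centre."""
    return np.array([np.trapz(kernel(x - c) * y, x) for c in centers])

gaussian = lambda d, sigma=0.8: np.exp(-0.5 * (d / sigma) ** 2)
indicator = lambda d, width=0.25: (np.abs(d) <= width / 2).astype(float)

L, M = 22.0, 8
x = np.linspace(0.0, L, 512, endpoint=False)
centers = (np.arange(M) + 0.5) * L / M
y = np.sin(2 * np.pi * x / L)                       # toy state
y_tilde = convolved_state(y, x, centers, gaussian)  # M local observations
print(y_tilde.round(3))
```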
Based on this convolution, we can now formulate a control problem closely related to (1) in the new variables \(\tilde{y}\) and \(\tilde{u}\). Using the above summation for the objective function and introducing individual dynamics \(\tilde{\Phi}_{i}\) for the \(M\) subsystems located at \(c_{1},\ldots,c_{M}\), we obtain the network control problem
\[\min_{\tilde{u}\in\mathbb{R}^{M\times p}}\sum_{k=0}^{p}\sum_{i=1}^ {M}\hat{\ell}\left(\tilde{y}_{k,i},\tilde{u}_{k,i}\right) \tag{3}\] \[\mathrm{s.t.}\quad\tilde{y}_{k+1,i}=\tilde{\Phi}_{i}(\tilde{y}_{k },\tilde{u}_{k}),\qquad\begin{array}{l}k=0,\ldots,p-1\\ i=1,\ldots,M\end{array}.\]
**Remark 2**.: _If we use dirac delta functions for \(\psi\), then problem (3) is equivalent to the spatial discretization of problem (1) using finite differences. For other kernels, this can still be interpreted as a Galerkin-type discretization in space._
Figure 2: Examples of 1D convolution kernels.
Note that, formally, the right-hand sides of the \(M\) systems \(\tilde{\Phi}_{1},\ldots,\tilde{\Phi}_{M}\) still depend on the states and controls of all systems, such that we have a fully connected network. We cannot decouple the calculation of the \(\tilde{u}_{i}\) without making further assumptions, which will be discussed in the following section.
### Complexity reduction
In order to obtain a significant simplification of problem (3), we exploit two important physical properties:
* in many physical processes, information is transported with finite velocity,
* due to the translational invariance in the dynamics, all local systems are identical, i.e., \(\tilde{\Phi}_{1}=\ldots=\tilde{\Phi}_{M}=\overline{\Phi}\).
The first point effectively means that the dynamics are local, to some extent, and state values at locations far away do not have an immediate impact on the local evolution of \(y\); however, they may have an effect after finite time, so that the time step \(\Delta t\) must be sufficiently small, analogous to the CFL condition in computational fluid dynamics [30]. The second point will allow us to obtain a training task with much lower dimension. Both facts have been exploited in [20] for the efficient construction of predictive models for distributed systems using a network of surrogate models, each of which makes their prediction using local information only. For our control problem, this means that we introduce the assumption that the right-hand side of the dynamics \(\tilde{\Phi}_{i}\) in problem (3) no longer depends on the entire state \(\tilde{y}\), but on a subset \(\tilde{y}_{\mathcal{I}}\) localized around \(c_{i}\), where \(\mathcal{I}_{i}\subset\{1,\ldots,N\}\). In combination with the fact that all local systems are identical, we obtain
\[\tilde{y}_{k+1,i}=\overline{\Phi}(\tilde{y}_{k,\mathcal{I}_{i}},\tilde{u}_{k} ),\qquad i=1,\ldots,M.\]
The entries considered in \(\mathcal{I}_{i}\) are determined by the choice of the kernel. For instance, the kernel shown in Fig. 1 results in a five-dimensional set \(\mathcal{I}_{i}\) consisting of \(\tilde{y}_{i}\) as well as the direct neighbors in both spatial directions (i.e., the standard finite-difference stencil).
**Remark 3**.: _Even more generally, the connectivity between the agents could also be encoded in a sparse connectivity matrix. When considering neighboring agents only, this would result in a sparse multi-band matrix, but other patterns are possible as well. For instance, one could devise a connectivity pattern that is specifically tailored to the dynamics, that is, to the interaction between different regions within the domain [21]._
### The reinforcement learning problem
As a result of the discussion above, we obtain a sparsely connected network in problem (3) instead of a fully connected one, which could in principle be solved using techniques from the area of _cooperative control_, see, e.g., [31, 32]. However, (a) the control parameters \(\tilde{u}_{i}\) are still not decoupled and (b) the longer the horizon \(p\) in our control problem, the stronger the coupling effectively becomes. The latter is due to the fact that the states \(\tilde{y}_{i}\) are part of multiple agents' environments such that the action an agent takes has also an impact on the neighboring environments. Thus, for high-dimensional PDEs, the problem is still very challenging. Finally, if we do not choose a dirac kernel (see Remark 2), then we cannot rely on finite differences as the model for \(\overline{\Phi}\), hence we will not even have a computational model. As a remedy, we add another assumption which is also based on the observation of locality. This assumption is that the control also only has a local influence over short time horizons, which means that we do not have to optimize over all inputs at the same time.
Following these considerations, we define the following RL framework:
* The agent's environment \(\mathcal{Y}\) only consists of the states within the convolution kernel, i.e., \[\mathcal{Y}_{i}=\{\tilde{y}_{\mathcal{I}_{i}}\}.\] Denoting the dimension of the kernel by \(|\mathcal{I}|=S\), we find that \(\dim(\mathcal{Y}_{i})=n\times S\ll n\times M\). _Remark: One can in principle include additional information such as time delays._
* The action space \(\mathcal{U}\) only consists of the input at the \(i\)-th cell, i.e., it is of dimension \(m\) (instead of \(m\times M\)).
* The reward should mainly rely on local information, as this is required for assessing the agent's local performance. To a small extent, global contributions can be used to take unactuated areas into account, as well as to have an averaging effect over all spatial realizations of the agent: \[r_{i}(\tilde{y}_{\mathcal{I}_{i}},\tilde{u}_{\mathcal{I}_{i}})=-\sum_{j\in\mathcal{I}_{i}}\hat{\ell}\left(\tilde{y}_{j},\tilde{u}_{j}\right)-\alpha\,\ell\left(y,u\right),\] with some fixed value \(\alpha\geq 0\). We believe that the global information becomes more important if we cannot apply a control to the entire domain, but only a subset of \(\Omega\). However, the aspect of global rewards is left to future investigations.
We comment on the specifics of these choices in the examples in Section 4.
**Remark 4**.: _The question of the global reward can be crucial, not only in terms of finding a good policy, but also from a conceptual point of view. If we want to control parts of the domain that we can not see, then we do no longer have an MDP, but only a partially observable MDP (POMDP) [26], which is considerably more challenging to control._
### Training using parameter and data sharing
As the system is invariant under translation, we can use the identical agent at all \(M\) locations, which has the two advantages that (a) we only have to train one model that is not even particularly large due to the small dimensions of the environment and action space, and (b) we can use the data acquired in all \(M\) locations to train the same agent, meaning that we obtain \(M\) rewards in every iteration. This concept is visualized in Figure 3.
For training, we thus proceed according to the standard DDPG setting, only with \(M\) identical agents. This means that we measure the environment by applying the convolution \(M\) times, then decide on \(M\) local actions and collect \(M\) corresponding rewards. These \(M\) additional training data samples are then fed to the DDPG replay buffer in order to continue training.
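Schematically, one training run of this scheme can be summarized as follows. This is a sketch only: `env` and `agent` are assumed interfaces (kernel-convolved local observations, a step function, and an act/update pair such as a DDPG agent), not the API of the paper's Julia implementation.

```python
# Schematic sketch of the shared-agent training loop (assumed env/agent interfaces).
import random

def train_shared_agent(env, agent, M, steps, batch_size=64):
    buffer = []                               # one replay buffer shared by all M copies
    obs = env.local_observations()            # M kernel-convolved local observations
    for _ in range(steps):
        actions = [agent.act(o) for o in obs]      # the same policy acts at every location
        next_obs, rewards = env.step(actions)      # advance the PDE by one time step
        buffer += list(zip(obs, actions, rewards, next_obs))  # M transitions per step
        if len(buffer) >= batch_size:
            agent.update(random.sample(buffer, batch_size))   # e.g. a DDPG update
        obs = next_obs
    return agent
```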
## 4 Examples
We now evaluate the performance of the convolutional RL framework on different examples. For the Kuramoto-Sivashinsky equation, we first show that we can stabilize a chaotic PDE with small training effort. We then further demonstrate that our approach scales easily, and that a transfer from one domain to another is possible just as easily. Finally, we study the performance under nonhomogeneous disturbances. In the next example, we study a more complicated control task for a two-component Chemotaxis system, where we control the cell density by varying the chemoattractant concentration using our framework. For a two-dimensional domain, we then study an isotropic turbulent flow.
### Kuramoto-Sivashinsky equation
The first system we study is the 1D _Kuramoto-Sivashinsky equation_ (KS), which models the diffusive-thermal instabilities in a laminar flame front:
\[\frac{\partial y}{\partial t}=-y\frac{\partial y}{\partial x}-\frac{\partial^ {2}y}{\partial x^{2}}-\frac{\partial^{4}y}{\partial x^{4}}+\mu\cos\left(\frac{ 4\pi x}{L}\right)+f(x,u),\]
with periodic boundary conditions, \(\mu\geq 0\). Note that for \(\mu>0\), this system is no longer invariant under translation in \(x\). However, similar to [20], we will see that our approach is capable of handling small disturbances of this kind. The final term \(f(x,u)\) is the control term:
\[f(x,u)=\sum_{i=1}^{P}u_{i}\psi(x-c_{i}),\]
where the control action \(u\) is a scaling factor to \(P\) different basis functions \(\psi(x-c_{i})\) located at \(c_{1},\ldots,c_{P}\). These can be, for instance, the functions depicted in Figure 2. For a radial basis function, we have
\[\psi(x-c_{i})=\exp\left(-\frac{1}{2}\left(\frac{x-c_{i}}{\sigma}\right)^{2} \right),\]
where \(\sigma\) is an additional parameter determining the kernel width.
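For illustration, the actuation term can be assembled as in the following sketch, which assumes equidistant actuator centres and neglects the periodic wrap-around of the Gaussians.

```python
# Sketch of the actuation term f(x, u): P Gaussian basis functions scaled by the agents' actions.
# Periodic wrap-around of the kernels across the domain boundary is neglected here.
import numpy as np

def forcing(x, u, centers, sigma=0.8):
    """f(x, u) = sum_i u_i * exp(-0.5 ((x - c_i)/sigma)^2), evaluated on the grid x."""
    return sum(u_i * np.exp(-0.5 * ((x - c_i) / sigma) ** 2)
               for u_i, c_i in zip(u, centers))

L, P = 22.0, 8
x = np.linspace(0.0, L, 256, endpoint=False)
centers = (np.arange(P) + 0.5) * L / P
u = np.random.uniform(-1, 1, P)      # one scalar action per actuator
f = forcing(x, u, centers)           # forcing profile added to the KS right-hand side
```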
**Remark 5**.: _Note that we have now separated the number of convolved states from the inputs, i.e., we do not have to use \(P=M\). In many situations, we may have fewer actuators than sensors, which means \(P<M\)._
For domains \(\Omega=[0,L]\) with sufficiently large \(L\), the dynamics exhibit chaotic behavior, rendering this a popular system for turbulence studies. Moreover, it has been studied in terms of control as well, both in theory [33] and in practice, see, e.g., [34, 35, 36] for reinforcement learning approaches. In all three articles, \(P=4\) actuators are used on a domain of size \(L=22\), which is comparatively small and is close to the onset of turbulence. In [35] and [36], the goal is to minimize dissipation, which yields a global reward of the form \(r=-\langle(y_{x})^{2}\rangle-\langle(y_{xx})^{2}\rangle-\langle yf\rangle\), where \(\langle\cdot\rangle\) is the spatial average. In [34], the task is to steer from one fixed point to another. As the final term \(\langle yf\rangle\) is not invariant under translation, we here follow the latter and aim for the standard control task of driving \(y\) towards zero, i.e.,
\[r=-\ell=-\left(\langle y^{2}\rangle+\alpha\sum_{i=1}^{P}u_{i}^{2}\left\langle \psi^{2}(x-c_{i})\right\rangle\right),\]
where \(\alpha>0\) is a regularization parameter.
In our numerical experiments, we begin with \(\mu=0\), \(L=22\) (in accordance with [34, 35, 36]) and set
Figure 3: Data and parameter sharing in the convolutional RL framework (here for periodic boundary conditions). Left: We have \(M\) identical agents, each of which has a local environment of dimension \(S\) (here, \(S=5\)) and takes a scalar control decision. Right: Overall framework, in which we have \(M\) times the data for training.
\(M=P=8\), i.e., the dimensions of the state and the input are identical. As the kernel \(\psi\), we use equidistantly placed Gaussians with \(\sigma=0.8\), centered at \(c_{i}\). Eight identical RL agents are placed at these centers \(c_{i}\). The respective environments consist of a single sensor, i.e., \(\{\tilde{y}_{i}\}\), which means \(S=1\). The control dimension is one, as only the action at \(c_{i}\) has to be determined. For the actor and critic networks, we use fully connected feedforward nets with a single hidden layer and \(7\) / \(14\) neurons, respectively. Due to this small dimension and the increased amount of data by a factor of \(P=8\), training is very fast and takes just a few minutes on a standard desktop computer. The details of the numerical setup are also summarized in Table 1.
The result is shown in Figure 4. Using a random initial condition, we let the system evolve autonomously for 100 time units before activating the agent. After that, the system is stabilized in little more than 5 seconds. This is much faster than what was achieved in [34, 35, 36], although the results are not perfectly comparable, as the objectives as well as the control dimensions are different.
Having confirmed the validity of our approach, we now exploit the scalability and significantly extend the domain size to \(L=200\) while increasing the number of sensors and actuators to \(M=P=80\), i.e., \(L/M=2.5\). This results in more complicated patterns; not only due to the increased domain size, but also because \(L\) serves as a system parameter and the dynamics become more chaotic with increasing \(L\)[20, 37]. Nevertheless, we keep the low dimension of the actor and critic networks constant. Interestingly, the performance is identical compared to the \(L=22\) case. The reward curve is depicted in Figure 6.
To show the robustness of convolutional sensors, we now apply the same agent to the KS system with \(\mu=0.02\), i.e., we allow for a small inhomogeneous disturbance. Figure 5 demonstrates that this only has a minimal negative influence on the performance, see also the reward curves in Figure 6. Finally, a straightforward transfer of the agent from \(L=200\) to \(L=500\) - now with \(M=P=200\) - is possible just as easily. This is visualized in Figure 7, where we have increased both \(M\) and \(P\) by a factor of 2.5 while leaving all other parameters unchanged. This way, the local agents have
| | Var. | Case 1 | Case 2 | Case 3 |
| --- | --- | --- | --- | --- |
| Domain size | \(L\) | 22 | 200 | 500 |
| # Sensors | \(M\) | 8 | 80 | 200 |
| # Actuators | \(P\) | 8 | 80 | 200 |
| Agent state space dim. | \(S\) | 1 | 1 | 1 |
| # Hidden layers Actor/Critic | | 1/1 | 1/1 | 1/1 |
| # Neurons per layer | | 7/14 | 7/14 | 7/14 |

Table 1: Numerical settings for the Kuramoto–Sivashinsky example. In all cases, the kernel \(\psi\) consists of Gaussians with \(\sigma=0.8\).
Figure 4: Kuramoto–Sivashinsky: results for \(\mu=0\), \(L=22\) and \(M=P=8\) sensors and actuators.
Figure 5: Kuramoto–Sivashinsky: results for \(\mu=0.02\), \(L=200\) and \(M=P=80\) sensors and actuators.
Figure 6: Negated rewards (i.e., 0 is optimal, similar to standard control formulations) over time for the conducted Kuramoto–Sivashinsky experiments. The agent starts at \(t=100\).
the same \(L/M\) ratio and hence, we can simply use the agent trained on the smaller domain.
### Keller-Segel model for Chemotaxis
In our second example, we study a Keller-Segel type model for Chemotaxis [38], which is a process that describes the movement of cells (or organisms) in response to the presence of a chemical signal substance inhomogeneously distributed in space:
\[\frac{\partial y}{\partial t} =\frac{\partial}{\partial x}(D\frac{\partial y}{\partial x}- \chi y\frac{\partial z}{\partial x})+qy(1-y),\] \[\frac{\partial z}{\partial t} =\frac{\partial^{2}z}{\partial x^{2}}+y-z+f(x,u),\]
with homogeneous Neumann boundary conditions. Here, \(y\) and \(z\) denote the cell density and the chemoattractant concentration, respectively, at time \(t\) and location \(x\in[0,L]\). Furthermore, \(D\) is the cell diffusion coefficient, \(\chi\) is the chemotactic coefficient and \(q\) is the growth rate. The control task is to steer the chemoattractant \(z\) (via \(f\) as above) in order to stabilize the cell density \(y\), i.e.,
\[r=-\ell=-\left(\left\langle y^{2}\right\rangle+\alpha\sum_{i=1}^{P}u_{i}^{2} \left\langle\psi^{2}(x-c_{i})\right\rangle\right).\]
Optimal control of Keller-Segel type models has been studied theoretically (see, e.g., [33]), but there is until now little work on numerical results; see [39] for an exception using a slightly different model.
In our numerical example, we set \(D=q=1\), \(\chi=5.6\) and \(L=10\). As the convolution kernel, we use indicator functions with width \(0.25\), placed equidistantly at \(M=40\) locations. We again choose a local environment dimension of \(S=3\) and as we do not have periodic boundary conditions, we can only place \(P=36\) actuators inside the domain (with "distance" at least two to each boundary). As the state, we consider the convolved states \(\tilde{y}\) and \(\tilde{z}\) as well as a time-delayed observation of both \(\tilde{y}\) and \(\tilde{z}\) with \(\Delta t=1\), meaning that each agent gets \(S\cdot 2\cdot 2=12\) inputs. Both actor and critic again consist of a fully connected feed forward neural network with two hidden layers consisting of 20 neurons each. The results are shown in Figure 8. Even though we cannot directly control \(y\) but only \(z\), we see that our approach performs very well in stabilizing the cell density.
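One possible way to assemble these 12 inputs is sketched below; the zero-padding at the non-periodic boundaries is our own simplification, made only to keep the example short.

```python
# Sketch of how each Keller-Segel agent's 12-dimensional input can be assembled:
# S=3 neighbouring convolved cells, two fields (y and z), current and one delayed snapshot.
# Boundary handling via zero-padding is an assumption made for brevity.
import numpy as np

def local_observation(y_t, z_t, y_del, z_del, i, S=3):
    """Stack the S cells centred at index i for both fields and both time instants."""
    lo, hi = max(i - S // 2, 0), min(i + S // 2 + 1, len(y_t))
    window = lambda v: np.pad(v[lo:hi], (0, S - (hi - lo)))   # zero-pad at the boundaries
    return np.concatenate([window(v) for v in (y_t, z_t, y_del, z_del)])  # length S*2*2 = 12

M = 40
y_t, z_t, y_del, z_del = (np.random.rand(M) for _ in range(4))
print(local_observation(y_t, z_t, y_del, z_del, i=0).shape)   # (12,)
```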
### 2D Turbulence
As the final and most complex case, we consider a fluid problem, more specifically two-dimensional decaying isotropic turbulence, see [21] for a detailed description and the numerical solver. The dynamics are described by the two-dimensional vorticity transport equation:
\[\frac{\partial\omega}{\partial t}+y_{j}\frac{\partial\omega}{\partial x_{j}}=\frac{1}{Re}\frac{\partial^{2}\omega}{\partial x_{j}\partial x_{j}}+f(x,u),\]
where \(y\) and \(\omega\) are the velocity and vorticity variables, respectively. The Reynolds number is defined as \(Re=\frac{y^{*}\ell^{*}}{\nu}\), where \(\nu\) is the kinematic viscosity. \(y^{*}(t_{0})=[\overline{y^{2}}(t_{0})]^{1/2}\) and \(\ell^{*}(t_{0})=[2\overline{y^{2}}(t_{0})/\overline{\omega^{2}}(t_{0})]^{1/2}\) are the velocity (normalized by the square root of the spatial average of the initial kinetic energy) and the initial integral length scale, respectively.
Similar to the previous two examples, our goal is to stabilize the system. Due to the viscosity term, the uncontrolled system is self-stabilizing, and we now compare different control settings (i.e., a grid of \(4\times 4\), \(8\times 8\) and \(16\times 16\) sensors and actuators, respectively) in order to study the speedup of this stabilization process. In every test we perform, the DDPG agent has a
Figure 8: The components \(y\) and \(z\), the action \(u\) and the negated reward signal \(r\) averaged over the actuators for Keller-Segel. Agent starts at \(t=21\).
Figure 7: Kuramoto–Sivashinsky: results for \(\mu=0\), \(L=500\) and \(M=P=200\) sensors and actuators.
\(3\times 3\) grid of input states (i.e., \(S=9\)). The results are shown in Figure 9, in which we observe very good control performance using yet again very small neural networks (a single hidden layer with only four neurons for both the actor and critic networks).
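The local observations can be illustrated as follows. This simplified sketch samples raw vorticity values on a periodic grid, whereas the agents in this paper observe kernel-convolved values; the grid size and the sensor layout are arbitrary.

```python
# Sketch of the 3x3 local observation each agent receives on a periodic 2D vorticity field.
import numpy as np

def local_patches(omega, rows, cols):
    """Return the 3x3 vorticity patch centred on each (row, col) sensor of a periodic grid."""
    patches = []
    for r, c in zip(rows, cols):
        idx_r = [(r + dr) % omega.shape[0] for dr in (-1, 0, 1)]
        idx_c = [(c + dc) % omega.shape[1] for dc in (-1, 0, 1)]
        patches.append(omega[np.ix_(idx_r, idx_c)])
    return np.stack(patches)          # shape (M, 3, 3); flattened, this gives S = 9 inputs

omega = np.random.randn(128, 128)     # toy vorticity field
grid = np.arange(0, 128, 16)          # 8x8 sensor/actuator layout
rows, cols = np.meshgrid(grid, grid, indexing="ij")
print(local_patches(omega, rows.ravel(), cols.ravel()).shape)   # (64, 3, 3)
```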
Even though the figures show that our framework is effective in stabilizing the system, additional evaluation is called for to compare the performance against a simple opposition control scheme. Moreover, it will be interesting for future research to study systems that are more challenging to stabilize, such as the 3D channel flow [2], which has tremendous fundamental and industrial implications.
## 5 Conclusion
We have shown that exploiting system knowledge, here in terms of symmetries and finite-velocity transport of information, can help us in massively reducing the complexity of reinforcement learning in distributed PDE control. Our convolutional framework allows for the "cloning" of many small agents with shared parameters as well as shared training data. This yields efficient control strategies for very large domains and allows for an easy transfer between different domains. In addition, the convolution operation in observing the state yields robustness against inhomogeneous disturbances.
For future work, there are several options to extend this framework. Most importantly, the question of global reward functions as well as control of only parts of the domain need to be addressed. Furthermore, it will be interesting to see whether additional knowledge (e.g., in the form of differential equations) can help us further increase the efficiency. In addition, it may be fruitful to study how the trained controller varies with physical parameters, such as the Reynolds number. This may lead to parameterized control laws that are valid over larger ranges of operating conditions. Similarly, it may also be interesting to investigate self-similarity of the control law in the context of spatially developing turbulence, such as a developing boundary layer. Finally, other symmetries and invariances may be similarly embedded into the RL framework for the control of more complex spatiotemporal processes such as turbulent channel flows or Rayleigh-Benard convection.
## Code
The source code is written in the language Julia and is publicly available under [https://github.com/janstenner/DistributedConvRL-PDE-Control](https://github.com/janstenner/DistributedConvRL-PDE-Control).
## Acknowledgements
SP acknowledges support by the Priority Programme 1962 of the Deutsche Forschungsgemeinschaft (DFG). All authors from Paderborn (SP, JS, VC, OW) acknowledge support by the BMBF within the project "DARE". KT thanks the support from the US Air Force Office of Scientific Research (FA9550-21-1-0178) and the Vannevar Bush Faculty Fellowship (N00014-22-1-2798).
|
2301.08178 | Work-Efficient Query Evaluation with PRAMs | The paper studies query evaluation in parallel constant time in the PRAM
model. While it is well-known that all relational algebra queries can be
evaluated in constant time on an appropriate CRCW-PRAM, this paper is
interested in the efficiency of evaluation algorithms, that is, in the number
of processors or, asymptotically equivalent, in the work. Naive evaluation in
the parallel setting results in huge (polynomial) bounds on the work of such
algorithms and in presentations of the result sets that can be extremely
scattered in memory. The paper first discusses some obstacles for constant time
PRAM query evaluation. It presents algorithms for relational operators that are
considerably more efficient than the naive approaches. Further it explores
three settings, in which efficient sequential query evaluation algorithms
exist: acyclic queries, semi-join algebra queries, and join queries -- the
latter in the worst-case optimal framework. Under natural assumptions on the
representation of the database, the work of the given algorithms matches the
best sequential algorithms in the case of semi-join queries, and it comes close
in the other two settings. An important tool is the compaction technique from
Hagerup (1992). | Jens Keppeler, Thomas Schwentick, Christopher Spinrath | 2023-01-19T17:10:30Z | http://arxiv.org/abs/2301.08178v1 | # Work-Efficient Query Evaluation with PRAMs
###### Abstract
The paper studies query evaluation in parallel constant time in the PRAM model. While it is well-known that all relational algebra queries can be evaluated in constant time on an appropriate CRCW-PRAM, this paper is interested in the efficiency of evaluation algorithms, that is, in the number of processors or, asymptotically equivalent, in the work. Naive evaluation in the parallel setting results in huge (polynomial) bounds on the work of such algorithms and in presentations of the result sets that can be extremely scattered in memory. The paper first discusses some obstacles for constant time PRAM query evaluation. It presents algorithms for relational operators that are considerably more efficient than the naive approaches. Further it explores three settings, in which efficient sequential query evaluation algorithms exist: acyclic queries, semi-join algebra queries, and join queries -- the latter in the worst-case optimal framework. Under natural assumptions on the representation of the database, the work of the given algorithms matches the best sequential algorithms in the case of semi-join queries, and it comes close in the other two settings. An important tool is the compaction technique from Hagerup (1992).
PRAM, query evaluation, work-efficient, parallel, acyclic queries, free-connex queries
The work-efficiency of \(\mathcal{O}(1)\)-time PRAM algorithms for query evaluation has not been investigated in the literature. This paper is meant to lay some groundwork in this direction.
The proof of the afore-mentioned result that each relational algebra query can be evaluated in constant-time by a PRAM with polynomial work is scattered over several papers. It consists basically of three steps, translating from queries to first-order logic formulas [11], to bounded-depth circuits [6], and then to PRAMs [18]. It was not meant as a "practical translation" of queries and does not yield one. However, it is not hard to see directly that the operators of the relational algebra can, in principle, be evaluated in constant time on a PRAM. It is a bit less obvious, though, how the output of such an operation is represented, and how it can be fed into the next operator.
Let us consider an example to illustrate some issues of constant-time query evaluation on a PRAM. Let \(q\) be the following conjunctive query, written in a rule-based fashion, for readability.
\[q(x,y,z)\gets E(x,x_{1}),E(x_{1},x_{2}),E(y,y_{1}),E(y_{1},y_{2}),E(z,z_{1 }),E(z_{1},z_{2}),R(x_{2},y_{2},z_{2})\]
A (very) naive evaluation algorithm can assign one processor to each combination of six \(E\)-tuples and one \(R\)-tuple, resulting in work \(\mathcal{O}(|E|^{6}|R|)\). Since the query is obviously acyclic, it can be evaluated more efficiently in the spirit of Yannakakis' algorithm. For simplicity, we ignore the semi-join step of the Yannakakis algorithm. The join underlying the first two atoms \(E(x,x_{1}),E(x_{1},x_{2})\) can be computed as a sub-query \(q_{1}(x,x_{2})\gets E(x,x_{1}),E(x_{1},x_{2})\). This can be evaluated by \(|E|^{2}\) many processors, each getting a pair of tuples \(E(a_{1},a_{2}),E(b_{1},b_{2})\) and producing output tuple \((a_{1},b_{2})\) in case \(a_{2}=b_{1}\). The output can be written into a 2-dimensional table, indexed in both dimensions by the tuples of \(E\). In the next round, the output tuples can be joined with tuples from \(R\) to compute \(q_{2}(x,y_{2},z_{2})\gets E(x,x_{1}),E(x_{1},x_{2}),R(x_{2},y_{2},z_{2})\). However, since it is not clear in advance, which entries of the two-dimensional table carry \(q_{1}\)-tuples, the required work is about \(|E|^{2}\cdot|R|\). Output tuples of \(q_{2}\) can again be written into a 2-dimensional table, this time indexed by one tuple from \(E\) (for \(x\)) and one tuple from \(R\) (for \(y_{2}\) and \(z_{2}\)). Proceeding in a similar fashion, \(q\) can be evaluated with work \(\mathcal{O}(|E|^{3}|R|)\). Other evaluation orders are possible but result in similar work bounds. In terms of the input size of the database, this amounts to \(\mathcal{O}(\mathsf{IN}^{4})\), whereas Yannakakis' algorithm yields \(\mathcal{O}(\mathsf{IN}\cdot\mathsf{OUT})\) in the sequential setting, where \(\mathsf{IN}\) denotes the number of tuples in the given relations and \(\mathsf{OUT}\) the size of the query result, respectively.
Let us take a look at the representation of the output of this algorithm. The result tuples reside in a 3-dimensional table, each indexed by a tuple from \(E\). It is thus scattered over a space of size \(|E|^{3}\), no matter the size of the result. Furthermore, the same output tuple might occur several times due to several valuations. To produce a table, in which each output tuples occurs exactly once, a deduplication step is needed, which could yield additional work in the order of \(|E|^{6}\).
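To make this evaluation scheme concrete, the following tiny Python simulation executes the first join step sequentially (our illustration only; on a PRAM, the body of the doubly indexed loop would be executed concurrently, one processor per pair of \(E\)-tuples). It shows how the \(q_{1}\)-tuples end up scattered over a table of size \(|E|^{2}\) and how the same output tuple can occur multiple times.

```python
# Toy sequential simulation of the parallel evaluation sketched above: one "processor" per
# pair of E-tuples writes its q1-result into a 2D table indexed by tuple positions, which
# leaves the (possibly duplicated) output scattered over a space of size |E|^2.
E = [(1, 2), (1, 3), (2, 4), (3, 4)]

table = [[None] * len(E) for _ in range(len(E))]
for i, (a1, a2) in enumerate(E):          # processor (i, j) handles the pair (E[i], E[j])
    for j, (b1, b2) in enumerate(E):
        if a2 == b1:                      # join condition on the shared variable x1
            table[i][j] = (a1, b2)        # q1(x, x2) result, stored at a data-dependent slot

results = [t for row in table for t in row if t is not None]
print(results)    # [(1, 4), (1, 4)] -- the same output tuple occurs twice (deduplication needed)
```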
The example illustrates two challenges posed by the \(\mathcal{O}(1)\)-time PRAM setting, which will be discussed in more detail in Section 3.
* It is, in general, not possible to represent the result of a query in a compact form, say, as an array, contiguously filled with result tuples. This obstacle results here in upper bounds in terms of the size of the input database, but not in the size of the query result.
* It might be necessary to deduplicate output (or intermediate) relations, but, unfortunately, this cannot be done by sorting a relation, since sorting is not possible in \(\mathcal{O}(1)\)-time on a PRAM, either.
We will use compactification techniques for PRAMs from [16] to deal with the first challenge. The second challenge is addressed by a suitable representation of the relations. Besides the setting without any assumptions, we consider the setting, where data items are mapped to an initial segment of the natural numbers by a dictionary, and the setting where the relations are represented by ordered arrays.
We show that, for each \(\varepsilon>0\), there is a Yannakakis-based algorithm for acyclic join queries in the dictionary setting with an upper work bound of \(\mathcal{O}(\mathsf{IN}\cdot\mathsf{OUT})^{1+\varepsilon}\). Two other results are work-optimal algorithms for queries of the semijoin algebra and almost worst-case and work-optimal algorithms for join queries.
We emphasize that the stated result does not claim a fixed algorithm that has an upper work bound \(W\) such that, for every \(\varepsilon>0\), it holds \(W\in\mathcal{O}(\mathsf{IN}\cdot\mathsf{OUT})^{1+\varepsilon}\). It rather means that there is a uniform algorithm that has \(\varepsilon\) as a parameter and has the stated work bound, for each fixed \(\varepsilon>0\). The linear factor hidden in the \(\mathcal{O}\)-notation thus depends on \(\varepsilon\). This holds analogously for our other upper bounds of this form.
This paper consists roughly of three parts, each of which has some contributions to our knowledge on work-efficient constant-time PRAM query evaluation.
The first part presents some preliminaries in Section 2, surveys lower bound results that pose challenges for \(\mathcal{O}(1)\)-time PRAM query evaluation in Section 3, and discusses the framework, including some of our (typical) assumptions and data structures, as well as some basic operations that will be used in our query evaluation algorithms, in Section 4.
The second part presents algorithms for these basic operations in Section 5 and for relational operators in Section 6.
After this preparation, the third part studies query evaluation in three settings in which (arguably) efficient algorithms exist for sequential query evaluation: the semi-join algebra (Subsection 7.1), acyclic queries (Subsection 7.2), and worst-case optimal join evaluation (Subsection 7.3).
Related work.Due to space limitations, we only mention two other related papers. In a recent paper, query evaluation by circuits has been studied [30]. Although this is in principle closely related, the paper ignores polylogarithmic depth factors and therefore does not study \(\mathcal{O}(1)\)-time. The work of \(\mathcal{O}(1)\)-time PRAM algorithms has recently been studied in the context of dynamic complexity, where the database can change and the algorithms need to _maintain_ the query result [27].
Acknowledgements.We are grateful to Uri Zwick for clarifications regarding results in [14] and to Jonas Schmidt and Jennifer Toldenhoefer for careful proof reading. Furthermore, we thank the reviewers of ICDT for many insightful suggestions.
## 2 Preliminaries
In this section, we fix some notation and recall some concepts from database theory and PRAMs that are relevant for this paper. For a natural number \(n\), we write \([n]\) for \(\{1,\ldots,n\}\).
A _database schema_\(\Sigma\) is a finite set of relation symbols, where each symbol \(R\) is equipped with a finite set \(\mathtt{attr}(R)\) of attributes. A tuple \(t=(a_{1},\ldots,a_{k})\) over a finite list \(X=(A_{1},\ldots,A_{k})\) of attributes has, for each \(i\), a value \(a_{i}\) for attribute \(A_{i}\). Unless we are interested in the lexicographic order of a relation, induced by \(X\), we can view \(X\) as a set. An \(R\)-relation is a finite set of tuples over \(\mathtt{attr}(R)\). The arity of \(R\) is \(|\mathtt{attr}(R)|\). For \(Y\subseteq X\), we write \(t[Y]\) for the restriction of \(t\) to \(Y\). For \(Y\subseteq\mathtt{attr}(R)\), \(R[Y]=\{t[Y]\mid t\in R\}\). A database \(D\) over
\(\Sigma\) consists of an \(R\)-relation \(D(R)\), for each \(R\in\Sigma\). We usually write \(R\) instead of \(D(R)\) if \(D\) is understood from the context. That is, we do not distinguish between relations and relation symbols. The size \(|R|\) of a relation \(R\) is the number of tuples in \(R\). By \(|D|\) we denote the number of tuple entries in database \(D\). For details on (the operators of) the relational algebra, we refer to [3]. We always assume a fixed schema and therefore a fixed maximal arity of tuples.
Parallel Random Access Machines (PRAMs).A _parallel random access machine_ (PRAM) consists of a number of processors that work in parallel and use a shared memory. The memory is comprised of memory cells which can be accessed by a processor in \(\mathcal{O}(1)\) time. Furthermore, we assume that the usual arithmetic and bitwise operations can be done in \(\mathcal{O}(1)\) time by a processor. In particular, since the schema is considered fixed, a database tuple can be loaded, compared, etc. in \(\mathcal{O}(1)\) time by one processor.
We mostly use the Concurrent-Read Concurrent-Write model (CRCW-PRAM), i.e. processors are allowed to read and write concurrently from and to the same memory location. More precisely, we mainly assume the _arbitrary_ PRAM model: if multiple processors concurrently write to the same memory location, one of them, "arbitrarily", succeeds. For some algorithms the _common_ model would suffice, where all processors need to write the same value into the same location. We sometimes also use the weaker Exclusive-Read Exclusive-Write model (EREW-PRAM), where concurrent access is forbidden and the CREW-PRAM, where concurrent writing is forbidden. The work \(w\) of a PRAM computation is the sum of the number of all computation steps of all processors made during the computation. We define the space \(s\) required by a PRAM computation as the maximal index of any memory cell accessed during the computation. We refer to [20] for more details on PRAMs and to [28, Section 2.2.3] for a discussion of alternative space measures.
In principle, we assume that relations are stored as sequences of tuples, i.e., as arrays. Informally, an array \(\mathcal{A}\) is a sequence \(t_{1},\ldots,t_{N}\) and it represents a relation \(R\) if each tuple from \(R\) appears once as some \(t_{i}\). In Section 4, we describe more precisely how databases are represented for our PRAM algorithms.
## 3 Obstacles
We next discuss some obstacles that pose challenges for \(\mathcal{O}(1)\)-time parallel algorithms for query evaluation. They stem from various lower bound results from the literature.
The first obstacle, already mentioned in the introduction, is that we cannot expect that query results can be stored in arrays in a compact fashion, that is, as a sequence \(t_{1},\ldots,t_{m}\) of tuples, for a result with \(m\) tuples. This follows from the following lower bound on the _linear approximate compaction problem_, where the, somewhat relaxed, goal is to move \(m\) scattered tuples into a target array of size \(4m\).
([23, Theorem 4.1]) Solving the linear approximate compaction problem on a randomized strong priority CRCW-PRAM requires \(\Omega(\log^{*}n)\) expected time.
Since the PRAM model of that bound is stronger than the arbitrary CRCW model, it applies to our setting.
The following theorem illustrates, how this lower bound restrains the ability to compute query results in a compact form, even if the input relations are given by compact arrays. Analogous results can be shown for simple projection and selection queries.
Let \(q\) be the conjunctive query defined by \(q:H(x)\gets R(x),S(x)\). Every algorithm, which, upon input of arrays for relations \(R\) and \(S\), computes an array of size \(|q(D)|\) for \(q(D)\) without empty cells, requires \(\omega(1)\) time. This holds even if the arrays for \(R\) and \(S\) are compact and their entries are lexicographically ordered.
Proof sketch.: Towards a contradiction, we assume that there is an algorithm on a PRAM which computes in constant time, upon input of an input database \(D\), an array of size \(|q(D)|\) for the query result \(q(D)\). This can be used to solve the linear approximate compaction problem in constant time as follows. Let \(\mathcal{A}\) be an instance for the linear approximate compaction problem with \(m\) non-empty cells. In the first step create an array \(\mathcal{A}_{R}\) of size \(n\) for the unary relation \(R\) that consists of even numbers \(2\) to \(2n\). This can be done in parallel in constant time with \(n\) processors.
In the second step, store, for every \(i\in\{1,\ldots,n\}\), the value \(2i\) in the array \(\mathcal{A}_{S}\) of size \(n\) for relation \(S\) if \(\mathcal{A}[i]\) has a value (i.e. the \(i\)-th cell is not empty), and \(2i+1\), otherwise. Hence, the query result of \(q\) exactly consists of even numbers \(2i\) where \(i\) is an index such that \(\mathcal{A}[i]\) has a value. From the array for the query result a solution for the linear approximate compaction problem can be obtained by replacing every value \(2i\) in the result by \(\mathcal{A}[i]\). The size of the compact array is \(m\).
All in all, the algorithm takes constant time to solve the linear approximate compaction problem. This is a contradiction to Proposition 3. We note that by construction, the arrays for the relations \(R\) and \(S\) are ordered and have no empty cells.
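The reduction can be made concrete by a small sequential sketch (in Python; all names are ours, and `evaluate_q` merely stands in for the hypothetical constant-time evaluation algorithm assumed in the proof):

```python
EMPTY = None  # stands for an uninhabited cell / the empty tuple

def reduction_instance(A):
    """Build arrays for R and S from a compaction instance A."""
    n = len(A)
    A_R = [2 * (i + 1) for i in range(n)]                       # R = {2, 4, ..., 2n}
    A_S = [2 * (i + 1) if A[i] is not EMPTY else 2 * (i + 1) + 1
           for i in range(n)]                                    # even iff A[i] is inhabited
    return A_R, A_S

def evaluate_q(A_R, A_S):
    """Stand-in for the assumed constant-time evaluation of q(D) = R ∩ S."""
    S_set = set(A_S)
    return [x for x in A_R if x in S_set]

def compact_via_q(A):
    A_R, A_S = reduction_instance(A)
    result = evaluate_q(A_R, A_S)           # one cell per inhabited position of A
    return [A[x // 2 - 1] for x in result]  # replace 2i by the original value A[i]

print(compact_via_q(["a", EMPTY, "b", EMPTY, "c"]))  # -> ['a', 'b', 'c']
```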
As a consequence of Theorem 3, our main data structure to represent relations are arrays that might contain _empty cells_ that do not correspond to tuples in the relation.
As the example in the introduction illustrated, it is important that intermediate results can be compacted to some extent. Indeed, processors can be assigned to all cells of an array, but not so easily to only the non-empty cells. We extensively use a classical technique by Hagerup [16] that yields some (non-linear) compaction (see Proposition 5 for the statement of this result). However, Hagerup's compaction algorithm does not preserve the order of the elements in the array, so ordered arrays with non-empty cells are transformed into more compact but unordered arrays.
This brings us to another notorious obstacle for \(\mathcal{O}(1)\)-time parallel algorithms: they cannot sort with polynomially many processors, not even in a (slightly) non-compact fashion. The _padded sort problem_ asks to sort \(n\) given items into an array of length \(n+o(n)\) with empty cells in spots without an item.
([23, Theorem 4.2]) Solving the padded sort problem on a randomized strong priority CRCW-PRAM requires \(\Omega(\log^{*}n)\) expected time.
Thus, we cannot rely on sorting as an intermediate operator, either.
Yet another weakness of \(\mathcal{O}(1)\)-time parallel algorithms is counting. It follows readily from the equivalence with polynomial-size constant-depth circuits that (reasonable) CRCW-PRAMs cannot tell whether the number of ones in a sequence of zeros and ones is even [13, 2], let alone count them. These obstacles apply in particular to the evaluation of aggregate queries and for query evaluation under bag semantics with explicit representation of multiplicities of tuples. However, we do not study any of those in this paper.
Note.It turned out at submission time of this paper that we had missed a paper by Goldberg and Zwick [14], that contains two improvements to the above: it shows that _ordered_ compaction and approximate counting, up to a factor of \(1+\frac{1}{(\log n)^{a}}\), for any \(a>0\), are
possible in constant time with work \(\mathcal{O}(n^{1+\varepsilon})\). In fact both results rely on the same technique for computing consistent approximate prefix sums. We discuss this issue further in our conclusion.
## 4 Basics
Query evaluation algorithms often use additional data structures like index structures. We consider two different kinds of such data structures for our \(\mathcal{O}(1)\)-time parallel algorithms.
The first setting that we consider is that of dictionary-based compressed databases, see, e.g., [10]. In a nutshell, the database has a dictionary that maps data values to natural numbers and internally stores and manipulates tuples over these numbers to improve performance. Such dictionaries are often defined attribute-wise, but for the purpose of this paper this does not matter. Query evaluation does not need to touch the actual dictionaries, it only works with the numbers. In this paper, we write "in the presence of a dictionary" or "in the dictionary setting" to indicate that we assume that such a dictionary exists for the database \(D\) at hand, and that it uses numbers of size at most \(\mathcal{O}(|D|)\). In particular, the database relations then only contain numbers of this size.
In the other _ordered_ setting, we assume that database relations are represented by ordered arrays, for each order of its attributes. In particular, we assume a linear order on the data values.
Arrays.As mentioned before, we assume in this paper that relations are stored in \(1\)-dimensional arrays, whose entries are tuples that might be augmented by additional data.
More formally, an array \(\mathcal{A}\) is a sequence of consecutive memory cells. The number of cells is its _size_\(|\mathcal{A}|\). By \(\mathcal{A}[i]\) for \(1\leq i\leq|\mathcal{A}|\) we refer to the \(i\)-th cell of \(\mathcal{A}\) and to the (current) content of that cell, and we call \(i\) its index. We assume that the size of an array is always available to all processors (for instance, it might be stored in a "hidden" cell with index \(0\)). Given an index \(i\), any processor can access cell \(\mathcal{A}[i]\) in \(\mathcal{O}(1)\) time with \(\mathcal{O}(1)\) work.
In this paper, a cell \(\mathcal{A}[i]\) of an array always holds a distinguished database tuple \(t=\mathcal{A}[i].t\) (over some schema) and a flag that indicates whether the cell is inhabited.1 There might be additional data, e.g., further Boolean flags and links to other cells of (possibly) other arrays.
Footnote 1: If a cell is not inhabited, its data is basically ignored.
We say that an array _represents_ a relation \(R\) if some inhabited cell holds tuple \(t\), for each tuple \(t\) of \(R\), and no inhabited cells contain other tuples. It represents \(R\)_concisely_, if each tuple occurs in exactly one inhabited cell. To indicate that an array represents a relation \(R\) we usually denote it as \(\mathcal{A}_{R}\), \(\mathcal{A}^{\prime}{}_{R}\), etc.
An array \(\mathcal{A}\) that represents a relation \(R\) is _ordered_ if it is lexicographically ordered with respect to some ordered list \(X\) of the attributes from \(R\)'s schema in the obvious sense. Its order is \(Y\)-compatible, for a set \(Y\) of attributes, if the attributes of \(Y\) form a prefix of \(X\) (hence the order induces a partial order with respect to \(Y\)).
We often consider the _induced tuple sequence_\(t_{1},\ldots,t_{|\mathcal{A}|}\) of an array \(\mathcal{A}\). Here, \(t_{i}=\mathcal{A}[i].t\) is a _proper_ tuple, if \(\mathcal{A}[i]\) is inhabited, or otherwise \(t_{i}\) is the _empty tuple_\(\bot\).
**Example 4.1**.: The tuple sequence \([(1,5),\bot,(3,4),(8,3),(1,5),\bot,\bot,(7,3)]\) from an array \(\mathcal{A}\) of size eight has five proper tuples and three empty tuples. It represents the relation \(R=\{(1,5),(3,4),(8,3),(7,3)\}\), but _not_ concisely. The sequence \([(1,5),(3,4),(7,3),\bot,(8,3)]\) represents \(R\) concisely and ordered with respect to the canonical attribute order.
Operations and links.Before we explain the basic operations used by our evaluation algorithms we illustrate some aspects by an example.
We sketch how to evaluate the projection \(\pi_{B}(R)\) given the array \(\mathcal{A}=[(1,5),(3,4),(7,3),\bot,(8,3)]\) from Example 4.1.
First, with the operation Map the array \(\mathcal{A}^{\prime}=[5,4,3,\bot,3]\) is computed and each tuple \(\mathcal{A}[i]\) is augmented by a link to \(\mathcal{A}^{\prime}[i]\) and vice versa. To achieve this, the tuples from \(\mathcal{A}\) are loaded to processors \(1,\ldots,5\), each processor applies the necessary changes to its tuple, and then writes the new tuple to the new array \(\mathcal{A}^{\prime}\). We note that it is not known in advance which cells of \(\mathcal{A}\) are inhabited and therefore, we need to assign one processor for each cell. Each processor only applies a constant number of steps, so the overall work for the Map-operation is \(\mathcal{O}(|\mathcal{A}|)\).
To get an array that represents \(\pi_{B}(R)\) concisely, a second operation eliminates duplicates. To this end, it creates a copy \(\mathcal{A}^{\prime\prime}\) of \(\mathcal{A}^{\prime}\) and checks with one processor for each pair \((i,j)\) of indices with \(i<j\) in parallel, whether \(\mathcal{A}^{\prime\prime}[i]=\mathcal{A}^{\prime\prime}[j]\) holds. If \(\mathcal{A}^{\prime\prime}[i]=\mathcal{A}^{\prime\prime}[j]\) holds, then cell \(\mathcal{A}^{\prime\prime}[i]\) is made uninhabited. Lastly, the algorithm creates links from every cell in \(\mathcal{A}^{\prime}\) to the unique inhabited cell in \(\mathcal{A}^{\prime\prime}\) holding the same tuple. To do this, it checks with one processor for each pair \((i,j)\) of indices with \(i<j\) in parallel, whether \(\mathcal{A}^{\prime}[i]=\mathcal{A}^{\prime\prime}[j]\) holds and \(\mathcal{A}^{\prime\prime}[j]\) is inhabited. If this is the case, the processor for \((i,j)\) augments the cell \(\mathcal{A}^{\prime}[i]\) with a link to \(\mathcal{A}^{\prime\prime}[j]\) and vice versa. Note that multiple processors might attempt to augment a cell \(\mathcal{A}^{\prime\prime}[j]\) with a link to a cell of \(\mathcal{A}^{\prime}\); but since this happens in parallel only one processor will be successful. Overall, we get 2-step-links from \(\mathcal{A}\) to \(\mathcal{A}^{\prime\prime}\) and 2-step-links from \(\mathcal{A}^{\prime\prime}\) to "representatives" in \(\mathcal{A}\).
The second operation has a work bound of \(\mathcal{O}(|\mathcal{A}|^{2})\) because it suffices to assign one processor for each pair \((i,j)\) of indices and each processor only applies a constant number of steps. We will show in Section 5 that work bounds \(\mathcal{O}(|\mathcal{A}|)\) and \(\mathcal{O}(|\mathcal{A}|^{1+\varepsilon})\) can be achieved for eliminating duplicates in the dictionary and ordered setting, respectively (cf. Lemma 5.2).
Of course, one might skip the intermediate writing and reading of tuples of \(\mathcal{A}^{\prime}\). We will often blur the distinction whether tuples reside in an array or within a sequence of processors.
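For illustration, the two steps of the example can be simulated sequentially as follows; the code is a plain Python sketch with names of our own choosing, and the PRAM parallelism is replaced by ordinary loops over cells and pairs of cells.

```python
EMPTY = None

A = [(1, 5), (3, 4), (7, 3), EMPTY, (8, 3)]           # array for R over attributes (A, B)

# Step 1 (Map): one "processor" per cell projects its tuple onto attribute B.
A1 = [t[1] if t is not EMPTY else EMPTY for t in A]    # -> [5, 4, 3, None, 3]

# Step 2 (duplicate elimination): one "processor" per pair (i, j) with i < j
# makes cell i uninhabited whenever both cells hold the same value.
A2 = list(A1)
for i in range(len(A2)):
    for j in range(i + 1, len(A2)):
        if A2[i] is not EMPTY and A1[j] is not EMPTY and A1[i] == A1[j]:
            A2[i] = EMPTY                              # keep only the last occurrence

# Links from A1 to the representative cells in A2 (index or None).
links = [next((j for j in range(len(A2))
               if A2[j] is not EMPTY and A2[j] == A1[i]), None)
         if A1[i] is not EMPTY else None
         for i in range(len(A1))]

print(A2)     # [5, 4, None, None, 3]  -- represents pi_B(R) = {5, 4, 3} concisely
print(links)  # [0, 1, 4, None, 4]
```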
Basic operations. Next, we describe some basic operations which we will use as building blocks for query evaluation algorithms on PRAMs in the remainder of this paper. We will describe algorithms for them in the next section.
Just as in the example above, the operations usually get arrays as input, produce arrays as output, augment tuples and add links between tuples. In fact, each time a tuple of a new array results from some tuple of an input array we silently assume that (possibly mutual) links are added.
\(\mathsf{Concatenate}(\mathcal{A},\mathcal{B})\) yields an array \(\mathcal{C}\) of size \(|\mathcal{A}|+|\mathcal{B}|\) whose proper tuples are exactly the proper tuples of \(\mathcal{A}\) followed by those of \(\mathcal{B}\).

\(\mathsf{Map}(\mathcal{A},f)\) yields an array \(\mathcal{A}^{\prime}\) of size \(|\mathcal{A}|\) in which every proper tuple \(t\) of \(\mathcal{A}\) is replaced by \(f(t)\).

\(\mathsf{Partition}(\mathcal{A},n,g)\) yields, for a function \(g\) that maps proper tuples to \(\{1,\ldots,n\}\), arrays \(\mathcal{A}_{1},\ldots,\mathcal{A}_{n}\) of size \(|\mathcal{A}|\) such that, for each \(j\), the array \(\mathcal{A}_{j}\) holds exactly the proper tuples \(t\) of \(\mathcal{A}\) with \(g(t)=j\).

\(\mathsf{Compact}_{\varepsilon}(\mathcal{A},b)\) yields an array \(\mathcal{B}\) of size \(b^{i\varepsilon}\leq n^{1+\varepsilon}\cdot b^{\varepsilon}\), for some \(i\geq 0\), that holds all proper tuples of \(\mathcal{A}\) in distinct cells. Here, \(b\) is
an upper bound for the (possibly unknown) number \(n\) of proper tuples in \(\mathcal{A}\), given as an optional parameter. We refer to \(i\) as the "compaction parameter" and note that it can be inferred from the size of \(\mathcal{B}\). Mutual links are added as usual.
\(\mathsf{SearchRepresentatives}(\mathcal{A},\mathcal{B})\) links every inhabited cell \(\mathcal{A}[i]\) to an inhabited representative cell \(\mathcal{B}[j]\), such that \(\mathcal{B}[j].t=\mathcal{A}[i].t\) holds, if such a cell exists. Furthermore, for every \(i_{1},i_{2}\) with \(\mathcal{A}[i_{1}]=\mathcal{A}[i_{2}]\neq\bot\), both \(\mathcal{A}[i_{1}]\) and \(\mathcal{A}[i_{2}]\) are linked to the same representative. If required a copy of \(\mathcal{B}\) might be produced in which representatives are marked. We stress that \(\mathcal{B}\) does not have to represent its relation concisely, and, in fact, the operation is used to remove duplicates.
\(\mathsf{Deduplicate}(\mathcal{A})\) chooses one representative tuple for each tuple-value, marks the remaining cells as uninhabited and redirects incoming links from other arrays towards the representatives, if possible.
## 5 Algorithmic Techniques and Algorithms for Basic Array Operations
In this section, we first describe some important algorithmic techniques and afterwards present algorithms for the basic operations established in Section 4.
Compaction.To implement the operation \(\mathsf{Compact}_{\varepsilon}\), we will utilise the following classical result by Hagerup, whose formulation is slightly adapted to our setting.
**Proposition 5.1** ([16, unnumbered theorem, p. 340]). For every \(\varepsilon>0\), there is a \(\mathcal{O}(1)\)-time parallel algorithm that, given an array \(\mathcal{A}\) and a number \(k\), copies the proper tuples in \(\mathcal{A}\) to distinct cells of an array of size at most \(k^{1+\varepsilon}\) or detects that \(\mathcal{A}\) contains more than \(k\) proper tuples. The algorithm requires \(\mathcal{O}(|\mathcal{A}|)\) work and space on an arbitrary CRCW-PRAM.
The space bound is only implicit in [16]. To keep the paper self-contained, we give a detailed account of the algorithm in Appendix A, including an analysis of the space requirements.
Array hash tables.In the presence of dictionaries we use _array hash tables_ which associate each inhabited cell in \(\mathcal{A}\) with a number from \([|\mathcal{A}|]\), such that \(\mathcal{A}[i],\mathcal{A}[j]\) get the same number if and only if \(\mathcal{A}[i].t=\mathcal{A}[j].t\) holds. Array hash tables can be efficiently computed.
There is a \(\mathcal{O}(1)\)-time parallel algorithm that, in the presence of a dictionary, computes an array hash table for a given array \(\mathcal{A}\), and requires \(\mathcal{O}(|\mathcal{A}|)\) work and \(\mathcal{O}(|\mathcal{A}|\cdot|D|)\) space on an arbitrary CRCW-PRAM.
We note that due to the "arbitrary" resolution of concurrent write, the result of such a computation is not uniquely determined by the relations.
Proof sketch.: Let \(A_{1},\ldots,A_{\ell}\) be the attributes of the relation \(R\) represented by \(\mathcal{A}\) in an arbitrary but fixed order and define \(X_{j}=\{A_{1},\ldots,A_{j}\}\) for all \(j\in\{1,\ldots,\ell\}\). The algorithm inductively computes hash values for tuples in \(R[X_{j}]\) for increasing \(j\) from \(1\) to \(\ell\).
The idea is to assign, to each tuple \(t\in R[X_{j}]\), a processor number in the range \(\{1,\ldots,|\mathcal{A}|\}\) as hash value and augment each cell of \(\mathcal{A}\) containing a proper tuple \(t^{\prime}\) with \(t^{\prime}[X_{j}]=t\) by this hash value. Since the same (projected) tuple \(t\in R[X_{j}]\) might occur in multiple, pairwise different, cells of \(\mathcal{A}\), it does not suffice to load all tuples in \(\mathcal{A}\) to \(|\mathcal{A}|\) processors and let each processor augment the tuple loaded to it by its processor number: multiple (different) numbers might get assigned to the same tuple (in different cells of \(\mathcal{A}\)). To resolve these conflicts, the algorithm utilizes the presence of a dictionary and the hash values for tuples in \(R[X_{j-1}]\).
For the base case \(j=1\) the algorithm allocates an auxiliary array of size \(\mathcal{O}(|D|)\) and loads the tuples in \(\mathcal{A}\) to processors \(1\) to \(|\mathcal{A}|\). To be more precise, for each \(i\in\{1,\ldots,|\mathcal{A}|\}\), the tuple \(t_{i}\) in cell \(\mathcal{A}[i]\) is loaded to processor \(i\). Recall that, in the dictionary setting, each value in the active domain is a number of size at most \(\mathcal{O}(|D|)\). Thus, the projection \(t[A_{1}]\) can be used as index for the auxiliary array. Each processor \(i\) with a proper tuple writes its processor number \(i\) into cell \(t_{i}[A_{1}]\) of the auxiliary array and then assigns to \(t_{i}\) the value actually written at position \(t_{i}[A_{1}]\). Note that, for each value \(a\), all processors \(i\) with \(t_{i}[A_{1}]=a\) will assign the same value to their tuple \(t_{i}\), since precisely one processor among the processors with \(t_{i}[A_{1}]=a\) succeeds in writing its number to cell \(t_{i}[A_{1}]\) on an arbitrary CRCW-PRAM. This can be done with \(\mathcal{O}(|\mathcal{A}|)\) work and \(\mathcal{O}(|D|)\) space.
For \(j>1\) the algorithm proceeds similarly but also takes, for a tuple \(t\), the hash value \(h_{j-1}(t[X_{j-1}])\) for \(t[X_{j-1}]\) into account, in addition to \(t[A_{j}]\). For this purpose, the algorithm first computes the hash values for \(R[X_{j-1}]\). It then allocates an auxiliary array of size \(\mathcal{O}(|\mathcal{A}|\cdot|D|)\) which is interpreted as a two-dimensional array and each processor \(i\) writes its number into cell \((h_{j-1}(t_{i}[X_{j-1}]),t_{i}[A_{j}])\) of the auxiliary array (if \(t_{i}\) is a proper tuple). The number in this cell is then the hash value for \(t_{i}[X_{j}]\).
Writing and reading back the processor numbers requires \(\mathcal{O}(|\mathcal{A}|)\) work and the auxiliary array requires \(\mathcal{O}(|\mathcal{A}|\cdot|D|)\) space. The same bounds hold for the recursive invocations of ComputeHashvalues. Since the recursion depth is \(\ell\), the procedure requires \(\mathcal{O}(\ell\cdot|\mathcal{A}|)=\mathcal{O}(|\mathcal{A}|)\) work and, because the space for the auxiliary arrays can be reused, \(\mathcal{O}(|\mathcal{A}|\cdot|D|)\) space in total.
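The following sequential Python simulation illustrates the construction; the auxiliary two-dimensional array is mimicked by a dictionary, the "arbitrary" write conflict resolution by keeping the first value written to a cell, and all names are ours.

```python
EMPTY = None

def array_hash_table(A, arity):
    """Return hash values h[i] in {1, ..., |A|} with h[i] == h[j] iff A[i] == A[j]."""
    h = [None] * len(A)                        # hash values for the current attribute prefix
    for j in range(arity):                     # one round per attribute
        aux = {}                               # auxiliary array, indexed by (prefix hash, value)
        for i, t in enumerate(A):              # conceptually: all cells write in parallel
            if t is EMPTY:
                continue
            key = (h[i], t[j])
            if key not in aux:                 # "arbitrarily", one writer succeeds
                aux[key] = i + 1
        for i, t in enumerate(A):              # read back the winning processor number
            if t is not EMPTY:
                h[i] = aux[(h[i], t[j])]
    return h

A = [(1, 5), (3, 4), EMPTY, (1, 5), (8, 3)]
print(array_hash_table(A, 2))   # e.g. [1, 2, None, 1, 5] -- equal tuples share a value
```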
Since most of our query evaluation algorithms rely on these two techniques, we adopt the arbitrary CRCW-PRAM as our standard model and refer to it simply as _CRCW-PRAM_.
Search in ordered arrays. In sequential database processing, indexes implemented by search trees play an important role, in particular for testing whether a given tuple is in a given relation. We use ordered arrays instead. Our search algorithm for ordered arrays uses links from each cell to the previous and next inhabited cell. We refer to those links as predecessor and successor links, respectively, and say an array is _fully linked_ if it has predecessor and successor links.
For every \(\varepsilon>0\), there is a \(\mathcal{O}(1)\)-time parallel algorithm that computes, for an array \(\mathcal{A}\), predecessor and successor links with work \(\mathcal{O}(|\mathcal{A}|^{1+\varepsilon})\) on a common CRCW-PRAM.
Proof sketch.: We describe the computation of predecessor links. Successor links can be computed analogously. Let \(n=|\mathcal{A}|\) and \(\delta=\frac{\varepsilon}{2}\). In the first round, the algorithm considers subintervals of length \(n^{\delta}\) and establishes predecessor links within them. To this end, it uses, for each interval, an \(n^{\delta}\times n^{\delta}\)-table whose entries \((i,j)\) with \(i<j\) are initialised by \(1\) if \(\mathcal{A}[i]\) is inhabited and, otherwise, by \(0\). Next, for each triple \(i,j,k\) of positions in the interval, entry \((i,j)\) is set to \(0\) if \(i<k<j\) and \(\mathcal{A}[k]\) is inhabited. It is easy to see that afterwards entry \((i,j)\) still carries a \(1\) if and only if \(i\) is the predecessor of \(j\). And for all such pairs a link from \(\mathcal{A}[j]\) to \(\mathcal{A}[i]\) is added. For every interval, \((n^{\delta})^{3}=n^{3\delta}\) processors suffice for this computation, i.e. one processor for each triple \(i,j,k\) of positions in the interval. Since there are \(\frac{n}{n^{\delta}}=n^{1-\delta}\) intervals, this yields an overall work of \(n^{1-\delta}\cdot n^{3\delta}=n^{1+2\delta}=n^{1+\varepsilon}\). In the next round, intervals of length \(n^{2\delta}\) are considered and each is viewed as a sequence of \(n^{\delta}\) smaller intervals of length \(n^{\delta}\). The goal in the second round is to establish predecessor links for the minimum cells of each of the smaller intervals. This can be done similarly with the same
asymptotic work as round \(1\). After \(\lceil\frac{1}{\delta}\rceil\) rounds, this process has established predecessor links for all cells (except for cells that have no predecessor).
In fully linked ordered arrays, tuples can be searched for efficiently.
For every \(\varepsilon>0\), there is a \(\mathcal{O}(1)\)-time parallel algorithm that computes, for a given tuple \(t\) and an ordered array \(\mathcal{A}\) with predecessor and successor links, the largest tuple \(t^{\prime}\) in \(\mathcal{A}\) with \(t^{\prime}\leq t\) with work \(\mathcal{O}(|\mathcal{A}|^{\varepsilon})\) on a CREW-PRAM.
Proof sketch.: Let \(n=|\mathcal{A}|\). In the first round, using \(n^{\varepsilon}\) processors, the algorithm tests for all cells with positions \(k=in^{1-\varepsilon}\) whether \(\mathcal{A}[k]\) is inhabited, or the predecessor of \(\mathcal{A}[k]\) contains a tuple \(t^{\prime}\leq t\) and whether this does not hold for position \((i+1)n^{1-\varepsilon}\) or its successor. By a suitable process the search continues recursively in the thus identified sub-interval. After \(\lceil\frac{1}{\varepsilon}\rceil\) rounds it terminates. Since, in each round, \(n^{\varepsilon}\) processors are used, the statement follows.
We note that analogously it is possible to search for \(m\) tuples in parallel with work \(\mathcal{O}(m\,|\mathcal{A}|^{\varepsilon})\).
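The following sequential Python sketch mimics this search; it is only an illustration under our own naming, with the probes of one round executed by a loop instead of \(|\mathcal{A}|^{\varepsilon}\) parallel processors, and with the sub-interval bookkeeping simplified.

```python
import math

def probe(A, pred, k):
    """Value of the last inhabited cell at or before position k (None if there is none)."""
    if A[k] is not None:
        return A[k]
    return A[pred[k]] if pred[k] is not None else None

def search(A, pred, t, eps=0.5):
    """Largest value t' <= t stored in the ordered, fully linked array A (None if none)."""
    n, lo, hi, best = len(A), 0, len(A) - 1, None
    for r in range(1, math.ceil(1 / eps) + 1):
        step = max(1, int(round(n ** (1 - r * eps))))    # probe spacing shrinks by n^eps per round
        anchor = None
        for k in range(lo, hi + 1, step):                # about n^eps probes, conceptually in parallel
            v = probe(A, pred, k)
            if v is not None and v <= t:
                best = v if best is None else max(best, v)
            if v is None or v <= t:
                anchor = k                               # last probe not yet beyond t
        if anchor is None:                               # every probed value already exceeds t
            break
        lo, hi = anchor, min(hi, anchor + step - 1)      # continue in one sub-interval
    return best

A = [1, None, 4, 7, None, 9, 12, None]                   # ordered array with empty cells
pred = [None, 0, 0, 2, 3, 3, 5, 6]                       # predecessor links
print(search(A, pred, 8))                                # -> 7
```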
As an alternative to ordered arrays, bounded-depth search trees could be used. They can be defined in the obvious way with degree about \(n^{\varepsilon}\). The work for a search is asymptotically the same as for fully linked ordered arrays.
Algorithms for basic array operations.In the remainder of this section we consider algorithms for basic array operations. For the operations Concatenate, Map, and Partition, neither the algorithm nor the analysis depends on the setting, i.e. they are the same in the dictionary setting and the ordered setting. Furthermore, their implementation is straightforward and only requires EREW-PRAMs, instead of CRCW-PRAMs.
There is a \(\mathcal{O}(1)\)-time parallel algorithm for Concatenate that, given arrays \(\mathcal{A}\) and \(\mathcal{B}\), requires \(\mathcal{O}(|\mathcal{A}|+|\mathcal{B}|)\) work and space on an EREW-PRAM.
Proof sketch.: The tuples in \(\mathcal{A}\) are loaded to processors \(1\) to \(|\mathcal{A}|\) and the tuples in \(\mathcal{B}\) to processors \(|\mathcal{A}|+1\) to \(|\mathcal{A}|+|\mathcal{B}|\). The tuples are then stored in the output array \(\mathcal{C}\). Each processor can also augment its tuple (between \(\mathcal{A}\), \(\mathcal{B}\) and \(\mathcal{C}\)) with mutual links.
There is a \(\mathcal{O}(1)\)-time parallel algorithm for Map that, given an array \(\mathcal{A}\) and a function \(f\) that can be evaluated in \(\mathcal{O}(1)\)-time with work and space \(\mathcal{O}(1)\) on an EREW-PRAM, requires \(\mathcal{O}(|\mathcal{A}|)\) work and \(\mathcal{O}(|\mathcal{A}|)\) space on an EREW-PRAM. If \(f\) is order-preserving and \(\mathcal{A}\) is ordered, then the output array is ordered, too.
Proof sketch.: The algorithm loads the tuples in \(\mathcal{A}\) to processors \(1\) to \(|\mathcal{A}|\) and each processor computes the image \(f(t)\) for its tuple \(t\).
Since \(f(t)\) has to be computed for \(|\mathcal{A}|\) tuples, the bounds for work and space follow.
There is a \(\mathcal{O}(1)\)-time parallel algorithm for Partition that, given an array \(\mathcal{A}\), an integer \(n\), and a function \(g\) that maps proper tuples in \(\mathcal{A}\) into \(\{1,\ldots,n\}\) and can be evaluated in \(\mathcal{O}(1)\)-time with work and space \(\mathcal{O}(1)\), requires \(\mathcal{O}(n\cdot|\mathcal{A}|)\) work and \(\mathcal{O}(n\cdot|\mathcal{A}|)\) space on an EREW-PRAM.
Proof sketch.: The algorithm first augments every proper tuple \(t\) with the number \(g(t)\) using Map. This requires \(\mathcal{O}(|\mathcal{A}|)\) work and \(\mathcal{O}(|\mathcal{A}|)\) space, cf. Lemma 5.6.
Then the arrays \(\mathcal{A}_{1},\ldots,\mathcal{A}_{n}\) of size \(|\mathcal{A}|\) are allocated (and initialised). This requires \(\mathcal{O}(n\cdot|\mathcal{A}|)\) work and space.
For each \(i\in\{1,\ldots,|\mathcal{A}|\}\) in parallel, the tuple \(t_{i}\) in cell \(\mathcal{A}[i]\) is then, if it is a proper tuple, copied into cell \(\mathcal{A}_{j}[i]\) where \(j=g(t_{i})\). This requires \(\mathcal{O}(|\mathcal{A}|)\) work.
Let us point out that the upper bound for the work stated in Lemma 5.7 can be reduced to \(\mathcal{O}(n+|\mathcal{A}|)\) by adapting the classical _lazy array initialisation technique_ for (sequential) RAMs to PRAMs.3 It turned out, however, that this is not necessary for our results, since \(n\) is either a constant or the work is dominated by other operations in our algorithms.
Footnote 3: In a nutshell, this requires replacing a global counter by one counter per processor and maintaining back-references to initialised cells per processor (processors can still read counters and back-references of other processors).
The algorithm for \(\textsc{Compact}_{\varepsilon}\) does not depend on the setting either. It is implicitly proved in the proof of Proposition 5.1 in [16] for a fixed choice of \(b\). The idea is to try several, but constantly many, compaction parameters.
([16], implicit in proof) For every \(\varepsilon>0\), there is a \(\mathcal{O}(1)\)-time parallel algorithm for \(\textsc{Compact}_{\varepsilon}\) that, given an array \(\mathcal{A}\) and an upper bound \(b\) for the number of proper tuples in \(\mathcal{A}\), requires \(\mathcal{O}(|\mathcal{A}|)\) work and space on a CRCW-PRAM.
Proof sketch.: Let \(n\) denote the number of proper tuples in the given array \(\mathcal{A}\).
We assume \(n\geq 1\) in the following4 and set \(k=\lceil\frac{1}{\varepsilon}\rceil\). The algorithm invokes the algorithm guaranteed by Proposition 5.1 with \(k_{i}=b^{\frac{i\varepsilon}{1+\varepsilon}}\) for every \(0\leq i\leq\lceil k(1+\varepsilon)\rceil\) in ascending order until it is successful (or the current \(k_{i}\) is larger than \(|\mathcal{A}|\) in which case the procedure can just return \(\mathcal{A}\)). Each of the (constantly many) invocations requires \(\mathcal{O}(|\mathcal{A}|)\) work and space.
Footnote 4: The algorithm yields \(i=0\), if \(n=0\) holds, as required.
If the algorithm is successful for \(i=0\), the requirements are trivially met.
Otherwise, let \(i\) be such that the compaction for \(i+1\) was successful but the compaction for \(i\) was not (note that for \(j=\lceil k(1+\varepsilon)\rceil\) the compaction will always succeed, since \(k_{j}\) is an upper bound for the number of proper tuples in \(\mathcal{A}\)). Then the resulting array has size
\[k_{i+1}^{1+\varepsilon}=\left(b^{\frac{(i+1)\varepsilon}{1+\varepsilon}} \right)^{1+\varepsilon}=b^{(i+1)\varepsilon}=b^{i\varepsilon}\cdot b^{ \varepsilon}.\]
Moreover, since the compaction algorithm was not successful for \(i\), we also have that \(k_{i}=b^{\frac{i\varepsilon}{1+\varepsilon}}<n\) and, thus, \(b^{i\varepsilon}<n^{1+\varepsilon}\).
All in all, the array \(\mathcal{B}\) has size \(b^{(i+1)\varepsilon}\leq n^{1+\varepsilon}\cdot b^{\varepsilon}\).
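A sequential sketch of this wrapper is given below; `hagerup_compact` is only a stand-in for the algorithm of Proposition 5.1, and all names are ours.

```python
import math

def hagerup_compact(A, k, eps):
    """Stand-in: place the proper tuples into an array of size at most k^(1+eps), or fail."""
    tuples = [t for t in A if t is not None]
    if len(tuples) > k:
        return None                                      # "more than k proper tuples"
    size = max(1, math.ceil(k ** (1 + eps)))
    return tuples + [None] * (size - len(tuples))

def compact(A, b, eps):
    """Try k_i = b^(i*eps/(1+eps)) for i = 0, 1, ... until compaction succeeds."""
    rounds = math.ceil(math.ceil(1 / eps) * (1 + eps))
    for i in range(rounds + 1):
        k = math.ceil(b ** (i * eps / (1 + eps)))
        if k > len(A):
            return list(A)                               # nothing to gain, keep A as it is
        B = hagerup_compact(A, k, eps)
        if B is not None:
            return B                                     # size at most n^(1+eps) * b^eps
    return list(A)

A = [None] * 40 + ["t%d" % i for i in range(5)] + [None] * 55
B = compact(A, b=100, eps=0.5)
print(len(A), len(B), [t for t in B if t is not None])   # 100 cells shrink to 12
```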
For \(\textsc{SearchRepresentatives}(\mathcal{A},\mathcal{B})\) we present four algorithms, depending on the setting and, in the ordered setting, whether \(\mathcal{A}\) or \(\mathcal{B}\) is suitably ordered. This operation is crucial for deduplication, semi-join and join and the upper bounds impact the bounds for those operations as well.
For every \(\varepsilon>0\), there are \(\mathcal{O}(1)\)-time parallel algorithms for \(\textsc{SearchRepresentatives}\) that, given arrays \(\mathcal{A}\) and \(\mathcal{B}\), have the following bounds on a CRCW-PRAM.
**(a)**: _Work_ \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|)\cdot|\mathcal{B}|)\) _and space_ \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|))\)_, without any assumptions;_
**(b)**: _Work_ \(\mathcal{O}(|\mathcal{A}|+|\mathcal{B}|)\) _and space_ \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|)\cdot|D|)\) _in the presence of a dictionary;_
**(c)**: _Work_ \(\mathcal{O}(|\mathcal{A}|\cdot|\mathcal{B}|^{\varepsilon})\) _and space_ \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|))\)_, if_ \(\mathcal{B}\) _is ordered and fully linked;_
**(d)**: _Work_ \(\mathcal{O}(|\mathcal{B}|\cdot|\mathcal{A}|^{\varepsilon})\) _and space_ \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|))\)_, if_ \(\mathcal{A}\) _is ordered and fully linked and_ \(\mathcal{B}\) _is concise._
Proof sketch.: For (a), the naive algorithm can be used. In a first phase, it uses one processor per pair \((i,j)\) of indices for the cells of \(\mathcal{B}\) to mark duplicates in \(\mathcal{B}\): if \(i<j\) and \(\mathcal{B}[i].t=\mathcal{B}[j].t\), then \(\mathcal{B}[j]\) is marked as duplicate. In the second phase, it uses one processor per pair \((i,j)\) of indices for the cells of \(\mathcal{A}\) and \(\mathcal{B}\) and links \(\mathcal{A}[i]\) to \(\mathcal{B}[j]\) if \(\mathcal{A}[i].t=\mathcal{B}[j].t\) and \(\mathcal{B}[j]\) is not marked as duplicate.
For (b), first an array hash table for (the concatenation of) \(\mathcal{A}\) and \(\mathcal{B}\) is computed with \(\mathcal{O}(|\mathcal{A}|+|\mathcal{B}|)\) work and \(\mathcal{O}((|\mathcal{A}|+|\mathcal{B}|)\cdot|D|)\) space, thanks to Lemma 5.2 and Lemma 5.5. For a proper tuple \(t\), let \(h(t)\) denote the hash value in the range \(\{1,\ldots,|\mathcal{A}|+|\mathcal{B}|\}\) assigned to \(t\). The algorithm then allocates an auxiliary array of size \(|\mathcal{A}|+|\mathcal{B}|\) and, for each proper tuple \(s_{i}\) in \(\mathcal{B}\), it writes, in parallel, \(i\) into cell \(h(s_{i})\) of the auxiliary array. Here \(s_{i}\) denotes the \(i\)-th tuple from \(\mathcal{B}\). Other processors might attempt to write an index to cell \(h(s_{i})\) but only one will succeed. This requires \(\mathcal{O}(|\mathcal{B}|)\) work to write the indices and \(\mathcal{O}(|\mathcal{A}|+|\mathcal{B}|)\) space for the auxiliary array.
For each proper tuple \(t\) in \(\mathcal{A}\) it is then checked in parallel, if cell \(h(t)\) contains an index \(i\). If yes, then \(t\) is marked and augmented with a pointer to cell \(\mathcal{B}[i]\), since \(\mathcal{B}[i].t=t\). If not, then \(t\) has no partner tuple in \(\mathcal{B}\), thus \(t\) is not augmented by a link.
Towards (c), the algorithm identifies, for each proper tuple \(t\) in \(\mathcal{A}\), the smallest proper tuple \(s\) in \(\mathcal{B}\) such that \(t\leq s\). If \(t=s\) holds, \(t\)'s cell is marked and a link to the cell of \(s\) is added. For each tuple, this can be done with work \(|\mathcal{B}|^{\varepsilon}\), thanks to Proposition 5.4, and these searches can be done in parallel by assigning \(|\mathcal{B}|^{\varepsilon}\) processors per tuple of \(\mathcal{A}\).
For (d), the algorithm searches, for each proper tuple \(s\) in \(\mathcal{B}\), the smallest tuple \(t\) in \(\mathcal{A}\) with \(t\geq s\). If \(t=s\) then the cell of \(t\) is marked and a link to the cell of \(s\) is added. If \(\mathcal{A}\) is guaranteed to be concise, that's all. Otherwise, for each proper tuple \(t\) in \(\mathcal{A}\) the smallest inhabited cell \(\mathcal{A}[i]\) with \(\mathcal{A}[i].t=t\) is searched. If it is marked then \(t\) is marked as well and a link to the cell in \(\mathcal{B}\) to which \(\mathcal{A}[i]\) links is added.
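For illustration, variant (b) can be simulated sequentially as follows; Python's built-in `hash` stands in for the array hash table of Lemma 5.2, the auxiliary array is mimicked by a dictionary, and all names are ours.

```python
EMPTY = None

def search_representatives_dict(A, B, hash_value):
    """Return links[i] = index of a representative cell in B for A[i] (or None)."""
    aux = {}                                     # auxiliary array indexed by hash values
    for j, s in enumerate(B):                    # conceptually: all B-cells write in parallel
        if s is not EMPTY and hash_value(s) not in aux:
            aux[hash_value(s)] = j               # one arbitrary writer succeeds
    links = []
    for t in A:                                  # conceptually: all A-cells read in parallel
        links.append(aux.get(hash_value(t)) if t is not EMPTY else None)
    return links

A = [(1, 5), EMPTY, (7, 3), (2, 2)]
B = [(7, 3), (1, 5), EMPTY, (7, 3)]
print(search_representatives_dict(A, B, hash_value=hash))
# -> [1, None, 0, None]: equal A-cells point to the same representative in B
```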
For every \(\varepsilon>0\), there are \(\mathcal{O}(1)\)-time parallel algorithms for \(\mathsf{Deduplicate}\) that, given an array \(\mathcal{A}\), have the following bounds on an arbitrary CRCW-PRAM.
(a) Work \(\mathcal{O}(|\mathcal{A}|^{2})\) and space \(\mathcal{O}(|\mathcal{A}|)\), without any assumptions;
(b) Work \(\mathcal{O}(|\mathcal{A}|)\) and space \(\mathcal{O}(|\mathcal{A}|\cdot|D|)\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}|^{1+\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}|)\), if \(\mathcal{A}\) is ordered.

The algorithms combine the corresponding algorithms for \(\mathsf{SearchRepresentatives}\), applied to \(\mathcal{A}\) and itself, with an application of Map.

## 6 Algorithms for Relational Operators

We now turn to the operators of the relational algebra. Selection is the simplest case: there is a \(\mathcal{O}(1)\)-time parallel algorithm that computes, upon input of a relation \(R\) and a selection condition, the selection with
\(\mathcal{O}(|\mathcal{A}_{R}|)\) work and space on an EREW-PRAM. The output array is of size at most \(|\mathcal{A}_{R}|\). If \(\mathcal{A}_{R}\) is ordered, then the output is ordered, too._
The algorithm is a simple application of the operation Map.
For every \(\varepsilon>0\), there are \(\mathcal{O}(1)\)-time parallel algorithms for CRCW-PRAMs that compute upon input of two relations \(R\) and \(S\) the semijoin \(R\ltimes S\) with the following bounds. Here, \(X\) denotes the joint attributes of \(R\) and \(S\).
(a) Work \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|\mathcal{A}_{S}|)\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), without any assumptions;
(b) Work \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\) and space \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|D|)\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}_{R}|\cdot|\mathcal{A}_{S}|^{\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), if \(\mathcal{A}_{S}\) is \(X\)-compatibly ordered and fully linked;
(d) Work \(\mathcal{O}(|\mathcal{A}_{S}|\cdot|\mathcal{A}_{R}|^{\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), if \(\mathcal{A}_{R}\) is \(X\)-compatibly ordered and fully linked and \(\mathcal{A}_{S}\) is concise.
The output array is of size \(|\mathcal{A}_{R}|\). If \(\mathcal{A}_{R}\) is ordered, then the output is ordered, too. Moreover, each tuple \(t\in R\ltimes S\) in the output gets augmented by a link to a corresponding tuple in \(S\).
For every \(\varepsilon>0\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute upon input of two relations \(R\) and \(S\) the difference \(R\setminus S\) with the following bounds.
(a) Work \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|\mathcal{A}_{S}|)\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), without any assumptions;
(b) Work \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\) and space \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|D|)\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}_{R}|\cdot|\mathcal{A}_{S}|^{\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), if \(\mathcal{A}_{S}\) is ordered and fully linked;
(d) Work \(\mathcal{O}(|\mathcal{A}_{S}|\cdot|\mathcal{A}_{R}|^{\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), if \(\mathcal{A}_{R}\) is ordered and fully linked and \(\mathcal{A}_{S}\) is concise.
The output array is of size \(|\mathcal{A}_{R}|\). If \(\mathcal{A}_{R}\) is ordered, then the output is ordered, too.
The algorithms for Proposition 6.2 and Proposition 6.3 combine the appropriate algorithm for SearchRepresentatives with suitable applications of Map.
For every \(\varepsilon>0\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that receive as input a relation \(R\) and a list \(X\) of attributes from \(R\), and evaluate the projection \(\pi_{X}(R)\) with the following bounds.
(a) Work \(\mathcal{O}(|\mathcal{A}_{R}|^{2})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|)\), without any assumptions;
(b) Work \(\mathcal{O}(|\mathcal{A}_{R}|)\) and space \(\mathcal{O}(|\mathcal{A}_{R}|\cdot|D|)\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}_{R}|^{1+\varepsilon})\) and space \(\mathcal{O}(|\mathcal{A}_{R}|)\), if \(\mathcal{A}_{R}\) is \(X\)-compatibly ordered.
The output array is of size \(|\mathcal{A}_{R}|\). If \(\mathcal{A}_{R}\) is ordered then the output is ordered, too.
The algorithms combine Deduplicate with Map in a straightforward manner. We note that we do not require in (c) that \(\mathcal{A}_{R}\) is fully linked, since the work bound allows computing the links.
For every \(\varepsilon>0\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute upon input of two relations \(R\) and \(S\) the union \(R\cup S\) with the following bounds.
(a) Work \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|\mathcal{A}_{S}|)\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), without any assumptions;
(b) Work \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\) and space \(\mathcal{O}((|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\cdot|D|)\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}_{R}|\cdot|\mathcal{A}_{S}|^{\varepsilon}+|\mathcal{A}_{S}|)\) and space \(\mathcal{O}(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\), if \(\mathcal{A}_{S}\) is ordered and fully linked.
The output array is of size \(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|\).
The algorithms basically concatenate \(R\setminus S\) and \(S\). We note that thanks to the symmetry of union, the algorithm of (c) can also be applied if \(\mathcal{A}_{R}\) is ordered.
**Proposition 6.6**.: _For every \(\varepsilon>0\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute upon input of two relations \(R\) and \(S\) the join \(R\bowtie S\) with the following bounds. Here, \(X\) denotes the joint attributes of \(R\) and \(S\)._
(a) Work \(\mathcal{O}(|\mathcal{A}_{S}|^{2}+|\pi_{X}(S)|^{1+\varepsilon}|\mathcal{A}_{S}|^{1+\varepsilon}+(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)\,|\mathcal{A}_{S}|+|R\bowtie S|\,|\mathcal{A}_{R}|^{2\varepsilon}|\mathcal{A}_{S}|^{2\varepsilon})\) and space \(\mathcal{O}(|\pi_{X}(S)|^{1+\varepsilon}|\mathcal{A}_{S}|^{1+\varepsilon}+|\mathcal{A}_{R}|+|\mathcal{A}_{S}|+|\mathcal{A}_{R}|^{2\varepsilon}|\mathcal{A}_{S}|^{2\varepsilon})\), without any assumptions;
(b) Work \(\mathcal{O}(|\pi_{X}(S)|^{1+\varepsilon}|\mathcal{A}_{S}|^{1+\varepsilon}+(|\mathcal{A}_{R}|+|\mathcal{A}_{S}|)+|R\bowtie S|\,|\mathcal{A}_{R}|^{2\varepsilon}|\mathcal{A}_{S}|^{2\varepsilon})\) in the presence of a dictionary;
(c) Work \(\mathcal{O}(|\mathcal{A}_{S}|^{1+\varepsilon}+|\mathcal{A}_{R}|\cdot|\mathcal{A}_{S}|^{\varepsilon}+|R\bowtie S|\,|\mathcal{A}_{R}|^{2\varepsilon}|\mathcal{A}_{S}|^{2\varepsilon})\), if \(\mathcal{A}_{S}\) is \(X\)-compatibly ordered and fully linked.
_The output array is of size \(\left|R\bowtie S\right|\left|\mathcal{A}_{R}\right|^{2\varepsilon}\left| \mathcal{A}_{S}\right|^{2\varepsilon}\)._
Proof idea.: The algorithms proceed in three phases, the grouping phase, the pairing phase and the joining phase. For (a) and (b), the tuples of \(S\) are grouped with respect to their \(X\)-attributes in the grouping phase. Each group is compacted into an array of some size \(\left|\mathcal{A}_{S}\right|^{\ell\varepsilon}\). Likewise the projection \(\pi_{X}(S)\), containing the "index tuples", is compacted. In the pairing phase, a semijoin reduction is performed and the remaining \(R\)-tuples are partitioned with respect to the size of their corresponding "\(X\)-group" from \(S\). Finally, during the joining phase, output tuples are produced, by combining tuples from \(R\) with the tuples from their "\(X\)-group" from \(S\). The work bounds for the three phases can be seen as the three main summands in the statement of the proposition.
If \(S\) is represented by an array that is \(X\)-compatibly ordered and fully linked, the grouping phase can be performed more efficiently. In that case, \(\mathcal{A}_{S}\) itself can be viewed as the concatenation of all "\(X\)-groups". Thus, this step is for free and, furthermore, the compaction of the "\(X\)-groups" can be done in-place and therefore only requires work \(\left|\mathcal{A}_{S}\right|\) in total. The pairing phase and the joining phase are basically as for (a) and (b), but the work bounds for the pairing phase differ, due to the more efficient semijoin algorithm in the ordered setting.
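The three phases can be illustrated by the following sequential Python sketch (names ours); compaction and processor assignment are not modelled, so the sketch only shows the data flow of the grouping, pairing and joining phases.

```python
from collections import defaultdict

def join(R, S, x_of_r, x_of_s, combine):
    # Grouping phase: one group of S-tuples per value of the joint attributes X.
    groups = defaultdict(list)
    for s in S:
        groups[x_of_s(s)].append(s)              # the "X-group" of s

    # Pairing phase: semijoin reduction of R; each surviving tuple is paired with its X-group.
    pairs = [(r, groups[x_of_r(r)]) for r in R if x_of_r(r) in groups]

    # Joining phase: combine every surviving R-tuple with its whole X-group.
    return [combine(r, s) for r, group in pairs for s in group]

R = [(1, "a"), (2, "b"), (3, "c")]               # over attributes (X, Y)
S = [(1, 10), (1, 11), (3, 30)]                  # over attributes (X, Z)
print(join(R, S,
           x_of_r=lambda r: r[0], x_of_s=lambda s: s[0],
           combine=lambda r, s: (r[0], r[1], s[1])))
# -> [(1, 'a', 10), (1, 'a', 11), (3, 'c', 30)]
```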
## 7 Query Evaluation
After studying algorithms for basic operations and operators of the relational algebra, we are now prepared to investigate the complexity of \(\mathcal{O}(1)\)-time parallel algorithms for query evaluation.
Although every query of the relational algebra can be evaluated by a \(\mathcal{O}(1)\)-time parallel algorithm with polynomial work, the polynomials can be arbitrarily bad. In fact, that a graph has a \(k\)-clique can be expressed by a conjunctive query with \(k\) variables, and it follows from Rossman's \(\omega(n^{k/4})\) lower bound for the size of bounded-depth circuit families for \(k\)-Clique [26] that any \(\mathcal{O}(1)\)-time parallel algorithm that evaluates this query needs work \(\omega(n^{k/4})\).
We therefore concentrate in this section on restricted query evaluation settings. We study two restrictions of query languages which allow efficient sequential algorithms: the semijoin algebra and free-connex and/or acyclic conjunctive queries. Furthermore, we present a \(\mathcal{O}(1)\)-time parallel version of worst-case optimal join algorithms.
In the following, IN always denotes the maximum number of tuples in any relation of the underlying database that is addressed by the given query. Furthermore, we always
assume that the database relations are represented concisely by _compact_ arrays without any uninhabited cells.
### Semi-Join Algebra
The semijoin algebra is the fragment of the relational algebra that uses only selection, projection, rename, union, set difference and, not least, semijoin. It is well-known that semijoin queries produce only query results of size \(\mathcal{O}(|D|)\) and can be evaluated in time \(\mathcal{O}(|D|)\)[22, Theorem 7]. From the results of Section 6 we can easily conclude the following.
For each query \(q\) of the semijoin algebra and for every \(\varepsilon>0\) there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that, given a database \(D\), evaluate \(q(D)\) with the following bounds.
(a) Work \(\mathcal{O}(\text{IN}^{2+\varepsilon})\) and space \(\mathcal{O}(\text{IN}^{2+\varepsilon})\), without any assumptions;
(b) Work \(\mathcal{O}(\text{IN})\) and space \(\mathcal{O}(\text{IN}\cdot|D|)\) in the presence of a dictionary.
Proof sketch.: Towards (a), the operators of the query are evaluated with the naive algorithms from Section 6 (stated as (a)). After each evaluation the result array is compacted by \(\texttt{Compact}_{\varepsilon/2}\). Statement (b) follows by using the (b)-algorithms from Section 6.
Altogether, in the presence of a dictionary, semijoin queries can be evaluated work-optimally by a \(\mathcal{O}(1)\)-time parallel algorithm. We plan to address the ordered setting in a journal version of this paper. We expect that the results of [14] enable almost work-optimal \(\mathcal{O}(1)\)-time parallel algorithms with a \(\mathcal{O}(\text{IN}^{1+\varepsilon})\) work bound, if the relations are represented by suitably ordered arrays. We discuss this further in our conclusion.
### Evaluation of Conjunctive Queries
In this section we give algorithms to evaluate subclasses of conjunctive queries in parallel. More precisely, we consider acyclic join queries, acyclic conjunctive queries, free-connex acyclic conjunctive queries and arbitrary free-connex conjunctive queries.
Conjunctive queries are conjunctions of relation atoms. We write a _conjunctive query_ (_CQ_ for short) \(q\) as a rule of the form \(q:\mathsf{A}\leftarrow\mathsf{A}_{1},\ldots,\mathsf{A}_{m}\), where \(\mathsf{A},\mathsf{A}_{1},\ldots,\mathsf{A}_{m}\) are atoms and \(m\geq 1\). A conjunctive query \(q\) is _acyclic_, if it has a join tree \(T_{q}\), i.e. an undirected tree \((V(T),E(T))\) where \(V(T)\) consists of the atoms in \(q\) and, for each variable \(v\) of \(q\), the set \(\{\alpha\in V(T)|\alpha\text{ contains }v\}\) induces a connected subtree of \(T_{q}\). It is _free-connex acyclic_ if \(q\) is acyclic and the Boolean query whose body consists of the body atoms _and_ the head atom of \(q\) is acyclic as well [5, 9]. A _join query_ is a conjunctive query with no quantified variable, i.e. every variable in a join query is free. For more background on (acyclic) conjunctive queries we refer to [1, 3].
Our algorithms rely on the well-known Yannakakis algorithm [31]. Yannakakis' algorithm receives as input an acyclic conjunctive query \(q\), the join tree \(T_{q}\) and a database \(D\). With each node \(v\) in \(T_{q}\) a relation \(S_{v}\) is associated. Initially, \(S_{v}=R_{v}(D)\), where \(R_{v}\) is the relation symbol with which \(v\) is labelled. The algorithm is divided into the following three steps; a sequential sketch in code follows the enumeration.
1. **bottom-up semijoin reduction:** All nodes are visited in bottom-up traversal order of \(T\). When a node \(v\) is visited, \(S_{v}\) is updated to \(S_{v}\ltimes S_{c}\) for every child \(c\) of \(v\) in \(T\).
2. **top-down semijoin reduction:** All nodes are visited in top-down traversal order of \(T\). When a node \(v\) is visited, the relation \(S_{c}\) is updated to \(S_{c}\ltimes S_{v}\) for every child \(c\) of \(v\) in \(T\).
3. All nodes are visited in bottom-up traversal order in \(T\). When a node \(v\) is visited, the algorithm updates, for every child \(c\) of \(v\), the relation \(S_{v}\) to \(\pi_{\mathsf{free}(q)\cup\mathtt{attr}(S_{v})}(S_{v}\bowtie S_{c})\), where \(\mathsf{free}(q)\) denotes the attributes that are associated with the free variables of \(q\).
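The sketch below simulates the three steps sequentially; it uses our own representation (tuples as dictionaries, the join tree as a parent map with the nodes in bottom-up order) and is not meant to reflect the PRAM implementation.

```python
def project(R, attrs):
    seen, out = set(), []
    for t in R:
        key = tuple(sorted((a, t[a]) for a in attrs if a in t))
        if key not in seen:
            seen.add(key)
            out.append({a: t[a] for a in attrs if a in t})
    return out

def semijoin(R, S):
    common = set(R[0]) & set(S[0]) if R and S else set()
    keys = {tuple(sorted((a, s[a]) for a in common)) for s in S}
    return [r for r in R if tuple(sorted((a, r[a]) for a in common)) in keys]

def join(R, S):
    common = set(R[0]) & set(S[0]) if R and S else set()
    return [{**r, **s} for r in R for s in S if all(r[a] == s[a] for a in common)]

def yannakakis(relations, parent, bottom_up, free_attrs):
    S = {v: list(rel) for v, rel in relations.items()}
    for v in bottom_up:                                   # (1) bottom-up semijoin reduction
        if parent[v] is not None:
            S[parent[v]] = semijoin(S[parent[v]], S[v])
    for v in reversed(bottom_up):                         # (2) top-down semijoin reduction
        if parent[v] is not None:
            S[v] = semijoin(S[v], S[parent[v]])
    for v in bottom_up:                                   # (3) bottom-up joins with projection
        p = parent[v]
        if p is not None:
            keep = free_attrs | set(S[p][0] if S[p] else [])
            S[p] = project(join(S[p], S[v]), keep)
    root = bottom_up[-1]                                  # last node in bottom-up order is the root
    return project(S[root], free_attrs)

# q(x, z) <- R(x, y), T(y, z) with join tree T -> R (root R).
relations = {"R": [{"x": 1, "y": 2}, {"x": 4, "y": 5}], "T": [{"y": 2, "z": 3}]}
print(yannakakis(relations, parent={"T": "R", "R": None},
                 bottom_up=["T", "R"], free_attrs={"x", "z"}))
# -> [{'x': 1, 'z': 3}] (up to key order)
```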
Proposition 6.2 immediately yields the following lemma.

There are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms for phases (1) and (2) of the Yannakakis algorithm with the following bounds.
1. Work \(\mathcal{O}(\mathsf{IN}^{2})\) and space \(\mathcal{O}(\mathsf{IN})\), without any assumptions;
2. Work \(\mathcal{O}(\mathsf{IN})\) and space \(\mathcal{O}(\mathsf{IN}\cdot|D|)\) in the presence of a dictionary.
By combining Yannakakis' algorithm with the algorithms from Section 6 we obtain the following results.
For every \(\varepsilon>0\) and every acyclic join query \(q\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute \(q(D)\), given a database \(D\), with the following bounds.
1. Work \(\mathcal{O}(\mathsf{IN}^{2}+\mathsf{OUT}^{2+\varepsilon}\mathsf{IN}^{\varepsilon})\) and space \(\mathcal{O}(\mathsf{IN}+\mathsf{OUT}^{2+\varepsilon}\mathsf{IN}^{\varepsilon})\), without any assumptions;
2. Work \(\mathcal{O}(\mathsf{IN}^{1+\varepsilon}\cdot\mathsf{OUT}^{1+\varepsilon})\) and space \(\mathcal{O}((\mathsf{IN}\cdot\mathsf{OUT})^{1+\varepsilon}\,|D|)\), in the presence of a dictionary.
To perform phase (3) of the Yannakakis algorithm, the parallel algorithms first shrink every array \(\mathcal{A}_{R_{v}}\) to the size \(|S_{v}|^{1+\varepsilon^{\prime}}\,|R_{v}(D)|^{\varepsilon^{\prime}}\) using \(\mathsf{Compact}_{\varepsilon^{\prime}}(S_{v})\), for some very small \(\varepsilon^{\prime}\), depending (only) on the size of the join tree. Likewise, by calling the join algorithm with a suitable parameter, it strongly compacts each intermediate join result. That the stated bounds are met can be established by a straightforward, but tedious calculation, given in Appendix C.
For every \(\varepsilon>0\), and every acyclic conjunctive query \(q\), there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute \(q(D)\), given a database \(D\), with the following bounds.
1. Work \(\mathcal{O}(\mathsf{IN}^{2}+\mathsf{OUT}^{2+\varepsilon}\mathsf{IN}^{2+ \varepsilon})\) and space \(\mathcal{O}(\mathsf{IN}+\mathsf{IN}^{2+\varepsilon}\cdot\mathsf{OUT}^{1+ \varepsilon})\), without any assumptions;
2. Work \(\mathcal{O}(\mathsf{OUT}^{1+\varepsilon}\mathsf{IN}^{2+2\varepsilon})\) and space \(\mathcal{O}(\mathsf{OUT}^{1+\varepsilon}\mathsf{IN}^{2+\varepsilon}\,|D|)\), in the presence of a dictionary.
The algorithms are obtained from the algorithms for Proposition 7.3 by a suitable adaptation of phase (3). A proof sketch for Proposition 7.4 is given in Appendix C.
It turns out that the bounds for acyclic join queries carry over to free-connex acyclic conjunctive queries. We use the reduction from free-connex acyclic queries to join queries given in [8]. We adapt it for \(\mathcal{O}(1)\)-time parallel algorithms.
For every free-connex acyclic query \(q\) and every database \(D\) there exists an acyclic join query \(\tilde{q}\) and a database \(\widetilde{D}\) such that \(q(D)=\tilde{q}(\widetilde{D})\). Here, \(\tilde{q}\) only depends on \(q\).
Furthermore, there are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that compute upon input of a free-connex acyclic query \(q\) and a database \(D\) the corresponding join query \(\widetilde{q}\) and database \(\widetilde{D}\) with the following bounds.
1. Work \(\mathcal{O}(\mathsf{IN}^{2})\) and space \(\mathcal{O}(\mathsf{IN})\), without any assumptions;
2. Work \(\mathcal{O}(\mathsf{IN})\) and space \(\mathcal{O}(\mathsf{IN}\cdot|D|)\) in the presence of a dictionary;
A proof sketch is given in Appendix C.
By combining Lemma 7.5 and Proposition 7.3 we obtain the following result.

There are CRCW-PRAM \(\mathcal{O}(1)\)-time parallel algorithms that receive as input a free-connex acyclic conjunctive query \(q\) and a database \(D\) and compute the result \(q(D)\), with the following bounds.
(a) Work \(\mathcal{O}(\mathsf{IN}^{2}+\mathsf{OUT}^{2+\varepsilon}\mathsf{IN}^{\varepsilon})\) and space \(\mathcal{O}(\mathsf{IN}+\mathsf{OUT}^{2+\varepsilon}\mathsf{IN}^{\varepsilon})\), without any assumptions;
(b) Work \(\mathcal{O}(\mathsf{IN}^{1+\varepsilon}\cdot\mathsf{OUT}^{1+\varepsilon})\) and space \(\mathcal{O}((\mathsf{IN}\cdot\mathsf{OUT})^{1+\varepsilon}\,|D|)\), in the presence of a dictionary.
In [5, Definition 36] and [8, Definition 3.2], a definition of free-connex, not necessarily acyclic, conjunctive queries is given. For the sake of completeness, the definition is also given in Appendix C. Corollary 7.6 can be extended to that class of queries along the lines of [8, Lemma 4.4].
We plan to give a more detailed account in a journal version of this paper.
### Weakly Worst-Case Optimal Work for Natural Joins
This section is concerned with the evaluation of _natural join queries_\(q=R_{1}\Join\ldots\Join R_{m}\) over some schema \(\Sigma=\{R_{1},\ldots,R_{m}\}\) with attributes \(\mbox{attr}(q)=\bigcup_{i=1}^{m}\mbox{attr}(R_{i})\). It was shown in [4] that \(|q(D)|\leq\prod_{i=1}^{m}|R_{i}|^{x_{i}}\) holds for every database \(D\) and that this bound is tight for infinitely many databases \(D\) (this is also known as the AGM bound). Here \(x_{1},\ldots,x_{m}\) is a fractional edge cover of \(q\) defined as a solution of the following linear program.
\[\text{minimize }\sum_{i=1}^{m}x_{i}\quad\text{subject to}\quad\sum_{i:\,A\in\mathtt{attr}(R_{i})}x_{i}\geq 1\ \text{ for all }A\in\mathtt{attr}(q)\ \text{ and }\ x_{i}\geq 0\ \text{ for all }1\leq i\leq m\]
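For instance, for the triangle query \(q=R(A,B)\Join S(B,C)\Join T(A,C)\) with \(|R|=|S|=|T|=N\), the assignment \(x_{R}=x_{S}=x_{T}=\frac{1}{2}\) is a fractional edge cover, since every attribute occurs in exactly two atoms, and the AGM bound yields \(|q(D)|\leq N^{\frac{1}{2}}\cdot N^{\frac{1}{2}}\cdot N^{\frac{1}{2}}=N^{3/2}\).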
We say that a natural join query \(q\) has _weakly worst-case optimal_ \(\mathcal{O}(1)\)-time parallel algorithms if, for every \(\varepsilon>0\), there is a \(\mathcal{O}(1)\)-time parallel algorithm that evaluates \(q\) with work \(\mathcal{O}\big((\prod_{i=1}^{m}|R_{i}|^{x_{i}}+\mathsf{IN})^{1+\varepsilon}\big)\). For comparison, in the sequential setting, algorithms are considered worst-case optimal if they have a time bound \(\mathcal{O}(\prod_{i=1}^{m}|R_{i}|^{x_{i}}+\mathsf{IN})\) [24]. In this subsection, we show that natural join queries indeed have weakly worst-case optimal \(\mathcal{O}(1)\)-time parallel algorithms.
For every \(\varepsilon>0\) and natural join query \(q=R_{1}\Join\ldots\Join R_{m}\) with attributes \(X=(A_{1},\ldots,A_{k})\), there is a \(\mathcal{O}(1)\)-time parallel algorithm that, given arrays \(\mathcal{A}_{R_{1}},\ldots,\mathcal{A}_{R_{m}}\) ordered w.r.t. \(X\), computes \(q(D)\) and requires \(\mathcal{O}\left(\left((\prod_{i=1}^{m}|R_{i}|^{x_{i}})+\mbox{IN}\right) \cdot\mbox{IN}^{\varepsilon}\right)\) work and space on a CRCW-PRAM where \((x_{1},\ldots,x_{m})\) is a fractional edge cover of \(q\).
Proof idea.: A \(\mathcal{O}(1)\)-time parallel algorithm can proceed, from a high-level perspective, similarly to the sequential attribute elimination join algorithm, see e.g. [3, Algorithm 10].
In a nutshell, the algorithm computes iteratively, for increasing \(j\) from \(1\) to \(k\), relations \(L_{j}\) defined as follows: \(L_{1}=\bigcap_{1\leq i\leq m,A_{1}\in\mbox{attr}(R_{i})}\pi_{A_{1}}(R_{i})\) and, for \(j>1\), \(L_{j}\) is the union of all relations \(V_{t}=\{t\}\times\bigcap_{1\leq i\leq m,A_{j}\in\mbox{attr}(R_{i})}\pi_{A_{j}}(R_{i}\ltimes\{t\})\) for each \(t\in L_{j-1}\). \(L_{k}\) is then the query result \(q(D)\). Note that each \(L_{j}\) contains tuples over attributes \(X_{j}=(A_{1},\ldots,A_{j})\).
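For intuition, the following minimal sequential Python sketch mirrors the iteration just described; it favors readability over the time bounds discussed next, and all identifiers are ours.

```python
# Sequential sketch of the attribute-elimination join; no complexity guarantees intended.
def attribute_elimination_join(relations, attr_order):
    """relations: list of (attrs, tuples), attrs a tuple of attribute names,
    tuples a set of equal-length value tuples; attr_order = (A_1, ..., A_k)."""
    def candidate_values(prefix, prefix_attrs, attrs, tuples, a):
        # values for attribute `a` in tuples agreeing with the prefix,
        # i.e. the projection pi_a(R_i semijoin {prefix})
        out = set()
        for tup in tuples:
            row = dict(zip(attrs, tup))
            if all(row[x] == v for x, v in zip(prefix_attrs, prefix) if x in row):
                out.add(row[a])
        return out

    L = [()]                                   # L_0 contains only the empty tuple
    for j, a in enumerate(attr_order):
        prefix_attrs = attr_order[:j]
        new_L = []
        for t in L:
            vals = None
            for attrs, tuples in relations:
                if a in attrs:                 # intersect over relations mentioning A_j
                    cv = candidate_values(t, prefix_attrs, attrs, tuples, a)
                    vals = cv if vals is None else vals & cv
            for v in vals or ():
                new_L.append(t + (v,))
        L = new_L
    return L                                   # = q(D), tuples over (A_1, ..., A_k)

# toy usage: R(A,B) joined with S(B,C)
R = (("A", "B"), {(1, 2), (1, 3)})
S = (("B", "C"), {(2, 5), (3, 6)})
print(attribute_elimination_join([R, S], ("A", "B", "C")))  # -> [(1, 2, 5), (1, 3, 6)] (up to ordering)
```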
To achieve the desired running time in the sequential setting, it is essential that each relation \(V_{t}\) for \(t\in L_{j-1}\) is computed in time \(\tilde{\mathcal{O}}(\min_{1\leq i\leq m}|R_{i}\ltimes\{t\}|)\), where \(\tilde{\mathcal{O}}\) hides a logarithmic factor; for instance with the Leapfrog algorithm, see e.g. [29], [3, Proposition 27.10].
In the parallel setting, each relation \(V_{t}\) is computed with work \(\mathcal{O}(\min_{1\leq i\leq m}|R_{i}\ltimes\{t\}|\cdot\mbox{IN}^{\frac{1}{2}\varepsilon})\) - for all tuples \(t\in L_{j-1}\) in parallel. Note that the work bound is _not_ uniform, i.e. the work bound for a tuple \(t\) depends on how many "matching" tuples there are in each of the input relations. This makes assigning processors challenging.
Utilizing that the input relations are ordered w.r.t. \(X_{j}\), our algorithm groups the tuples in the relations \(\pi_{X_{j}}(R_{i})\) w.r.t. \(X_{j-1}\) and identifies, for each \(t\in L_{j-1}\), the corresponding group in \(\pi_{X_{j}}(R_{i})\). These groups are compacted using \(\mbox{\tt Compact}_{\delta}\) for \(\delta=\frac{\varepsilon}{4}\), which makes it possible to approximate
the size of \(R_{i}\ltimes\{t\}\) up to a factor of \(\mathsf{IN}^{\frac{1}{2}\varepsilon}\) for each \(i\), and, thus, \(\min_{1\leq i\leq m}|R_{i}\ltimes\{t\}|\) for each tuple \(t\).
The tuples in \(L_{j-1}\) are then partitioned w.r.t. (the approximation of) \(\min_{1\leq i\leq m}|R_{i}\ltimes\{t\}|\) into sets \(S_{j,\ell}\). Each tuple in a set \(S_{j,\ell}\) can then be assigned the same number of processors, determined by the size of the array for the smallest group, similarly as in the Leapfrog algorithm. This is feasible because the number of sets \(S_{j,\ell}\) in the partition is bounded by a constant due to the guarantees of \(\mathsf{Compact}_{\varepsilon}\).
The full proof is given in Appendix D.
We plan to address the evaluation of natural join queries in the dictionary setting in a journal version of this paper. We expect that almost the same work bound holds, with an additional summand \(\mathcal{O}(\max_{1\leq i\leq m}|R_{i}|^{2})\), accounting for the grouping of each \(\pi_{X_{j}}(R_{i})\) with respect to \(\pi_{X_{j-1}}(R_{i})\).
## 8 Conclusion
This paper is meant as a first study on work-efficient \(\mathcal{O}(1)\)-time parallel algorithms for query evaluation and many questions remain open. The results are very encouraging as they show that quite work-efficient \(\mathcal{O}(1)\)-time parallel algorithms for query evaluation are possible. In fact, the results give a hint at what could be a good notion of _work-efficiency_ in the context of constant-time parallel query evaluation. Our impression is that work-optimality is very hard to achieve in constant time and that query evaluation should be considered as work-efficient for a query language, if there are constant-time parallel algorithms with \(\mathcal{O}(T^{1+\varepsilon})\) work, for every \(\varepsilon>0\), where \(T\) is the best sequential time of an evaluation algorithm. Of course, it would be nice if this impression could be substantiated by lower bound results, but that seems to be quite challenging.
We have not given results for all combinations of query languages and settings, e.g., Subsection 7.1 and Subsection 7.2 do not yet cover the ordered setting and Subsection 7.3 not the dictionary setting.
As mentioned in Section 3, when finding the results of this paper we were unaware of the fact that [14] provides algorithms for _ordered_ compaction with constant time and work \(\mathcal{O}(n^{1+\varepsilon})\). Naturally, these algorithms can be useful for the ordered setting and we expect them to yield a \(\mathcal{O}(n^{1+\varepsilon})\) work bound for the semi-join algebra (Subsection 7.1). We do not expect them to improve the bounds for natural joins (Subsection 7.3) or for general acyclic queries (Subsection 7.2). We plan to fully explore the consequences in a journal version of this paper, but we decided against incorporating them into the final version of this paper, due to the lack of peer-review. In that journal version we will also address some of the reviewer's suggestions that could not be incorporated yet.
|
2306.15830 | Safe Navigation using Density Functions | This paper presents a novel approach for safe control synthesis using the
dual formulation of the navigation problem. The main contribution of this paper
is in the analytical construction of density functions for almost everywhere
navigation with safety constraints. In contrast to the existing approaches,
where density functions are used for the analysis of navigation problems, we
use density functions for the synthesis of safe controllers. We provide
convergence proof using the proposed density functions for navigation with
safety. Further, we use these density functions to design feedback controllers
capable of navigating in cluttered environments and high-dimensional
configuration spaces. The proposed analytical construction of density functions
overcomes the problem associated with navigation functions, which are known to
exist but challenging to construct, and potential functions, which suffer from
local minima. Application of the developed framework is demonstrated on simple
integrator dynamics and fully actuated robotic systems. | Andrew Zheng, Sriram S. K. S. Narayanan, Umesh Vaidya | 2023-06-27T23:30:35Z | http://arxiv.org/abs/2306.15830v2 | # Safety using Analytically Constructed Density Functions
###### Abstract
This paper presents a novel approach for safe control synthesis using the dual formulation of the navigation problem. The main contribution of this paper is in the analytical construction of density functions for almost everywhere navigation with safety constraints. In contrast to the existing approaches, where density functions are used for the analysis of navigation problems, we use density functions for the synthesis of safe controllers. We provide convergence proof using the proposed density functions for navigation with safety. Further, we use these density functions to design feedback controllers capable of navigating in cluttered environments and high-dimensional configuration spaces. The proposed analytical construction of density functions overcomes the problem associated with navigation functions, which are known to exist but challenging to construct, and potential functions, which suffer from local minima. Application of the developed framework is demonstrated on simple integrator dynamics and fully actuated robotic systems. Our project page with implementation is available at [https://github.com/clemson-dira/density_feedback_control](https://github.com/clemson-dira/density_feedback_control)
## I Introduction
Safe navigation of mission-critical systems is of utmost importance in many modern autonomous applications. Autonomous vehicles and industrial robots are all critical applications in which there exists a need for navigation that adheres to safety constraints. Over the past decades, the general approach to the navigation problem has consisted of formulating compositions of the system that complies with the safety certification of the original system. This traditionally implies a hierarchical architecture that decomposes the navigation problem into planning and control [1].
The planning problem involves defining a collision-free trajectory in the feasible configuration space given an initial and final configuration. These are typically implemented through sample-based planners such as rapidly-exploring random tree search (RRT) and probabilistic roadmaps (PRM) [2, 3]. These sample-based methods are observed to be probabilistically complete through iterative samples of locally safe and feasible paths. Asymptotically optimal variations of these planners have been developed in [4], where the convergence rate for optimality is improved in [5, 6].
Designing controllers to track these trajectories from the plan while satisfying dynamic and safety constraints is not so simple. Traditional methods, such as inverse dynamics, rely on the exact cancellation of the nonlinearities to track a resulting linear system through closed-loop control [7]. However, they do not guarantee safety in the presence of unsafe regions. More recently, control barrier functions (CBFs) have been introduced to provide safety certificates for the controller [8]. However, CBFs only provide safety, so augmentation of CBFs with control Lyapunov functions (CLFs) is needed to guarantee convergence and safety [9].
This framework of hierarchical navigation has seen great success in many robotic applications [10]; however, a natural issue of hierarchical navigation is the evaluation of safety certificates from the planning to the control level, which increases in complexity for large-scale systems [11].
A natural proposal is to jointly solve the navigation problem without the hierarchical structure. Artificial potential field based methods have attempted to solve the joint problem by the sum of attractive and repulsive potentials [12]. However, the existence of local minima is a well-known issue [13, 14]. In [14, 15], a class of analytical potential functions, known as navigation functions (NFs), are introduced, which guarantees almost everywhere (a.e.) convergence while adhering to safety constraints. This method relies on a range of problem-specific tuning parameters to guarantee a.e. convergence. Moreso, complex safety constraints arising from arbitrarily shaped obstacles are limited by the possible mapping to a model sphere world. Recent works have proposed using altered NF or conformal mapping
Fig. 1: Navigation framework using density where (a) defines the navigation problem, (b) shows the density for navigation, and (c) shows occupancy measure, which physically denotes the duration of system trajectories occupying the set.
to navigate complex unsafe sets [16, 17, 18]; however, these methods are nontrivial and physically unintuitive.
The navigation problem can alternatively be formulated in the dual space of density. In [19], a navigation measure was introduced to provide a convex formulation for synthesizing safe controllers. In the continuous-time setting, the density function was used as a safety certificate for the analysis and synthesis using the sum of squares optimization method [20]. Similarly, density-based approaches are also used for the convergence analysis of existing navigation algorithms [21, 22]. More recently, convex data-driven approaches based on the linear transfer Perron-Frobenius and Koopman operators are used for solving the optimal navigation problem with safety constraints [23, 24]. In contrast to using the convex dual formulation for navigation, we provide an analytical construction of density functions for navigation. In particular, the analytical construction of navigation density can be viewed as the dual construction of the classical NFs from [14]. However, unlike [14], the construction is not restricted to navigation in the sphere world environment.
The main contribution of this paper is in providing analytical construction of density functions used for solving the safe navigation problem. The density function has a physical interpretation, where the measure associated with the density is a measure of occupancy of the system trajectories in any set of the state space as shown in Figure 1. We exploit this occupancy-based physical interpretation of the density function in the construction of the navigation density functions. Unlike NFs, the density formulation can represent arbitrary shapes of the obstacle sets. We prove that the proposed density function can navigate almost all initial conditions from the initial set to the target set while avoiding the obstacle set. We show navigation results for simple integrator dynamics in complex environments as well as high-dimensional configuration spaces. Similarly, navigation results for obstacle avoidance involving robotics systems such as the two-link planar robotic arm manipulator are presented.
The rest of the paper is organized as follows. Section II discuss the preliminaries and the problem formulation. Section III discusses the construction of density functions and Section IV discusses the properties of density functions for the navigation problem. This is followed by application to robotic systems in section V and conclusive remarks about the results in section VI.
## II Notations and Problem Statement
**Notations**: The following notations will be used in this paper. \(\mathbb{R}^{n}\) denotes the \(n\)-dimensional Euclidean space, \(\mathbf{x}\in\mathbb{R}^{n}\) denotes a vector of system states, \(\mathbf{u}\in\mathbb{R}^{n}\) is a vector of control inputs. Let \(\mathbf{X}\subset\mathbb{R}^{n}\) be a bounded subset that denotes the workspace for the robot. \(\mathbf{X}_{0},\mathbf{X}_{T},\mathbf{X}_{u_{k}}\subset\mathbf{X}\), for \(k=1,\ldots,L\), denote the initial, target, and unsafe sets, respectively. With no loss of generality, we will assume that the target set is a single point set located at the origin, i.e., \(\mathbf{X}_{T}=\{0\}\). \(\mathbf{X}_{u}=\cup_{k=1}^{L}\mathbf{X}_{u_{k}}\) defines the unsafe set and \(\mathbf{X}_{s}:=\mathbf{X}\setminus\mathbf{X}_{u}\) defines the safe set. We will denote by \(\mathbf{X}_{1}:=\mathbf{X}\setminus\mathcal{B}_{\delta}\), where \(\mathcal{B}_{\delta}\) is the \(\delta\)-neighborhood of the origin for arbitrarily small \(\delta\). We use \(\mathcal{C}^{k}(\mathbf{X})\) to denote the space of all \(k\)-times differentiable functions of \(\mathbf{x}\). We use \(\mathcal{M}(\mathbf{X})\) to denote the space of all measures on \(\mathbf{X}\) and \(m(\cdot)\) to denote the Lebesgue measure. \(\mathds{1}_{A}(\mathbf{x})\) denotes the indicator function for the set \(A\subset\mathbf{X}\).
The formal statement of the navigation problem that we solve in this paper is stated as follows.
**Problem 1**: _(Almost everywhere navigation problem) The objective of this problem is to design a smooth feedback control input \(\mathbf{u}=\mathbf{k}(\mathbf{x})\) to drive the trajectories of the dynamical system_
\[\dot{\mathbf{x}}=\mathbf{u}, \tag{1}\]
_from almost every initial condition (w.r.t. Lebesgue measure) from the initial set \(\mathbf{X}_{0}\) to the target set \(\mathbf{X}_{T}\) while avoiding the unsafe set \(\mathbf{X}_{u}\)._
**Assumption 1**: _We assume that there exists a feedback controller that solves the a.e. navigation problem as stated above._
## III Construction of Density Function
The a.e. navigation problem, as stated in Problem 1, is solved using the navigation density function. The construction of the navigation density is inspired by the work of [19, 25, 26]. The navigation measure, as introduced in [19], has a physical interpretation of occupancy, where the measure of any set is equal to the occupancy of the system trajectories in the set, as shown in Figure 1. Hence, zero occupancy in a set implies system trajectories not occupying that particular set. So by ensuring that the navigation measure is zero on the obstacle set and maximum on the target set, it is possible to induce dynamics whereby the system trajectories will reach the desired target set while avoiding the obstacle set. We exploit this occupancy-based interpretation in the construction of analytical density functions.
We start with the construction of the unsafe set, where the boundary of the unsafe set is described in terms of the zero-level set of a function. Let \(h_{k}(\mathbf{x})\) be a continuous scalar-valued function for \(k=1,\ldots,L\) such that the set \(\{\mathbf{x}\in\mathbf{X}:h_{k}(\mathbf{x})\leq 0\}\), is connected with only one component. Thus, the unsafe set \(\mathbf{X}_{u_{k}}\) is defined using the function \(h_{k}(\mathbf{x})\) as follows
\[\mathbf{X}_{u_{k}}:=\{\mathbf{x}\in\mathbf{X}:h_{k}(\mathbf{x})\leq 0\}. \tag{2}\]
Next, we define a transition region \(\mathbf{X}_{s_{k}}\), which encloses the unsafe set \(\mathbf{X}_{u_{k}}\). Let \(s_{k}(\mathbf{x})\) be a continuous scalar-valued function for \(k=1,\ldots,L\) such that the set \(\{\mathbf{x}\in\mathbf{X}:s_{k}(\mathbf{x})=0\}\) defines the boundary of this transition region. Then the transition region can be defined by the following set
\[\mathbf{X}_{s_{k}}:=\{\mathbf{x}\in\mathbf{X}:s_{k}(\mathbf{x})\leq 0\} \setminus\mathbf{X}_{u_{k}}. \tag{3}\]
The proposed navigation density function is assumed to be of the form
\[\rho(\mathbf{x})=\frac{\prod_{k=1}^{L}\Psi_{k}(\mathbf{x})}{V(\mathbf{x})^{ \alpha}}. \tag{4}\]
Here, the function \(V(\mathbf{x})\) is the distance function that measures the distance from state \(\mathbf{x}\) to the target set, (i.e., the origin), and \(\alpha\) is a positive scalar. In this paper, we assume \(V(\mathbf{x})\) to be of the form \(V(\mathbf{x})=\|\mathbf{x}\|^{2}\). Additionally, \(\Psi_{k}(\mathbf{x})\) is a smooth \(\mathcal{C}^{\infty}\) function that captures the geometry of the unsafe set \(\mathbf{X}_{u_{k}}\) and can be constructed using the following sequence of functions. We first define an elementary \(\mathcal{C}^{\infty}\) function \(f\) as follows
\[f(\tau)=\begin{cases}\exp{(\frac{-1}{\tau})},&\tau>0\\ 0,&\tau\leq 0\end{cases}, \tag{5}\]
where \(\tau\in\mathbb{R}\)[27]. Next, we construct a smooth version of a step function \(\bar{f}\) from \(f\) as follows
\[\bar{f}(\tau)=\frac{f(\tau)}{f(\tau)+f(1-\tau)}. \tag{6}\]
Here, \(\bar{f}\) serves as the elementary function for representing zero and nonzero occupation through density. Furthermore, the form of the elementary function, \(\bar{f}\), is chosen to ensure that the gradient of the density function is well-defined. To incorporate more general geometric information about the environment, we define a change of variables such that \(\phi_{k}(\mathbf{x})=\bar{f}\left(\frac{h_{k}(\mathbf{x})}{h_{k}(\mathbf{x})-s_{k}(\mathbf{x})}\right)\). The resulting function \(\Phi_{k}(\mathbf{x})\) takes the following form,
\[\Phi_{k}(\mathbf{x})=\begin{cases}0,&\mathbf{x}\in\mathbf{X}_{u_{k}}\\ \phi_{k}(\mathbf{x}),&\mathbf{x}\in\mathbf{X}_{s_{k}}\\ 1,&\text{otherwise}.\end{cases} \tag{7}\]
Finally, the function \(\Psi_{k}(\mathbf{x})\) is defined as
\[\Psi_{k}(\mathbf{x})=\Phi_{k}(\mathbf{x})+\theta, \tag{8}\]
where \(\theta>0\) is some positive parameter. The parameters \(\theta\) and \(\alpha\) are introduced in the construction of the navigation density. The physical significance of these parameters and the assumption made on these parameters and functions are stated in the following remark.
**Remark 1**: _The distance function \(V(\mathbf{x})\) can be modified to adapt to the geometry of the underlying configuration space. For a Euclidean space with \(\mathbf{x}\in\mathbb{R}^{n}\), we pick \(V(\mathbf{x})=\|\mathbf{x}\|^{2}\)._
* _The parameter_ \(\alpha\) _is used to control the sharpness of the distance function and is used in the proof of the main convergence results._
* _The function_ \(\Psi_{k}(\mathbf{x})\) _is a_ \(\theta\) _shifted version of inverse bump function_ \(\Phi_{k}(\mathbf{x})\) _and hence strictly positive i.e.,_ \(\Psi_{k}(\mathbf{x})\geq\theta>0\) _for_ \(k=1,\ldots,L\)_._
* \(\Psi_{k}(\mathbf{x})\) _makes a smooth transition from_ \(\theta\) _to_ \(1+\theta\) _in the transition region_ \(\mathbf{X}_{s_{k}}\)_._
* _The transition region,_ \(\mathbf{X}_{s_{k}}\)_, acts as a sensing region for system trajectories where they start to react to the unsafe set. We refer to the transition region as the sensing region for the rest of this paper._
* \(h_{k}(\mathbf{x})=0\) _defines the boundary of the unsafe set and_ \(s_{k}(\mathbf{x})=0\) _defines the boundary of the sensing region. Refer to Figure_ 2 _for an illustrative example. In the simplest case, the function_ \(s_{k}(\mathbf{x})\) _can be chosen to be synonymous to_ \(h_{k}(\mathbf{x})\)_, such that_ \(h_{k}(\mathbf{x})-s_{k}(\mathbf{x})=\sigma\) _(where_ \(\sigma>0\) _is a constant) uniformly scales the unsafe set to form a sensing region._
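To make the construction in equations (5)-(8) concrete, here is a small NumPy sketch (ours, not the implementation released by the authors) for one circular unsafe set with \(h(\mathbf{x})=||\mathbf{x}||^{2}-r_{1}^{2}\) and sensing boundary \(s(\mathbf{x})=||\mathbf{x}||^{2}-r_{2}^{2}\).

```python
# Sketch of the inverse-bump construction, equations (5)-(8).
import numpy as np

def f(tau):                      # elementary C-infinity function, equation (5)
    return np.exp(-1.0 / tau) if tau > 0.0 else 0.0

def f_bar(tau):                  # smooth step, equation (6): 0 for tau <= 0, 1 for tau >= 1
    return f(tau) / (f(tau) + f(1.0 - tau))

def make_Psi(h, s, theta=0.1):
    """Psi_k = Phi_k + theta for the unsafe set {h <= 0} with sensing boundary {s = 0}."""
    def Psi(x):
        hx, sx = h(x), s(x)
        return f_bar(hx / (hx - sx)) + theta   # 0 -> 1 transition between the two boundaries
    return Psi

# circular obstacle: unsafe radius r1 = 2, sensing radius r2 = 3
h = lambda x: float(np.dot(x, x)) - 2.0**2
s = lambda x: float(np.dot(x, x)) - 3.0**2
Psi = make_Psi(h, s)
for p in ([0.0, 1.0], [2.5, 0.0], [4.0, 0.0]):   # unsafe set / transition region / free space
    print(p, round(Psi(np.array(p)), 3))         # theta, a value in (theta, 1+theta), 1+theta
```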
We assume explicit bounds on the functions \(\Psi_{k}\), \(V\), and their derivatives which follow from the construction of the density function in equation (4). It is important to emphasize that it is not necessary to estimate these bounds, but the existence of these bounds is used as part of the proof of the main results of this paper.
**Assumption 2**:
1. _We assume that the distance between the initial set, the target set, and the unsafe sets are all bounded away from zero by some positive constant, say_ \(\zeta\)_._
2. _For_ \(\mathbf{x}\in\mathbf{X}_{u_{k}}\)_, let_ \[V_{min}^{k}=\min_{\mathbf{x}\in\mathbf{X}_{u_{k}}}V(\mathbf{x})>0.\] (9) _Since the distance between the unsafe set and the target set is bounded away from zero, the above quantity is well-defined and greater than zero._
3. _Furthermore,_ \(m(\mathbf{X}_{u_{k}})\)_, i.e., the Lebesgue measure of the unsafe set, is assumed to be finite, with_ \(\theta\) _satisfying the following inequality for any given_ \(\varepsilon>0\)_,_ \[\theta\leq\frac{V_{min}^{k}}{m(\mathbf{X}_{u_{k}})}\varepsilon,\quad k=1, \ldots,L.\] (10)
4. _In the transition region, i.e., where_ \(s_{k}(\mathbf{x})\leq 0<h_{k}(\mathbf{x})\)_, we assume the following bounds_
\[\underline{c}_{V}\leq V(\mathbf{x})\leq\bar{c}_{V},\ \ \ \underline{c}_{V_{x}}\leq\left|\frac{\partial V}{\partial x_{j}}\right|\leq\bar{c}_{V_{x}},\] \[\left|\frac{\partial\Psi}{\partial x_{j}}\right|\leq\bar{c}_{\Psi_{x}},\ \ \ \left|\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}\right|\leq\bar{c}_{\Psi_{x}^{2}},\]
where \(\bar{c}\) and \(\underline{c}\) denote upper and lower bound constants, and subscripts of \(\bar{c}\) and \(\underline{c}\) denote bounds of the corresponding expression of the function. Further, by construction, both the first and second derivatives of \(\Psi\) w.r.t. \(x_{j}\) are zero outside the transition region.
## IV Almost Everywhere Navigation Using Density Functions
Given the construction of \(\rho(\mathbf{x})\) in (4), we design a controller for navigation as the positive gradient of the density function \(\rho(\mathbf{x})\), i.e.,
\[\dot{\mathbf{x}}=\mathbf{k}(\mathbf{x})=\nabla\rho(\mathbf{x})\\ =\left(-\frac{\alpha}{V^{\alpha+1}}\frac{\partial V}{\partial\mathbf{x}}\prod_{k=1}^{L}\Psi_{k}(\mathbf{x})+\frac{1}{V^{\alpha}}\frac{\partial}{\partial\mathbf{x}}\prod_{k=1}^{L}\Psi_{k}(\mathbf{x})\right)^{\top}. \tag{11}\]
The main result of the paper is given in the following theorem.
**Theorem 1**: _Under Assumptions 1 and 2, the dynamical system (11) will solve the a.e. navigation problem as stated in Problem 1._
Proof of this main theorem is differed to the Appendix. The feedback controller design for the a.e. navigation problem is illustrated in pseudo-code in Algorithm 1.
```
Input: \(\mathbf{X_{0}},\mathbf{X_{u}},\mathbf{X_{T}}\)
\(\Psi(\mathbf{x})\leftarrow 1\)
Define \(V(\mathbf{x})\) according to the configuration
for \(\mathbf{X}_{u_{k}}\) in \(\mathbf{X_{u}}\) do
    Define \(h_{k}(\mathbf{x})\) and \(s_{k}(\mathbf{x})\) (see Remarks 1 and 2)
    Form \(\Psi_{k}(\mathbf{x})\) from \(h_{k}(\mathbf{x})\) and \(s_{k}(\mathbf{x})\) (see equations (7) and (8))
    \(\Psi(\mathbf{x})\leftarrow\Psi(\mathbf{x})\times\Psi_{k}(\mathbf{x})\)
end for
\(\rho(\mathbf{x})=\frac{\Psi(\mathbf{x})}{V(\mathbf{x})^{\alpha}}\)
\(\mathbf{u}=\nabla\rho(\mathbf{x})\)
```
**Algorithm 1** Density-based Navigation Algorithm
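A self-contained sketch of Algorithm 1 (our own illustration, with made-up obstacle and target parameters) is given below; it evaluates \(\rho(\mathbf{x})=\Psi(\mathbf{x})/V(\mathbf{x})^{\alpha}\) and the controller \(\mathbf{u}=\nabla\rho(\mathbf{x})\) via central finite differences, and rolls out the closed loop with a normalized step in the spirit of the control saturation discussed later in Section V.

```python
# Sketch of Algorithm 1: rho(x) = Psi(x) / V(x)^alpha and u = grad rho(x).
import numpy as np

def smooth_step(tau):                     # f-bar from equation (6)
    f = lambda t: np.exp(-1.0 / t) if t > 0.0 else 0.0
    return f(tau) / (f(tau) + f(1.0 - tau))

def make_rho(obstacles, alpha=5.0, theta=0.1):
    """obstacles: list of (h, s) callables defining {h <= 0} and sensing boundary {s = 0}."""
    def rho(x):
        V = float(np.dot(x, x))           # V(x) = ||x||^2, target at the origin
        Psi = 1.0
        for h, s in obstacles:
            hx, sx = h(x), s(x)
            Psi *= smooth_step(hx / (hx - sx)) + theta
        return Psi / V**alpha
    return rho

def controller(rho, x, eps=1e-5):         # u = grad rho(x), by central differences
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (rho(x + e) - rho(x - e)) / (2.0 * eps)
    return g

# one circular obstacle centered at (2, 0): unsafe radius 1, sensing radius 1.5
center = np.array([2.0, 0.0])
h = lambda x: float(np.dot(x - center, x - center)) - 1.0**2
s = lambda x: float(np.dot(x - center, x - center)) - 1.5**2
rho = make_rho([(h, s)])

x, step = np.array([4.0, 0.5]), 0.02      # crude rollout along the normalized gradient
for _ in range(600):
    g = controller(rho, x)
    if np.linalg.norm(g) > 1e-12:
        x = x + step * g / np.linalg.norm(g)
print(x)   # the iterate should head toward the origin, bending around the disk near (2, 0)
```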
The rest of the section showcases the navigation results using the controller designed from the analytical density function. We first show the characteristics of the proposed controller, which validates the a.e. navigation properties. Then, we extend our feedback controller to a more complex environment. Lastly, a comparison of our algorithm to NFs is presented.
### _Characteristics of Density Functions_
In this example, we demonstrate the a.e. navigation properties of the proposed controller. The navigation problem is defined with the target set at \(\mathbf{X}_{T}=(4,-3)\) and the unsafe set \(\mathbf{X}_{u}\), which is constructed using a circular inverse bump function with \(h(\mathbf{x})=||\mathbf{x}||^{2}-r_{1}^{2}\) and \(s(\mathbf{x})=||\mathbf{x}||^{2}-r_{2}^{2}\) with \(r_{1}=2\) and \(r_{2}=3\). Hence, the unsafe set is the disk \(||\mathbf{x}||\leq 2\), and the inverse bump function transitions over the sensing annulus \(2<||\mathbf{x}||<3\).
Figure 3(a) illustrates the a.e. convergence of the proposed controller with a set of initial conditions defined by a line at the top left of the environment boundary. The blue contour lines represent the level sets of the density function. For this example, all the initial conditions starting on the set \(\{\mathbf{X}_{0}\subset\mathbf{X}\,\colon m(\mathbf{X}_{0})=0\}\), which is polar opposite to the target set, cannot converge. This set of initial conditions constitutes a measure-zero set. Furthermore, these initial conditions are attracted to a saddle point, implying the existence of local maxima (shown in Figure 3(b)). Note that the existence of a saddle point will imply the existence of local maxima. Any other trajectory starting from an initial condition perturbed from the zero-measure set converges to the target set \(\mathbf{X}_{T}\) while avoiding the obstacle set \(\mathbf{X}_{u}\). Furthermore, we look at the characteristics of initial conditions starting outside the sensing region, defined as a state \(\mathbf{x}\) such that \(s(\mathbf{x})\geq 0\) (trajectory A), and within the sensing region, defined as a state \(\mathbf{x}\) such that \(s(\mathbf{x})<0<h(\mathbf{x})\) (trajectory B), shown in Figure 3(c). The gradients of the density function \(\rho(\mathbf{x})\) are such that trajectory A starts to react as it enters the sensing region, while trajectory B is repelled outward towards the boundary of the sensing region before converging to the target set (see Figure 3(d)).
### _Complex environment_
One of the main features of our proposed navigation density is that it can incorporate complex shapes of the obstacle set, which is captured in terms of the unsafe set by some appropriate function \(h_{k}(\mathbf{x})\). The unsafe set \(\mathbf{X}_{u}\subset\mathbb{R}^{2}\) in Figure 4(a) is constructed using an implicit function that geometrically represents a circle, an ellipse, an oval, and a bowtie. We show that the initial conditions starting along the boundary converge to the goal at the center while safely avoiding obstacles. The proposed controller can also achieve a.e. navigation in complex maze-like environments. Figure 4(b) shows a trajectory finding a tight feasible region between two obstacles while navigating to the target set. Furthermore, this can be easily extended to navigation problems in higher dimensions. Figure 4(c) shows all trajectories starting from a plane converging to the target set while avoiding obstacles represented as 3D spheres. Figure 4(d) shows navigation with unsafe sets composed of two tori, an unbounded cylinder, and a sphere. We note that, unlike [16, 17], the construction of the density function naturally admits any complex shapes.
### _Comparison to Navigation Functions_
In this section, we compare the a.e. convergence property of artificial potential field NF to the proposed density functions in a complex environment as shown in Figure 5. More specifically, we compare the tuning of \(s(\mathbf{x})\) for a.e. convergence in the density function formulation shown in equation (4) to the tuning of \(\kappa\in\mathbb{R}\) for a.e. convergence in
NFs proposed in [14, Ch. 3, p. 36],
\[\psi_{k}(\mathbf{x})=\frac{||\mathbf{x}-\mathbf{x}_{g}||^{2}}{\left(||\mathbf{x}-\mathbf{x}_{g}||^{2\kappa}+\beta(\mathbf{x})\right)^{1/\kappa}}, \tag{12}\]
where \(\mathbf{x}_{g}\) is the desired goal location, \(\beta(\mathbf{x})\) is an obstacle function and \(\kappa\) is a tuning parameter.
Although a domain is not necessary in the density formulation, NFs do require a radially bounded sphere world. Hence, we define an appropriate bounded sphere world of radius \(25\). The authors note that NFs do not make any claims about tuning \(\kappa\) for a.e. convergence other than the sphere world and its extensions [14, 17], but for the sake of comparison, we look at an environment with a C-shaped unsafe set. We then look at initial conditions that lie inside the C-shaped unsafe set with the target set defined outside the cavity of the unsafe set.
Figures 5(a) and 5(b) show that the trajectories do not converge to the goal for all random initial conditions for small values of the tuning parameter in either the density function formulation or the artificial potential field NF formulation. This is expected for NFs, as only a large \(\kappa\) in a sphere world guarantees a.e. convergence. Likewise, the density formulation sees the same results. However, tuning \(s(\mathbf{x})\) such that the density function formulation has the a.e. convergence property is intuitive, as stated below in Remark 2. This is shown in Figure 5(c), where tuning \(s(\mathbf{x})\) to be larger than the C-shaped unsafe set results in all system trajectories converging to the target set. Note that no explicit mapping to a simplistic unsafe set
Fig. 4: (a) Trajectories converge to the target set (green) while avoiding arbitrary obstacles (gray), (b) Trajectory finding a narrow feasible region around obstacles, (c) Navigation in a spherical grid, (d) Navigation through two tori, unbounded cylinder, and sphere.
Fig. 5: Comparison of density functions and NFs for random initial conditions. The sensing region for the density function is defined by \(s(\mathbf{x})=a^{2}x_{1}^{2}+b^{2}x_{2}^{2}c^{x_{1}}-r^{2}\) (\(r\), \(a\), \(b\), \(c\) are parameters). For (a) \(r=2.5\), trajectories do not converge, while setting (c) \(r=4.5\) leads to all trajectories converging. NFs with their corresponding tuning parameter for convergence (b) \(\kappa=1\) and (d) \(\kappa=10\) lead to trajectories not converging.
Fig. 3: (a) Trajectories converge to the target set (green) while avoiding the unsafe set (gray) with a.e. convergence, (b) Initial conditions along the zero-measure set (black) converge to a saddle point (purple), (c) Trajectories starting at A (\(s(\mathbf{x})>0\)) and B (in \(\mathbf{X}_{s_{k}}\)) converge to the target set, (d) Trajectories starting from A and B follow the same path near the boundary of \(s(\mathbf{x})\).
(e.g., a circle) is required, whereas the same cannot be stated for NFs (even with high \(\kappa\)), which do not give a.e. convergence results for complex unsafe sets. This can be seen in Figure 5(d), where some trajectories exit the unsafe set and converge to the goal (by taking a large-curvature path) while others get trapped inside the cavity of the unsafe set.
**Remark 2**: _The tuning parameters in the design of the navigation density function are \(\alpha\) and \(s_{k}(\mathbf{x})\). The tuning of \(\alpha\) depends on the desired rate of convergence of the trajectories. Although a large value of \(\alpha\) is required for a.e. navigation (as shown in the Appendix), in practice even small values of \(\alpha\) (between 1 and 10) have been shown to work. The tuning of \(s_{k}(\mathbf{x})\) is physically intuitive, as it specifies the sensing region. In our simulations, a sensing region that encloses the unsafe set within a sufficiently curved convex set has worked well._
## V Application to Robotic Systems
We now consider cases that are highly relevant for applications to robotic systems: constrained control, stochastic settings, and fully actuated multi-body systems. These are treated in the following subsections.
### _Constrained Control w/ Density Function_
Bounding the magnitude of the controller defined in (11) is crucial in practical systems due to actuation limits. Although the magnitude of the control can be adjusted implicitly by tuning \(\alpha\) (a change in \(\alpha\) changes the sharpness of \(V(\mathbf{x})\), hence the gradient of the density and thus the magnitude of the control), we consider explicitly defined control constraints. In particular, we consider a system with constraints of the following form
\[\dot{\mathbf{x}}=\mathbf{u}=\nabla\rho(\mathbf{x}),\quad\mathbf{u}\in[-u_{max}, u_{max}], \tag{13}\]
where \(u_{max}\) is the bound on control. Without formality, we constrain the control when \(||\mathbf{u}||_{\infty}>u_{max}\) by normalizing the control
\[\bar{\mathbf{u}}=\frac{\mathbf{u}}{||\mathbf{u}||_{\infty}}u_{max}, \tag{14}\]
where \(\bar{\mathbf{u}}\) is the constrained control.
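A one-line saturation implementing equation (14) might look as follows (our sketch; the bound \(u_{max}\) is arbitrary).

```python
# Saturate the density-based control as in equation (14).
import numpy as np

def saturate(u, u_max):
    n = np.linalg.norm(u, ord=np.inf)
    return u if n <= u_max else u * (u_max / n)

print(saturate(np.array([0.3, -2.0]), u_max=1.0))   # -> [ 0.15 -1.  ]
```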
### _Performance of Density Function w/ Noise_
We consider the performance of our controller in a stochastic setting where noise enters through the control input
\[\dot{\mathbf{x}}=\mathbf{u}+\mathbf{w}\quad\mathbf{u}\in[-u_{max},u_{max}], \tag{15}\]
where \(\mathbf{w}\sim\mathcal{N}(\mu,\Sigma)\) is Gaussian white noise with mean \(\mu=0\) and covariance \(\Sigma\). Figure 7 showcases the navigation problem with control noise for varying levels of covariance.
We see that the feedback controller maintains safety while converging towards the goal in the presence of noise. Although invariance under our control law is not formally guaranteed, we observe that the control performance remains robust up to a certain bound on the noise.
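One simple way to reproduce such noisy rollouts of (15) is an Euler-Maruyama loop like the sketch below; the `control` callback stands in for whichever density-based feedback is used, and the noise level and bounds are placeholders.

```python
# Euler-Maruyama rollout of x_dot = u + w with a saturated control, cf. equation (15).
import numpy as np

def rollout(x0, control, u_max=1.0, sigma=0.1, dt=1e-2, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = control(x)
        n = np.linalg.norm(u, ord=np.inf)
        if n > u_max:
            u = u * (u_max / n)                       # saturation as in equation (14)
        x = x + dt * u + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    return x

# usage with any state feedback, e.g. a toy controller pointing at the origin:
print(rollout([4.0, 0.5], control=lambda x: -x))
```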
### _Fully Actuated Robotic System_
We also extend the density function presented in Section III to a general class of fully actuated robotic systems. For a robot with \(n\) joints and \(n\) rigid links, the system's dynamics can be expressed using the Euler-Lagrange equations. Consider an unconstrained system where \(\mathbf{M}(\mathbf{q})\) is the inertia matrix and \(\mathbf{H}(\mathbf{q},\dot{\mathbf{q}})\) represents the Coriolis and gravity effects on the system, \(\mathbf{q}\in\mathbb{S}^{1}\times\mathbb{S}^{1}\). Then the corresponding system is represented as follows
\[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{H}(\mathbf{q},\dot{\mathbf{q }})=\mathbf{u}. \tag{16}\]
We then take a similar approach outlined in [15] in which there exists an equivalent "planning" system defined by \(\dot{\mathbf{q}}=\nabla\rho(\mathbf{q})\) and a control law given by \(\mathbf{u}=\nabla\rho(\mathbf{q})+d(\mathbf{q},\dot{\mathbf{q}})\) (\(d(\mathbf{q},\dot{\mathbf{q}})\) is a dissipative term and \(\dot{\mathbf{q}}^{\top}d(\mathbf{q},\dot{\mathbf{q}})<\mathbf{0}\)), where the system defined in (16) tracks the planning system asymptotically [15]. For a general robotic system such as the system defined in equation (16), \(d(\mathbf{q},\dot{\mathbf{q}})\) can be selected such that it cancels out the nonlinearities of the system similar to the inverse dynamics approach. Therefore, we define a density-based inverse dynamics controller given by
\[\mathbf{u}_{p}=\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}_{\mathbf{d}}+\mathbf{H}(\mathbf{q},\dot{\mathbf{q}})+\mathbf{M}(\mathbf{q})\Bigg{(}\mathbf{K}_{\mathbf{p}}\nabla\rho(\mathbf{e})-\mathbf{K}_{\mathbf{v}}\dot{\mathbf{e}}\Bigg{)}, \tag{17}\]
where \(\mathbf{e}:=\mathbf{q}-\mathbf{q}_{\mathbf{d}}\), \(\dot{\mathbf{e}}:=\dot{\mathbf{q}}-\dot{\mathbf{q}}_{\mathbf{d}}\), \(\mathbf{q}_{\mathbf{d}}\) is the desired reference trajectory to follow, and \(\mathbf{K}_{\mathbf{p}}\) and \(\mathbf{K}_{\mathbf{v}}\) are positive definite
Fig. 6: Constrained control w/ navigation density
gain matrices. Figure 8(a) shows a fully actuated two-link planar robotic arm executing a swing-up maneuver with \(\mathbf{K_{p}}=\text{diag}([1,1]),\quad\mathbf{K_{v}}=\text{diag}([10,10])\) and \(V(\mathbf{q})=(1-\cos(q_{1}))(1-\cos(q_{2}))\). The mass and length of each link are set to unity. The task-space obstacles (circular with a radius of 0.2) are mapped to joint space and approximated using inverse bump functions. The reference trajectories are obtained in joint space based on the planning system \(\dot{\mathbf{q}}=\nabla\rho(\mathbf{q})\). The corresponding state and control trajectories are shown in Figures 8(b) and 8(c), respectively. It is seen that the density-based inverse dynamics controller drives the two-link manipulator to the upright position while avoiding the obstacle set.
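A generic sketch of the density-based inverse dynamics law (17) is shown below; `M_fn`, `H_fn`, `grad_rho`, and the reference signals are placeholders to be supplied for a specific robot (e.g., the two-link arm above), and the default gains simply mirror the values quoted in the text.

```python
# Density-based inverse dynamics control, equation (17).
import numpy as np

def density_inverse_dynamics(q, dq, q_d, dq_d, ddq_d, M_fn, H_fn, grad_rho,
                             Kp=np.diag([1.0, 1.0]), Kv=np.diag([10.0, 10.0])):
    """q, dq: joint position/velocity; (q_d, dq_d, ddq_d): reference trajectory;
    M_fn(q): inertia matrix; H_fn(q, dq): Coriolis and gravity terms;
    grad_rho(e): gradient of the density evaluated at the tracking error."""
    e, de = q - q_d, dq - dq_d
    M, H = M_fn(q), H_fn(q, dq)
    return M @ ddq_d + H + M @ (Kp @ grad_rho(e) - Kv @ de)
```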
## VI Conclusions
This work provides an analytical construction of the navigation density. Moreover, we prove that the navigation density solves the almost everywhere navigation problem. The proposed navigation density can be viewed as dual to the popular navigation function and is derived based on the occupancy-based interpretation of the density function. The navigation density has a few advantages compared to navigation functions. Unlike navigation functions, which are hard to construct, the navigation density can be constructed easily. Furthermore, the density function formulation can incorporate arbitrary shapes of the unsafe set. We provide simulation results for navigation using density functions in complex and high-dimensional environments. Lastly, we also demonstrate the application of the density function for control of a robotic system with safety constraints.
## VII Appendix
The proof of Theorem 1 relies on the following Lemma.
**Lemma 1**: _Consider the navigation density function as given in equation (4), then under Assumption 2, we have_
\[\nabla\cdot(\mathbf{k}(\mathbf{x})\rho(\mathbf{x})) \geq 0,\ \ a.e.\ \mathbf{x}\in\mathbf{X}, \tag{18}\] \[\nabla\cdot(\mathbf{k}(\mathbf{x})\rho(\mathbf{x})) \geq\xi>0\ \ \text{for}\ \ \mathbf{x}\in\mathbf{X}_{0}, \tag{19}\]
_where \(\mathbf{k}(\mathbf{x})=\nabla\rho(\mathbf{x})\) is the feedback control input as given in equation (11)._
**Proof:** We have
\[\nabla\cdot(\mathbf{k}(\mathbf{x})\rho(\mathbf{x}))=\rho(\mathbf{x})\nabla \cdot\mathbf{k}(\mathbf{x})+\frac{\partial\rho}{\partial\mathbf{x}}\frac{ \partial\rho}{\partial\mathbf{x}}^{\top}. \tag{20}\]
Since \(\rho(\mathbf{x})>0\) and \(\frac{\partial\rho}{\partial\mathbf{x}}\frac{\partial\rho}{\partial\mathbf{x} }^{\top}\geq 0\), the proof will follow if we can show that \(\nabla\cdot\mathbf{k}(\mathbf{x})\geq 0\). We have
\[\nabla\cdot\mathbf{k}(\mathbf{x})=\sum_{j=1}^{n}\frac{\partial^{2}\rho}{ \partial x_{j}^{2}}. \tag{21}\]
Letting \(\Psi(\mathbf{x})=\prod_{k=1}^{L}\Psi_{k}(\mathbf{x})\), we obtain
\[\frac{\partial^{2}\rho}{\partial x_{j}^{2}} =\frac{\partial}{\partial x_{j}}\left(-\frac{\alpha}{V^{\alpha+1}}\frac{\partial V}{\partial x_{j}}\Psi(\mathbf{x})+\frac{1}{V^{\alpha}}\frac{\partial\Psi}{\partial x_{j}}\right) \tag{22}\] \[=\frac{\alpha(\alpha+1)}{V^{\alpha+2}}\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\Psi(\mathbf{x})-\frac{2\alpha}{V^{\alpha+1}}\frac{\partial V}{\partial x_{j}}\frac{\partial\Psi}{\partial x_{j}}+\frac{1}{V^{\alpha}}\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}\] \[=\frac{\alpha}{V^{\alpha}}\left(\frac{(\alpha+1)}{V^{2}}\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\Psi(\mathbf{x})-\frac{2}{V}\frac{\partial V}{\partial x_{j}}\frac{\partial\Psi}{\partial x_{j}}+\frac{1}{\alpha}\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}\right).\]
It is important to note that the last two terms in the above expression are non-zero only in the transition region \(\mathbf{X}_{s_{k}}\). Outside this transition region, \(\frac{\partial\Psi}{\partial x_{j}}=0\) and \(\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}=0\), and hence equation (22) is non-negative.
We first show that equation (22) is non-negative in the transition region. For this, we make use of the following facts. First, \(\Psi_{k}(\mathbf{x})\geq\theta>0\) for \(k=1,\ldots,L\) and hence \(\Psi(\mathbf{x})\) is bounded away from zero. Second, from the construction of the functions \(\Psi(\mathbf{x})\) and \(V(\mathbf{x})\), there exist uniform bounds on \(\frac{\partial\Psi_{k}}{\partial x_{j}}\), \(\frac{\partial^{2}\Psi_{k}}{\partial x_{j}^{2}}\), and \(\frac{\partial V}{\partial x_{j}}\). Third, using Assumption 2, we know that the distance between the unsafe set and the target set is bounded away from zero by a positive constant \(\zeta\) and hence \(\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\) is bounded away from zero. Hence, the following bounds can be obtained for the \(\frac{\partial^{2}\rho}{\partial x_{j}^{2}}\) term
\[\frac{(\alpha+1)}{V^{2}}\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\Psi(\mathbf{x})\geq(\alpha+1)\bar{c}_{V}^{-2}\underline{c}_{V_{x}}^{2}\theta,\]
\[-\frac{2}{V}\frac{\partial V}{\partial x_{j}}\frac{\partial\Psi}{\partial x_{j}}\geq-2\underline{c}_{V}^{-1}\bar{c}_{V_{x}}\bar{c}_{\Psi_{x}},\ \ \ \frac{1}{\alpha}\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}\geq-\frac{\bar{c}_{\Psi_{x}^{2}}}{\alpha}.\]
Therefore, we have following lower bound for \(\frac{\partial^{2}\rho}{\partial x_{j}^{2}}\)
\[\frac{\partial^{2}\rho}{\partial x_{j}^{2}}\geq\frac{\alpha}{V^{\alpha}}\left((\alpha+1)\bar{c}_{V}^{-2}\underline{c}_{V_{x}}^{2}\theta-2\underline{c}_{V}^{-1}\bar{c}_{V_{x}}\bar{c}_{\Psi_{x}}-\frac{\bar{c}_{\Psi_{x}^{2}}}{\alpha}\right).\]
Hence, by choosing \(\alpha\) sufficiently large, of order \(\frac{1}{\theta}\), we can make the term inside the bracket positive.
To show that equation (19) is satisfied, we again make use of Assumption 2 and the fact that \(\Psi(\mathbf{x})=1\), \(\frac{\partial\Psi}{\partial x_{j}}=0\), and \(\frac{\partial^{2}\Psi}{\partial x_{j}^{2}}=0\) for \(\mathbf{x}\in\mathbf{X}_{0}\) and \(j=1,\ldots,n\). Further, \(\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\) is bounded away from zero. Hence, for \(\mathbf{x}\in\mathbf{X}_{0}\), we obtain
\[\nabla\cdot(\mathbf{k}(\mathbf{x})\rho(\mathbf{x}))=\frac{\partial\rho}{ \partial\mathbf{x}}\frac{\partial\rho}{\partial\mathbf{x}}^{\top}+\frac{\alpha( \alpha+1)}{V^{\alpha+2}}\left|\frac{\partial V}{\partial x_{j}}\right|^{2}\geq \xi>0\]
Fig. 8: (a) Robot (red) converges to the goal \((\pi,0)\) starting from equilibrium (0,0) while avoiding obstacles (gray). (b) state trajectories of the robot and (c) control inputs for executing the swing-up maneuver.
for some \(\xi>0\).
**Proof of Theorem 1:** Using the results of Lemma 1, we know that the density \(\rho\) satisfies
\[\nabla\cdot(\mathbf{k}(\mathbf{x})\rho(\mathbf{x}))=g(\mathbf{x}) \tag{23}\]
for some \(g(\mathbf{x})\geq 0\) such that \(g(\mathbf{x})\geq\xi>0\) for \(\mathbf{x}\in\mathbf{X}_{0}\).
Since \(\rho(\mathbf{x})\) satisfies the linear partial differential equation (23), it follows using the method of characteristics that the solution \(\rho(x)\) can be written in terms of the solution \(\mathbf{s}_{t}(\mathbf{x})\), of the system \(\dot{\mathbf{x}}=\mathbf{k}(\mathbf{x})\) as follows [28]
\[\rho(\mathbf{x})=\frac{\Psi(\mathbf{x})}{V^{\alpha}(\mathbf{x})}=\int_{0}^{ \infty}g(\mathbf{s}_{-t}(\mathbf{x}))\left|\frac{\partial\mathbf{s}_{-t}( \mathbf{x})}{\partial\mathbf{x}}\right|dt, \tag{24}\]
where \(|\cdot|\) is the determinant. The proof follows by substituting the integral formula for \(\rho(\mathbf{x})\) from (24) in (23) and using the fact that
\[\lim_{t\rightarrow\infty}g(\mathbf{s}_{-t}(\mathbf{x}))\left|\frac{\partial \mathbf{s}_{-t}(\mathbf{x})}{\partial\mathbf{x}}\right|=0. \tag{25}\]
The limit in (25) goes to zero since \(\rho(\mathbf{x})\) is bounded for all \(\mathbf{x}\in\mathbf{X}_{1}\), using Barbalat's Lemma. The integrand in (24) is given by the semigroup of linear Perron-Frobenius (P-F) operators, \(\mathbb{P}_{t}\), acting on the function \(g(\mathbf{x})\), and hence can be written compactly as
\[[\mathbb{P}_{t}g](\mathbf{x})=g(\mathbf{s}_{-t}(\mathbf{x}))\left|\frac{ \partial\mathbf{s}_{-t}(\mathbf{x})}{\partial\mathbf{x}}\right|. \tag{26}\]
Using (26), (24) can be written as
\[\rho(\mathbf{x})=\int_{0}^{\infty}[\mathbb{P}_{t}g](\mathbf{x})dt. \tag{27}\]
Furthermore, (25) can be written as
\[\lim_{t\rightarrow\infty}[\mathbb{P}_{t}g](\mathbf{x})=0\implies\lim_{t\rightarrow\infty}[\mathbb{P}_{t}\mathds{1}_{\mathbf{X}_{0}}](\mathbf{x})=0,\]
where \(\mathds{1}\mathbf{x}_{0}\) is the indicator function for set \(\mathbf{X}_{0}\). This implication follows because \(g(\mathbf{x})\geq\xi>0\) for all \(\mathbf{x}\in\mathbf{X}_{0}\) and from dominated convergence theorem. For any set \(A\subseteq\mathbf{X}_{1}\), we have
\[\int_{A}[\mathbb{P}_{t}\mathds{1}_{\mathbf{X}_{0}}](\mathbf{x})d\mathbf{x} =\int_{\mathbf{X}_{1}}[\mathbb{P}_{t}\mathds{1}_{\mathbf{X}_{0}}](\mathbf{x})\mathds{1}_{A}(\mathbf{x})d\mathbf{x}\] \[=\int_{\mathbf{X}_{1}}\mathds{1}_{\mathbf{X}_{0}}(\mathbf{x})\mathds{1}_{A}(\mathbf{s}_{t}(\mathbf{x}))d\mathbf{x}. \tag{28}\]
The above follows by using the definition of \(\mathbb{P}_{t}\) in (26) and change of variables in the integration, i.e., \(\mathbf{y}=\mathbf{s}_{-t}(\mathbf{x})\) and \(d\mathbf{y}=|\frac{\partial\mathbf{s}_{-t}(\mathbf{x})}{\partial\mathbf{x}}|d \mathbf{x}\) and after relabeling. Note that the right-hand side of (28) is nothing but
\[\int_{A}[\mathbb{P}_{t}\mathds{1}_{\mathbf{X}_{0}}](\mathbf{x})d\mathbf{x}=m \{\mathbf{x}\in\mathbf{X}_{0}:\mathbf{s}_{t}(\mathbf{x})\in A\}.\]
From Lebesgue dominated convergence theorem
\[0=\int_{A}\lim_{t\rightarrow\infty}[\mathbb{P}_{t}\mathds{1}_{\mathbf{X}_{0}} ](\mathbf{x})d\mathbf{x}\]
\[=\int_{\mathbf{X}_{1}}\mathds{1}_{\mathbf{X}_{0}}(\mathbf{x})\lim_{t \rightarrow\infty}\mathds{1}_{A}(\mathbf{s}_{t}(\mathbf{x}))d\mathbf{x}=m\{ \mathbf{x}\in\mathbf{X}_{0}:\mathbf{s}_{t}(\mathbf{x})\in A\}.\]
Since the above is true for any measurable and positive Lebesgue measure set \(A\subseteq\mathbf{X}_{1}:=\mathbf{X}\setminus\mathcal{B}_{\delta}\) for arbitrary small \(\delta\), we obtain
\[m\{\mathbf{x}\in\mathbf{X}_{0}:\lim_{t\rightarrow\infty}\mathbf{s}_{t}( \mathbf{x})\neq 0\}=0. \tag{29}\]
We next show that the unsafe set \(\mathbf{X}_{u_{k}}\) will be avoided by trajectories \(\mathbf{s}_{t}(\mathbf{x})\) starting from almost all w.r.t. Lebesgue measure initial condition \(\mathbf{x}\in\mathbf{X}_{0}\). We have for \(\mathbf{x}\in\mathbf{X}_{u_{k}}\)
\[\rho(\mathbf{x})=\frac{\Psi_{k}(\mathbf{x})}{V^{\alpha}}=\frac{\theta}{V^{ \alpha}}. \tag{30}\]
Following Assumption 2 (equation (9)), we have
\[\rho(\mathbf{x})=\frac{\theta}{V^{\alpha}}\leq\frac{\theta}{V_{min}^{k}}. \tag{31}\]
Using the above bound on \(\rho(\mathbf{x})\), we obtain
\[\mathbf{G}:=\int_{\mathbf{X}_{u_{k}}}\int_{0}^{\infty}[\mathbb{P}_{t}\mathds{1 }_{\mathbf{X}_{0}}](\mathbf{x})dtd\mathbf{x}=\int_{\mathbf{X}_{u_{k}}}\rho( \mathbf{x})d\mathbf{x}\leq\frac{\theta}{V_{min}^{k}}m(\mathbf{X}_{u_{k}}),\]
where \(m(\cdot)\) is the Lebesgue measure. Utilizing that \(d\mathbf{y}=|\frac{\partial\mathbf{s}_{-t}(\mathbf{x})}{\partial\mathbf{x}}|d \mathbf{x}\), which is described through the definition of \(\mathbb{P}_{t}\) and performing a change of variable \(\mathbf{y}=\mathbf{s}_{-t}(\mathbf{x})\), we can use the bounds on \(\rho(\mathbf{x})\) in (31) for \(\mathbf{x}\in\mathbf{X}_{u_{k}}\) to obtain
\[\mathbf{G}=\int_{\mathbf{X}_{1}}\mathds{1}_{\mathbf{X}_{0}}(\mathbf{y})\int_{0 }^{\infty}\mathds{1}_{\mathbf{X}_{u_{k}}}(\mathbf{s}_{t}(\mathbf{y}))dtd \mathbf{y}\leq\frac{\theta}{V_{min}^{k}}m(\mathbf{X}_{u_{k}}).\]
The time integral on the left-hand side is the time spent by system trajectories starting from the initial set \(\mathbf{X}_{0}\) in the unsafe set \(\mathbf{X}_{u_{k}}\). Let this time be denoted by \(T(\mathbf{y})\). Hence, we obtain
\[\int_{\mathbf{X}_{1}}T(\mathbf{y})\mathds{1}_{\mathbf{X}_{0}}(\mathbf{y})d \mathbf{y}\leq\frac{\theta}{V_{min}^{k}}m(\mathbf{X}_{u_{k}}).\]
Following Assumption 2 (equation (10)), we have
\[\theta\leq\varepsilon\frac{V_{min}^{k}}{m(\mathbf{X}_{u_{k}})}\implies\int_{ \mathbf{X}_{0}}T(\mathbf{y})d\mathbf{y}\leq\varepsilon,\]
for any given \(\varepsilon>0\).
Choose some \(\alpha<1\), then using Chebyshev's inequality and the fact that \(\mathbf{X}_{0}\subset\mathbf{X}_{1}\), we have
\[m\{\mathbf{x}\in\mathbf{X}_{0}:T(\mathbf{y})\geq\varepsilon^{\alpha}\}\leq \varepsilon^{-\alpha}\int_{\mathbf{X}_{0}}T(\mathbf{y})d\mathbf{y}\leq \varepsilon^{-\alpha+1}.\]
Since the above is true for arbitrary small \(\varepsilon>0\), we have
\[m\{\mathbf{x}\in\mathbf{X}_{0}:T(\mathbf{y})=\int_{0}^{\infty}\mathds{1}_{ \mathbf{X}_{u_{k}}}(\mathbf{s}_{t}(\mathbf{x}))dt>0\}=0. \tag{32}\]
Now we make use of the continuity property of the flow \(\mathbf{s}_{t}(\mathbf{x})\) w.r.t. time to show that \(\mathds{1}_{\mathbf{X}_{u_{k}}}(\mathbf{s}_{t}(\mathbf{x}))=0\) for all \(t\geq 0\). Assume not; then there exist \(\gamma\) and \(\bar{t}\) such that \(\mathds{1}_{\mathbf{X}_{u_{k}}}(\mathbf{s}_{\bar{t}}(\mathbf{x}))\geq\gamma>0\). Then, from the continuity of the solution \(\mathbf{s}_{t}(\mathbf{x})\) w.r.t. time, we know that there exists \(\Delta>0\) such that \(\mathds{1}_{\mathbf{X}_{u_{k}}}(\mathbf{s}_{t}(\mathbf{x}))>0\) for \(t\in[\bar{t},\bar{t}+\Delta]\). This violates (32). |
2310.19734 | Langlands duality on the Beilinson-Drinfeld Grassmannian | We calculate various categories of equivariant sheaves on the
Beilinson-Drinfeld Grassmannian in Langlands dual terms. For one, we obtain the
factorizable derived geometric Satake theorem. More generally, we calculate the
categorical analogue of unramified vectors in the Jacquet module of sheaves on
the Grassmannian.
In all cases, our spectral categories involve factorization modules for
factorization algebras related to the Langlands dual group. | Justin Campbell, Sam Raskin | 2023-10-30T16:58:35Z | http://arxiv.org/abs/2310.19734v2 | # Langlands duality on the Beilinson-Drinfeld Grassmannian
###### Abstract.
We calculate various categories of equivariant sheaves on the Beilinson-Drinfeld Grassmannian in Langlands dual terms. For one, we obtain the factorizable derived geometric Satake theorem. More generally, we calculate the categorical analogue of unramified vectors in the Jacquet module of sheaves on the Grassmannian.
In all cases, our spectral categories involve factorization modules for factorization algebras related to the Langlands dual group.
###### Contents
* 1 Introduction
* 2 Renormalizing crystals of categories
* 3 Preliminaries on factorization
* 4 A local acyclity theorem
* 5 Spectral Hecke categories
* 6 Construction of the derived Satake transform
* 7 The case of a torus
* 8 Factorization modules at a point
* 9 The equivalence for \(P=G\)
* 10 The equivalence for a proper parabolic
## 1. Introduction
### What is this paper about?
Our main objectives in this paper are as follows:
* Realize a strategy of Gaitsgory-Lurie giving a natural construction of the derived geometric Satake equivalence [1] (cf. the first footnote in _loc. cit._).
* Extend derived geometric Satake from the usual affine Grassmannian to the Beilinson-Drinfeld (or factorizable) affine Grassmannian.
* Extend the Arkhipov-Bezrukavnikov-Ginzburg equivalence [1] to the Beilinson-Drinfeld affine Grassmannian.
In fact, we unify the latter two subjects with a mutual generalization to parabolic subgroups \(P\) of \(G\); for \(P=G\) we obtain factorizable derived Satake, while for \(P=B\) we obtain factorizable Arkhipov-Bezrukavnikov-Ginzburg.
These results have been widely anticipated. In particular, Gaitsgory has spoken and written about our main results for some time, cf. [1] §4.7 and §6.6.3, [1] §10.3.3, [2] Talk V.2 Conjecture 1, [1] Program §4.1.1, and [1] Remark 12.6.7 to name a few.
However, the questions we consider have previously resisted precise formulations. In the above sources, one finds no definitions of the spectral (or _Langlands dual_) sides of the equivalences under consideration. So although our results have been anticipated, we can find no precisely formulated conjectures in the literature.
The stack \((\mathbb{B}\check{G})^{\mathbb{S}^{2}}\) may be calculated more explicitly as
\[(\mathbb{B}\check{G})^{\mathbb{S}^{2}}=\mathbb{B}\check{G}\underset{(\mathbb{B}\check{G})^{\mathbb{S}^{1}}}{\times}\mathbb{B}\check{G}=\mathbb{B}\check{G}\underset{(\check{G}/\check{G})}{\times}\mathbb{B}\check{G}=(\Omega_{1}\check{G})/\check{G}\]
where
\[\Omega_{1}\check{G}:=\operatorname{Spec}(k)\underset{\check{G}}{\times}\operatorname{Spec}(k)\]
is the DG group of automorphisms of the identity in \(\check{G}\). Using logarithms, one finds that
\[\Omega_{1}\check{G}=\Omega_{0}\check{\mathfrak{g}}=\operatorname{Spec}(\operatorname{Sym}(\check{\mathfrak{g}}[1])),\]
so we can interpret the functor (1.3.2) as a functor
\[\mathsf{Sph}_{G}\longrightarrow\operatorname{Sym}(\check{\mathfrak{g}}[1])\text{--}\mathsf{mod}(\operatorname{Rep}(\check{G})).\]
The derived Satake theorem then says that this functor is _almost_ an equivalence. More precisely, the derived Satake theorem of [1] says that this is true _up to renormalization_ - we should use the _renormalized Satake category_ and the category \(\mathsf{IndCoh}((\mathbb{B}\check{G})^{\mathbb{S}^{2}})\).
To summarize, their heuristic says to follow the recipe:
1. Use the factorization action of \(\mathsf{Sph}_{G}\) on \(\operatorname{Rep}(\tilde{G})\) to obtain a functor to a suitable analogue of the \(\mathbb{E}_{2}\)-center of \(\operatorname{Rep}(\tilde{G})\).
2. Settle the homological algebra issues to obtain a functor that is plausibly an equivalence.
3. Perform an explicit calculation to prove that the functor so obtained is an equivalence.
_Remark 1.3.1_.: In [1], one finds that many of the difficulties have to do with constructing the functor. The Gaitsgory-Lurie idea is of a markedly different nature than what Bezrukavnikov-Finkelberg considered.
### Centers and \(\mathbb{E}_{n}\)-modules
We will reinterpret centers (and more generally: centralizers) in terms of modules over rings (or near enough). This material is not necessary, but we find it plays an important motivating role for our main results.
Suppose that \(\mathcal{C}\) and \(\mathcal{D}\) are symmetric monoidal categories.
Given an \(\mathbb{E}_{n}\)-monoidal functor \(F:\mathcal{C}\to\mathcal{D}\), its _\(\mathbb{E}_{n}\)-centralizer_\(Z_{\mathbb{E}_{n}}(F)\) is the category
\[\mathsf{Hom}_{\mathcal{C}\text{-}\mathsf{mod}_{\mathbb{E}_{n}}}(\mathcal{C},\mathcal{D}),\]
i.e., the \(\mathbb{E}_{n}\)-Hochschild cohomology of \(\mathcal{D}\) as an \(\mathbb{E}_{n}\)-module category for \(\mathcal{C}\). Its key property is that an \(\mathbb{E}_{n}\)-map \(\mathcal{E}\to Z_{\mathbb{E}_{n}}(F)\) is equivalent to an \(\mathbb{E}_{n}\)-map \(\mathcal{C}\otimes\mathcal{E}\to\mathcal{D}\) whose composition
\[\mathcal{C}\xrightarrow{\operatorname{id}_{\mathcal{C}}\otimes\mathbb{1}_{ \mathcal{E}_{n}}}\mathcal{C}\otimes\mathcal{E}\longrightarrow\mathcal{D}\]
is \(F\) as an \(\mathbb{E}_{n}\)-functor. (We recover the \(\mathbb{E}_{n}\)-center of \(\mathcal{C}\) by taking the centralizer of \(\operatorname{id}_{\mathcal{C}}\).)
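For orientation (a standard special case, recorded here only as a sanity check and not used in the sequel): when \(n=1\) and \(F=\operatorname{id}_{\mathcal{C}}\), an \(\mathbb{E}_{1}\)-module category for \(\mathcal{C}\) is a \(\mathcal{C}\)-bimodule category, and the centralizer
\[Z_{\mathbb{E}_{1}}(\operatorname{id}_{\mathcal{C}})=\mathsf{Hom}_{\mathcal{C}\text{-}\mathsf{bimod}}(\mathcal{C},\mathcal{C})\]
is the Drinfeld center of \(\mathcal{C}\).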
We now give an alternative to \(\mathbb{E}_{n}\)-centralizers (that coincides in some cases). First, one may form \(\mathsf{Hom}(\mathcal{C},\mathcal{D})\), the category of continuous DG functors from \(\mathcal{C}\) to \(\mathcal{D}\). Using Day convolution, this category is naturally symmetric monoidal as well.
We remind the reader that \(\mathbb{E}_{n}\)-algebras in \(\mathsf{Hom}(\mathcal{C},\mathcal{D})\) are the same as (left) lax \(\mathbb{E}_{n}\)-monoidal (continuous DG) functors from \(\mathcal{C}\) to \(\mathcal{D}\). A variant: given \(F:\mathcal{C}\to\mathcal{D}\) a lax \(\mathbb{E}_{n}\)-monoidal functor, thought of as an \(\mathbb{E}_{n}\)-algebra in \(\mathsf{Hom}(\mathcal{C},\mathcal{D})\), an \(\mathbb{E}_{n}\)-module for \(F\) in \(\mathsf{Hom}(\mathcal{C},\mathcal{D})\) is a functor \(G:\mathcal{C}\to\mathcal{D}\) of lax \(\mathbb{E}_{n}\)-module categories for \(\mathcal{C}\). (In fact, this variant can be put on equal footing with its predecessor using general colored operads.)
Now suppose that \(\mathcal{C}\) is _rigid_ (symmetric) monoidal. Then recall that a _lax_ linear morphism of \(\mathcal{C}\)-module categories is automatically linear - this is standard for left module categories, but holds just the same for \(\mathbb{E}_{n}\)-module categories. Therefore, in the above discussion, we have
\[Z_{\mathbb{E}_{n}}(F):=\mathsf{Hom}_{\mathcal{C}\text{-}\mathsf{mod}_{\mathbb{E}_{n}}}(\mathcal{C},\mathcal{D})=\mathsf{Hom}_{\mathcal{C}\text{-}\mathsf{mod}_{\mathbb{E}_{n}}^{\mathrm{lax}}}(\mathcal{C},\mathcal{D})\]
where \(\mathcal{C}\text{-}\mathsf{mod}_{\mathbb{E}_{n}}^{\text{lax}}\) is the category of _lax_\(\mathbb{E}_{n}\)-module categories for \(\mathcal{C}\). In particular, we see that when \(\mathcal{C}\) is rigid, we can extend the definition of \(\mathbb{E}_{n}\)-centralizer to allow \(F\) to be merely lax \(\mathbb{E}_{n}\)-monoidal. Moreover, using the preceding paragraph, we can think of \(Z_{\mathbb{E}_{n}}(F)\) as \(\mathbb{E}_{n}\)-modules for \(F\) in the symmetric monoidal category \(\mathsf{Hom}(\mathcal{C},\mathcal{D})\).
We now make one additional manipulation, continuing to assume \(\mathcal{C}\) is rigid. Then recall that \(\mathcal{C}\) is canonically self-dual as a DG category. In particular, we obtain
\[\mathsf{Hom}(\mathcal{C},\mathcal{D})\simeq\mathcal{C}^{\vee}\otimes\mathcal{ D}\simeq\mathcal{C}\otimes\mathcal{D}\]
where one easily finds2 that the composite equivalence is symmetric monoidal.
Footnote 2: For completeness, we supply the argument. Giving a lax symmetric monoidal functor \(\mathcal{E}\to\mathsf{Hom}(\mathcal{C},\mathcal{D})\) is the same as giving a lax symmetric monoidal functor \(\mathcal{C}\otimes\mathcal{E}\to\mathcal{D}\). For \(\mathcal{E}=\mathcal{C}\otimes\mathcal{D}\), we have the lax symmetric monoidal functor \(\mathcal{C}\otimes\mathcal{E}=\mathcal{C}\otimes\mathcal{C}\otimes\mathcal{D}\)\(\xrightarrow{\mathsf{m}\otimes\mathsf{id}}\)\(\mathcal{C}\otimes\mathcal{D}\)\(\xrightarrow{\mathsf{Hom}_{\mathcal{C}}(\mathds{1}_{c},-)\otimes\mathsf{id}}\)\(\mathcal{D}\) (the key point being that \(\mathds{1}\) is compact and \(\operatorname{Hom}_{\mathcal{C}}(\mathds{1},-):\mathcal{C}\to\operatorname{Vect}\) is lax symmetric monoidal). This gives rise to the lax symmetric monoidal functor \(\mathcal{C}\otimes\mathcal{D}\to\mathsf{Hom}(\mathcal{C},\mathcal{D})\). By definition, this is the composite equivalence from before. One can check that this lax symmetric monoidal equivalence is actually symmetric monoidal.
To summarize, we obtain the following under the rigidity assumption on \(\mathcal{C}\). Below, for \(F:\mathcal{C}\to\mathcal{D}\) (possibly lax) \(\mathbb{E}_{n}\)-monoidal, we let \(\mathcal{K}_{F}\in\mathcal{C}\otimes\mathcal{D}\) be the corresponding object.
1. The \(\mathbb{E}_{n}\)-center of \(\mathcal{C}\) is the category of \(\mathbb{E}_{n}\)-modules for \(\mathcal{K}_{\text{id}_{\mathcal{C}}}\in\mathbb{E}_{n}\text{-}\mathsf{alg}(\mathcal{C}\otimes\mathcal{C})\).
2. More generally, for \(F:\mathcal{C}\to\mathcal{D}\) lax \(\mathbb{E}_{n}\)-linear, the \(\mathbb{E}_{n}\)-centralizer of \(F\) is the category of \(\mathbb{E}_{n}\)-modules for \(\mathcal{K}_{F}\in\mathbb{E}_{n}\text{-}\mathsf{alg}(\mathcal{C}\otimes \mathcal{D})\).
In particular, we have expressed \(\mathbb{E}_{n}\)-centers and centralizers in terms of \(\mathbb{E}_{n}\)-modules.
_Example 1.4.1_.: For \(\mathcal{C}=\mathsf{Rep}(\check{G})\), we find that \(Z_{\mathbb{E}_{2}}(\mathsf{Rep}(\check{G}))\) is the category of \(\mathbb{E}_{2}\)-modules for the regular representation \(\mathcal{O}_{\check{G}}\in\mathsf{ComAlg}(\mathsf{Rep}(\check{G}\times\check {G}))\).
In §1.6 below, we will extend this discussion to the setting where there is a parabolic subgroup; we remark that there, we actually do consider \(\mathbb{E}_{2}\)-modules for lax monoidal functors and use centralizers rather than centers.
_Remark 1.4.2_.: A technical remark: in [11] §6, it was observed that factorization categories attached to _rigid_ symmetric monoidal categories have much more favorable properties than general factorization categories. We consider the use of rigidity in the above discussion to be related to this observation, but here working purely in the \(\mathbb{E}_{2}\)-setting.
### Chiral setting
Following standard analogies, we consider chiral (or factorization) algebras as de Rham analogues of \(\mathbb{E}_{2}\)-algebras (which are Betti objects), and similarly for \(\mathbb{E}_{2}\)-modules.
Recall from [11] that any symmetric monoidal category naturally gives rise to a factorization category (analogous to thinking of an \(\mathbb{E}_{\infty}\)-category as an \(\mathbb{E}_{2}\)-category), and any commutative algebra in a symmetric monoidal category gives rise to a factorization algebra in the associated factorization category (analogous to thinking of an \(\mathbb{E}_{\infty}\)-algebra as an \(\mathbb{E}_{2}\)-algebra).
We can then form the category of factorization modules
\[\mathsf{Sph}^{\text{spec,naive}}:=\mathcal{O}_{\check{G}}\text{-}\mathsf{mod }^{\text{fact}}(\mathsf{Rep}(\check{G}\times\check{G})).\]
In §2 and §5, we explain how to perform homological algebra corrections to \(\mathsf{Sph}^{\text{spec,naive}}\) (analogous to replacing \(\mathsf{QCoh}\) by \(\mathsf{IndCoh}\)) to obtain a factorization category \(\mathsf{Sph}^{\text{spec}}_{\check{G}}\).
Following Gaitsgory, we refer in this paper to this process of correcting homological defects as _renormalization_. The specific technical issues we face here are novel and significant. Their resolution occupies a large portion of this work.
In §6, we explain how the Gaitsgory-Lurie paradigm plays out in the chiral setting. Roughly speaking, their strategy for constructing the derived Satake functor goes through without hiccups in our setting. The factorizable derived Satake equivalence itself is stated as Theorem 6.6.1.
Our proof that the functor is an equivalence takes up §9. We do this by reducing to the case that \(G=T\) is a torus; in §7 we verify this case directly. Our process of reduction to the torus uses the study of Jacquet functors from [11]-[11].
### The Gaitsgory-Lurie heuristic (parabolic case)
As far as we are aware, Gaitsgory and Lurie did not observe that a similar strategy also applies in the presence of a parabolic \(P\) of \(G\).
Let \(N_{P}\) denote the unipotent radical of \(P\) and let \(M=P/N_{P}\). On the geometric side, we consider the factorization category
\[\mathsf{D}(\operatorname{Gr}_{G})_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\]
where on the right hand side, we have formed the _coinvariant category_, cf. [11]. By [10], we note that this category can also be thought of as
\[\mathsf{D}(\operatorname{Gr}_{G})^{\mathfrak{L}N_{P^{-}\mathfrak{L}^{+}M}}\]
i.e., _invariants_ for the _opposite_ parabolic.
On the spectral side, there are _two_ natural candidates to consider, attached to two lax symmetric monoidal functors
\[\operatorname{Chev}_{\Omega},\operatorname{Chev}_{\Upsilon}:\mathsf{Rep}( \check{G})\longrightarrow\mathsf{Rep}(\check{M}).\]
Both functors come by first restricting a representation from \(\check{G}\) to \(\check{P}\) and then applying a functor \(\mathsf{Rep}(\check{P})\rightarrow\mathsf{Rep}(\check{M})\).
For \(\operatorname{Chev}_{\Omega}\), the relevant functor \(\mathsf{Rep}(\check{P})\rightarrow\mathsf{Rep}(\check{M})\) is (derived) invariants for \(\check{N}_{\check{P}}\).
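To orient the reader (a standard special case, stated here only for illustration and not taken from the discussion above): for \(P=G\) the group \(\check{N}_{\check{P}}\) is trivial and \(\operatorname{Chev}_{\Omega}\) is simply the identity of \(\mathsf{Rep}(\check{G})\); for \(P=B\), so that \(M=T\), derived \(\check{N}\)-invariants in characteristic zero compute Lie algebra cohomology, and one expects
\[\operatorname{Chev}_{\Omega}(V)\simeq\operatorname{C}^{\bullet}(\check{\mathfrak{n}},V),\qquad V\in\mathsf{Rep}(\check{G}),\]
regarded as a (graded) representation of \(\check{T}\).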
The functor \(\operatorname{Chev}_{\Upsilon}\) is more complicated to define. Consider \(\check{\mathfrak{n}}_{\check{P}}\) as a Lie algebra in \(\mathsf{Rep}(\check{M})\). We consider
\[Z_{\mathsf{Rep}(\check{M}),\mathbb{E}_{1}}(\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))),\]
i.e., the \(\mathbb{E}_{1}\)-center (alias: Drinfeld center) in the sense of \(\mathsf{Rep}(\check{M})\)-linear monoidal categories. Because \(\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))\) is itself symmetric monoidal, there is a natural functor
\[\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))\longrightarrow Z_{\mathsf{Rep}(\check{M}),\mathbb{E}_{1}}(\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))).\]
We remark that one may identify \(Z_{\mathsf{Rep}(\check{M}),\mathbb{E}_{1}}(\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M})))\) more explicitly with
\[(\check{\mathfrak{n}}_{\check{P}}\otimes\operatorname{C}^{\cdot}(\mathbb{S}^{1}))\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))=U_{\mathbb{E}_{2}}(\check{\mathfrak{n}}_{\check{P}})\text{-}\mathsf{mod}_{\mathbb{E}_{2}}(\mathsf{Rep}(\check{M})).\]
(We highlight that these later expressions are closer to those used in [11]-[11] and in the body of this paper, although they are perhaps less conceptual.) Then \(\operatorname{Chev}_{\Upsilon}\) comes by composing the forgetful functor \(\mathsf{Rep}(\check{G})\rightarrow\mathsf{Rep}(\check{P})\) with the composite functor
\[\mathsf{Rep}(\check{P})\subset\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))\longrightarrow Z_{\mathsf{Rep}(\check{M}),\mathbb{E}_{1}}(\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M})))\stackrel{{\operatorname{oblv}}}{{\longrightarrow}}\check{\mathfrak{n}}_{\check{P}}\text{-}\mathsf{mod}(\mathsf{Rep}(\check{M}))\stackrel{{\operatorname{oblv}}}{{\longrightarrow}}\mathsf{Rep}(\check{M}).\]
We expect the \(\mathbb{E}_{2}\)-centralizers \(Z_{\mathbb{E}_{2}}(\operatorname{Chev}_{\Omega})\) and \(Z_{\mathbb{E}_{2}}(\operatorname{Chev}_{\Upsilon})\) of these functors to be "approximately" equivalent, i.e., equivalent up to issues of completion, by a Koszul duality procedure.
Although \(\operatorname{Chev}_{\Omega}\) is easier to define and arguably is the better object, we use (the de Rham analogue of) \(\operatorname{Chev}_{\Upsilon}\). There are several reasons for this, which we take a moment to record:
* One should think that \(\operatorname{Chev}_{\Upsilon}\) is adapted to \(\mathfrak{L}N_{P}\mathfrak{L}^{+}M\)-coinvariants and \(\operatorname{Chev}_{\Omega}\) is adapted to invariants. One may use either invariants or coinvariants due to [10], so we are similarly free to use either \(\operatorname{Chev}_{\Upsilon}\) or \(\operatorname{Chev}_{\Omega}\).
* The paper [11] uses \(\operatorname{Chev}_{\Upsilon}\), which makes our references somewhat easier.
However, we remark that in the _quantum_ setting, more recent papers in the subject, such as [12], [13], and [14] prefer \(\operatorname{Chev}_{\Omega}\).
* When working factorizably (as opposed to at a single point), it appears to be much more technically convenient to use \(\operatorname{Chev}_{\Upsilon}\) rather than \(\operatorname{Chev}_{\Omega}\). This was implicit in [10], and holds again in our present work, particularly as concerns renormalization, ULA properties, and the use of t-structures.
With those remarks out of the way, we note that there is a pairing
\[\mathsf{D}(\operatorname{Gr}_{G})_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M} \otimes\mathsf{D}(\operatorname{Gr}_{G})^{\mathfrak{L}N^{-},\psi}\longrightarrow \left(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\right)^{ \mathfrak{L}N^{-},\psi}\]
by convolution. One can further \(!\)-restrict to:
\[\left(\mathsf{D}(\mathfrak{L}(P\times N_{M}^{-}))_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\right)^{\mathfrak{L}N^{-},\psi}=\mathsf{D}(\operatorname{Gr}_{M})^{\mathfrak{L}N_{M}^{-},\psi}\]
for \(N_{M}^{-}:=N^{-}\cap M\). Applying the equivalence (1.3.1) for \(G\) and \(M\), we can rewrite the resulting functor as a pairing
\[\mathsf{D}(\operatorname{Gr}_{G})_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M} \otimes\mathsf{Rep}(\check{G})\longrightarrow\mathsf{Rep}(\check{M}).\]
By [10] Theorem 4.15.1, the induced functor
\[\mathsf{Rep}(\check{G})\longrightarrow\mathsf{Rep}(\check{M})\]
obtained by pairing with the unit object in \(\mathsf{D}(\operatorname{Gr}_{G})_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\) is the functor \(\operatorname{Chev}_{\Upsilon}\) (or rather, its de Rham counterpart). If we imagined we were in the \(\mathbb{E}_{2}\)-setting, this would yield a functor
\[\mathsf{D}(\operatorname{Gr}_{G})_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M} \longrightarrow Z(\operatorname{Chev}_{\Upsilon})\]
that ought to be an equivalence up to issues of renormalization.
Using essentially the same discussion as in §1.5, we formulate and prove a purely de Rham version of exactly this assertion, Theorem 6.12.3. As we discuss in the paper, on underlying categories (i.e., forgetting factorization structures), we recover (by different means) the main theorem of [1].
_Remark 1.6.1_.: In truth, the references to [10] above are for \(P=B\). This is ultimately due to a choice in [1] to work only with the Borel. Work in progress by Fargeman-Hayashi is extending [1] to the case of general parabolics. As such, our results for general parabolics (besides \(B\) and \(G\)) are conditional on their work.
### Notation and conventions
We work over a field \(k\) of characteristic zero. We also assume that \(k\) is algebraically closed, which is not strictly necessary but simplifies some notations.
Denote by \(X\) a fixed smooth and connected (but not necessarily proper) curve over \(k\).
We fix a reductive group \(G\) over \(k\) and a Borel subgroup \(B\subset G\). Let \(N\subset B\) denote the unipotent radical of \(B\) and \(T=B/N\) the Cartan. Choose a splitting \(T\to B\), so that we can consider the opposite Borel \(B^{-}\), with unipotent radical \(N^{-}\). We will denote by \(P\) a standard parabolic, with unipotent radical \(N_{P}\), Levi subgroup \(M\), opposite parabolic \(P^{-}\), etc. The coweight lattice will be denoted by \(\Lambda=\operatorname{Hom}(\mathbb{G}_{m},T)\).
The Langlands dual group of \(G\) (over \(k\)) will be denoted by \(\check{G}\), with corresponding standard parabolics \(\check{P}\), etc.
We freely use the language of higher category theory as developed in [14] and [14], and usefully summarized in [11] Chapter 1. We will use the term _category_ to mean _\((\infty,1)\)-category_, and _DG category_ to mean _presentable stable \(k\)-linear category_. DG categories assemble into a category \(\mathsf{DGCat}\), which is symmetric monoidal with respect to the Lurie tensor product. The unit for this tensor product is the DG category Vect of (complexes of) \(k\)-vector spaces.
We work in the setting of derived algebraic geometry, i.e. our test objects are understood to be affine DG schemes over \(k\). For any prestack \(Y\) over \(k\), we denote by \(\mathsf{QCoh}(Y)\) the DG category of quasicoherent sheaves on \(Y\). If \(Y\) is locally almost of finite type, we can also consider the DG category \(\mathsf{IndCoh}(Y)\) of ind-coherent sheaves, whose theory is developed in [12] and [11]. We will denote the DG category of D-modules on such a prestack by \(\mathsf{D}(Y)\).
We will make use of the theory of D-modules on certain infinite-dimensional spaces such as loop groups, which has been developed in [16] and [17]. We use the \(\mathsf{D}^{*}\) version of this theory by default.
Finally, for a (possibly lax) prestack \(Y\), we will denote by \(\mathsf{ShvCat}(Y)\) the category of sheaves of categories on \(Y\). We refer the reader to [11] for more details on this notion, or [16] for the lax setting.
### Acknowledgements
As the above introduction should make clear, this paper develops an old idea of Dennis Gaitsgory and Jacob Lurie.
We warmly thank Dennis Gaitsgory for the boundless generosity with which he shares his ideas. We learned of his work with Lurie through conversations with him, but also learned the major technical tools used in this paper from him.
In addition, we thank Dima Arinkin, David Ben-Zvi, Dario Beraldo, Lin Chen, Gurbir Dhillon, Kevin Lin, Sergey Lysenko, and Nick Rozenblyum for enlightening conversations related to this material.
S.R. was supported by NSF grant DMS-2101984 and a Sloan Research Fellowship.
## 2. Renormalizing crystals of categories
Below \(S\) will denote a smooth quasicompact scheme over \(k\) unless otherwise specified. This section lays the technical foundations for renormalization in the presence of a crystal structure over \(S\), i.e. a \(\mathsf{D}(S)\)-module structure. Proposition 2.7.1 gives sufficient conditions under which the canonical renormalization of a \(\mathsf{D}(S)\)-module category with compatible t-structure admits a natural crystal structure. Definition 2.8.1 formulates what it means for a crystal of categories with compatible t-structure to be "almost ULA generated," and Proposition 2.8.2 says that the canonical renormalization of such a category is ULA generated.
The forgetful functor
\[\operatorname{oblv}_{S}:\mathsf{D}(S)\longrightarrow\mathsf{IndCoh}(S)\]
is symmetric monoidal, and hence can be viewed as a morphism of \(\mathsf{D}(S)\)-modules. Its left adjoint
\[\operatorname{ind}_{S}:\mathsf{IndCoh}(S)\longrightarrow\mathsf{D}(S)\]
therefore admits an oplax \(\mathsf{D}(S)\)-linear structure, which is easily seen to be strict. Given a \(\mathsf{D}(S)\)-module category \(\mathcal{C}\), we therefore obtain adjoint functors
\[\operatorname{ind}_{\mathcal{C}}:\mathsf{IndCoh}(S)\mathop{\otimes}_{\mathsf{ D}(S)}\mathcal{C}\rightleftarrows\mathcal{C}:\operatorname{oblv}_{\mathcal{C}}. \tag{2.1.1}\]
Moreover, for any \(\mathsf{D}(S)\)-linear functor \(F:\mathcal{C}\to\mathcal{D}\) we have commutative squares
\[\begin{CD} \mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C} @>{\operatorname{id}\otimes F}>> \mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{D}\\ @V{\operatorname{ind}_{\mathcal{C}}}VV @VV{\operatorname{ind}_{\mathcal{D}}}V\\ \mathcal{C} @>{F}>> \mathcal{D} \end{CD}\]
and
\[\begin{CD} \mathcal{C} @>{F}>> \mathcal{D}\\ @V{\operatorname{oblv}_{\mathcal{C}}}VV @VV{\operatorname{oblv}_{\mathcal{D}}}V\\ \mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C} @>{\operatorname{id}\otimes F}>> \mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{D} \end{CD}\]
**Proposition 2.1.1**.: _The adjunction (2.1.1) is both monadic and comonadic._
Proof.: For the monadicity it suffices to show that \(\operatorname{oblv}_{\mathcal{C}}\) is conservative, which is Lemma B.2.1 in [11].
We claim that the monad \(\operatorname{oblv}_{\mathcal{C}}\circ\operatorname{ind}_{\mathcal{C}}\) is _effective_ in the sense of [11] Definition 6.9.1, which will imply comonadicity of the adjunction by Remark 6.9.2 of _loc. cit._ The convolution action of the infinitesimal groupoid
\[\widehat{(S^{2})}_{\Delta}\cong S\underset{S_{\operatorname{dR}}}{\times}S\]
on \(S\) gives rise to an action of \(\operatorname{\mathsf{IndCoh}}(\widehat{(S^{2})}_{\Delta})\) on \(\operatorname{\mathsf{IndCoh}}(S)\), which in turn induces an equivalence of monoidal categories
\[\mathsf{IndCoh}(\widehat{(S^{2})}_{\Delta})\tilde{\longrightarrow}\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S)).\]
The natural t-structure on \(\operatorname{\mathsf{IndCoh}}(\widehat{(S^{2})}_{\Delta})\) induces a t-structure on \(\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S))\), which we shift by \(\dim S\) so that the coconnective objects are exactly the left t-exact functors (i.e. we normalize so that the unit \(\operatorname{id}_{\operatorname{\mathsf{IndCoh}}(S)}\) belongs to the heart).
The monad \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) lifts to an associative algebra in \(\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S))\), which corresponds to the algebra of differential operators in \(\operatorname{\mathsf{IndCoh}}(\widehat{(S^{2})}_{\Delta})\). According to Lemma 6.17.1 of _loc. cit._, to prove that the monad is effective, it suffices to show that the following conditions are satisfied:
1. \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) is coconnective in \(\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S))\);
2. \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) is right flat, i.e. the functor \[\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S)) \longrightarrow\operatorname{\mathsf{End}}_{\mathsf{D}(S)}( \operatorname{\mathsf{IndCoh}}(S))\] \[F \mapsto\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\circ F\] is left t-exact;
3. the object \[\operatorname{\operatorname{cofib}}(\operatorname{id}_{\operatorname{ \mathsf{IndCoh}}(S)}\longrightarrow\operatorname{oblv}_{S}\circ \operatorname{ind}_{S})\] in \(\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S))\) is right flat and eventually connective.
All three of these conditions follow from the existence of the standard filtration on differential operators. Namely, the corresponding filtration on \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) in \(\operatorname{\mathsf{End}}_{\mathsf{D}(S)}(\operatorname{\mathsf{IndCoh}}( S))\) has associated graded given by tensoring with the locally free sheaf \(\operatorname{Sym}_{\mathbb{O}_{S}}\operatorname{T}(S)\) (using the action of \(\operatorname{\mathsf{QCoh}}(S)\) on \(\operatorname{\mathsf{IndCoh}}(S)\)).
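For instance (a minimal illustration under the simplifying assumption \(S=\mathbb{A}^{1}\), added only for orientation): identifying \(\mathsf{IndCoh}(S)\) with \(\mathsf{QCoh}(S)\) for smooth \(S\), the monad \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) is given, up to the usual left/right D-module twists, by tensoring with the algebra of differential operators \(k[x]\langle\partial_{x}\rangle\); its order filtration has associated graded
\[k[x][\xi]\;\simeq\;\operatorname{Sym}_{\mathcal{O}_{S}}\operatorname{T}(S),\]
against which the three conditions above can be checked directly.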
Recall that for any DG categories \(\mathcal{C}\) and \(\mathcal{D}\) equipped with t-structures, their tensor product \(\mathcal{C}\otimes\mathcal{D}\) admits a canonical t-structure, whose connective subcategory \((\mathcal{C}\otimes\mathcal{D})^{\leqslant 0}\) is generated under colimits by objects \(c\boxtimes d\) with \(c\) and \(d\) connective. If \(\mathcal{C}=\mathsf{D}(S_{1})\) and \(\mathcal{D}=\mathsf{D}(S_{2})\), then this t-structure makes the natural equivalence
\[\mathsf{D}(S_{1})\otimes\mathsf{D}(S_{2})\tilde{\longrightarrow}\mathsf{D}(S_{1}\times S_{2})\]
t-exact.
We will use the following technical lemma frequently. Recall that the t-structure on \(\mathcal{C}\) is said to be _compactly generated_ if the category \(\mathcal{C}^{\leqslant 0}\) is compactly generated.
**Lemma 2.2.1**.: _Let \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{D}\) be DG categories equipped with t-structures, and suppose we are given a functor \(F:\mathcal{C}_{1}\to\mathcal{C}_{2}\)._
1. _If_ \(F\) _is right t-exact, then so is_ \[F\otimes\operatorname{id}_{\mathcal{D}}:\mathcal{C}_{1}\otimes\mathcal{D} \longrightarrow\mathcal{C}_{2}\otimes\mathcal{D}.\]
2. _Suppose that_ \(F\) _is left t-exact, and that either_ 1. _the t-structures on_ \(\mathcal{C}_{1}\) _and_ \(\mathcal{C}_{2}\) _are compactly generated, or_ 2. _the t-structure on_ \(\mathcal{D}\) _is compactly generated._
_Then \(F\otimes\operatorname{id}_{\mathcal{D}}\) is left t-exact._
Proof.: This is a combination of Proposition A.1, Lemma A.6, and Lemma A.9 in [11].
### Compatible t-structures
The Verdier self-duality equivalence
\[\mathsf{D}(S)^{\vee}\cong\mathsf{D}(S)\]
interchanges the symmetric monoidal structure whose multiplication is given by \(\otimes^{!}\) with the co-commutative coalgebra structure on \(\mathsf{D}(S)\) whose comultiplication is given by
\[\Delta_{\operatorname{dR},*}:\mathsf{D}(S)\longrightarrow\mathsf{D}(S\times S )\cong\mathsf{D}(S)\otimes\mathsf{D}(S).\]
In particular, a \((\mathsf{D}(S),\otimes^{!})\)-module structure on a DG category \(\mathcal{C}\) is the same datum as a \((\mathsf{D}(S),\Delta_{\operatorname{dR},*})\)-comodule structure. In what follows we pass freely between the two.
Similarly to the case of D-modules, the Serre self-duality equivalence
\[\mathsf{IndCoh}(S)^{\vee}\cong\mathsf{IndCoh}(S)\]
interchanges the symmetric monoidal structure whose multiplication is given by \(\otimes^{!}\) with the co-commutative coalgebra structure on \(\mathsf{IndCoh}(S)\) whose comultiplication is given by \(\Delta_{*}\), so that an \((\mathsf{IndCoh}(S),\otimes^{!})\)-module structure is the same datum as an \((\mathsf{IndCoh}(S),\Delta_{*})\)-comodule structure.
_Definition 2.3.1_.: Given a \(\mathsf{D}(S)\)-module category \(\mathcal{C}\), we say that a t-structure on \(\mathcal{C}\) is _compatible with the \(\mathsf{D}(S)\)-module structure_ if the corresponding coaction map
\[\mathcal{C}\longrightarrow\mathsf{D}(S)\otimes\mathcal{C}\]
is t-exact. Similarly, if \(\mathcal{D}\) is an \(\mathsf{IndCoh}(S)\)-module category, we say that a t-structure on \(\mathcal{D}\) is _compatible with the \(\mathsf{IndCoh}(S)\)-module structure_ if the coaction map
\[\mathcal{D}\rightarrow\mathsf{IndCoh}(S)\otimes\mathcal{D}\]
is t-exact.
**Proposition 2.3.2**.: _If \(\mathcal{C}\) is a \(\mathsf{D}(S)\)-module category equipped with a compatible t-structure, then the category \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) admits a unique t-structure such that \(\operatorname{ind}_{\mathcal{C}}\) is t-exact. Moreover, this t-structure makes \(\operatorname{oblv}_{\mathcal{C}}\) t-exact and is compatible with the \(\mathsf{IndCoh}(S)\)-action._
Proof.: We claim that the comonad \(\operatorname{ind}_{\mathcal{C}}\circ\operatorname{oblv}_{\mathcal{C}}\) is t-exact. In view of the previous proposition, this will imply that there is a unique t-structure on \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) which makes \(\operatorname{ind}_{\mathcal{C}}\) t-exact (because the comonad is left t-exact), and moreover that \(\operatorname{oblv}_{\mathcal{C}}\) is t-exact with respect to this t-structure (because the comonad is t-exact).
The cartesian square
\[\begin{CD} S\underset{S_{\operatorname{dR}}}{\times}S @>>> S\\ @VVV @VVV\\ S @>>> S_{\operatorname{dR}} \end{CD}\]
gives rise to a commutative square
where the functors are given by \(\mathsf{IndCoh}\) direct image, and base change implies that the square
\[\begin{CD} \mathsf{D}(S) @>>> \mathsf{D}(S)\otimes\mathsf{D}(S)\\ @VVV @VVV\\ \mathsf{D}(S) @>>> \mathsf{D}(S)\otimes\mathsf{D}(S) \end{CD}\]
commutes as well.
### Almost compact generation
This section borrows heavily from the appendix to [1]. We warn the reader that our terminology does not agree with _loc. cit._ in some places. For example, we use the term "coherent t-structure" for a weaker notion than theirs.
Let \(\mathcal{C}\) be a DG category equipped with a t-structure. Recall that an object \(c\) in \(\mathcal{C}\) is called _almost compact_ if its truncation \(\tau^{\geqslant n}c\) is compact in \(\mathcal{C}^{\geqslant n}\) for any \(n\in\mathbb{Z}\). If \(c\) is eventually coconnective and almost compact, then \(c\) is called _coherent_. Denote by \(\mathcal{C}^{\mathrm{coh}}\subset\mathcal{C}\) the (non-cocomplete) full subcategory consisting of coherent objects.
If the t-structure on \(\mathcal{C}\) is right complete, then any almost compact object is eventually connective, and hence any coherent object is cohomologically bounded.
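For example (a standard case, recalled only for orientation): if \(A\) is a classical Noetherian commutative ring, then an object of \(A\)-\(\mathsf{mod}\) is coherent in the above sense exactly when it is cohomologically bounded with finitely generated cohomology modules, so that \((A\text{-}\mathsf{mod})^{\mathrm{coh}}\) is the usual bounded coherent derived category of \(A\).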
We will use the following technical lemma repeatedly.
**Lemma 2.4.1**.: _Let \(\mathcal{C}\) and \(\mathcal{D}\) be DG categories equipped with t-structures which are compatible with filtered colimits and right complete. Suppose that \(F:\mathcal{C}\to\mathcal{D}\) is t-exact and admits a continuous right adjoint, and moreover that \(F\) is conservative on \(\mathcal{C}^{+}\). If \(c\) belongs to \(\mathcal{C}^{+}\) and \(F(c)\) is almost compact, then \(c\) is almost compact (equivalently, coherent)._
Proof.: See Step 3 in the proof of Lemma 6.11.2 in [10].
_Definition 2.4.2_.: We say that \(\mathcal{C}\) is _coherent_ with respect to its t-structure if the inclusion
\[\mathcal{C}^{\geqslant 0}\longrightarrow\mathcal{C}^{\geqslant-1}\]
preserves compact objects.
For example, if \(A\) is a classical commutative algebra, then \(A\)-\(\mathsf{mod}\) with its usual t-structure is coherent if and only if \(A\) is coherent, i.e. every finitely generated ideal is finitely presented.
**Proposition 2.4.3**.: _Suppose that the t-structure on \(\mathcal{C}\) is compatible with filtered colimits. Then \(\mathcal{C}\) is coherent if and only if almost compact objects in \(\mathcal{C}\) are stable under truncation._
In particular, if \(\mathcal{C}\) is coherent then the subcategory \(\mathcal{C}^{\mathrm{coh}}\) is stable under truncation functors, and hence inherits a t-structure from \(\mathcal{C}\). This t-structure extends uniquely to a t-structure on the _canonical renormalization_
\[\mathcal{C}^{\mathrm{ren}}:=\mathsf{Ind}(\mathcal{C}^{\mathrm{coh}})\]
which is compatible with filtered colimits. The unique continuous extension
\[\mathcal{C}^{\mathrm{ren}}\longrightarrow\mathcal{C}\]
of the inclusion \(\mathcal{C}^{\mathrm{coh}}\subset\mathcal{C}\) is t-exact.
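To orient the reader (a standard example, not needed in the sequel): for \(A\) a classical Noetherian commutative ring, the example above identifies \((A\text{-}\mathsf{mod})^{\mathrm{coh}}\) with the bounded coherent derived category of \(A\), so the canonical renormalization is
\[(A\text{-}\mathsf{mod})^{\mathrm{ren}}=\mathsf{Ind}(\mathsf{Coh}(A)),\]
i.e. the passage from \(\mathsf{QCoh}(\operatorname{Spec}A)\) to \(\mathsf{IndCoh}(\operatorname{Spec}A)\).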
_Definition 2.4.4_.: We call \(\mathcal{C}\)_almost compactly generated_ with respect to its t-structure if it satisfies the following conditions:
1. the t-structure is compatible with filtered colimits and right complete;
2. \(\mathcal{C}\) is coherent;
3. \(\mathcal{C}^{\geqslant 0}\) is compactly generated.
_Remark 2.4.5_.: Conditions (i) and (iii) are usually rather easy to verify in practice, while coherence is more subtle. One particularly bothersome example of this is the fact that the tensor product of coherent categories need not be coherent; for instance, it is well-known that a ring of polynomials with coefficients in a coherent ring need not be coherent.
The canonical renormalization behaves well for almost compactly generated categories.
**Proposition 2.4.6**.: _If \(\mathcal{C}\) is almost compactly generated, then the t-exact functor \(\mathcal{C}^{\mathrm{ren}}\to\mathcal{C}\) restricts to an equivalence_
\[\mathcal{C}^{\mathrm{ren},+}\tilde{\longrightarrow}\mathcal{C}^{+}.\]
The next lemma supplies a convenient criterion for almost compact generation.
**Lemma 2.4.7**.: _Suppose that \(\mathcal{C}\) is equipped with a t-structure such that_
1. _the t-structure is compatible with filtered colimits and right complete;_
2. _the category_ \(\mathcal{C}^{\heartsuit}\) _is compactly generated;_
3. _any compact object in_ \(\mathcal{C}^{\heartsuit}\) _is almost compact in_ \(\mathcal{C}\)_._
_Then \(\mathcal{C}\) is almost compactly generated._
Suppose that \(\mathcal{C}\) and \(\mathcal{D}\) are DG categories equipped with t-structures. We will write
\[\operatorname{Fun}_{\operatorname{fha}}(\mathcal{C},\mathcal{D})\subset \operatorname{Fun}(\mathcal{C},\mathcal{D})\]
for the full subcategory consisting of exact continuous functors of finite homological amplitude, or equivalently those exact continuous functors which send \(\mathcal{C}^{+}\) into \(\mathcal{D}^{+}\). Denote by
\[\operatorname{Fun}_{+\operatorname{-cts}}(\mathcal{C}^{+},\mathcal{D}^{+}) \subset\operatorname{Fun}(\mathcal{C}^{+},\mathcal{D}^{+})\]
the full subcategory consisting of exact functors which commute with filtered colimits bounded uniformly from below. There is a natural restriction functor
\[\operatorname{Fun}_{\operatorname{fha}}(\mathcal{C},\mathcal{D})\longrightarrow \operatorname{Fun}_{+\operatorname{-cts}}(\mathcal{C}^{+},\mathcal{D}^{+}). \tag{2.5.1}\]
**Lemma 2.5.1**.: _Suppose that \(\mathcal{C}\) is almost compactly generated, and that \(\mathcal{C}^{\operatorname{ren}}\widetilde{\to}\mathcal{C}\). Then the functor (2.5.1) is an equivalence._
Proof.: The inverse functor is given by the composite
\[\operatorname{Fun}_{+\operatorname{-cts}}(\mathcal{C}^{+},\mathcal{D}^{+}) \longrightarrow\operatorname{Fun}(\mathcal{C}^{\operatorname{coh}},\mathcal{D })\longrightarrow\operatorname{Fun}(\mathcal{C},\mathcal{D}),\]
where the first functor is restriction and the second is left Kan extension. Namely, as observed in Example 4.4.4 of [10], the fact that \(\mathcal{C}^{\operatorname{coh}}\) is stable under truncation functors implies that any functor in \(\operatorname{Fun}_{+\operatorname{-cts}}(\mathcal{C}^{+},\mathcal{D}^{+})\) is left Kan extended from \(\mathcal{C}^{\operatorname{coh}}\).
### Functorial renormalization
Let us write \(\mathsf{DGCat}_{\text{t-str}}\) for the category of DG categories equipped with t-structures, with 1-morphisms given by all continuous functors (we impose no t-exactness conditions on functors here). As described above, the Lurie tensor product of two objects in \(\mathsf{DGCat}_{\text{t-str}}\) admits a canonical t-structure, and hence the Lurie tensor product lifts to a symmetric monoidal structure on \(\mathsf{DGCat}_{\text{t-str}}\). We denote by \((\mathsf{DGCat}_{\text{t-str}})^{\circledotimes}\) the associated pseudo-tensor category.3
Footnote 3: The term “pseudo-tensor category,” which we have borrowed from Beilinson-Drinfeld, is formally identical to “colored operad” but plays a different conceptual role. A morphism from a colored operad to a pseudo-tensor category should be thought of as an algebra over the colored operad in the pseudo-tensor category. In particular, for us colored operads are “small,” while pseudo-tensor categories tend to be “large.”
We define a 1-full pseudo-tensor subcategory
\[(\mathsf{DGCat}_{\text{t-str}})^{\circledotimes}_{\operatorname{fha}}\subset(\mathsf{DGCat}_{\text{t-str}})^{\circledotimes}\]
as follows. An object in \(\mathsf{DGCat}_{\text{t-str}}\) belongs to \((\mathsf{DGCat}_{\text{t-str}})^{\circledotimes}_{\operatorname{fha}}\) provided that it is almost compactly generated. An \(n\)-ary morphism
\[\mathcal{C}_{1}\otimes\cdots\otimes\mathcal{C}_{n}\longrightarrow\mathcal{D}\]
belongs to \((\mathsf{DGCat}_{\text{t-str}})^{\circledotimes}_{\operatorname{fha}}\) provided that the image of the composition
\[\mathcal{C}_{1}^{+}\times\cdots\times\mathcal{C}_{n}^{+}\longrightarrow \mathcal{C}_{1}\otimes\cdots\otimes\mathcal{C}_{n}\longrightarrow\mathcal{D}\]
is contained in \(\mathcal{D}^{+}\).
Next, consider the symmetric monoidal category \(\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}}\) of arrows in \(\mathsf{DGCat}_{\text{t-str}}\). Define a 1-full pseudo-tensor subcategory
\[(\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\circledotimes}_{\operatorname{ ren}}\subset(\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\circledotimes}\]
as follows. An object \(\mathcal{C}^{\prime}\to\mathcal{C}\) in \(\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}}\) belongs to \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\) provided that the following conditions are satisfied:
1. \(\mathcal{C}^{\prime}\to\mathcal{C}\) is a t-exact functor;
2. \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) are almost compactly generated;
3. the category \(\mathcal{C}^{\prime}\) is compactly generated by \((\mathcal{C}^{\prime})^{\text{\rm coh}}\);
4. \(\mathcal{C}^{\prime}\to\mathcal{C}\) induces an equivalence \((\mathcal{C}^{\prime})^{+}\tilde{\to}\mathcal{C}^{+}\).
(Note that these conditions imply that \(\mathcal{C}^{\prime}\to\mathcal{C}\) factors uniquely through a continuous t-exact equivalence \(\mathcal{C}^{\prime}\tilde{\to}\mathcal{C}^{\text{ren}}\).) Given objects
\[\mathcal{C}^{\prime}_{1}\to\mathcal{C}_{1},\cdots,\mathcal{C}^{\prime}_{n}\to \mathcal{C}_{n},\mathcal{D}^{\prime}\to\mathcal{D}\]
in \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\), an \(n\)-ary morphism
\[\begin{CD} \mathcal{C}^{\prime}_{1}\otimes\cdots\otimes\mathcal{C}^{\prime}_{n} @>>> \mathcal{D}^{\prime}\\ @VVV @VVV\\ \mathcal{C}_{1}\otimes\cdots\otimes\mathcal{C}_{n} @>>> \mathcal{D} \end{CD}\]
belongs to \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\) provided that both horizontal functors are \(n\)-ary morphisms in \((\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}\) (in fact, it suffices to impose this condition on the lower horizontal functor).
**Proposition 2.6.1**.: _The second projection_
\[\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}} \longrightarrow\mathsf{DGCat}_{\text{t-str}}\] \[(\mathcal{C}^{\prime}\to\mathcal{C}) \mapsto\mathcal{C}\]
_induces an equivalence of pseudo-tensor categories_
\[(\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\tilde{\to}(\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}.\]
Proof.: This functor is essentially surjective on the underlying categories because for any almost compactly generated category \(\mathcal{C}\), the canonical renormalization \(\mathcal{C}^{\text{ren}}\to\mathcal{C}\) is an object of \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\) which projects to \(\mathcal{C}\).
To see that this functor is fully faithful on \(n\)-ary morphisms, suppose that we are given objects
\[\mathcal{C}^{\prime}_{1}\to\mathcal{C}_{1},\cdots,\mathcal{C}^{\prime}_{n}\to \mathcal{C}_{n},\mathcal{D}^{\prime}\to\mathcal{D}\]
in \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\). We claim that any \(n\)-ary morphism
\[\mathcal{C}_{1}\otimes\cdots\otimes\mathcal{C}_{n}\longrightarrow\mathcal{D}\]
in \((\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}\) lifts to an \(n\)-ary morphism in \((\mathsf{DGCat}^{\Delta^{1}}_{\text{t-str}})^{\otimes}_{\text{ren}}\) which is unique up to contractible choice. Namely, the image of the composition
\[(\mathcal{C}^{\prime}_{1})^{\text{\rm coh}}\times\cdots\times(\mathcal{C}^{ \prime}_{n})^{\text{\rm coh}}\tilde{\to}\mathcal{C}^{\text{\rm coh}}_{1} \times\cdots\times\mathcal{C}^{\text{\rm coh}}_{n}\longrightarrow\mathcal{D}\]
is contained in \(\mathcal{D}^{+}\). Since this functor is exact in each variable separately, it follows that there is a unique continuous extension to
\[\mathcal{C}^{\prime}_{1}\otimes\cdots\otimes\mathcal{C}^{\prime}_{n}=\mathsf{ Ind}((\mathcal{C}^{\prime}_{1})^{\text{\rm coh}})\otimes\cdots\otimes\mathsf{ Ind}((\mathcal{C}^{\prime}_{n})^{\text{\rm coh}})\longrightarrow\mathcal{D}.\]
Note that this extension defines an \(n\)-ary morphism in \((\mathsf{DGCat}^{\Delta^{1}}_{\text{\rm-str}})^{\otimes}_{\text{\rm ren}}\), since any object in \((\mathcal{C}^{\prime}_{i})^{+}\) is a filtered colimit of objects in \((\mathcal{C}^{\prime}_{i})^{\text{\rm coh}}\) bounded uniformly from below. This completes the proof.
We will make frequent use of the endomorphism of pseudo-tensor categories
\[(\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}} \longrightarrow(\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}} \tag{2.6.1}\] \[\mathcal{C} \mapsto\mathcal{C}^{\text{ren}}\]
obtained as the composition
\[(\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}\tilde{\longrightarrow}(\mathsf{DGCat}_{\text{t-str}}^{\Delta^{1}})^{\otimes}_{\text{ren}}\longrightarrow(\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}},\]
where the first functor is inverse to the equivalence of Proposition 2.6.1 and the second functor is the first projection \((\mathcal{C}^{\prime}\to\mathcal{C})\mapsto\mathcal{C}^{\prime}\).
### Renormalization for crystals of categories
We continue to let \(S\) denote a smooth quasicompact scheme.
Consider the pseudo-tensor category \((\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}\) whose objects are \(\mathsf{D}(S)\)-module categories equipped with a compatible t-structure, with \(n\)-ary morphisms \((\mathcal{C}_{1},\cdots,\mathcal{C}_{n})\to\mathcal{D}\) given by arbitrary \(\mathsf{D}(S)\)-linear functors
\[\mathcal{C}_{1}\underset{\mathsf{D}(S)}{\otimes}\cdots\underset{\mathsf{D}(S )}{\otimes}\mathcal{C}_{n}\longrightarrow\mathcal{D}.\]
We define a pseudo-tensor subcategory
\[(\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}_{\text{fha}} \subset(\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}\]
as follows. An object \(\mathcal{C}\) of \((\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}\) belongs to \((\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}_{\text{fha}}\) if \(\mathcal{C}\) is almost compactly generated. An \(n\)-ary morphism
\[\mathcal{C}_{1}\underset{\mathsf{D}(S)}{\otimes}\cdots\underset{\mathsf{D}(S )}{\otimes}\mathcal{C}_{n}\longrightarrow\mathcal{D}\]
belongs to \((\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}_{\text{fha}}\) provided that the image of the composition
\[\mathcal{C}_{1}^{+}\times\cdots\times\mathcal{C}_{n}^{+}\longrightarrow \mathcal{C}_{1}\underset{\mathsf{D}(S)}{\otimes}\cdots\underset{\mathsf{D}(S )}{\otimes}\mathcal{C}_{n}\longrightarrow\mathcal{D}\]
is contained in \(\mathcal{D}^{+}\).
**Proposition 2.7.1**.: _The morphism (2.6.1) lifts canonically to a morphism of pseudo-tensor categories_
\[(\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{\otimes}_{ \text{fha}} \longrightarrow(\mathsf{D}(S)\text{--}\mathsf{mod}_{\text{t-str}})^{ \otimes}_{\text{fha}}\] \[\mathcal{C} \mapsto\mathcal{C}^{\text{ren}}.\]
Proof.: Observe that \(\mathsf{D}(S)\) is a commutative algebra object in \((\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}\) satisfying \(\mathsf{D}(S)^{\text{ren}}\tilde{\rightarrow}\mathsf{D}(S)\). Moreover, for any \(\mathsf{D}(S)\)-module category \(\mathcal{C}\), the coaction functor
\[\text{coact}_{\mathcal{C}}:\mathcal{C}\longrightarrow\mathsf{D}(S)\otimes \mathcal{C}\]
is left adjoint to the action functor
\[\text{act}_{\mathcal{C}}:\mathsf{D}(S)\otimes\mathcal{C}\longrightarrow \mathcal{C}.\]
It follows that if \(\mathcal{C}\) is equipped with a compatible t-structure, then \(\text{act}_{\mathcal{C}}\) is left t-exact, and hence the \(\mathsf{D}(S)\)-action on \(\mathcal{C}\) takes place in \((\mathsf{DGCat}_{\text{t-str}})^{\otimes}_{\text{fha}}\) provided that \(\mathcal{C}\) is almost compactly generated. All of this implies that (2.6.1) is functorial on \(\mathsf{D}(S)\)-modules; we need only check that if the \(\mathsf{D}(S)\)-action on \(\mathcal{C}\) is compatible with the t-structure, then so is the induced action on \(\mathcal{C}^{\text{ren}}\).
First, we claim that the functor \(\operatorname{coact}_{\mathcal{C}^{\mathrm{ren}}}:\mathcal{C}^{\mathrm{ren}}\to\mathsf{D}(S)\otimes\mathcal{C}^{\mathrm{ren}}\) has finite homological amplitude. For this, it suffices to show that objects of the form \(\mathcal{M}\boxtimes c\), with \(\mathcal{M}\in\mathsf{D}(S)\) and \(c\in\mathcal{C}^{\mathrm{ren}}\) compact, are eventually
coconnective, since they generate all compact objects in \(\mathsf{D}(S)\otimes\mathcal{C}^{\mathrm{ren}}\) under finite colimits and retracts. But compact objects in \(\mathsf{D}(S)\) and \(\mathcal{C}^{\mathrm{ren}}\) are eventually coconnective, which implies that objects \(\mathcal{M}\boxtimes c\) as above are eventually coconnective, by Lemma 2.2.1.
To see that \(\mathrm{coact}_{\mathcal{C}^{\mathrm{ren}}}\) is t-exact, consider the commutative square
\[\begin{CD} \mathcal{C}^{\mathrm{ren}} @>{\mathrm{coact}_{\mathcal{C}^{\mathrm{ren}}}}>> \mathsf{D}(S)\otimes\mathcal{C}^{\mathrm{ren}}\\ @VVV @VVV\\ \mathcal{C} @>{\mathrm{coact}_{\mathcal{C}}}>> \mathsf{D}(S)\otimes\mathcal{C} \end{CD}\]
in which the vertical functors are induced by \(\mathcal{C}^{\mathrm{ren}}\to\mathcal{C}\). The left vertical and lower horizontal functors are t-exact, and the right vertical functor is t-exact by Lemma 2.2.1. We have already shown that the upper horizontal functor has finite homological amplitude, and hence we obtain a commutative square
\[\begin{CD} (\mathcal{C}^{\mathrm{ren}})^{+} @>>> (\mathsf{D}(S)\otimes\mathcal{C}^{\mathrm{ren}})^{+}\\ @VVV @VVV\\ \mathcal{C}^{+} @>>> (\mathsf{D}(S)\otimes\mathcal{C})^{+} \end{CD}\]
Since \((\mathcal{C}^{\mathrm{ren}})^{+}\to\mathcal{C}^{+}\) is an equivalence and in particular conservative, Lemma 4.6.2(3) in [10] implies that the right vertical functor is conservative. It follows immediately that \(\mathrm{coact}_{\mathcal{C}^{\mathrm{ren}}}\) is left t-exact. As for right t-exactness, since the t-structure on \(\mathcal{C}^{\mathrm{ren}}\) is compactly generated, it suffices to show that \(\mathrm{coact}_{\mathcal{C}^{\mathrm{ren}}}\) sends compact objects in \((\mathcal{C}^{\mathrm{ren}})^{\leq 0}\) into \((\mathsf{D}(S)\otimes\mathcal{C}^{\mathrm{ren}})^{\leq 0}\). Since compact objects in \(\mathcal{C}^{\mathrm{ren}}\) are eventually coconnective, this follows from the corresponding property of \(\mathrm{coact}_{\mathcal{C}}\) using the same commutative square.
### Almost ULA generation
Suppose that we are given a \(\mathsf{D}(S)\)-module category \(\mathcal{C}\) with compatible t-structure.
_Definition 2.8.1_.: An object \(c\) in \(\mathcal{C}\) will be called _almost ULA_ over \(S\) if \(\operatorname{oblv}_{\mathcal{C}}(c)\) is almost compact in \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\). We will say that \(\mathcal{C}\) is _almost ULA generated_ if \(\mathcal{C}\) is almost compactly generated, and moreover \((\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C})^{\geq 0}\) is compactly generated by objects of the form
\[\tau^{\geq 0}(\mathcal{F}\otimes\mathrm{oblv}_{\mathcal{C}}(c))\]
where \(\mathcal{F}\) is a perfect complex on \(S\) and \(c\) is almost ULA in \(\mathcal{C}\).
As the name suggests, the renormalization of an almost ULA generated category is ULA generated.
**Proposition 2.8.2**.: _Suppose that a \(\mathsf{D}(S)\)-module category \(\mathcal{C}\) is almost ULA generated. Then the \(\mathsf{D}(S)\)-module category \(\mathcal{C}^{\mathrm{ren}}\) is ULA generated by the objects \(\tau^{\geq n}c\) in_
\[\mathcal{C}^{+}\cong(\mathcal{C}^{\mathrm{ren}})^{+}\subset\mathcal{C}^{ \mathrm{ren}},\]
_where \(n\in\mathbb{Z}\) and \(c\) is almost ULA in \(\mathcal{C}\)._
This proposition is immediate from the following lemma.
**Lemma 2.8.3**.: _Let \(\mathcal{C}\) be a \(\mathsf{D}(S)\)-module category equipped with compatible t-structure, and suppose that \(\mathcal{C}\) is almost compactly generated. Then \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) is coherent._
_If we assume in addition that \((\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C})^{\geq 0}\) is compactly generated (e.g. if \(\mathcal{C}\) is almost ULA generated), it follows that \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) is almost compactly generated. In that case, the functor_
\[\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}^{\mathrm{ren}} \longrightarrow\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}\]
_induced by \(\mathcal{C}^{\mathrm{ren}}\to\mathcal{C}\) factors uniquely through a t-exact equivalence_
\[\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}^{\mathrm{ren}}\tilde{\longrightarrow}\Big(\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}\Big)^{\mathrm{ren}}.\]
For this, observe that \(\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S))\) is almost compactly generated with respect to the t-structure defined in the proof of Proposition 2.1.1, and that
\[\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S))^{\operatorname{ren}} \tilde{\longrightarrow}\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S)).\]
Namely, we have an equivalence
\[\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S))\tilde{\longrightarrow} \mathsf{IndCoh}((\widehat{S^{2}})_{\Delta})\]
which is t-exact up to shift, and the t-structure on the right side has these properties. The compatibility of the \(D(S)\)-action with the t-structure on \(\mathcal{C}\) implies that the action of \(\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S))\) on \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) takes place in \((\mathsf{DGCat}_{\operatorname{t-str}})^{\otimes}_{\operatorname{fla}}\), and hence (2.6.1) induces an action on the renormalization \((\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C})^{\operatorname{ren}}\). The monad induced by the adjunction
\[\operatorname{ind}_{\mathcal{C}}^{\operatorname{ren}}:(\mathsf{IndCoh}(S) \underset{\mathsf{D}(S)}{\otimes}\mathcal{C})^{\operatorname{ren}} \rightleftarrows\mathcal{C}^{\operatorname{ren}}:\operatorname{oblv}_{ \mathcal{C}}^{\operatorname{ren}}\]
is given by the action of the associative algebra \(\operatorname{oblv}_{S}\circ\operatorname{ind}_{S}\) in \(\operatorname{End}_{\mathsf{D}(S)}(\mathsf{IndCoh}(S))\), since this is the case on eventually coconnective objects and \((\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C})^{\operatorname{ren}}\) is compactly generated by coherent objects. As in Proposition 2.1.1, it follows that this monad is effective and hence that \(\operatorname{ind}_{\mathcal{C}}^{\operatorname{ren}}\) is comonadic.
### Change of base: open embedding
Suppose that \(j:U\to S\) is an open embedding and \(\mathcal{C}\) is a \(\mathsf{D}(S)\)-module category with compatible t-structure. Put
\[\mathcal{C}_{U}:=\mathsf{D}(U)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}.\]
Then we have an adjunction
\[j^{*}\otimes\operatorname{id}_{\mathcal{C}}:\mathcal{C}=\mathsf{D}(S) \underset{\mathsf{D}(S)}{\otimes}\mathcal{C}\rightleftarrows\mathcal{C}_{U}:j _{*}\otimes\operatorname{id}_{\mathcal{C}}\]
where the right adjoint is fully faithful, and in particular \(\mathcal{C}_{U}\) admits a unique t-structure such that \(j_{*}\otimes\operatorname{id}_{\mathcal{C}}\) is left t-exact, or equivalently such that \(j^{*}\otimes\operatorname{id}_{\mathcal{C}}\) is right t-exact. Here we are abusing notation slightly by writing \(j_{*}\) rather than \(j_{\operatorname{dR},*}\), but since \(j\) is an open embedding, this should not cause any confusion.
**Lemma 2.9.1**.: _The functor \(j^{*}\otimes\operatorname{id}_{\mathcal{C}}:\mathcal{C}\to\mathcal{C}_{U}\) is t-exact. If \(j\) is affine, then the right adjoint \(j_{*}\otimes\operatorname{id}_{\mathcal{C}}:\mathcal{C}_{U}\to\mathcal{C}\) is t-exact._
Proof.: To see that \(j^{*}\otimes\operatorname{id}_{\mathcal{C}}\) is t-exact, it suffices to show it is left t-exact, which is equivalent to left t-exactness of the endofunctor
\[(j_{*}j^{*})\otimes\operatorname{id}_{\mathcal{C}}:\mathcal{C}\longrightarrow \mathcal{C}_{U}\longrightarrow\mathcal{C}.\]
Similarly, if \((j_{*}j^{*})\otimes\operatorname{id}_{\mathcal{C}}\) is t-exact, then \(j_{*}\otimes\operatorname{id}_{\mathcal{C}}\) is t-exact. We have a commutative square
where the horizontal functors are fully faithful and t-exact. By Lemma 2.2.1, the right vertical functor is left t-exact, respectively right t-exact if the functor \(j_{*}j^{*}\) is such. The lemma follows.
**Proposition 2.9.2**.: _If \(\mathcal{C}\) is almost ULA generated over \(S\), then \(\mathcal{C}_{U}\) is almost ULA generated over \(U\), and the functor_
\[(\mathcal{C}^{\mathrm{ren}})_{U}\longrightarrow\mathcal{C}_{U}\]
_induced by \(\mathcal{C}^{\mathrm{ren}}\to\mathcal{C}\) factors uniquely through a t-exact equivalence_
\[(\mathcal{C}^{\mathrm{ren}})_{U}\tilde{\longrightarrow}(\mathcal{C}_{U})^{ \mathrm{ren}}.\]
Proof.: Suppose that \(\mathcal{C}\) is almost compactly generated. Since \(j_{*}\otimes\mathrm{id}_{\mathcal{C}}:\mathcal{C}_{U}\to\mathcal{C}\) is left t-exact and conservative, the t-structure on \(\mathcal{C}_{U}\) is compatible with filtered colimits. For the same reason, the t-structure on \(\mathcal{C}_{U}\) is right separated and hence right complete. To see that \((\mathcal{C}_{U})^{\geq 0}\) is compactly generated, consider the adjunction
\[j^{*}\otimes\mathrm{id}_{\mathcal{C}}:\mathcal{C}^{\geq 0}\rightleftarrows( \mathcal{C}_{U})^{\geq 0}:j_{*}\otimes\mathrm{id}_{\mathcal{C}}\,.\]
Since \(j_{*}\otimes\mathrm{id}_{\mathcal{C}}\) is continuous and fully faithful, hence conservative, it follows that \((\mathcal{C}_{U})^{\geq 0}\) is compactly generated.
To see that \(\mathcal{C}_{U}\) is almost compactly generated, it remains to show that the t-structure is coherent. We claim that any compact object \(c\) in \((\mathcal{C}_{U})^{\geq 0}\) is a direct summand of an object \((j^{*}\otimes\mathrm{id}_{\mathcal{C}})(c_{0})\) where \(c_{0}\) is a compact object in \(\mathcal{C}^{\geq 0}\). Since \(\mathcal{C}\) is almost compactly generated and \(j^{*}\otimes\mathrm{id}_{\mathcal{C}}\) preserves almost compact objects, it will follow that \(c\) is almost compact in \(\mathcal{C}_{U}\). To prove the claim, write
\[(j_{*}\otimes\mathrm{id}_{\mathcal{C}})(c)\cong\underset{\alpha}{\mathrm{colim }}\ c_{\alpha}\]
as a filtered colimit of compact objects \(c_{\alpha}\) in \(\mathcal{C}^{\geq 0}\). Then we have
\[c\cong\underset{\alpha}{\mathrm{colim}}\ (j^{*}\otimes\mathrm{id}_{\mathcal{C}} )(c_{\alpha})\]
in \((\mathcal{C}_{U})^{\geq 0}\). Compactness of \(c\) in \((\mathcal{C}_{U})^{\geq 0}\) now implies that \(c\) is a direct summand of \((j^{*}\otimes\mathrm{id}_{\mathcal{C}})(c_{\alpha})\) for some \(\alpha\).
Renormalization of the above adjunction yields a \(\mathsf{D}(S)\)-linear adjunction
\[(j^{*}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}}:\mathcal{C}^{\mathrm{ ren}}\rightleftarrows(\mathcal{C}_{U})^{\mathrm{ren}}:(j_{*}\otimes\mathrm{id}_{ \mathcal{C}})^{\mathrm{ren}}.\]
The right adjoint is continuous and fully faithful, so in particular this adjunction is monadic. Note that \((j_{*}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}}\) factors through
\[j_{*}\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}}:(\mathcal{C}^{\mathrm{ ren}})_{U}\longrightarrow\mathcal{C}^{\mathrm{ren}},\]
since this is the case after restricting to \((\mathcal{C}_{U})^{+}\). The resulting functor
\[(\mathcal{C}_{U})^{\mathrm{ren}}\longrightarrow(\mathcal{C}^{\mathrm{ren}})_{U}\]
corresponds to a morphism of (left t-exact) monads on \(\mathcal{C}^{\mathrm{ren}}\), which is an isomorphism because it is such after restricting to \(\mathcal{C}^{+}\).
The assertion that \(\mathcal{C}_{U}\) is almost ULA generated over \(U\) follows readily from the above.
### Change of base: closed embedding
Suppose that we are given a smooth closed subvariety \(i:T\to S\). Writing
\[\mathcal{C}_{T}:=\mathsf{D}(T)\underset{\mathsf{D}(S)}{\otimes}\mathcal{C}\]
for any \(\mathsf{D}(S)\)-module category \(\mathcal{C}\), note that we have an adjunction
\[i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}}:\mathcal{C}_{T} \rightleftarrows\mathcal{C}:i^{!}\otimes\mathrm{id}_{\mathcal{C}},\]
with the left adjoint being fully faithful.
**Lemma 2.10.1**.: _The functor \(i^{!}\otimes\mathrm{id}_{\mathcal{C}}:\mathcal{C}\to\mathcal{C}_{T}\) sends objects ULA over \(S\) to objects ULA over \(T\). If \(\mathcal{C}\) is ULA generated over \(S\), then \(\mathcal{C}_{T}\) is ULA generated over \(T\) by objects of the form \((i^{!}\otimes\mathrm{id}_{\mathcal{C}})(c)\), where \(c\) is ULA in \(\mathcal{C}\)._
Proof.: Consider the commutative square of symmetric monoidal categories
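(presumably the square

\[\begin{CD}
\mathsf{D}(S) @>{i^{!}}>> \mathsf{D}(T)\\
@VVV @VVV\\
\mathsf{IndCoh}(S) @>{i^{!}}>> \mathsf{IndCoh}(T)
\end{CD}\]

in which the vertical functors are the forgetful functors to \(\mathsf{IndCoh}\)).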
This can also be viewed as a commutative square of \(\mathsf{D}(S)\)-modules, which we can tensor with \(\mathcal{C}\) to obtain
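what should be the square

\[\begin{CD}
\mathcal{C} @>{i^{!}\otimes\operatorname{id}_{\mathcal{C}}}>> \mathcal{C}_{T}\\
@VVV @VVV\\
\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C} @>>> \mathsf{IndCoh}(T)\otimes_{\mathsf{D}(T)}\mathcal{C}_{T}.
\end{CD}\]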
Since \(S\) and \(T\) are smooth, the functor
\[i_{*}:\mathsf{IndCoh}(T)\longrightarrow\mathsf{IndCoh}(S)\]
admits a left adjoint \(i^{\mathsf{IndCoh},*}\), which agrees with \(i^{!}\) up to tensoring with a cohomologically shifted line bundle. Thus \(i^{!}\) admits a \(\mathsf{D}(S)\)-linear right adjoint, which implies that the lower horizontal functor in the above square admits a continuous right adjoint and hence preserves compact objects. It follows immediately that the upper horizontal functor preserves ULA objects.
The second claim follows from the observation that the image of
\[i^{!}\otimes\operatorname{id}_{\mathcal{C}}:\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\longrightarrow\mathsf{IndCoh}(T)\otimes_{\mathsf{D}(T)}\mathcal{C}_{T}\]
generates the target under colimits, which is immediate from the corresponding property of
\[i^{!}:\mathsf{IndCoh}(S)\longrightarrow\mathsf{IndCoh}(T)\]
(this being equivalent to the conservativity of its right adjoint, which is isomorphic to \(i_{*}\) tensored with a cohomologically shifted line bundle).
**Lemma 2.10.2**.: _Suppose that \(\mathcal{C}\) is equipped with a t-structure compatible with the \(\mathsf{D}(S)\)-action. Then the image of the fully faithful functor_
\[i_{\operatorname{dR},*}\otimes\operatorname{id}_{\mathcal{C}}:\mathcal{C}_{T}\longrightarrow\mathcal{C}\]
_is stable under truncations, so \(\mathcal{C}_{T}\) admits a unique t-structure such that \(i_{\operatorname{dR},*}\otimes\operatorname{id}_{\mathcal{C}}\) is t-exact. Moreover, the subcategory \((\mathcal{C}_{T})^{\heartsuit}\subset\mathcal{C}^{\heartsuit}\) is stable under taking subobjects, and the t-structure on \(\mathcal{C}_{T}\) is compatible with the \(\mathsf{D}(T)\)-action._
Proof.: The stability of \(\mathcal{C}_{T}\) under truncations and of \((\mathcal{C}_{T})^{\heartsuit}\) under taking subobjects both follow formally from Lemma 2.9.1 (specifically, the fact that \(j^{*}\otimes\operatorname{id}_{\mathcal{C}}\) is t-exact). To see that the \(\mathsf{D}(T)\)-action on \(\mathcal{C}_{T}\) is compatible with the t-structure, consider the commutative square
The upper horizontal and right vertical functors are t-exact, so it suffices to show that the lower horizontal functor is conservative and t-exact. In fact, it is fully faithful, being the tensor
product of two fully faithful functors which admit continuous right adjoints. To see the t-exactness, note that this functor can be decomposed as
\[\mathsf{D}(T)\otimes\mathcal{C}_{T}\xrightarrow{i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}_{T}}}\mathsf{D}(S)\otimes\mathcal{C}_{T}\xrightarrow{\mathrm{id}_{\mathsf{D}(S)}\otimes(i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}})}\mathsf{D}(S)\otimes\mathcal{C}.\]
Since the t-structures on \(\mathsf{D}(S)\) and \(\mathsf{D}(T)\) are compactly generated, the t-exactness of these two functors follows from that of \(i_{\mathrm{dR},*}\) and \(i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}}:\mathcal{C}_{T}\to\mathcal{C}\) by Lemma 2.2.1.
If \(\mathcal{C}_{T}\) and \(\mathcal{C}\) are almost ULA generated, then we obtain an adjunction
\[(i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}}:(\mathcal{C}_{T})^{\mathrm{ren}}\rightleftarrows\mathcal{C}^{\mathrm{ren}}:(i^{!}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}},\]
which takes place in \(\mathsf{D}(S)\)-modules by Proposition 2.7.1.
**Proposition 2.10.3**.: _Suppose that \(\mathcal{C}\) and \(\mathcal{C}_{T}\) are almost ULA generated (with respect to \(S\) and \(T\) respectively). Then the functor \((i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}}\) factors through an equivalence_
\[(\mathcal{C}_{T})^{\mathrm{ren}}\tilde{\to}(\mathcal{C}^{\mathrm{ren}})_{T}.\]
Proof.: Note that \((\mathcal{C}^{\mathrm{ren}})_{T}\) is the kernel of the monad \((j_{*}j^{*})\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}}\), which fits into the commutative square
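(presumably

\[\begin{CD}
\mathcal{C}^{\mathrm{ren}} @>>> \mathcal{C}\\
@V{(j_{*}j^{*})\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}}}VV @VV{(j_{*}j^{*})\otimes\mathrm{id}_{\mathcal{C}}}V\\
\mathcal{C}^{\mathrm{ren}} @>>> \mathcal{C}
\end{CD}\]

with the horizontal functors the canonical functor \(\mathcal{C}^{\mathrm{ren}}\to\mathcal{C}\)).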
Since \(\mathcal{C}^{\mathrm{ren},+}\tilde{\to}\mathcal{C}^{+}\), and \((\mathcal{C}_{T})^{\mathrm{coh}}\subset\mathcal{C}_{T}\) is annihilated by \((j_{*}j^{*})\otimes\mathrm{id}_{\mathcal{C}}\), it follows that \((i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}})^{\mathrm{ren}}\) factors through
\[i_{\mathrm{dR},*}\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}}:(\mathcal{C} ^{\mathrm{ren}})_{T}\longrightarrow\mathcal{C}^{\mathrm{ren}}.\]
To see that the resulting functor
\[(\mathcal{C}_{T})^{\mathrm{ren}}\longrightarrow(\mathcal{C}^{\mathrm{ren}})_{T}\]
is essentially surjective, observe that Proposition 2.8.2 and Lemma 2.10.1 together imply that \((\mathcal{C}^{\mathrm{ren}})_{T}\) is ULA generated over \(T\) by objects of the form \((i^{!}\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}})(c)\), where \(c\) is an object in \(\mathcal{C}^{\mathrm{ren}}\) which is ULA over \(S\). Now we appeal to the commutative square
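(which should be of the form

\[\begin{CD}
\mathcal{C}^{\mathrm{ren}} @>>> \mathcal{C}\\
@V{(i_{\mathrm{dR},*}i^{!})\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}}}VV @VV{(i_{\mathrm{dR},*}i^{!})\otimes\mathrm{id}_{\mathcal{C}}}V\\
\mathcal{C}^{\mathrm{ren}} @>>> \mathcal{C}
\end{CD}\])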
which shows that \(((i_{\mathrm{dR},*}i^{!})\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}})(c)\) agrees with \(((i_{\mathrm{dR},*}i^{!})\otimes\mathrm{id}_{\mathcal{C}})(c)\) under the identification \(\mathcal{C}^{\mathrm{ren},+}\tilde{\to}\mathcal{C}^{+}\) (in the latter formula, we view \(c\) as an object of \(\mathcal{C}^{+}\)). But the object \(((i_{\mathrm{dR},*}i^{!})\otimes\mathrm{id}_{\mathcal{C}})(c)\) belongs to \((\mathcal{C}_{T})^{\mathrm{coh}}\subset\mathcal{C}\), which shows that \((i^{!}\otimes\mathrm{id}_{\mathcal{C}^{\mathrm{ren}}})(c)\) belongs to the essential image of \((\mathcal{C}_{T})^{\mathrm{ren}}\) as needed.
## 3. Preliminaries on factorization
This section covers foundational technical material on factorization categories, algebras, and modules. The key result is Proposition 3.9.2, which gives sufficient conditions under which it is possible to renormalize a factorization category.
We refer the reader to §6 of [14] for careful definitions of the objects introduced in this section. An alternative but equivalent set of definitions can be found in §10 of [10] (see Remark 10.5.11 there, which compares the two).
We will denote by \(\mathsf{FactCat}\) the category of unital factorization categories on \(X\), with morphisms given by strictly unital factorizable functors. We can also consider the category \(\mathsf{FactCat}^{\text{\rm lax-fact}}\) consisting of lax factorization categories. Both of these categories admit variants in which functors are allowed to be lax unital, and together these assemble into a cartesian square
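(presumably

\[\begin{CD}
\mathsf{FactCat} @>>> \mathsf{FactCat}^{\text{lax-fact}}\\
@VVV @VVV\\
\mathsf{FactCat}_{\text{lax-untl}} @>>> \mathsf{FactCat}^{\text{lax-fact}}_{\text{lax-untl}}
\end{CD}\]).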
Here the horizontal functors are fully faithful, while the vertical functors are only 1-fully faithful but act as the identity on objects.
By definition, we have a forgetful functor
\[\mathsf{FactCat}^{\text{\rm lax-fact}}_{\text{\rm lax-untl}}\longrightarrow \mathsf{ShvCat}(\operatorname{Ran}^{\text{\rm un}}_{\text{\rm dR}}),\]
where \(\operatorname{Ran}^{\text{\rm un}}\) denotes the unital Ran space of \(X\), i.e. the lax prestack parameterizing finite subsets of \(X\) and inclusions between them. The category \(\mathsf{FactCat}^{\text{\rm lax-fact}}_{\text{\rm lax-untl}}\) admits a natural symmetric monoidal structure which makes the forgetful functor symmetric monoidal. This restricts to a symmetric monoidal structure on each of the 1-full subcategories of \(\mathsf{FactCat}^{\text{\rm lax-fact}}_{\text{\rm lax-untl}}\) considered above.
Given an object \(\mathcal{C}\) in \(\mathsf{FactCat}^{\text{\rm lax-fact}}_{\text{\rm lax-untl}}\), we will write
\[\mathcal{C}_{Y}:=\Gamma(Y,\mathcal{C})\]
for any prestack \(Y\) mapping to \(\operatorname{Ran}^{\text{\rm un}}_{\text{\rm dR}}\). For example, for any finite set \(I\) we have the \(\mathsf{D}(X^{I})\)-module category \(\mathcal{C}_{X^{I}_{\text{\rm dR}}}\) and the \(\mathsf{IndCoh}(X^{I})\)-module category
\[\mathcal{C}_{X^{I}}=\mathsf{IndCoh}(X^{I})\underset{\mathsf{D}(X^{I})}{ \otimes}\mathcal{C}_{X^{I}_{\text{\rm dR}}}.\]
### A combinatorial presentation for factorization categories
We denote by \(\mathsf{fSet}\) the category of finite sets, and write \(\mathsf{fSet}_{\text{\rm surj}}\subset\mathsf{fSet}\) for the subcategory consisting of nonempty finite sets and surjections between them.
Recall from [14] that for a lax prestack \(Y\), there is a certain 1-full subcategory
\[\mathsf{ShvCat}(Y)^{\text{\rm naive}}\subset\mathsf{ShvCat}(Y)\]
with the same objects but fewer morphisms. In the case of unital Ran space, the subcategory
\[\mathsf{ShvCat}(\operatorname{Ran}^{\text{\rm un}}_{\text{\rm dR}})^{\text{ \rm naive}}\subset\mathsf{ShvCat}(\operatorname{Ran}^{\text{\rm un}}_{\text{ \rm dR}})\]
contains only those morphisms which are "strictly unital," i.e.
\[\mathsf{FactCat}^{\text{\rm lax-fact}}=\mathsf{FactCat}^{\text{\rm lax-fact}} _{\text{\rm lax-untl}}\underset{\mathsf{ShvCat}(\operatorname{Ran}^{\text{ \rm un}}_{\text{\rm dR}})}{\times}\mathsf{ShvCat}(\operatorname{Ran}^{\text{ \rm un}}_{\text{\rm dR}})^{\text{\rm naive}}.\]
From the colimit presentation
\[\operatorname{Ran}=\underset{I\in\mathsf{fSet}^{\text{\rm op}}_{\text{\rm surj }}}{\operatorname{colim}}X^{I}\]
and the 1-affineness of \(X^{I}_{\text{\rm dR}}\), we obtain a limit presentation
\[\mathsf{ShvCat}(\operatorname{Ran}_{\text{\rm dR}})=\lim_{I\in\mathsf{fSet}_ {\text{\rm surj}}}\mathsf{D}(X^{I})\text{\rm-mod}.\]
The following result supplies an analogous presentation of \(\mathsf{ShvCat}(\operatorname{Ran}^{\text{\rm un}}_{\text{\rm dR}})^{\text{ \rm naive}}\) as a lax limit.
**Proposition 3.2.1**.: _There is a canonical fully faithful embedding of symmetric monoidal categories_
\[\mathsf{ShvCat}(\operatorname{Ran}_{\operatorname{dR}}^{\operatorname{un}})^{\operatorname{naive}}\longrightarrow\underset{I\in\mathsf{fSet}}{\operatorname{laxlim}}\,\mathsf{D}(X^{I})\text{-}\mathsf{mod}.\]
Proof.: First, observe that the hypothesis in the proposition implies that
\[\mathsf{D}(X^{I})\underset{\mathsf{D}(X^{J})}{\otimes}\mathcal{C}_{X^{J}_{\text{ dR}}}\longrightarrow\mathcal{C}_{X^{I}_{\text{dR}}}\]
admits a \(\mathsf{D}(X^{I})\)-linear right adjoint for any (not necessarily injective) map \(J\to I\) in \(\mathsf{fSet}\). Namely, any such map can be written as the composition of a surjection and an injection, and the functor in question is an equivalence if \(J\to I\) is surjective. It follows that the sheaf of categories underlying \(\mathcal{C}\) is dualizable as an object of \(\mathsf{ShvCat}(\operatorname{Ran}_{\text{dR}}^{\text{un}})\), with dual object given by dualizing termwise and then passing to left adjoints of the structure functors.
A lax factorization structure on \(\mathcal{C}\) induces an oplax factorization structure on \(\mathcal{C}^{\vee}\), and if the factorization structure on \(\mathcal{C}\) is strict then so is the one on \(\mathcal{C}^{\vee}\). It is not difficult to see that the evaluation
\[\mathcal{C}^{\vee}\otimes\mathcal{C}\longrightarrow\text{Vect}\]
and coevaluation
\[\text{Vect}\longrightarrow\mathcal{C}^{\vee}\otimes\mathcal{C}\]
morphisms lift from \(\mathsf{ShvCat}(\operatorname{Ran}_{\text{dR}}^{\text{un}})\) to \(\mathsf{FactCat}_{\text{lax-untl}}\).
_Remark 3.3.2_.: We emphasize that it is necessary to take the dual of \(\mathcal{C}\) in \(\mathsf{FactCat}_{\text{lax-untl}}\) rather than \(\mathsf{FactCat}\). The point is that the coevaluation functor
\[\text{Vect}\longrightarrow\mathcal{C}^{\vee}\otimes\mathcal{C}\]
is typically only lax unital.
### Commutative factorization categories
As shown in [14] §7.5, one can functorially attach an object of \(\mathsf{FactCat}\) to any symmetric monoidal DG category. To avoid cluttering the notation, we denote the associated factorization category by the same symbol as the symmetric monoidal category it comes from, taking care to specify which of the two objects is meant whenever confusion could arise.
Let \(\mathcal{C}\) be a symmetric monoidal category. For any finite set \(I\), we have a symmetric monoidal functor
\[\operatorname{Loc}_{X^{I}}:\mathcal{C}^{\otimes I}\longrightarrow\mathcal{C} _{X^{I}_{\text{dR}}},\]
defined in Remark 6.8.4 of [14].
**Proposition 3.4.1**.: _Suppose that \(\mathcal{C}\) is a rigid symmetric monoidal category whose underlying DG category is compactly generated._
1. _The functor_ \(\operatorname{Loc}_{X^{I}}\) _sends compact objects to objects ULA over_ \(X^{I}\)_. If_ \(\{c_{\alpha}\}\) _is a set of compact generators for_ \(\mathcal{C}^{\otimes I}\)_, then_ \(\{\operatorname{Loc}_{X^{I}}(c_{\alpha})\}\) _is a set of ULA generators for_ \(\mathcal{C}_{X^{I}_{\text{dR}}}\)_._
2. _The factorization category attached to_ \(\mathcal{C}\) _is dualizable and self-dual as an object of_ \(\mathsf{FactCat}_{\text{lax-untl}}\)_._
Proof.: For (i), see Proposition 6.16.1 and the proof of Theorem 6.7.1 in _loc. cit_.
By Corollary 6.18.2 of [14], the \(\mathsf{D}(X^{I})\)-module category \(\mathcal{C}_{X^{I}_{\text{dR}}}\) is dualizable and self-dual for any finite set \(I\). According to Proposition 3.3.1, to prove (ii) it remains to show that for any injection \(J\to I\), the structure map
\[\mathsf{D}(X^{I})\underset{\mathsf{D}(X^{J})}{\otimes}\mathcal{C}_{X^{J}_{ \text{dR}}}\longrightarrow\mathcal{C}_{X^{I}_{\text{dR}}}\]
admits a \(\mathsf{D}(X^{I})\)-linear right adjoint, or equivalently preserves ULA objects. This follows from the commutative square
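(presumably of the form

\[\begin{CD}
\mathcal{C}^{\otimes J} @>>> \mathcal{C}^{\otimes I}\\
@V{\operatorname{Loc}_{X^{J}}}VV @VV{\operatorname{Loc}_{X^{I}}}V\\
\mathcal{C}_{X^{J}_{\mathrm{dR}}} @>>> \mathcal{C}_{X^{I}_{\mathrm{dR}}}
\end{CD}\])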
where the upper horizontal arrow acts as the identity on \(\mathcal{C}^{\otimes J}\) and inserts the unit in \(\mathcal{C}^{\otimes(I\setminus J)}\).
The following lemma will also be useful.
**Lemma 3.4.2**.: _Let \(F:\mathcal{C}\to\mathcal{D}\) be a symmetric monoidal functor. Assume that \(\mathcal{C}\) is rigid and compactly generated, and that the unit object in \(\mathcal{D}\) is compact, so that in particular the right adjoint \(F^{\mathrm{R}}\) of \(F\) is continuous. Then the morphism in \(\mathsf{FactCat}\) arising from \(F\) admits a right adjoint in \(\mathsf{FactCat}_{\mathrm{lax\text{-}untl}}\), which in particular agrees with \(F^{\mathrm{R}}\) fiberwise._
Proof.: It suffices to prove that for any finite set \(I\), the \(\mathsf{D}(X^{I})\)-linear functor
\[F:\mathcal{C}_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathcal{D}_{X^{I}_{ \mathrm{dR}}}\]
preserves ULA objects. Using the commutative square
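(presumably

\[\begin{CD}
\mathcal{C}^{\otimes I} @>{F^{\otimes I}}>> \mathcal{D}^{\otimes I}\\
@V{\operatorname{Loc}_{X^{I}}}VV @VV{\operatorname{Loc}_{X^{I}}}V\\
\mathcal{C}_{X^{I}_{\mathrm{dR}}} @>{F}>> \mathcal{D}_{X^{I}_{\mathrm{dR}}}
\end{CD}\])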
and Proposition 3.4.1, it is enough to prove that the composition
\[\mathcal{C}^{\otimes I}\xrightarrow{F^{\otimes I}}\mathcal{D}^{\otimes I} \xrightarrow{\operatorname{Loc}_{X^{I}}}\mathcal{D}_{X^{I}_{\mathrm{dR}}}\]
sends compact objects to objects ULA over \(X^{I}\). This functor is symmetric monoidal, and the proof of Lemma 6.16.2 in [11] shows that the unit object in \(\mathcal{D}_{X^{I}_{\mathrm{dR}}}\) is ULA, so the claim follows from the rigidity of \(\mathcal{C}^{\otimes I}\).
### Functoriality of factorization modules
Consider the category \((\mathsf{FactCat}_{\mathrm{lax\text{-}untl}})^{\mathsf{FactAlg}^{\mathrm{op}}}\), whose objects are pairs \((\mathcal{C},A)\) consisting of a factorization category \(\mathcal{C}\) and an (always unital unless otherwise specified) factorization algebra \(A\) in \(\mathcal{C}\). A morphism \((\mathcal{C},A)\to(\mathcal{D},B)\) is given by a lax unital factorization functor \(F:\mathcal{C}\to\mathcal{D}\) and a morphism \(B\to F(A)\) of factorization algebras in \(\mathcal{D}\).
For a given pair \((\mathcal{C},A)\) as above we can consider the lax factorization category of (unital) factorization modules \(A\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})\). This construction extends to a functor
\[(\mathsf{FactCat}_{\mathrm{lax\text{-}untl}})^{\mathsf{FactAlg}^{\mathrm{op}} }\longrightarrow\mathsf{FactCat}_{\mathrm{lax\text{-}untl}}^{\mathrm{lax \text{-}fact}}. \tag{3.5.1}\]
In particular, given a morphism \((\mathcal{C},A)\to(\mathcal{D},B)\) consisting of \(F:\mathcal{C}\to\mathcal{D}\) and \(B\to F(A)\), we have the resulting functor
\[A\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})\longrightarrow F(A)\text{ -}\mathsf{mod}^{\mathrm{fact}}(\mathcal{D})\longrightarrow B\text{-} \mathsf{mod}^{\mathrm{fact}}(\mathcal{D}).\]
Here the first functor is induced by \(F\), and the second is restriction of the factorization module structure along \(B\to F(A)\) (cf. [10] §10.6).
We observe that \((\mathsf{FactCat}_{\mathrm{lax\text{-}untl}})^{\mathsf{FactAlg}^{\mathrm{op}}}\) admits a natural symmetric monoidal structure, given on objects by
\[(\mathcal{C},A)\otimes(\mathcal{D},B)=(\mathcal{C}\otimes\mathcal{D},A \boxtimes B).\]
Moreover, the functor (3.5.1) is naturally lax symmetric monoidal: in particular, for \((\mathcal{C},A)\) and \((\mathcal{D},B)\) in \((\mathsf{FactCat}_{\text{lax-unt}})^{\mathsf{FactAlg}^{\text{op}}}\) we have a canonical morphism
\[A\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\otimes B\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{D})\longrightarrow(A\boxtimes B)\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C}\otimes\mathcal{D})\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\), which intertwines the forgetful functors to \(\mathcal{C}\otimes\mathcal{D}\).
We will also need the following lemma. Let \(\mathcal{C}\) be a factorization category and \(A\to B\) a morphism of factorization algebras in \(\mathcal{C}\). The resulting morphism
\[B\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\longrightarrow A\text{-} \mathsf{mod}^{\text{fact}}(\mathcal{C})\]
in \(\mathsf{FactCat}^{\text{lax-fact}}_{\text{lax-unt}}\) sends the vacuum module \(B\) to a factorization algebra in \(A\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\) which we denote by the same symbol \(B\).
**Lemma 3.6.1**.: _Applying (3.5.1) to the forgetful functor_
\[A\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\longrightarrow\mathcal{C}\]
_and the factorization module \(B\) induces an isomorphism_
\[B\text{-}\mathsf{mod}^{\text{fact}}(A\text{-}\mathsf{mod}^{\text{fact}}( \mathcal{C}))\longrightarrow B\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\]
_in \(\mathsf{FactCat}^{\text{lax-fact}}\)._
Proof.: The inverse is given by applying (3.5.1) to the restriction functor
\[B\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{C})\longrightarrow A\text{-} \mathsf{mod}^{\text{fact}}(\mathcal{C})\]
and the factorization module \(B\).
Observe that the projection \((\mathsf{FactCat}_{\text{lax-unt}})^{\mathsf{FactAlg}^{\text{op}}}\to\mathsf{ FactCat}_{\text{lax-unt}}\) is symmetric monoidal, and in particular induces a functor
\[\mathsf{AssocAlg}((\mathsf{FactCat}_{\text{lax-unt}})^{\mathsf{FactAlg}^{ \text{op}}})\longrightarrow\mathsf{AssocAlg}(\mathsf{FactCat}_{\text{lax-unt }}).\]
This functor admits a canonical section
\[\mathsf{AssocAlg}(\mathsf{FactCat}_{\text{lax-unt}})\longrightarrow\mathsf{ AssocAlg}((\mathsf{FactCat}_{\text{lax-unt}})^{\mathsf{FactAlg}^{\text{op}}}) \tag{3.7.1}\]
given on objects by \(\mathcal{A}\mapsto(\mathcal{A},\mathds{1}_{\mathcal{A}})\), where \(\mathds{1}_{\mathcal{A}}\) denotes the monoidal unit of \(\mathcal{A}\) viewed as a factorization algebra.
The lax symmetric monoidal structure on (3.5.1) gives rise to a functor
\[\mathsf{AssocAlg}((\mathsf{FactCat}_{\text{lax-unt}})^{\mathsf{FactAlg}^{ \text{op}}})\longrightarrow\mathsf{AssocAlg}(\mathsf{FactCat}_{\text{lax-unt }}^{\text{lax-fact}}).\]
Note that the composition of this functor with (3.7.1) factors through the \(1\)-full subcategory
\[\mathsf{AssocAlg}(\mathsf{FactCat}^{\text{lax-fact}})\longrightarrow\mathsf{ AssocAlg}(\mathsf{FactCat}_{\text{lax-unt}}^{\text{lax-fact}}),\]
and hence can be viewed as a functor
\[\mathsf{AssocAlg}(\mathsf{FactCat}_{\text{lax-unt}}) \longrightarrow\mathsf{AssocAlg}(\mathsf{FactCat}^{\text{lax-fact }}) \tag{3.7.2}\] \[\mathcal{A} \mapsto\mathds{1}_{\mathcal{A}}\text{-}\mathsf{mod}^{\text{fact} }(\mathcal{A}).\]
Namely, the factorization unit and the monoidal unit in \(\mathds{1}_{\mathcal{A}}\text{-}\mathsf{mod}^{\text{fact}}(\mathcal{A})\) coincide, both being the vacuum module.
### Unital vs. non-unital factorization modules
Fix a factorization category \(\mathcal{C}\). We denote by \(\mathsf{FactAlg}(\mathcal{C})\) and \(\mathsf{FactAlg}^{\mathrm{non-untl}}(\mathcal{C})\) the categories of unital and non-unital factorization algebras, respectively.
**Proposition 3.8.1**.: _The forgetful functor_
\[\mathrm{OblvUnit}:\mathsf{FactAlg}(\mathcal{C})\longrightarrow\mathsf{FactAlg}^ {\mathrm{non-untl}}(\mathcal{C})\]
_admits a left adjoint \(\mathrm{AddUnit}\). The comonad_
\[\mathrm{AddUnit}\circ\mathrm{OblvUnit}:\mathsf{FactAlg}(\mathcal{C}) \longrightarrow\mathsf{FactAlg}(\mathcal{C})\]
_is given by \(A\mapsto A\times\mathrm{unit}_{\mathcal{C}}\), where \(\mathrm{unit}_{\mathcal{C}}\) denotes the unit factorization algebra in \(\mathcal{C}\)._
As in the setting of commutative algebras, a unital \(\mathrm{AddUnit}(A)\)-module structure is the same datum as a non-unital \(A\)-module structure.
**Proposition 3.8.2**.: _Given \(A\) in \(\mathsf{FactAlg}^{\mathrm{non-untl}}(\mathcal{C})\), the composition_
\[\mathrm{AddUnit}(A)\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{ \mathrm{dR}}}\longrightarrow\mathrm{AddUnit}(A)\mbox{-}\mathsf{mod}^{ \mathrm{fact},\mathrm{non-untl}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}} \longrightarrow A\mbox{-}\mathsf{mod}^{\mathrm{fact},\mathrm{non-untl}}( \mathcal{C})_{X^{I}_{\mathrm{dR}}}\]
_is an equivalence for any nonempty finite set \(I\)._
Modules over cartesian products of factorization algebras also behave as expected.
**Proposition 3.8.3**.: _For any \(A\), \(B\) in \(\mathsf{FactAlg}(\mathcal{C})\), the functor_
\[\mathrm{res}_{A}^{A\times B}\oplus\mathrm{res}_{B}^{A\times B}:A\mbox{-} \mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\times B\mbox{-} \mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\longrightarrow( A\times B)\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\]
_is an equivalence for any finite set \(I\)._
Combining the above, we obtain the following useful result.
**Proposition 3.8.4**.: _For any \(A\) in \(\mathsf{FactAlg}(\mathcal{C})\), the forgetful functor_
\[A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}} \longrightarrow A\mbox{-}\mathsf{mod}^{\mathrm{fact},\mathrm{non-untl}}( \mathcal{C})_{X^{I}_{\mathrm{dR}}}\]
_factors canonically as_
\[A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\xrightarrow{(\mathrm{id},0)}A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\times\mathcal{C}_{X^{I}_{\mathrm{dR}}}\longrightarrow A\mbox{-}\mathsf{mod}^{\mathrm{fact},\mathrm{non-untl}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}},\]
_where the second functor is an equivalence. In particular, this forgetful functor is fully faithful._
### Renormalization
Finally, we formulate conditions under which it is possible to renormalize a factorization category.
**Proposition 3.9.1**.: _Let \(\mathcal{C}\) be a sheaf of categories on \(\mathrm{Ran}^{\mathrm{un}}_{\mathrm{dR}}\), and suppose we are given a t-structure on \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) for each finite set \(I\), compatible with the \(\mathsf{D}(X^{I})\)-action. Assume also that the following conditions are satisfied:_
1. \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) _is almost ULA generated for any_ \(I\) _in_ \(\mathsf{fSet}\)_;_
2. _for any surjection_ \(I\to J\)_, the functor_ \(\mathcal{C}_{X^{J}_{\mathrm{dR}}}\rightarrow\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) _left adjoint to the structure functor is t-exact;_
3. _for any injection_ \(I\to J\)_, the structure functor_ \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\rightarrow\mathcal{C}_{X^{J}_{\mathrm{dR}}}\) _has finite homological amplitude._
_Then there exists a sheaf of categories \(\mathcal{C}^{\mathrm{ren}}\) on \(\mathrm{Ran}^{\mathrm{un}}_{\mathrm{dR}}\) equipped with a morphism \(\mathcal{C}^{\mathrm{ren}}\rightarrow\mathcal{C}\), such that for any finite set \(I\) the functor \((\mathcal{C}^{\mathrm{ren}})_{X^{I}_{\mathrm{dR}}}\rightarrow\mathcal{C}_{X^{I}_{ \mathrm{dR}}}\) realizes \((\mathcal{C}^{\mathrm{ren}})_{X^{I}_{\mathrm{dR}}}\) as the renormalization of \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) constructed in Proposition 2.7.1._
Proof.: The almost ULA generation condition implies in particular that each \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) is almost compactly generated, and conditions (ii-iii) imply that the structure functor \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\to\mathcal{C}_{X^{J}_{\mathrm{dR}}}\) has finite homological amplitude for any map of finite sets \(I\to J\). Applying the description of \(\mathsf{ShvCat}(\mathrm{Ran}_{\mathrm{dR}}^{\mathrm{un}})^{\mathrm{naive}}\) from Proposition 3.2.1 and the construction of Proposition 2.7.1, we obtain an object of
\[\underset{I\in\mathsf{fSet}}{\operatorname{laxlim}}\,\mathsf{D}(X^{I})\text{-}\mathsf{mod},\]
which lies in the image of \(\mathsf{ShvCat}(\mathrm{Ran}_{\mathrm{dR}}^{\mathrm{un}})^{\mathrm{naive}}\) by Propositions 2.8.2 and 2.10.3.
**Proposition 3.9.2**.: _Let \(\mathcal{C}\) be an object of \(\mathsf{FactCat}^{\mathrm{lax-fact}}\). Suppose that we are given a t-structure on \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\) for any finite set \(I\) satisfying the conditions of Proposition 3.9.1, and that for any finite sets \(I_{1},I_{2}\), the image of the composite functor_
\[\mathcal{C}_{X^{I_{1}}_{\mathrm{dR}}}^{+}\times\mathcal{C}_{X^{I_{2}}_{ \mathrm{dR}}}^{+}\longrightarrow\mathcal{C}_{X^{I_{1}}_{\mathrm{dR}}} \otimes\mathcal{C}_{X^{I_{2}}_{\mathrm{dR}}}\longrightarrow\mathsf{D}((X^{I_ {1}}\times X^{I_{2}})_{\mathrm{disj}})\underset{\mathsf{D}(X^{I_{1}\sqcup I_{2 }})}{\otimes}\mathcal{C}_{X^{I_{1}\sqcup I_{2}}_{\mathrm{dR}}}\]
_is contained in_
\[(\mathsf{D}((X^{I_{1}}\times X^{I_{2}})_{\mathrm{disj}})\underset{\mathsf{D}(X ^{I_{1}\sqcup I_{2}})}{\otimes}\mathcal{C}_{X^{I_{1}\sqcup I_{2}}_{\mathrm{ dR}}})^{+}.\]
_Then the sheaf of categories \(\mathcal{C}^{\mathrm{ren}}\) on \(\mathrm{Ran}_{\mathrm{dR}}^{\mathrm{un}}\) from Proposition 3.9.1 naturally lifts to an object of \(\mathsf{FactCat}^{\mathrm{lax-fact}}\)._
Proof.: Apply Proposition 3.2.2 to view \(\mathcal{C}\) as a lax symmetric monoidal section of (3.2.1). By Proposition 2.9.2, for any object \(p:I\to J\) in \(\mathrm{Part}_{\mathrm{un}}\), the category
\[\mathcal{C}_{U(p)}=\mathsf{D}(U(p))\underset{\mathsf{D}(X^{I})}{\otimes} \mathcal{C}_{X^{I}_{\mathrm{dR}}}\]
is almost ULA generated over \(U(p)\) with respect to the unique t-structure that makes \(\mathcal{C}_{X^{I}_{\mathrm{dR}}}\to\mathcal{C}_{U(p)}\) t-exact. Given objects \(p_{1}:I_{1}\to J_{1}\), \(p_{2}:I_{2}\to J_{2}\) and a morphism \(\epsilon:p_{1}\to p_{2}\) in \(\mathrm{Part}_{\mathrm{un}}\), we claim that the corresponding structure functor \(\mathcal{C}_{U(p_{1})}\to\mathcal{C}_{U(p_{2})}\) has finite homological amplitude. This follows from hypotheses (ii-iii) of Proposition 3.9.1 and the commutative square
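(which should be of the form

\[\begin{CD}
\mathcal{C}_{X^{I_{1}}_{\mathrm{dR}}} @>>> \mathcal{C}_{X^{I_{2}}_{\mathrm{dR}}}\\
@VVV @VVV\\
\mathcal{C}_{U(p_{1})} @>>> \mathcal{C}_{U(p_{2})}
\end{CD}\])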
since the vertical functors are t-exact with fully faithful and left t-exact right adjoints. Similarly, the structure functor \(\mathcal{C}_{U(p_{1})}\otimes\mathcal{C}_{U(p_{2})}\to\mathcal{C}_{U(p_{1} \sqcup p_{2})}\) coming from the lax symmetric monoidal structure sends \(\mathcal{C}_{U(p_{1})}^{+}\times\mathcal{C}_{U(p_{2})}^{+}\) into \(\mathcal{C}_{U(p_{1}\sqcup p_{2})}^{+}\), which we can see from the commutative square
Combining the above observations, it follows that applying (2.6.1) termwise to the categories \(\mathcal{C}_{U(p)}\) yields another lax symmetric monoidal section of (3.2.1). This section satisfies condition (i) of Proposition 3.2.2 by Proposition 2.10.3, and condition (ii) is trivial to verify.
Let \((\mathsf{FactCat}^{\text{\rm{lax-fact}}})^{\otimes}_{\text{\rm{t-str}}}\) be the pseudo-tensor category whose objects consist of \(\mathcal{C}\) in \(\mathsf{FactCat}^{\text{\rm{lax-fact}}}\) together with a t-structure on \(\mathcal{C}_{X^{I}_{\text{\rm{dR}}}}\) for every finite set \(I\), and whose \(n\)-ary morphisms are the same as those in the symmetric monoidal category \(\mathsf{FactCat}^{\text{\rm{lax-fact}}}\), i.e. we do not impose any t-exactness conditions on functors.
Denote by
\[(\mathsf{FactCat}^{\text{\rm{lax-fact}}})^{\otimes}_{\text{\rm{aULA}}}\subset( \mathsf{FactCat}^{\text{\rm{lax-fact}}})^{\otimes}_{\text{\rm{t-str}}}\]
the pseudo-tensor subcategory whose objects \(\mathcal{C}\) satisfy the conditions of Propositions 3.9.1 and 3.9.2, and whose \(n\)-ary morphisms \((\mathcal{C}_{1},\cdots,\mathcal{C}_{n})\to\mathcal{D}\) satisfy the condition that the image of
\[(\mathcal{C}_{1})^{+}_{X^{I}_{\text{\rm{dR}}}}\times\cdots\times(\mathcal{C}_ {n})^{+}_{X^{I}_{\text{\rm{dR}}}}\longrightarrow(\mathcal{C}_{1})_{X^{I}_{ \text{\rm{dR}}}}\underset{\mathsf{D}(X^{I})}{\otimes}\cdots\underset{\mathsf{ D}(X^{I})}{\otimes}(\mathcal{C}_{n})_{X^{I}_{\text{\rm{dR}}}}\longrightarrow \mathcal{D}_{X^{I}_{\text{\rm{dR}}}}\]
is contained in \(\mathcal{D}^{+}_{X^{I}_{\text{\rm{dR}}}}\) for any finite set \(I\).
**Proposition 3.9.3**.: _The assignment \(\mathcal{C}\mapsto\mathcal{C}^{\text{\rm{ren}}}\) of Proposition 3.9.2 naturally lifts to a morphism of pseudo-tensor categories_
\[(\mathsf{FactCat}^{\text{\rm{lax-fact}}})^{\otimes}_{\text{\rm{aULA}}} \longrightarrow(\mathsf{FactCat}^{\text{\rm{lax-fact}}})^{\otimes}_{\text{ \rm{aULA}}}.\]
Proof.: This is immediate, since the construction of Proposition 3.9.2 is induced by the morphism of pseudo-tensor categories (2.6.1).
## 4. A local acyclicity theorem
The main goal of this section is to prove Theorem 4.6.1, which says in particular that the vacuum factorization module for a commutative algebra almost of finite type is almost ULA. From this we deduce Corollary 4.6.1.1, which says that the category of factorization modules over a commutative algebra of finite type satisfies the conditions of Proposition 3.9.2 and hence can be renormalized, provided that the same is true for the category of commutative modules over that algebra.
### Representations as a factorization category
Below we collect some facts about the factorization category \(\mathsf{Rep}(H)\), where \(H\) is an algebraic group.
The forgetful functor \(\operatorname{oblv}_{H}:\mathsf{Rep}(H)\to\operatorname{Vect}\) is symmetric monoidal, and hence gives rise to a morphism in \(\mathsf{FactCat}\).
**Proposition 4.1.1**.: _For any finite set \(I\), the functor_
\[\operatorname{oblv}_{H}:\mathsf{Rep}(H)_{X^{I}_{\text{\rm{dR}}}}\longrightarrow \mathsf{D}(X^{I})\]
_is conservative, and there is a unique t-structure on \(\mathsf{Rep}(H)_{X^{I}_{\text{\rm{dR}}}}\) which makes this functor t-exact. The morphism \(\operatorname{oblv}_{H}\) admits a right adjoint \(\operatorname{coind}_{H}\) in \(\mathsf{FactCat}_{\text{\rm{lax-untl}}}\), and for any finite set \(I\), the functor_
\[\operatorname{coind}_{H}:\mathsf{D}(X^{I})\longrightarrow\mathsf{Rep}(H)_{X^{I}_{\text{\rm{dR}}}}\]
_is t-exact._
Proof.: See Propositions 6.22.1 and 6.24.1 of [11].
**Proposition 4.1.2**.: _The factorization category \(\mathsf{Rep}(H)\), with the t-structure on \(\mathsf{Rep}(H)_{X^{I}_{\text{\rm{dR}}}}\) defined as above for each finite set \(I\), satisfies the conditions of Proposition 3.9.2, and_
\[\mathsf{Rep}(H)^{\text{\rm{ren}}}\longrightarrow\mathsf{Rep}(H)\]
_is an isomorphism._
Proof.: The t-structure on \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) inherits compatibility with filtered colimits and right completeness from \(\mathsf{D}(X^{I})\) via the conservative and t-exact functor \(\operatorname{oblv}_{H}\). To see that this t-structure is almost compactly generated, it remains to prove that if an object \(\mathcal{M}\) is compact in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}^{\geqslant 0}\), then it is almost compact in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). By the previous proposition, we have the \(\mathsf{D}(X^{I})\)-linear adjunction
\[\operatorname{oblv}_{H}:\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\rightleftarrows \mathsf{D}(X^{I}):\operatorname{coind}_{H}\]
with both functors t-exact. Thus \(\operatorname{oblv}_{H}(\mathcal{M})\) is compact in \(\mathsf{D}(X^{I})^{\geqslant 0}\), hence almost compact because \(\mathsf{D}(X^{I})\) is almost compactly generated. Now Lemma 2.4.1 implies that \(\mathcal{M}\) is almost compact.
If \(\{V_{\alpha}\}\) is a set of compact generators for \(\mathsf{Rep}(H)^{\otimes I}\), then \(\{\operatorname{Loc}_{X^{I}}(V_{\alpha})\}\) is a set of ULA generators for \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). Note that \(\operatorname{Loc}_{X^{I}}\) is t-exact up to shift, and in particular each of these objects is eventually coconnective. It follows that \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) is almost ULA generated and that
\[(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}})^{\operatorname{ren}}\tilde{\longrightarrow}\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}.\]
It is not difficult to see that the other conditions in Propositions 3.9.1 and 3.9.2 are inherited from \(\mathsf{D}(X^{I})\) via \(\operatorname{oblv}_{H}\).
### The t-structure on factorization modules
Let \(A\) be a commutative algebra in \(\mathsf{Rep}(H)^{\leqslant 0}\).
**Proposition 4.2.1**.: _For any finite set \(I\), there is a unique t-structure on \(A\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) which makes the forgetful functor_
\[\operatorname{oblv}^{\mathrm{fact}}:A\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\]
_t-exact._
We will need the following lemma.
**Lemma 4.2.2**.: _If \(Y\) and \(Z\) are classical local complete intersection schemes and \(f:Y\to Z\) is finite flat, then \(f^{!}:\mathsf{D}(Z)\to\mathsf{D}(Y)\) is t-exact._
Proof.: First, we claim that the forgetful functor
\[\operatorname{oblv}_{Y}:\mathsf{D}(Y)\longrightarrow\mathsf{IndCoh}(Y)\]
is t-exact. Recall that for any scheme \(Y\) the functor \(\operatorname{oblv}_{Y}\) is left t-exact, so it suffices to prove right t-exactness. Since the claim is Zariski local on \(Y\), we can assume that there exists a regular closed embedding \(i:Y\to W\) where \(W\) is smooth. The composite functor
\[\mathsf{D}(Y)\xrightarrow{\operatorname{oblv}_{Y}}\mathsf{IndCoh}(Y)\xrightarrow{i_{*}}\mathsf{IndCoh}(W)\]
it is therefore enough to prove that \(f^{!}:\mathsf{IndCoh}(Z)\longrightarrow\mathsf{IndCoh}(Y)\) is t-exact. This functor is left t-exact because \(f\) is finite. On the other hand, since \(Y\) is Gorenstein, so is the morphism \(f\), which implies right t-exactness.
Now we can prove the proposition.
Proof of Proposition 4.2.1.: First, we claim that if such a t-structure exists on
\[A\mathsf{-mod}^{\mathrm{fact},\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}},\]
then the proposition will follow. Namely, by Proposition 3.8.4 we have
\[A\mathsf{-mod}^{\mathrm{fact},\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}\cong A\mathsf{-mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}\times\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}.\]
It therefore suffices to observe that the direct factor \(0\times\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) is stable under truncation functors.
Thus we are reduced to proving the proposition for non-unital factorization modules. In that case, by [13] Corollary 7.3.5 we have the Koszul duality equivalence
\[A\mathsf{-mod}^{\mathrm{fact},\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\tilde{\longrightarrow}(A\otimes\omega_{X}[-1])\mathsf{-mod}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}},\]
where \((A\otimes\omega_{X}[-1])\) is the commutative chiral algebra attached to \(A\). This equivalence intertwines the forgetful functors to \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). If \(\mathcal{M}\) is an object of
\[(A\otimes\omega_{X}[-1])\mathsf{-mod}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}},\]
we claim that the truncation \(\tau^{\leq 0}\mathcal{M}\) taken in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) admits a chiral \((A\otimes\omega_{X}[-1])\)-module structure compatible with the morphism \(\tau^{\leq 0}\mathcal{M}\to\mathcal{M}\). It will then follow that
\[\mathrm{Hom}_{(A\otimes\omega_{X}[-1])\mathsf{-mod}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}}(\mathcal{N},\tau^{\leq 0}\mathcal{M})\tilde{\longrightarrow}\mathrm{Hom}_{(A\otimes\omega_{X}[-1])\mathsf{-mod}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}}(\mathcal{N},\mathcal{M})\]
for any \(\mathcal{N}\) which is connective in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\), since
\[\mathrm{Hom}_{\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}}(\mathcal{N},\tau^{>0} \mathcal{M})=0\]
and \(\mathrm{oblv}_{A\otimes\omega_{X}[-1]}\) is conservative.
Let \(i:Z_{I}\to X\times X^{I}\) denote the inclusion of the incidence divisor and \(j:(X\times X^{I})_{\mathrm{disj}}\to X\times X^{I}\) its complement, and write \(p:Z_{I}\to X^{I}\) for the projection. Note that by Lemma 4.2.2, the functor
\[i_{\mathrm{dR},*}p^{!}:\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\longrightarrow \mathsf{Rep}(H)_{X_{\mathrm{dR}}}\otimes\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\]
is t-exact. The structure map of \(\mathcal{M}\) is a morphism
\[j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M})\longrightarrow i_{ \mathrm{dR},*}p^{!}\mathcal{M}\]
in \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\otimes\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). The object
\[j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\tau^{\leq 0}\mathcal{M})\]
is connective, and hence the composition
\[j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\tau^{\leq 0}\mathcal{M}) \longrightarrow j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M}) \longrightarrow i_{\mathrm{dR},*}p^{!}\mathcal{M}\]
factors uniquely through
\[\tau^{\leq 0}i_{\mathrm{dR},*}p^{!}\mathcal{M}=i_{\mathrm{dR},*}p^{!}\tau^{ \leq 0}\mathcal{M}.\]
The same reasoning applies to \(n\)-ary chiral operations for \(n>1\), which yields the desired chiral module structure on \(\tau^{\leq 0}\mathcal{M}\).
### Commutative factorization modules
We continue to assume that \(A\) is connective. Recall that \(A\) determines a commutative factorization algebra in the commutative factorization category \(\mathsf{Rep}(H)\), so that in particular we obtain an object \(A_{X^{I}}\) in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) for any finite set \(I\).
**Lemma 4.3.1**.: _For any finite set \(I\), we have \(H^{i}(A_{X^{I}})=0\) for all \(i>-\#I\). If \(A\) is eventually coconnective, then so is \(A_{X^{I}}\)._
Proof.: It suffices to prove both claims after applying the t-exact and conservative functor
\[\mathrm{oblv}_{H}:\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathsf{ D}(X^{I}),\]
so we can assume that \(H\) is trivial.
The first assertion can be proved by the same argument as Lemma 6.24.3 of [10].
The second assertion is clear from the Cousin filtration, whose subquotients have the form \(j_{\mathrm{dR},*}(A^{\otimes J}\otimes\omega_{Z(p)})\), where \(p:I\to J\) is surjective and \(j:Z(p)\to X^{I}\) is the inclusion of the corresponding stratum.
The symmetric monoidal functor
\[\mathrm{ind}_{A}:\mathsf{Rep}(H)\longrightarrow A\text{--}\mathsf{mod}( \mathsf{Rep}(H))\]
gives rise to a morphism of factorization categories. Since these symmetric monoidal categories are rigid, Lemma 3.4.2 says that \(\mathrm{ind}_{A}\) admits a right adjoint
\[\mathrm{oblv}_{A}:A\text{--}\mathsf{mod}(\mathsf{Rep}(H))\longrightarrow \mathsf{Rep}(H)\]
in \(\mathsf{FactCat}_{\text{\rm{lax-untl}}}\).
**Proposition 4.3.2**.: _For any finite set \(I\), the category \(A\text{--}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) admits a unique t-structure such that_
\[\mathrm{oblv}_{A}:A\text{--}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}} \longrightarrow\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\]
_is t-exact._
Proof.: The functor \(\mathrm{oblv}_{A}\) admits the \(\mathsf{D}(X^{I})\)-linear left adjoint \(\mathrm{ind}_{A}\), and the monad \(\mathrm{oblv}_{A}\circ\mathrm{ind}_{A}\) on \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) is given by tensoring with the commutative algebra \(A_{X^{I}}\). Since the tensor product in \(\mathsf{D}(X^{I})\) has homological amplitude \(\#I\), the same holds for \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\), so Lemma 4.3.1 implies that this monad is right t-exact. The proposition follows.
The forgetful functor from commutative to factorization \(A\)-modules is a morphism
\[\mathrm{oblv}^{\mathrm{com}\to\mathrm{fact}}:A\text{--}\mathsf{mod}(\mathsf{ Rep}(H))\longrightarrow A\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))\]
in \(\mathsf{FactCat}\). It is compatible with the forgetful functors to \(\mathsf{Rep}(H)\), and in particular the functor
\[\mathrm{oblv}^{\mathrm{com}\to\mathrm{fact}}:A\text{--}\mathsf{mod}(\mathsf{ Rep}(H))_{X^{I}_{\mathrm{dR}}}\longrightarrow A\text{--}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\]
is t-exact for any finite set \(I\).
By definition, the _vacuum factorization module_\(\mathrm{Vac}_{A}\) in \(A\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))\) is the unit object for the factorization structure. The unitality of \(\mathrm{oblv}^{\mathrm{com}\to\mathrm{fact}}\) implies that
\[\mathrm{Vac}_{A}=\mathrm{oblv}^{\mathrm{com}\to\mathrm{fact}}A.\]
Let \(\mathcal{A}\) be a symmetric monoidal DG category, \(\mathcal{C}\) an \(\mathcal{A}\)-module category, and \(\mathcal{C}_{0}\subset\mathcal{C}\) a (not necessarily stable) full subcategory. For any associative algebra \(A\) in \(\mathcal{A}\), we can consider
\[A\text{-}\mathsf{mod}(\mathcal{C}_{0}):=A\text{-}\mathsf{mod}(\mathcal{C}) \underset{\mathcal{C}}{\times}\mathcal{C}_{0}.\]
We will use the following formal lemma.
**Lemma 4.4.1**.: _Suppose that \(A\to B\) is a morphism of associative algebras in \(\mathcal{A}\) with the property that_
\[\operatorname{Hom}_{\mathcal{C}}(B^{\otimes n}\otimes M,N)\longrightarrow \operatorname{Hom}_{\mathcal{C}}(A^{\otimes n}\otimes M,N)\]
_is an isomorphism for all \(n\geqslant 1\) and all \(M,N\) in \(\mathcal{C}_{0}\). Then the restriction of scalars functor_
\[B\text{-}\mathsf{mod}(\mathcal{C}_{0})\longrightarrow A\text{-}\mathsf{mod}( \mathcal{C}_{0})\]
_is an equivalence._
**Lemma 4.4.2**.: _Fix \(m\geqslant 0\). Let \(A\to B\) be a morphism of connective commutative algebras in \(\mathsf{Rep}(H)\) such that \(H^{i}(A)\tilde{\to}H^{i}(B)\) for all \(i\geqslant-m\). Then for any finite set \(I\), we have_
\[H^{i}(A_{X^{I}})\tilde{\to}H^{i}(B_{X^{I}})\]
_for all \(i\geqslant-\#I-m\)._
Proof.: As in the proof of Lemma 4.3.1, we can assume that \(H\) is trivial.
Let us treat the case \(\#I=2\) for simplicity, the general case being similar. Fix \(i\geqslant-2-m\). Our hypothesis implies that
\[H^{i+1}(A\otimes\!\Delta_{\mathrm{dR},*}\omega_{X})=H^{i+2}(A)\otimes\!\Delta_ {\mathrm{dR},*}\omega_{X}[-1]\longrightarrow H^{i+2}(B)\otimes\!\Delta_{ \mathrm{dR},*}\omega_{X}[-1]=H^{i+1}(B\otimes\!\Delta_{\mathrm{dR},*}\omega_{X})\]
is an isomorphism. Since \(A\) and \(B\) are connective, we have
\[H^{i+2}(A\otimes A)\tilde{\to}H^{i+2}(B\otimes B),\]
from which we deduce that
\[H^{i}(A\otimes A\otimes j_{*}j^{*}\omega_{X^{2}})\tilde{\to}H^{i}(B\otimes B \otimes j_{*}j^{*}\omega_{X^{2}}).\]
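Here the Cousin triangle in question should be the excision triangle for the diagonal \(\Delta:X\to X^{2}\), which for \(A_{X^{2}}\) presumably reads

\[A\otimes\Delta_{\mathrm{dR},*}\omega_{X}\longrightarrow A_{X^{2}}\longrightarrow A\otimes A\otimes j_{*}j^{*}\omega_{X^{2}}.\]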
On the other hand, the surjectivity of \(H^{i+2}(A\otimes A)\to H^{i+2}(A)\) implies that the connecting homomorphism
\[H^{i}(A\otimes A\otimes j_{*}j^{*}\omega_{X^{2}})\longrightarrow H^{i+1}(A \otimes\Delta_{\mathrm{dR},*}\omega_{X})\]
for the Cousin triangle is surjective, and likewise for \(B\), which implies the desired isomorphism.
**Lemma 4.4.3**.: _In the situation of the previous lemma, the restriction of scalars functors_
\[B\text{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\leqslant 0, \geqslant-m}\longrightarrow A\text{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}^{\leqslant 0,\geqslant-m}\]
_and_
\[B\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{ \leqslant 0,\geqslant-m}\longrightarrow A\text{-}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\leqslant 0,\geqslant-m}\]
_are equivalences._
Proof.: For the commutative case, we saw in the proof of Proposition 4.3.2 that
\[A_{X^{I}}\text{-}\mathsf{mod}(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}})\tilde{ \to}A\text{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}.\]
For any \(\mathcal{M}\) in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}^{\leqslant 0,\geqslant-m}\) and any \(n\geqslant 1\), Lemmas 4.3.1 and 4.4.2 imply that
\[H^{i}(A_{X^{I}}^{\otimes n}\overset{!}{\otimes}\mathcal{M})\tilde{\to}H^{i}(B _{X^{I}}^{\otimes n}\overset{!}{\otimes}\mathcal{M})\]
for all \(i\geqslant-m\), since the tensor product in \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) has cohomological amplitude \(\#I\). Now apply Lemma 4.4.1.
For the second equivalence, note that it suffices to prove the corresponding claim for non-unital factorization modules. Namely, by Proposition 3.8.4 and the definition of the t-structure on unital modules, we have
\[(A\operatorname{\mathsf{--mod}}_{X^{I}_{\mathrm{dR}}}^{\mathrm{fact}})^{\leq 0, \geqslant-m}=A\operatorname{\mathsf{--mod}}_{X^{I}_{\mathrm{dR}}}^{\mathrm{fact}} \cap(A\operatorname{\mathsf{--mod}}_{X^{I}_{\mathrm{dR}}}^{\mathrm{fact,non- untl}})^{\leq 0,\geqslant-m}.\]
As for the non-unital case, we use the equivalence
\[A\operatorname{\mathsf{--mod}}^{\mathrm{fact,non-untl}}(\mathsf{Rep}(H))_{X^{I} _{\mathrm{dR}}}\operatorname{\tilde{\longrightarrow}}(A\otimes\omega_{X}[-1]) \operatorname{\mathsf{--mod}}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR }}},\]
which commutes with the forgetful functors to \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\) and in particular is t-exact. The hypotheses imply that \(A\otimes\omega_{X}[-1]\) and \(B\otimes\omega_{X}[-1]\) are connective in \(\mathsf{D}(X)\), and that
\[H^{i}(A\otimes\omega_{X}[-1])\operatorname{\tilde{\longrightarrow}}H^{i}(B \otimes\omega_{X}[-1])\]
for \(i\geqslant-m\). Letting \(\operatorname{Ran}_{I}\) denote the space of finite subsets of \(X\) equipped with an \(I\)-tuple of marked points, the map
\[(\operatorname{Ran}\times\operatorname{Ran}_{I})_{\mathrm{disj}}\longrightarrow \operatorname{Ran}_{I}\]
given by disjoint union is étale. It follows that for any \(\mathcal{M}\) in \(\mathsf{Rep}(H)^{\leqslant 0}_{X^{I}_{\mathrm{dR}}}\), any \(n\geqslant 1\), and any nonempty finite set \(I^{\prime}\), we have
\[H^{i}(((A\otimes\omega_{X}[-1])^{\otimes n}\operatorname{\stackrel{{ \mathrm{ch}}}{{\otimes}}}\mathcal{M})|^{!}_{X^{I^{\prime}\sqcup I}}) \operatorname{\tilde{\longrightarrow}}H^{i}(((B\otimes\omega_{X}[-1])^{ \otimes n}\operatorname{\stackrel{{\mathrm{ch}}}{{\otimes}}} \mathcal{M})|^{!}_{X^{I^{\prime}\sqcup I}})\]
for all \(i\geqslant-m\). Here we are restricting along the natural map \(X^{I^{\prime}\sqcup I}\to\operatorname{Ran}_{I}\), and \(\operatorname{\stackrel{{\mathrm{ch}}}{{\otimes}}}\) denotes the chiral action of \(\mathsf{Rep}(H)_{\operatorname{Ran}_{\mathrm{dR}}}\) on \(\mathsf{Rep}(H)_{\operatorname{Ran}_{I,\mathrm{dR}}}\) (cf. [12] §7.2.4). Now the claim follows from Lemma 4.4.1.
**Proposition 4.4.4**.: _For any commutative algebra \(A\) in \(\mathsf{Rep}(H)^{\leqslant 0}\), the t-exact functor \(\operatorname{oblv}^{\operatorname{\mathsf{com}}\to\operatorname{fact}}\) restricts to an equivalence_
\[(A\operatorname{\mathsf{--mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}})^{ \heartsuit}\operatorname{\tilde{\longrightarrow}}(A\operatorname{\mathsf{-- mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}})^{\heartsuit}.\]
Proof.: Consider the commutative square
\[\begin{CD}H^{0}(A)\operatorname{\mathsf{--mod}}(\mathsf{Rep}(H))^{\heartsuit}_{X ^{I}_{\mathrm{dR}}}@>{\operatorname{oblv}^{\operatorname{\mathsf{com}}\to \operatorname{fact}}}>{}>H^{0}(A)\operatorname{\mathsf{--mod}}^{\mathrm{fact}}( \mathsf{Rep}(H))^{\heartsuit}_{X^{I}_{\mathrm{dR}}}\\ @V{}V{}V@V{}V{}V\\ A\operatorname{\mathsf{--mod}}(\mathsf{Rep}(H))^{\heartsuit}_{X^{I}_{\mathrm{dR}}}@>{ \operatorname{oblv}^{\operatorname{\mathsf{com}}\to\operatorname{fact}}}>{}>A \operatorname{\mathsf{--mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))^{\heartsuit}_{X^{I }_{\mathrm{dR}}}\end{CD}\]
where the vertical functors, given by restriction of scalars along \(A\to H^{0}(A)\), are equivalences by Lemma 4.4.3. Thus we can assume that \(A\) is classical, in which case it is immediate from the definitions that \(\operatorname{oblv}^{\operatorname{\mathsf{com}}\to\operatorname{fact}}\) is fully faithful at the level of abelian categories. It remains to prove essential surjectivity.
Recall the t-exact equivalence
\[A\operatorname{\mathsf{--mod}}^{\mathrm{fact,non-untl}}(\mathsf{Rep}(H))_{X^{I} _{\mathrm{dR}}}\operatorname{\tilde{\longrightarrow}}(A\otimes\omega_{X}[-1]) \operatorname{\mathsf{--mod}}^{\mathrm{ch}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR }}}.\]
Writing \(i:Z_{I}\to X\times X^{I}\) and \(p:Z_{I}\to X^{I}\) for the inclusion of the incidence divisor and the projection as before, the structure map of a chiral module \(\mathcal{M}\) has the form
\[j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M})\longrightarrow i_{ \mathrm{dR},*}p^{!}\mathcal{M}.\]
Objects in the essential image of
\[\operatorname{oblv}^{\operatorname{\mathsf{com}}\to\operatorname{fact}}:A \operatorname{\mathsf{--mod}}^{\mathrm{non-untl}}(\mathsf{Rep}(H))^{\heartsuit}_{X^{I }_{\mathrm{dR}}}\longrightarrow A\operatorname{\mathsf{--mod}}^{\mathrm{fact,non- untl}}(\mathsf{Rep}(H))^{\heartsuit}_{X^{I}_{\mathrm{dR}}}\]
correspond under the above equivalence to those \(\mathcal{M}\) such that the composite map
\[(A\otimes\omega_{X}[-1])\boxtimes\mathcal{M}\longrightarrow j_{*}j^{*}((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M})\longrightarrow i_{\mathrm{dR},*}p^{!}\mathcal{M} \tag{4.4.1}\]
vanishes. But notice that the functor \(i^{*}_{\mathrm{dR}}\) is well-defined on \((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M}\), and moreover
\[i^{*}_{\mathrm{dR}}((A\otimes\omega_{X}[-1])\boxtimes\mathcal{M})=A\otimes \mathcal{M}[1].\]
Since \(A\) is classical and the \(\operatorname{D}\)-module underlying \(\mathcal{M}\) belongs to \(\operatorname{D}(X^{I})^{\heartsuit}\), the map
\[A\otimes\mathcal{M}[1]\longrightarrow\mathcal{M}\]
corresponding to (4.4.1) vanishes. Thus
\[A\operatorname{-\mathsf{mod}}^{\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}^{\heartsuit}=A\operatorname{-\mathsf{mod}}^{\mathrm{fact}, \mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit}.\]
The assertion for unital modules follows immediately, since
\[A\operatorname{-\mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit }=A\operatorname{-\mathsf{mod}}^{\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}^{\heartsuit}\cap A\operatorname{-\mathsf{mod}}(\mathsf{Rep}(H))_ {X^{I}_{\mathrm{dR}}}\]
as subcategories of
\[A\operatorname{-\mathsf{mod}}^{\mathrm{non-untl}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}},\]
and likewise for factorization modules.
### Local acyclicity of the vacuum module
We continue to let \(A\) denote a commutative algebra in \(\mathsf{Rep}(H)^{\leq 0}\). For any finite set \(I\), the functor
\[\operatorname{oblv}^{\mathrm{com}\to\operatorname{fact}}:A\operatorname{- \mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\longrightarrow A \operatorname{-\mathsf{mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}\]
is t-exact, and hence its (not necessarily continuous) right adjoint \(\operatorname{coind}^{\mathrm{fact}\to\operatorname{com}}\) is left t-exact.
**Proposition 4.5.1**.: _For any finite set \(I\), the following conditions are equivalent:_
1. _the_ \(\operatorname{D}(X^{I})\)_-linear functor_ \[\operatorname{oblv}^{\mathrm{com}\to\operatorname{fact}}:A\operatorname{- \mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\longrightarrow A \operatorname{-\mathsf{mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}\] _preserves almost ULA objects;_
2. _the functor_ \[\operatorname{coind}^{\mathrm{fact}\to\operatorname{com}}:A\operatorname{- \mathsf{mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{+} \longrightarrow A\operatorname{-\mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}^{+}\] _preserves filtered colimits bounded uniformly from below and is_ \(\operatorname{D}(X^{I})^{+}\)_-linear;_
3. _the vacuum module_ \(\operatorname{Vac}_{A,X^{I}}\) _is almost ULA in_ \[A\operatorname{-\mathsf{mod}}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{ \mathrm{dR}}}.\]
_Moreover, these equivalent conditions hold for the pair \((A,H)\) if and only if they hold for \((A,1)\), i.e. \(A\) viewed as a commutative algebra in \(\operatorname{Vect}^{\leq 0}\)._
Proof.: Since \(A\operatorname{-\mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is ULA generated, Lemma 4.5.2 below implies that (i) is equivalent to (ii).
Condition (i) implies (iii) because the unit object in \(A\operatorname{-\mathsf{mod}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is ULA.
Consider the commutative square of \(\operatorname{D}(X^{I})\)-modules
\[\begin{CD}
A\text{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}} @>{\operatorname{oblv}_{H}}>> A\text{-}\mathsf{mod}_{X^{I}_{\mathrm{dR}}}\\
@V{\operatorname{oblv}^{\mathrm{com}\to\mathrm{fact}}}VV @VV{\operatorname{oblv}^{\mathrm{com}\to\mathrm{fact}}}V\\
A\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}} @>{\operatorname{oblv}_{H}}>> A\text{-}\mathsf{mod}^{\mathrm{fact}}_{X^{I}_{\mathrm{dR}}}.
\end{CD}\]
The horizontal functors are t-exact and conservative, and both admit \(\mathsf{D}(X^{I})\)-linear and t-exact right adjoints, induced by the right adjoint
\[\operatorname{coind}_{H}:\mathsf{D}(X^{I})\longrightarrow\mathsf{Rep}(H)_{X^{I}_ {\operatorname{dR}}}\]
of \(\operatorname{oblv}_{H}\). In particular, the horizontal functors preserve almost ULA objects. Applying Lemma 2.4.1, it follows immediately that (ii) implies (i), and that (iii) is equivalent to (iv).
As shown in the proof of Proposition 4.3.2, the object \(A_{X^{I}}\) is ULA, hence almost ULA, in \(A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\mathsf{Rep}(H))_{X^{I}_{ \operatorname{dR}}}\). Thus (i) implies (iii), and for the same reason (ii) implies (iv).
To complete the proof, it suffices to show that (iv) implies (ii). We will prove that the \(\mathsf{IndCoh}(X^{I})\)-linear functor
\[\operatorname{oblv}^{\operatorname{com}\to\operatorname{fact}}:A\operatorname {\mathsf{-mod}}_{X^{I}}\longrightarrow A\operatorname{\mathsf{-mod}}^{ \operatorname{fact}}_{X^{I}}\]
preserves almost compact objects, which will follow if we can show that
\[\operatorname{oblv}^{\operatorname{com}\to\operatorname{fact}}:A \operatorname{\mathsf{-mod}}^{\geqslant m}_{X^{I}}\longrightarrow(A \operatorname{\mathsf{-mod}}^{\operatorname{fact}}_{X^{I}})^{\geqslant m}\]
preserves compact objects for any \(m\in\mathbb{Z}\). As the claim is local on \(X\), we can assume for simplicity that \(X\) is affine. Then the DG category \(A\operatorname{\mathsf{-mod}}_{X^{I}}\) is compactly generated by the object \(A_{X^{I}}\), and hence the (non-stable) category \(A\operatorname{\mathsf{-mod}}^{\geqslant m}_{X^{I}}\) is compactly generated by the objects \(\tau^{\geqslant m}(A_{X^{I}}[-i])\) for \(i\geqslant 0\). Since \(\operatorname{oblv}^{\operatorname{com}\to\operatorname{fact}}\) is t-exact, we have
\[\operatorname{oblv}^{\operatorname{com}\to\operatorname{fact}}\tau^{\geqslant m}(A_{X^{I}}[-i])=\tau^{\geqslant m}(\operatorname{Vac}_{A,X^{I}}[-i]),\]
and the right side is compact in \((A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}_{X^{I}})^{\geqslant m}\) by hypothesis.
**Lemma 4.5.2**.: _Suppose that \(\mathcal{C}\) and \(\mathcal{D}\) are \(\mathsf{D}(S)\)-modules with compatible t-structures, and that \(F:\mathcal{C}\to\mathcal{D}\) is \(\mathsf{D}(S)\)-linear and t-exact. In particular, the (not necessarily continuous) right adjoint \(F^{\operatorname{R}}:\mathcal{D}\longrightarrow\mathcal{C}\) is left t-exact. If_
\[F^{\operatorname{R}}:\mathcal{D}^{+}\longrightarrow\mathcal{C}^{+}\]
_preserves filtered colimits bounded uniformly from below and is \(\mathsf{D}(S)^{+}\)-linear, then \(F\) preserves almost ULA objects. If we assume in addition that \(\mathcal{C}\) is ULA generated, then the converse holds._
Proof.: Cf. Proposition B.7.1 in [11] for the corresponding statement about ULA objects. Assume that \(F^{\operatorname{R}}\) preserves filtered colimits bounded uniformly from below, which implies that \(F\) preserves almost compact objects. We have the commutative square of t-exact functors
where the vertical functors are conservative and have the continuous right adjoints \(\operatorname{oblv}_{\mathcal{C}}\) and \(\operatorname{oblv}_{\mathcal{D}}\). Thus Lemma 2.4.1 implies that the upper horizontal functor preserves almost compact objects, from which it follows that \(F\) preserves almost ULA objects.
Now assume that \(\mathcal{C}\) is ULA generated and that \(F\) preserves almost ULA objects. The latter assumption implies that for any ULA object \(c\) in \(\mathcal{C}\), the object
\[F(\operatorname{oblv}_{\mathcal{C}}(c))=\operatorname{oblv}_{\mathcal{D}}(F(c))\]
is almost compact. Since \(\mathsf{IndCoh}(S)\otimes_{\mathsf{D}(S)}\mathcal{C}\) is generated as an \(\mathsf{IndCoh}(S)\)-module by objects of the form \(\operatorname{oblv}_{\mathcal{C}}(c)\), where \(c\) is ULA in \(\mathcal{C}\), it follows that
\[\operatorname{id}\otimes F:\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{ \otimes}\mathcal{C}\longrightarrow\mathsf{IndCoh}(S)\underset{\mathsf{D}(S)}{ \otimes}\mathcal{D}\]
preserves almost compact objects, and hence \(F\) does as well by the above commutative square. It follows immediately that
\[F^{\mathrm{R}}:\mathcal{D}^{+}\longrightarrow\mathcal{C}^{+}\]
preserves filtered colimits bounded uniformly from below. This functor is automatically lax \(\mathsf{D}(S)^{+}\)-linear, so it remains to prove that this structure is strict. Passing to right adjoints in the square above, we obtain another commutative square
where the vertical functors are conservative and \(\mathsf{D}(S)^{+}\)-linear. Thus it suffices to show that the lower horizontal functor is strictly \(\mathsf{IndCoh}(S)^{+}\)-linear. The claim is local on \(S\), which we can therefore assume is affine. But then \(\mathsf{IndCoh}(S)^{+}\) is generated by the unit object under finite colimits and filtered colimits bounded uniformly from below.
The following theorem plays a key technical role in this paper.
**Theorem 4.6.1**.: _If \(A\) is almost of finite type, then the equivalent conditions of Proposition 4.5.1 hold._
We also record the following important consequence of the theorem here.
**Corollary 4.6.1.1**.: _If \(A\) is of finite type and \(A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is coherent for every finite set \(I\), then \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))\) satisfies the hypotheses of Theorem 3.9.2. In particular, under these hypotheses \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is almost ULA generated for every finite set \(I\)._
Proof.: First, observe that the hypothesis that \(A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is coherent implies that it is almost ULA generated. This follows from Proposition 4.1.2 and the \(\mathsf{D}(X^{I})\)-linear adjunction
\[\operatorname{ind}_{A}:\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\rightleftarrows A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}:\operatorname{oblv}_{A}.\]
Next, we claim that \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is coherent. By Lemma 2.4.7, it suffices to show that if \(M\) is compact in \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit}\), then \(M\) is almost compact. By Proposition 4.4.4, we have \(M=\operatorname{oblv}^{\mathrm{com}\to\mathrm{fact}}N\) for some \(N\) in \(A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit}\), which is therefore almost compact since \(A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) was assumed coherent. Now the theorem implies that \(M\) is almost compact.
Proposition 4.4.4 implies that the essential image of
\[\operatorname{oblv}^{\mathrm{com}\to\mathrm{fact}}:A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit}\longrightarrow A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}^{\heartsuit}\]
generates the target under colimits, which implies that \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) is almost ULA generated.
Condition (ii) of Proposition 3.9.1 is satisfied by \(A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))\) because it holds for \(\mathsf{Rep}(H)\), and the morphism
\[\operatorname{oblv}_{A}:A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H) )\longrightarrow\mathsf{Rep}(H)\]
in \(\mathsf{ShvCat}(\operatorname{Ran}^{\mathrm{un}}_{\mathrm{dR}})\) is conservative over \(X^{I}_{\mathrm{dR}}\) for every finite set \(I\). As for condition (iii), note that Lemma 4.3.1 implies that it holds for \(A\mbox{-}\mathsf{mod}(\mathsf{Rep}(H))\) because \(A\) was assumed to be of finite type, and in particular eventually coconnective. Since
\[\operatorname{oblv}^{\mathrm{com}\to\mathrm{fact}}:A\mbox{-}\mathsf{mod}( \mathsf{Rep}(H))\longrightarrow A\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{ Rep}(H))\]
is a morphism in \(\mathsf{ShvCat}(\mathrm{Ran}^{\mathrm{un}}_{\mathrm{dR}})\), the condition for \(A\)-\(\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(H))\) follows from Proposition 4.4.4. The remaining condition in Proposition 3.9.2 follows from the corresponding property of \(A\)-\(\mathsf{mod}(\mathsf{Rep}(H))\) in a similar fashion.
We remark that the hypothesis in the corollary holds if \(\mathrm{Spec}\,A\) is a homogeneous space for \(H\). Indeed, in that case we have
\[A\mbox{--}\mathsf{mod}(\mathsf{Rep}(H))\cong\mathsf{Rep}(H_{y})\]
where \(y\in(\mathrm{Spec}\,A)(k)\) is any geometric point and \(H_{y}\) is its stabilizer. On the other hand, if \(H\) is the trivial group, then \(A\mbox{--}\mathsf{mod}_{X^{I}_{\mathrm{dR}}}\) is usually not coherent.
### The case of a symmetric algebra
For the remainder of this section, we assume that \(H\) is trivial unless otherwise specified. As shown in Proposition 4.5.1, there is no loss of generality in proving Theorem 4.6.1 under this assumption.
Recall that the forgetful functor from chiral algebras to Lie-\(*\) algebras on \(X\) admits a left adjoint \(\mathrm{U}^{*\to\mathrm{ch}}\), called the _chiral enveloping algebra_. We emphasize that in this paper, all chiral algebras are non-unital, and hence our \(\mathrm{U}^{*\to\mathrm{ch}}\) is the augmentation ideal of the unital version of the chiral enveloping algebra.
Below we prove the key special case of Theorem 4.6.1 where \(A\) is a symmetric algebra with finitely many generators. An important point is that in this case, the chiral algebra corresponding to \(A\) is also the chiral enveloping algebra of a Lie-\(*\) algebra.
**Lemma 4.7.1**.: _Let \(A:=\mathrm{Sym}\,V\) where \(V\) is a complex of vector spaces. Write \(A^{+}:=\mathrm{Sym}^{>0}\,V\) for the augmentation ideal of \(A\), viewed as a non-unital commutative algebra. Then we have a canonical identification of chiral algebras_
\[\mathrm{U}^{*\to\mathrm{ch}}(V\otimes\omega_{X}[-1])\;\tilde{\longrightarrow}\;A^{+}\otimes\omega_{X}[-1].\]

For a chiral algebra \(B\), the resulting factorization structure on \(B\text{-}\mathsf{mod}^{\mathrm{ch}}\) corresponds to \(\operatorname{Vac}_{\operatorname{C}^{\mathrm{ch}}(B)}\), and in particular the same object of \(\mathsf{D}(X^{I})\) underlies both.
For any Lie-\(*\) algebra \(L\) on \(X\), we have an adjunction
\[\operatorname{ind}^{*\to\operatorname{ch}}:L\text{-}{\sf mod}^{*}\rightleftarrows\mathrm{U}^{*\to\operatorname{ch}}(L)\text{-}{\sf mod}^{\text{ch}}:\operatorname{oblv}^{\text{ch}\to*}\]
in \(\operatorname{\sf FactCat}_{\operatorname{lax-}\operatorname{unt}}^{ \operatorname{lax-fact}}\), with \(\operatorname{ind}^{*\to\operatorname{ch}}\) being strictly unital.
**Lemma 4.7.2**.: _If \(A=\operatorname{Sym}V\) where \(V\) is a connective complex of vector spaces with finite-dimensional total cohomology, then \(\operatorname{Vac}_{A,X^{I}}\) is almost ULA in \(A\text{-}{\sf mod}^{\text{fact}}_{X^{I}_{\text{dR}}}\)._
Proof.: Let \(A^{+}:=\operatorname{Sym}^{>0}V\) be as in Lemma 4.7.1, so by combining the previous lemma and Koszul duality, we obtain a \(\mathsf{D}(X^{I})\)-linear equivalence
\[A\text{-}{\sf mod}^{\text{fact}}_{X^{I}_{\text{dR}}}\;\tilde{\longrightarrow}\;(A^{+}\otimes\omega_{X}[-1])\text{-}{\sf mod}^{\text{ch}}_{X^{I}_{\text{dR}}},\]
which is compatible with the forgetful functors to \(\mathsf{D}(X^{I})\) and in particular is \(\mathsf{t}\)-exact. Now consider the \(\mathsf{D}(X^{I})\)-linear adjunction
\[\operatorname{ind}^{*\to\operatorname{ch}}:(V\otimes\omega_{X}[-1])\text{-}{\sf mod}^{*}_{X^{I}_{\text{dR}}}\rightleftarrows(A^{+}\otimes\omega_{X}[-1])\text{-}{\sf mod}^{\text{ch}}_{X^{I}_{\text{dR}}}:\operatorname{oblv}^{\text{ch}\to*}.\]
The functor \(\operatorname{oblv}^{\text{ch}\to*}\) intertwines the forgetful functors to \(\mathsf{D}(X^{I})\) and hence is \(\mathsf{t}\)-exact. It follows formally that \(\operatorname{ind}^{*\to\operatorname{ch}}\) sends objects which are almost ULA and eventually coconnective to almost ULA objects. Since \(\operatorname{ind}^{*\to\operatorname{ch}}\) is unital, it therefore suffices to show that \(\operatorname{triv}_{V\otimes\omega_{X}[-1]}(\omega_{X^{I}})\) is almost ULA in \((V\otimes\omega_{X}[-1])\text{-}{\sf mod}^{*}_{X^{I}_{\text{dR}}}\) (of course it is eventually coconnective, being concentrated in cohomological degree \(-\#I\)).
For this, we recall that \(\operatorname{triv}_{V\otimes\omega_{X}[-1]}(\omega_{X^{I}})\) has the natural nonnegative filtration whose \(n^{\text{th}}\) associated graded piece is
\[M_{n}:=\operatorname{ind}_{V\otimes\omega_{X}[-1]}(\operatorname{Sym}^{*,n}(V \otimes\omega_{X})\operatorname{\stackrel{{*}}{{\otimes}}}( \Delta_{I})_{\text{dR},*}\omega_{X^{I}}),\]
where \(\Delta_{I}:X^{I}\to\operatorname{Ran}_{I}\) is the main diagonal and
\[\operatorname{ind}_{V\otimes\omega_{X}[-1]}:\mathsf{D}(\operatorname{Ran}_{I} )\longrightarrow(V\otimes\omega_{X}[-1])\text{-}{\sf mod}^{*}(\mathsf{D}( \operatorname{Ran}_{I}))\]
is the induction functor.
The lemma will follow if we prove these two assertions:
1. For any \(n\geq 0\), the object \(M_{n}\) is ULA over \(X^{I}\).
2. Given \(m\in\mathbb{Z}\), for \(n\) sufficiently large we have \[\operatorname{Hom}(M_{n},N)=0\] for any \(N\) in \(((V\otimes\omega_{X}[-1])\text{-}{\sf mod}^{*}_{X^{I}_{\text{dR}}})^{\geq m}\).
For claim (i), note that \(\operatorname{ind}_{V\otimes\omega_{X}[-1]}\) preserves ULA objects, being left adjoint to the \(D(X^{I})\)-linear functor \(\operatorname{oblv}_{V\otimes\omega_{X}[-1]}\). The object
\[(V\otimes\omega_{X})^{\overset{*}{\otimes}n}\overset{*}{\otimes}(\Delta_{I})_{\text{dR},*}\omega_{X^{I}}\]
is the de Rham direct image of \(V^{\otimes n}\otimes\omega_{X^{n}\times X^{I}}\) along the proper map
\[X^{n}\times X^{I}\longrightarrow\operatorname{Ran}\times\operatorname{Ran}_{ I}\longrightarrow\operatorname{Ran}_{I},\]
hence is ULA over \(X^{I}\). It follows that the object
\[\operatorname{Sym}^{*,n}(V\otimes\omega_{X})\operatorname{\stackrel{{ *}}{{\otimes}}}(\Delta_{I})_{\text{dR},*}\omega_{X^{I}}\]
obtained by taking coinvariants for the symmetric group \(\Sigma_{n}\) is also ULA over \(X^{I}\).
Finally, for (ii) it is enough to show that given \(m\in\mathbb{Z}\), for \(n\) sufficiently large we have
\[\operatorname{Hom}_{\mathsf{D}(\operatorname{Ran}_{I})}\big((V\otimes\omega_{X})^{\overset{*}{\otimes}n}\overset{*}{\otimes}(\Delta_{I})_{\operatorname{dR},*}\omega_{X^{I}},\,(\Delta_{I})_{\operatorname{dR},*}N\big)=0\]
for any \(N\in\mathsf{D}(X^{I})^{\geqslant m}\) (since the same will therefore hold for the \(\Sigma_{n}\)-coinvariants
\[\operatorname{Sym}^{*,n}(V\otimes\omega_{X})\overset{*}{\otimes}(\Delta_{I})_{\operatorname{dR},*}\omega_{X^{I}}\]
of the first object). Given a nonempty finite set \(I^{\prime}\), define \(Z\) to be the fiber product \((X^{n}\times X^{I})\times_{\operatorname{Ran}_{I}}X^{I^{\prime}\sqcup I}\).
By base change, the object
\[((V\otimes\omega_{X})\overset{\raisebox{-0.5pt}{\scalebox{0.5}{$\bullet$}}}{ \otimes\!\!\!\!\!\!\otimes n}\overset{\raisebox{-0.5pt}{\scalebox{0.5}{$\bullet$}}}{ \otimes\!\!\!\!\!\!\otimes n}\,(\Delta_{I})_{\operatorname{dR},*}\omega_{X^{I} })|_{X^{I^{\prime}\sqcup I}}^{!}\]
is the de Rham direct image of \(V^{\otimes n}\otimes\omega_{X^{n}\times X^{I}}\) along the finite map \(Z\to X^{I^{\prime}\sqcup I}\), hence is concentrated in cohomological degrees \(\leqslant-n-\#I\). Considering the same cartesian square in the case \(n=0\) shows that \(((\Delta_{I})_{\operatorname{dR},*}N)|_{X^{I^{\prime}\sqcup I}}^{!}\) is concentrated in cohomological degrees \(\geqslant-m\). The claim follows.
### Filtrations and the associated graded
Let \(A\) be a commutative algebra in \(\operatorname{Vect}^{\leqslant 0}\) equipped with a nonnegative filtration. The functor
\[\operatorname{gr}:A\text{-}\mathsf{mod}^{\operatorname{fil}}\to\operatorname {gr}(A)\text{-}\mathsf{mod}^{\operatorname{gr}}\]
which takes the associated graded module is symmetric monoidal, as is the forgetful functor
\[\operatorname{oblv}^{\operatorname{fil}}:A\text{-}\mathsf{mod}^{\operatorname{fil}}\longrightarrow A\text{-}\mathsf{mod}.\]
In particular, both give rise to morphisms in \(\operatorname{\mathsf{FactCat}}\).
We will write
\[A\text{-}\mathsf{mod}^{\operatorname{fil}}_{\geqslant 0}\subset A\text{-} \mathsf{mod}^{\operatorname{fil}}\]
for the full symmetric monoidal subcategory consisting of nonnegatively filtered modules (not to be confused with a constraint on cohomological degree, which we indicate with a superscript), and likewise for graded modules.
We also observe that the category \((\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\) has a grading by "total degree fiberwise over \(X^{I}\)," characterized by the condition that the natural conservative functor
\[(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\longrightarrow\operatorname{Vect}^{\operatorname{gr}}\otimes\mathsf{D}(X^{I})\]
respects this grading. We will write \(M_{\operatorname{td}=i}\) for the graded component of \(M\) of total degree \(i\).
**Lemma 4.8.1**.: _Let \(B\) be a nonnegatively graded connective commutative algebra, and suppose we are given an inverse system_
\[\cdots\to M_{3}\to M_{2}\to M_{1}\]
_in \(B\text{-}\mathsf{mod}^{\operatorname{fact}}(\operatorname{Vect}^{ \operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\) such that for each \(i\in\mathbb{Z}\), the inverse system_
\[\cdots\to(\operatorname{oblv}_{B}M_{3})_{\operatorname{td}=i}\to(\operatorname{oblv}_{B}M_{2})_{\operatorname{td}=i}\to(\operatorname{oblv}_{B}M_{1})_{\operatorname{td}=i}\]
_in \((\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\) stabilizes. Then the forgetful functor_
\[\operatorname{oblv}_{B}:B\text{-}\mathsf{mod}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\longrightarrow(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}\]
_preserves the limit \(\lim_{n}M_{n}\)._
Proof.: By Proposition 3.8.4, the inclusion
\[B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}\longrightarrow B\text{-}\mathsf{mod}^{\text{fact},\text{non-} \text{untl}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}}\]
preserves limits. Applying the equivalence
\[B\text{-}\mathsf{mod}^{\text{fact},\text{non-untl}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}}\;\tilde{\longrightarrow}\;(B\otimes\omega_{X}[-1])\text{-}\mathsf{mod}^{\text{ch}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}},\]
it suffices to check the corresponding claim in the category of non-unital chiral modules. The forgetful functor
\[(B\otimes\omega_{X}[-1])\text{-}\mathsf{mod}^{\text{ch}}((\operatorname{Vect} ^{\text{gr}})_{\operatorname{Ran}_{I,\text{dR}}})\longrightarrow( \operatorname{Vect}^{\text{gr}})_{\operatorname{Ran}_{I,\text{dR}}}\]
preserves limits, so it is enough to prove that the limit \(\lim_{n}\operatorname{oblv}_{B}M_{n}\) taken in \((\operatorname{Vect}^{\text{gr}})_{\operatorname{Ran}_{I,\text{dR}}}\) belongs to the full subcategory \((\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}}\). But this is clear, since for each graded component of fixed total degree the limit stabilizes and hence belongs to \((\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}}\).
**Corollary 4.8.1.1**.: _With \(B\) as in the lemma, suppose we are given a sequence of objects \(Q_{1},Q_{2},\cdots\) in \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}\) such that for each \(i\geq 1\), the object \(\operatorname{oblv}_{B}Q_{i}\) in \((\operatorname{Vect}^{\text{gr}})_{X^{I}_{\text{dR}}}\) is concentrated in total degree \(\geq i\). Then the map_
\[\bigoplus_{i\geq 1}Q_{i}\longrightarrow\prod_{i\geq 1}Q_{i}\]
_in \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}\) is an isomorphism._
Proof.: Applying the lemma to the inverse system with terms
\[\prod_{i=1}^{n}Q_{i},\]
we see that the product
\[\prod_{i\geq 1}Q_{i}\]
is preserved by \(\operatorname{oblv}_{B}\). Now the corollary follows from the observation that the map
\[\bigoplus_{i\geq 1}\operatorname{oblv}_{B}Q_{i}\longrightarrow\prod_{i\geq 1 }\operatorname{oblv}_{B}Q_{i}\]
in \(\operatorname{Vect}^{\text{gr}}_{X^{I}_{\text{dR}}}\) is an isomorphism, since in each graded component of fixed total degree, all but finitely many factors in the product vanish.
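To see why the constraint on total degree matters, here is a toy illustration purely at the level of graded vector spaces (our own aside, not part of the argument): with \(W_{i}:=k\) placed in degree \(i\), each fixed degree receives a contribution from exactly one index, so
\[\Big(\bigoplus_{i\geq 1}W_{i}\Big)_{j}\;=\;W_{j}\;=\;\Big(\prod_{i\geq 1}W_{i}\Big)_{j}\qquad(j\geq 1),\]
and the sum agrees with the product; by contrast, with every \(W_{i}:=k\) concentrated in degree \(0\), the map \(\bigoplus_{i\geq 1}W_{i}\to\prod_{i\geq 1}W_{i}\) is not an isomorphism.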
**Lemma 4.8.2**.: _Fix a nonnegatively filtered connective commutative algebra \(A\). If the conditions of Proposition 4.5.1 hold for \(\operatorname{gr}(A)\), then they hold for \(\operatorname{oblv}^{\text{fil}}A\)._
Proof.: _Step 1:_ With \(B\) as in Lemma 4.8.1, fix \(m\in\mathbb{Z}\) and suppose that \(P\) is a compact object in \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}^{\geq m}\). We claim that there exists \(r\geq 0\) such that for any object \(Q\) of \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}^{>m}\) with \(\operatorname{oblv}_{B}Q\) concentrated in total degree \(\geq r\), we have
\[\operatorname{Hom}_{B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^ {\text{gr}})_{X^{I}_{\text{dR}}}}(P,Q)=0.\]
Otherwise we could choose a sequence \(Q_{1},Q_{2},\cdots\) in \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}^{\geq m}\) such that \(\operatorname{oblv}_{B}Q_{i}\) is concentrated in total degree \(\geq i\) and
\[\operatorname{Hom}_{B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^ {\text{gr}})_{X^{I}_{\text{dR}}}}(P,Q_{i})\neq 0\]
for any \(i\geq 1\). By Corollary 4.8.1.1, this contradicts the compactness of \(P\) in \(B\text{-}\mathsf{mod}^{\text{fact}}(\operatorname{Vect}^{\text{gr}})_{X^{I}_{ \text{dR}}}^{\geq m}\).
_Step 2:_ By Lemma 3.4.2, the functor
\[\operatorname{gr}:(\operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR }}}\longrightarrow(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{ \operatorname{dR}}}\]
admits a right adjoint which is \(\mathsf{D}(X^{I})\)-linear and compatible with factorization. In particular, this lifts to a \(\mathsf{D}(X^{I})\)-linear adjunction
\[\operatorname{gr}:A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}\rightleftarrows\operatorname{gr}(A)\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}.\]
We define a functorial _descending_ filtration on any \(M\) in \(A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}(\operatorname{Vect}^{ \operatorname{fil}})_{X^{I}_{\operatorname{dR}}}\) as follows. Let \(F_{0}:=M\), and for any \(n\geq 0\) define
\[F_{n+1}:=\operatorname{fib}(F_{n}\to\operatorname{gr}F_{n})\]
(here we are suppressing from the notation the right adjoint of \(\operatorname{gr}\), which commutes with the forgetful functors to \(\mathsf{D}(X^{I})\)). We claim that if \(M\) is concentrated in total degree \(\geq 0\) then
\[(\operatorname{oblv}_{A}M)_{\operatorname{td}=0}\longrightarrow(\operatorname {oblv}_{A}\operatorname{gr}M)_{\operatorname{td}=0}\]
is an isomorphism. It will then follow inductively that \(F_{n}\) is concentrated in total degree \(\geq n\) for any \(n\geq 0\), and hence that
\[M\;\tilde{\longrightarrow}\;\lim_{n}M/F_{n}\]
by Lemma 4.8.1.
Recall that \(X^{I}\) is stratified with strata \(Z(p)\) indexed by surjections \(p:I\to J\). Using the Cousin filtration, it is enough to check the claim over each \(Z(p)\). There we are working in the multifiltered category \((\operatorname{Vect}^{\operatorname{fil}})^{\otimes J}\otimes\mathsf{D}(Z(p))\) with its standard grading by total degree, and the claim is evident.
_Step 3:_ Let \(V:=\operatorname{ind}_{X^{I}}(\operatorname{vac}_{A,X^{I}})\), an object of \(A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}(\operatorname{Vect}^{ \operatorname{fil}})_{X^{I}_{\operatorname{dR}}}\), so the lemma will follow if we prove that \(\operatorname{oblv}^{\operatorname{fil}}V\) is almost compact in \(A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}_{X^{I}_{\operatorname{dR }}}\). Fixing \(m\in\mathbb{Z}\), we must show that
\[\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}_{X^{ I}_{\operatorname{dR}}}}(\operatorname{oblv}^{\operatorname{fil}}V,N)\]
preserves filtered colimits when viewed as a functor of \(N\) in \((A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}_{X^{I}_{\operatorname{dR }}})^{\geq m}\). Note that the functor
\[\operatorname{oblv}^{\operatorname{fil}}:\operatorname{Vect}^{ \operatorname{fil}}_{\geq 0}\longrightarrow\operatorname{Vect}\]
admits a symmetric monoidal right adjoint \(\operatorname{triv}^{\operatorname{fil}}_{\geq 0}\), which equips a vector space with the trivial nonnegative filtration. In particular, this lifts to a \(\mathsf{D}(X^{I})\)-linear adjunction
\[\operatorname{oblv}^{\operatorname{fil}}:A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{fil}}_{\geq 0})_{X^{I}_{\operatorname{dR}}}\rightleftarrows A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}_{X^{I}_{\operatorname{dR}}}:\operatorname{triv}^{\operatorname{fil}}_{\geq 0}.\]
Thus we have
\[\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}_{X^{ I}_{\operatorname{dR}}}}(\operatorname{oblv}^{\operatorname{fil}}V,N)= \operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}( \operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V, \operatorname{triv}^{\operatorname{fil}}_{\geq 0}N).\]
Fix \(r\) as in Step 1 for \(B:=\operatorname{gr}(A)\) and \(P:=\tau^{\geq m}\operatorname{gr}(V)\). Recalling the descending filtration on \(M:=\operatorname{triv}^{\operatorname{fil}}_{\geq 0}N\) constructed in Step 2, we claim that
\[\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}( \operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V,F_{r})=0.\]
Since \(F_{r}=\lim_{k}F_{r}/F_{r+k}\), it suffices to show that
\[\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}( \operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V,F_{n}/F _{n+1})=0\]
for any \(n\geq r\). By construction we have
\[\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}( \operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V,F_{n}/F _{n+1}) =\operatorname{Hom}_{A\operatorname{\mathsf{--mod}}^{\operatorname{fact}}( \operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V, \operatorname{gr}F_{n})\] \[=\operatorname{Hom}_{\operatorname{gr}(A)\operatorname{\mathsf{-- mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{ \operatorname{dR}}}}(\operatorname{gr}(V),\operatorname{gr}F_{n}).\]
Note that \(M=\operatorname{triv}_{\geqslant 0}^{\operatorname{fil}}N\) implies that \(\operatorname{gr}F_{n}\) is concentrated in cohomological degrees \(\geqslant m\) for any \(n\geqslant 0\) (this can be checked over each stratum \(Z(p)\), where it becomes an obvious statement about the trivial nonnegative multifiltration). Now the claim follows from Step 1.
_Step 4:_ Continuing the notation of Step 3, we have
\[\operatorname{Hom}_{A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}_{X^{I}_{\operatorname{dR}}}}(\operatorname{oblv}^{\operatorname{fil}}V,N)=\operatorname{Hom}_{A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V,M/F_{r}).\]
Thus it suffices to show that for any \(n\geqslant 0\),
\[\operatorname{Hom}_{A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{fil}})_{X^{I}_{\operatorname{dR}}}}(V,F_{n}/F_{n+1})=\operatorname{Hom}_{\operatorname{gr}(A)\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}}(\operatorname{gr}(V),\operatorname{gr}F_{n})\]
preserves filtered colimits when viewed as a functor of \(N\) in \((A\operatorname{\mathsf{-mod}}^{\operatorname{fact}}_{X^{I}_{\operatorname{dR}}})^{\geqslant m}\). Since \(\operatorname{gr}F_{n}\) belongs to \(\operatorname{gr}(A)\operatorname{\mathsf{-mod}}^{\operatorname{fact}}(\operatorname{Vect}^{\operatorname{gr}})_{X^{I}_{\operatorname{dR}}}^{\geqslant m}\) by Step 3, we are done by the hypothesis on \(\operatorname{gr}(A)\).
### Finitely presented algebras
A connective commutative algebra \(A\) is called \(0\)_-finitely presented_ if it is isomorphic to a classical finitely generated polynomial algebra. For \(n\geqslant 1\), we say that \(A\) is \(n\)_-finitely presented_ if
\[A\cong B\underset{\operatorname{Sym}(V[n-1])}{\otimes}k\]
where \(B\) is \((n-1)\)-finitely presented and \(V\) is a finite-dimensional classical vector space equipped with a \(k\)-linear map \(V[n-1]\to B\). If \(A\) is \(n\)-finitely presented for some \(n\geqslant 0\), we say that \(A\) is _finitely presented_.
For example, an algebra \(A\) is \(1\)-finitely presented if and only if its spectrum is a derived global complete intersection.
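For a concrete instance of the definition (a standard example, included only for orientation): take \(B=k[x,y]\) and \(V=k\) with the generator mapping to \(xy\). Since \(xy\) is a nonzerodivisor, the derived tensor product is classical,
\[k[x,y]\underset{\operatorname{Sym}(k)}{\otimes}k\;\cong\;k[x,y]/(xy),\]
so the nodal curve \(\operatorname{Spec}k[x,y]/(xy)\) is \(1\)-finitely presented. Sending the generator to \(0\) instead yields an algebra with nonvanishing \(H^{-1}\), which is therefore no longer classical.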
**Lemma 4.9.1**.: _The conditions of Proposition 4.5.1 hold for \(A\) finitely presented._
Proof.: We prove by induction on \(n\) that the conditions hold for any \(A\) of the form \(B\otimes\operatorname{Sym}V\), where \(B\) is \(n\)-finitely presented and \(V\) is a connective complex of vector spaces with finite-dimensional total cohomology (for varying \(n\), this statement is equivalent to the assertion of the lemma). The case \(n=0\) is precisely Lemma 4.7.2. In general, we have
\[B\cong C\underset{\operatorname{Sym}(W[n-1])}{\otimes}k\]
where \(C\) is \((n-1)\)-finitely presented and \(W\) is finite-dimensional and classical. Thus \(B\) admits a nonnegative filtration with associated graded algebra
\[\operatorname{gr}B\cong C\otimes\operatorname{Sym}(W[n]).\]
Consequently \(A\) admits a nonnegative filtration with associated graded
\[\operatorname{gr}A\cong C\otimes\operatorname{Sym}(W[n])\otimes\operatorname{ Sym}V\cong C\otimes\operatorname{Sym}(W[n]\oplus V).\]
The conditions of Proposition 4.5.1 hold for \(\operatorname{gr}A\) by the inductive hypothesis, whence they hold for \(A\) by Lemma 4.8.2.
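To illustrate the filtration used in the last step (a side remark, not needed for the proof): with \(n=1\), \(C=k[x]\), and \(W=k\) mapping to \(x^{2}\), the algebra \(B=C\otimes_{\operatorname{Sym}W}k\) is the classical ring \(k[x]/(x^{2})\), computed by the Koszul complex \(k[x]\xrightarrow{\,x^{2}\,}k[x]\). Placing \(W\) in filtration degree \(1\) kills the differential on the associated graded, so
\[\operatorname{gr}B\;\cong\;k[x]\otimes\operatorname{Sym}(k[1]),\]
and the derived structure that is invisible in \(B\) itself appears in \(\operatorname{gr}B\).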
### Algebras almost of finite type
We will use the following approximation lemma to prove Theorem 4.6.1.
**Lemma 4.10.1**.: _If \(A\) is connective and almost of finite type, there exists a directed system_
\[B_{0}\longrightarrow B_{1}\longrightarrow B_{2}\longrightarrow\cdots\]
_of connective commutative algebras mapping to \(A\) such that_
1. \(B_{n}\) _is_ \(n\)_-finitely presented for every_ \(n\geqslant 0\)_,_
2. \(H^{-i}(B_{n})\tilde{\to}H^{-i}(A)\) _for_ \(i<n\)_,_
3. \(H^{-n}(B_{n})\to H^{-n}(A)\) _is surjective._
_In particular \(\operatorname{colim}_{n}B_{n}\tilde{\to}A\)._
Finally, we can prove the theorem.
Proof of Theorem 4.6.1.: Choose a directed system \((B_{n})_{n\geq 0}\) with \(\operatorname{colim}_{n}B_{n}\tilde{\to}A\) as in Lemma 4.10.1. Fix \(m\geq 0\). Lemma 4.4.3 implies that for \(n>m\), the restriction of scalars induces an equivalence
\[(A\text{-}\mathsf{mod}^{\operatorname{fact}}_{X_{\operatorname{dR}}^{I}})^{ \leq 0,\geq-m}\tilde{\to}(B_{n}\text{-}\mathsf{mod}^{\operatorname{fact}}_{X_{ \operatorname{dR}}^{I}})^{\leq 0,\geq-m}.\]
Moreover, by Lemmas 4.3.1 and 4.4.2, this equivalence sends
\[\tau^{\geq-m}(\operatorname{Vac}_{A,X^{I}}[-\#I])\mapsto\tau^{\geq-m}( \operatorname{Vac}_{B_{n},X^{I}}[-\#I]).\]
The inclusion
\[(A\text{-}\mathsf{mod}^{\operatorname{fact}}_{X^{I}})^{\leq 0,\geq-m}\longrightarrow( A\text{-}\mathsf{mod}^{\operatorname{fact}}_{X^{I}})^{\geq-m}\]
preserves compact objects because the t-structure on \(A\text{-}\mathsf{mod}^{\operatorname{fact}}_{X^{I}}\) is compatible with filtered colimits. Since \(\operatorname{Vac}_{B_{n},X^{I}}\) is almost ULA by Lemma 4.9.1, it follows that
\[\tau^{\geq-m}(\operatorname{oblv}_{X^{I}}\operatorname{Vac}_{A,X^{I}}[-\#I])\]
is compact in \((A\text{-}\mathsf{mod}^{\operatorname{fact}}_{X^{I}})^{\geq-m}\). Thus \(\operatorname{Vac}_{A,X^{I}}\) is almost ULA as desired.
## 5. Spectral Hecke categories
In this section, we construct the factorization categories which appear on the spectral (a.k.a. Langlands dual) side. These arise as renormalizations of certain categories of factorization modules.
### Chiral induction
Recall that the Lie bracket on \(\mathfrak{n}_{P}\) induces the structure of Lie-\(*\) algebra on the constant sheaf \(\mathfrak{n}_{P}\otimes k_{X}\). The category of Lie-\(*\) modules
\[(\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{*}(\mathsf{Rep}(M))\]
forms an object in \(\mathsf{FactCat}^{\operatorname{lax\text{-}fact}}\). Lemma 7.7.1 in [11] says that \((\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{*}(\mathsf{Rep}(M))\) is equivalent to the factorization category attached to the symmetric monoidal category \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\), and in particular belongs to \(\mathsf{FactCat}\). In what follows we will pass freely between the two.
Following [11], we write
\[\Upsilon(\mathfrak{n}_{P},-):\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep} (M))\longrightarrow\mathsf{Rep}(M)\]
for the morphism in \(\mathsf{FactCat}_{\operatorname{lax\text{-}unl}}\) defined as the composition
\[\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\xrightarrow{\operatorname{ind}^{*\to\operatorname{ch}}}\mathrm{U}^{*\to\operatorname{ch}}(\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{\operatorname{ch}}(\mathsf{Rep}(M))\xrightarrow{\operatorname{oblv}}\mathsf{Rep}(M)\]
(this is the chiral analogue of the \(\mathbb{E}_{2}\)-monoidal functor denoted by \(\operatorname{Chev}_{\Upsilon}\) in the introduction).
Denote by \(\Upsilon(\mathfrak{n}_{P})\) the factorization algebra in \(\mathsf{Rep}(M)\) obtained by applying \(\Upsilon(\mathfrak{n}_{P},-)\) to the unit object in \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\). In particular, applying (3.5.1) to \(\Upsilon(\mathfrak{n}_{P},-)\) yields a morphism
\[\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\longrightarrow\Upsilon (\mathfrak{n}_{P})\text{-}\mathsf{mod}^{\operatorname{fact}}(\mathsf{Rep}(M)) \tag{5.1.1}\]
in \(\mathsf{FactCat}^{\operatorname{lax\text{-}fact}}\).
**Proposition 5.1.1**.: _We have a canonical identification_
\[\operatorname{C}^{\operatorname{ch}}(\mathrm{U}^{*\to\operatorname{ch}}( \mathfrak{n}_{P}\otimes k_{X}))\tilde{\to}\Upsilon(\mathfrak{n}_{P})\]
of factorization algebras in \(\mathsf{Rep}(M)\), i.e. \(\Upsilon(\mathfrak{n}_{P})\) is the factorization algebra corresponding to the chiral enveloping algebra of the Lie-\(\ast\) algebra \(\mathfrak{n}_{P}\otimes k_{X}\). Moreover, we have a commutative triangle_
_in \(\mathsf{FactCat}^{\mathrm{lax\text{-}fact}}\)._
Proof.: Since the functor \(\mathrm{ind}^{\ast\to\mathrm{ch}}\) is unital, the factorization algebra \(\Upsilon(\mathfrak{n}_{P})\) is by definition the object of \(\mathsf{Rep}(M)\) underlying the unit in
\[\mathrm{U}^{\ast\to\mathrm{ch}}(\mathfrak{n}_{P}\otimes k_{X})\text{-} \mathsf{mod}^{\mathrm{ch}}(\mathsf{Rep}(M)).\]
As explained in SS4.7, the latter is \(\mathrm{C}^{\mathrm{ch}}(\mathrm{U}^{\ast\to\mathrm{ch}}(\mathfrak{n}_{P} \otimes k_{X}))\), whence the first assertion of the proposition.
As for the commutative triangle, we construct the horizontal equivalence as the composition
\[\Upsilon(\mathfrak{n}_{P})\text{-}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(M))\overset{\sim}{\longrightarrow}\mathrm{C}^{\mathrm{ch}}_{+}( \mathrm{U}^{\ast\to\mathrm{ch}}(\mathfrak{n}_{P}\otimes k_{X}))\text{-} \mathsf{mod}^{\mathrm{fact},\mathrm{non\text{-}unt}}(\mathsf{Rep}(M))\] \[\overset{\sim}{\longrightarrow}\mathrm{U}^{\ast\to\mathrm{ch}}( \mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{\mathrm{ch}}(\mathsf{Rep}(M )).\]
The commutativity of the triangle is then immediate from the construction of (5.1.1).
Invoking the identifications from Proposition 5.1.1, we have an adjunction
\[\mathrm{ind}^{\ast\to\mathrm{ch}}:\mathfrak{n}_{P}\text{-}\mathsf{mod}( \mathsf{Rep}(M))\rightleftarrows\Upsilon(\mathfrak{n}_{P})\text{-}\mathsf{mod }^{\mathrm{fact}}(\mathsf{Rep}(M)):\mathrm{oblv}^{\mathrm{ch\to}\ast} \tag{5.1.2}\]
in \(\mathsf{FactCat}^{\mathrm{lax\text{-}fact}}_{\mathrm{lax\text{-}unt}}\). Note that the right adjoint is compatible with the forgetful functors to \(\mathsf{Rep}(M)\), and in particular is conservative. As in [11] Corollary 7.8.1, this implies that \(\Upsilon(\mathfrak{n}_{P})\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M))\) belongs to \(\mathsf{FactCat}\), i.e. factorizes strictly.
Note that the symmetric monoidal functor
\[\mathrm{res}^{N_{P}}_{\mathfrak{n}_{P}}:\mathsf{Rep}(P)\longrightarrow\mathfrak{ n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\]
is fully faithful because \(\mathfrak{n}_{P}\) is nilpotent.
**Lemma 5.2.1**.: _For any finite set \(I\), the functor_
\[\mathrm{res}^{N_{P}}_{\mathfrak{n}_{P}}:\mathsf{Rep}(P)_{X^{I}_{\mathrm{dR}}} \longrightarrow\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))_{X^{I}_{ \mathrm{dR}}}\]
_is fully faithful._
Proof.: Since the symmetric monoidal category \(\mathsf{Rep}(P)\) is rigid and the unit object in \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\) is compact, Lemma 3.4.2 says that the corresponding morphism in \(\mathsf{FactCat}\) admits a right adjoint in \(\mathsf{FactCat}_{\mathrm{lax\text{-}unt}}\). The unit of this adjunction is an isomorphism because it is compatible with factorization and is an isomorphism over \(X\).
In what follows, we will freely identify \(\mathsf{Rep}(P)_{X^{I}_{\mathrm{dR}}}\) with its essential image in \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))_{X^{I}_{\mathrm{dR}}}\).
**Proposition 5.2.2**.: _The monad on \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))_{X^{I}_{\mathrm{dR}}}\) arising from the adjunction (5.1.2) preserves the full subcategory \(\mathsf{Rep}(P)_{X^{I}_{\mathrm{dR}}}\) for any finite set \(I\)._
We will apply a general lemma, which we now prepare to state. Let \(\mathcal{C}\) be a commutative lax factorization category, i.e. an object of \(\mathsf{ComAlg}(\mathsf{FactCat}^{\mathrm{lax\text{-}fact}})\), and \(L\) a Lie-\(\ast\) algebra in \(\mathcal{C}\). Then we have adjoint functors
\[\mathrm{ind}^{\ast\to\mathrm{ch}}:L\text{-}\mathsf{mod}^{\ast}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}\rightleftarrows\mathrm{U}^{\ast\to\mathrm{ch}}(L)\text{-}\mathsf{mod}^{\mathrm{ch}}(\mathcal{C})_{X^{I}_{\mathrm{dR}}}:\mathrm{oblv}^{\mathrm{ch}\to\ast}\]
for any finite set \(I\).
Recall that \(L\text{--mod}^{*}(\mathcal{C})\) has a canonical structure of commutative lax factorization category compatible with the forgetful functor
\[\operatorname{oblv}_{L}:L\text{--mod}^{*}(\mathcal{C})\longrightarrow\mathcal{C}.\]
In particular, for any finite set \(I\) the category
\[L\text{--mod}^{*}(\mathcal{C})_{X^{I}_{\operatorname{dR}}}\]
is equipped with a symmetric monoidal structure lifting the one on \(\mathcal{C}_{X^{I}_{\operatorname{dR}}}\), which will also be denoted by \(\otimes^{!}\).
Viewing \(L\) with the adjoint action as an object of the symmetric monoidal category \(L\text{--mod}^{*}(\mathcal{C})_{X_{\operatorname{dR}}}\), the symmetric algebra \(\operatorname{Sym}^{!}(L[1])\) is a commutative algebra in this category and hence gives rise to a commutative factorization algebra in \(L\text{--mod}^{*}(\mathcal{C})\).
**Lemma 5.2.3**.: _The monad \(\operatorname{oblv}^{\operatorname{ch}\to*}\operatorname{ind}^{*\to \operatorname{ch}}\) on \(L\text{--mod}^{*}(\mathcal{C})_{X^{I}_{\operatorname{dR}}}\) admits a canonical filtration with associated graded isomorphic to_
\[\operatorname{Sym}^{!}(L[1])_{X^{I}}\overset{!}{\otimes}(-).\]
Proof.: First, we record the apparently weaker assertion that the functor
\[\operatorname{oblv}_{\operatorname{U}^{*\to\operatorname{ch}}(L)}\operatorname{ind}^{*\to\operatorname{ch}}:L\text{--mod}^{*}(\mathcal{C})_{X^{I}_{\operatorname{dR}}}\longrightarrow\mathcal{C}_{X^{I}_{\operatorname{dR}}}\]
is filtered with associated graded
\[\operatorname{oblv}_{L}(\operatorname{Sym}^{!}(L[1])_{X^{I}}\overset{!}{ \otimes}(-)).\]
This is a variant of Corollary 6.5.2 in [11] and is proved in the same way.
Now we will deduce the lemma. Observe that the adjoint action canonically upgrades \(L\) to a Lie-\(*\) algebra in the commutative lax factorization category \(L\text{--mod}^{*}(\mathcal{C})\). It follows immediately that \(\operatorname{U}^{*\to\operatorname{ch}}(L)\) upgrades to a chiral algebra in \(L\text{--mod}^{*}(\mathcal{C})\). Moreover, the morphism
\[\operatorname{oblv}_{L}:L\text{--mod}^{*}(\mathcal{C})\longrightarrow\mathcal{C}\]
of commutative lax factorization categories induces isomorphisms
\[L\text{--mod}^{*}(L\text{--mod}^{*}(\mathcal{C}))\]
For any finite set \(I\), define
\[\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep}(M)) _{X_{\text{dR}}^{I}}\subset\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{ \text{fact}}(\mathsf{Rep}(M))_{X_{\text{dR}}^{I}}\]
to be the full subcategory generated by the image of \(\mathsf{Rep}(P)_{X_{\text{dR}}^{I}}\) under \(\operatorname{ind}^{*\to\text{ch}}\). In view of Proposition 5.2.2, it follows that these assemble into an object of \(\mathsf{FactCat}\), and we have a monadic adjunction
\[\operatorname{ind}^{*\to\text{ch}}:\mathsf{Rep}(P)\rightleftarrows\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep}(M)):\operatorname{oblv}^{\text{ch}\to*} \tag{5.3.1}\]
in \(\mathsf{FactCat}_{\text{lax-untl}}\) with the left adjoint belonging to \(\mathsf{FactCat}\).
We equip
\[\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep} (M))_{X_{\text{dR}}^{I}}\]
with the unique t-structure such that \(\operatorname{oblv}^{\text{ch}\to*}\) is left t-exact. Equivalently, the connective objects are generated by the image of \(\mathsf{Rep}(P)^{\leq 0}\) under \(\operatorname{ind}^{*\to\text{ch}}\). Proposition 7.11.1(2) of [11] says that \(\operatorname{ind}^{*\to\text{ch}}\) is actually t-exact.
**Proposition 5.3.1**.: _For any finite set \(I\), the t-structure on_
\[\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep }(M))_{X_{\text{dR}}^{I}}\]
_is coherent._
We will apply the following general lemma.
**Lemma 5.3.2**.: _Let \(F:\mathcal{C}\to\mathcal{D}\) be a t-exact functor whose restriction_
\[F:\mathcal{C}^{\heartsuit}\longrightarrow\mathcal{D}^{\heartsuit}\]
_is fully faithful. Assume also that the right adjoint \(F^{\text{R}}\) is continuous and conservative. If the t-structure on \(\mathcal{C}\) is coherent, then so is the t-structure on \(\mathcal{D}\)._
Proof.: Standard arguments show that any compact object in \(\mathcal{D}^{\heartsuit}\) admits a finite filtration with subquotients of the form \(F(c)\), where \(c\) is compact in \(\mathcal{C}^{\heartsuit}\). If the t-structure on \(\mathcal{C}\) is coherent, then any compact object in \(\mathcal{C}^{\heartsuit}\) is almost compact in \(\mathcal{C}\). Since \(F\) preserves almost compact objects and any extension of almost compact objects is almost compact, the lemma follows.
Proof of Proposition 5.3.1.: Lemma 7.16.1 of [11] says that
\[\operatorname{ind}^{*\to\text{ch}}:\mathsf{Rep}(P)_{X_{\text{dR}}^{I}}\longrightarrow\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep}(M))_{X_{\text{dR}}^{I}}\]
is fully faithful on the heart, and the t-structure on \(\mathsf{Rep}(P)_{X_{\text{dR}}^{I}}\) is coherent by Proposition 4.1.2. Thus the hypotheses of Lemma 5.3.2 apply to the adjunction (5.3.1).
### The spectral Chevalley functor as a factorization algebra
We will abuse notation slightly by also writing \(\Upsilon(\mathfrak{n}_{P},-)\) for the composition
\[\mathsf{Rep}(G)\xrightarrow{\operatorname{res}_{P}^{G}}\mathsf{Rep}(P)\xrightarrow{\operatorname{res}_{\mathfrak{n}_{P}}^{N_{P}}}\mathfrak{n}_{P}\text{--}\mathsf{mod}(\mathsf{Rep}(M))\xrightarrow{\Upsilon(\mathfrak{n}_{P},-)}\mathsf{Rep}(M).\]
This is a morphism in \(\mathsf{FactCat}_{\text{lax-untl}}\), and hence can be viewed as a factorization algebra in \(\underline{\operatorname{Hom}}_{\mathsf{FactCat}_{\text{lax-untl}}}(\mathsf{ Rep}(G),\mathsf{Rep}(M))\). But \(\mathsf{Rep}(G)\) is canonically self-dual in \(\mathsf{FactCat}_{\text{lax-untl}}\) by Proposition 3.4.1, so we can identify
\[\underline{\operatorname{Hom}}_{\mathsf{FactCat}_{\text{lax-untl}}}(\mathsf{ Rep}(G),\mathsf{Rep}(M))\cong\mathsf{Rep}(G)\otimes\mathsf{Rep}(M)\cong\mathsf{ Rep}(G\times M).\]
We will write \(\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\) for the factorization algebra in \(\mathsf{Rep}(G\times M)\) corresponding to \(\Upsilon(\mathfrak{n}_{P},-)\). As the notation suggests, this factorization algebra can also be obtained by applying the morphism
\[\operatorname{id}_{\mathsf{Rep}(G)}\otimes\Upsilon(\mathfrak{n}_{P},-): \mathsf{Rep}(G)\otimes\mathsf{Rep}(G)\longrightarrow\mathsf{Rep}(G)\otimes \mathsf{Rep}(M)\cong\mathsf{Rep}(G\times M)\]
in \(\mathsf{FactCat}_{\text{lax-untl}}\) to \(\mathcal{O}_{G}\), viewed as a commutative factorization algebra in \(\mathsf{Rep}(G)\otimes\mathsf{Rep}(G)\).
By construction, there is a canonical morphism of factorization algebras
\[\operatorname{unit}_{\mathsf{Rep}(G)}\boxtimes\Upsilon(\mathfrak{n}_{P}) \longrightarrow\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\]
in \(\mathsf{Rep}(G\times M)\). By Lemma 3.6.1, this induces an equivalence
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}^{\text{fact}}( \mathsf{Rep}(G\times M))\tilde{\longrightarrow}\Upsilon(\mathfrak{n}_{P}, \mathcal{O}_{G})\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G)\otimes \Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(M))). \tag{5.4.1}\]
By definition, the factorization algebra \(\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\) is the image of \(\operatorname{res}_{G\times P}^{G\times G}\mathcal{O}_{G}\) under the functor
\[\operatorname{id}_{\mathsf{Rep}(G)}\otimes(\Upsilon(\mathfrak{n}_{P},-)\circ\operatorname{res}_{\mathfrak{n}_{P}}^{N_{P}}):\mathsf{Rep}(G\times P)\longrightarrow\mathsf{Rep}(G)\otimes\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(M)),\]
and in particular belongs to the full subcategory
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}^{\text{fact}}( \mathsf{Rep}(G)\otimes\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{ \text{fact}}(\mathsf{Rep}(M))).\]
We define
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}_{0}^{\text{fact }}(\mathsf{Rep}(G\times M))\subset\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G}) \text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times M))\]
to be the full subcategory corresponding to
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}^{\text{fact}}( \mathsf{Rep}(G)\otimes\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{ \text{fact}}(\mathsf{Rep}(M)))\]
under the equivalence (5.4.1).
The adjunction (5.3.1) induces an adjunction
\[\mathsf{Rep}(G\times P)\rightleftarrows\mathsf{Rep}(G)\otimes\Upsilon( \mathfrak{n}_{P})\text{--}\mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep}(M))\]
in \(\mathsf{FactCat}_{\text{lax-untl}}\) after tensoring with \(\operatorname{id}_{\mathsf{Rep}(G)}\). Applying the construction (3.5.1) yields an adjunction
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P)) \rightleftarrows\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{ mod}_{0}^{\text{fact}}(\mathsf{Rep}(G\times M)) \tag{5.4.2}\]
in \(\mathsf{FactCat}_{\text{lax-untl}}^{\text{lax-fact}}\), with the left adjoint belonging to \(\mathsf{FactCat}^{\text{lax-fact}}\). Note that by construction, the right adjoint makes the square
commute, and in particular is conservative over \(X^{I}_{\text{dR}}\) for any finite set \(I\).
We now define a t-structure on the category
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}_{0}^{\text{fact }}(\mathsf{Rep}(G\times M))_{X^{I}_{\text{dR}}}\]
for any finite set \(I\). Recall that by Proposition 4.2.1, the category
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X ^{I}_{\text{dR}}}\]
admits a unique t-structure which makes the forgetful functor to \(\mathsf{Rep}(G\times P)_{X^{I}_{\text{dR}}}\) t-exact.
**Lemma 5.5.1**.: _For any finite set \(I\), there is a unique t-structure on the category_
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}_{0}^{\text{fact }}(\mathsf{Rep}(G\times M))_{X^{I}_{\text{dR}}}\]
_which makes the left adjoint in (5.4.2) \(t\)-exact over \(X^{I}_{\text{dR}}\), and this functor moreover restricts to an equivalence on the hearts of the t-structures._
Proof.: Since the right adjoint in (5.4.2) is conservative, there is a unique t-structure which makes the left adjoint right t-exact. To see that it is left t-exact, it suffices to check that the corresponding monad on
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\]
is left t-exact. The forgetful functor
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\longrightarrow\mathsf{Rep}(G\times P)_{X^{I}_{\text{dR}}}\]
is t-exact and intertwines the monad in question with the monad
\[\operatorname{id}_{\mathsf{Rep}(G)}\otimes(\operatorname{oblv}^{\text{ch}\to*}\circ\operatorname{ind}^{*\to\text{ch}}\circ\operatorname{res}^{N_{P}}_{\mathfrak{n}_{P}})\]
on the target. Thus it suffices to show that the latter monad is left t-exact, which follows from the PBW theorem for factorization modules as in the proof of Proposition 7.11.1 in [11].
The left adjoint in (5.4.2) is an equivalence on hearts by a similar calculation using the PBW theorem (see the proof of Lemma 7.16.1 in _loc. cit._; cf. also Proposition 4.4.4 above).
By Proposition 4.4.4, the functor
\[\mathsf{Rep}(P)_{X^{I}_{\text{dR}}}\cong\mathcal{O}_{G}\text{--}\mathsf{mod}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\xrightarrow{\operatorname{oblv}^{\text{com}\to\text{fact}}}\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\]
restricts to an equivalence on the hearts of the t-structures. Combining this with Lemma 5.5.1, we obtain an equivalence
\[(\mathsf{Rep}(P)_{X^{I}_{\text{\rm{dR}}}})^{\heartsuit}\tilde{\longrightarrow} (\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{\text{\rm{$G$}}})\text{--}\mathsf{mod }^{\text{fact}}_{0}(\mathsf{Rep}(G\times M))_{X^{I}_{\text{\rm{dR}}}})^{ \heartsuit}.\]
**Proposition 5.5.2**.: _For any finite set \(I\), the t-structure on_
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{\text{\rm{$G$}}})\text{--}\mathsf{mod }^{\text{fact}}_{0}(\mathsf{Rep}(G\times M))_{X^{I}_{\text{\rm{dR}}}}\]
_satisfies the hypotheses of Proposition 3.9.2. In particular, this category is almost ULA generated over \(X^{I}\)._
Proof.: First, we show that the t-structure is almost ULA generated. Coherence follows from Lemma 5.5.1 using the criterion of Lemma 2.4.7, since the left adjoint in (5.4.2) preserves almost compact objects, the category
\[\mathsf{Rep}(P)^{\heartsuit}_{X^{I}_{\text{\rm{dR}}}}\]
is compactly generated, and
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\]
is almost compactly generated by Corollary 4.6.1.1. Recall that
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))_{X^{I}_{\text{dR}}}\]
admits a set of almost ULA generators by Corollary 4.6.1.1, and since the right adjoint in (5.4.2) is left t-exact, conservative, and \(\mathsf{D}(X^{I})\)-linear, the claim follows.
Hypothesis (ii) of Proposition 3.9.1 follows from the corresponding property of
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))\]
via the right adjoint in (5.4.2), which is left t-exact and conservative over \(X^{I}_{\text{dR}}\) for all \(I\). As for hypothesis (iii), by right completeness it suffices to check that the structure functors in question are left t-exact on the heart of the t-structure. Since
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))\]
has this property by Corollary 4.6.1.1, the claim follows from Lemma 5.5.1.
Similarly, the hypothesis in Proposition 3.9.2 is satisfied because of Lemma 5.5.1 and the corresponding property of
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P)).\]
### Definition of the spectral Hecke categories
Applying Propositions 3.9.2 and 5.5.2, we obtain an object
\[\mathsf{Sph}^{\text{spec}}_{G,P}:=\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G}) \text{--}\mathsf{mod}^{\text{fact}}_{0}(\mathsf{Rep}(G\times M))^{\text{ren}}\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\). Composing the morphism
\[\mathsf{Rep}(P)\cong\mathcal{O}_{G}\text{--}\mathsf{mod}(\mathsf{Rep}(G\times P))\xrightarrow{\operatorname{oblv}^{\text{com}\to\text{fact}}}\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times P))\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\) with the left adjoint in (5.4.2), we obtain a morphism
\[\mathsf{Rep}(P)\longrightarrow\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G}) \text{--}\mathsf{mod}^{\text{fact}}_{0}(\mathsf{Rep}(G\times M)). \tag{5.6.1}\]
This functor is t-exact over \(X^{I}_{\text{dR}}\) for every finite set \(I\), and hence renormalizes to a morphism
\[\mathsf{Rep}(P)\longrightarrow\mathsf{Sph}^{\text{spec}}_{G,P} \tag{5.6.2}\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\), where we used Proposition 4.1.2 to identify \(\mathsf{Rep}(P)^{\text{ren}}\cong\mathsf{Rep}(P)\). By Lemma 5.5.1, this induces an equivalence
\[\mathsf{Rep}(P)^{\heartsuit}_{X^{I}_{\text{dR}}}\tilde{\longrightarrow}( \mathsf{Sph}^{\text{spec}}_{G,P})^{\heartsuit}_{X^{I}_{\text{dR}}}\]
for any finite set \(I\).
**Proposition 5.6.1**.: _The morphism (5.6.2) admits a right adjoint in \(\mathsf{FactCat}^{\mathrm{lax-fact}}_{\mathrm{lax-untl}}\), which is conservative over \(X^{I}_{\mathrm{dR}}\) for any finite set \(I\). In particular, the object \(\mathsf{Sph}^{\mathrm{spec}}_{G,P}\) belongs to \(\mathsf{FactCat}\)._
Proof.: Theorem 4.6.1, together with the existence of the adjunction (5.4.2), implies that
\[\mathsf{Rep}(P)_{X^{I}_{\text{dR}}}\longrightarrow(\mathsf{Sph}^{\text{spec}}_ {G,P})_{X^{I}_{\text{dR}}}\]
preserves ULA objects for any finite set \(I\). Moreover, the essential image of this functor generates its target, since it induces an equivalence on the hearts of the t-structures. Since \(\mathsf{Rep}(P)_{X^{I}_{\text{dR}}}\) is ULA generated by Proposition 4.1.2, we obtain the desired right adjoint. It follows that \(\mathsf{Sph}^{\text{spec}}_{G,P}\) factorizes strictly, since \(\mathsf{Rep}(P)\) does and we have just shown that (5.6.2) has a monadic right adjoint.
### Associative algebra and bimodule structures
In the case \(G=P\), we will write
\[\mathsf{Sph}^{\text{spec}}_{G}:=\mathsf{Sph}^{\text{spec}}_{G,G}\]
to simplify the notation. Here the factorization algebra \(\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})=\mathcal{O}_{G}\) is the monoidal unit in
\[\mathsf{Rep}(G\times G)\cong\underline{\mathsf{End}}_{\mathsf{FactCat}_{\mathrm{lax-untl}}}(\mathsf{Rep}(G)).\]
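For orientation, we note (at the level of the underlying classical symmetric monoidal categories, and without re-verifying the factorization enhancements of these identifications) how this works: a kernel \(K\in\mathsf{Rep}(G\times G)\) corresponds to the endofunctor \(V\mapsto(K\otimes V)^{G}\), with invariants taken diagonally against one of the two \(G\)-factors. For \(K=\mathcal{O}_{G}\) with its two translation actions, the Peter-Weyl-style isomorphism
\[(\mathcal{O}_{G}\otimes V)^{G}\cong V\]
exhibits \(\mathcal{O}_{G}\) as the kernel of the identity functor, consistent with its role as the monoidal unit.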
The object
\[\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times G))\]
is obtained by applying the construction (3.7.2) to \(\mathsf{Rep}(G\times G)\), hence inherits an associative algebra structure in \(\mathsf{FactCat}^{\text{lax-fact}}\). Similarly, the object
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{--}\mathsf{mod}^{\text{fact}}( \mathsf{Rep}(G\times M))\]
is naturally a bimodule for the pair
\[(\mathcal{O}_{G}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(G\times G)), \mathcal{O}_{M}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}(M\times M))).\]
**Proposition 5.7.1**.: _The object \(\mathsf{Sph}^{\mathrm{spec}}_{G}\) admits a unique structure of associative algebra in \(\mathsf{FactCat}\) compatible with the morphism_
\[\mathsf{Sph}^{\mathrm{spec}}_{G}\longrightarrow\mathcal{O}_{G}\mbox{-} \mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times G)).\]
_Similarly, the object \(\mathsf{Sph}^{\mathrm{spec}}_{G,P}\) admits a unique structure of \((\mathsf{Sph}^{\mathrm{spec}}_{G},\mathsf{Sph}^{\mathrm{spec}}_{M})\)-bimodule in \(\mathsf{FactCat}\) compatible with the morphism_
\[\mathsf{Sph}^{\mathrm{spec}}_{G,P}\longrightarrow\Upsilon(\mathfrak{n}_{P}, \mathcal{O}_{G})\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times M))\]
_in \(\mathsf{FactCat}^{\mathrm{lax\mbox{-}fact}}\)._
Proof.: For the first claim, by Proposition 3.9.3 it suffices to show that the unit and multiplication morphisms for the associative algebra
\[\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times G))\]
are unary and binary morphisms, respectively, in the pseudo-tensor category \((\mathsf{FactCat}^{\mathrm{lax\mbox{-}fact}})^{\otimes}_{\mathrm{aULA}}\). This means that for any finite set \(I\), the monoidal unit object in
\[\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times G))_ {X^{I}_{\mathrm{dR}}}\]
is eventually coconnective, and that eventually coconnective objects are stable under the monoidal operation. By Lemmas 4.3.1 and 4.4.2, the unit object \(\mathcal{O}_{G,X^{I}}\) is concentrated in cohomological degree \(-\#I\), and in particular is eventually coconnective. As for the multiplication, the t-exact monoidal functor
\[\mathrm{oblv}^{\mathrm{com}\to\mathrm{fact}}:\mathsf{Rep}(G)_{X^{I}_{\mathrm{ dR}}}\longrightarrow\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G \times G))_{X^{I}_{\mathrm{dR}}}\]
induces an equivalence on the hearts by Proposition 4.4.4, and the monoidal structure on \(\mathsf{Rep}(G)_{X^{I}_{\mathrm{dR}}}\) is left t-exact in each variable separately. The claim now follows by right completeness of the t-structure on
\[\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times G))_ {X^{I}_{\mathrm{dR}}}.\]
Recall that the morphism (5.6.1) in \(\mathsf{FactCat}^{\mathrm{lax\mbox{-}fact}}\) is t-exact and induces an equivalence on the hearts over each \(X^{I}_{\mathrm{dR}}\). By right completeness of the t-structures, it follows that the action of the pair
\[(\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times G) ),\mathcal{O}_{M}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M\times M)))\]
preserves the full subcategory
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\mbox{-}\mathsf{mod}^{\mathrm{fact}} _{0}(\mathsf{Rep}(G\times M))\subset\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G}) \mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times M)),\]
and that the action of the pair on this subcategory takes place in \((\mathsf{FactCat}^{\mathrm{lax\mbox{-}fact}})^{\otimes}_{\mathrm{aULA}}\). Then we are done by Proposition 3.9.3.
## 6. Construction of the derived Satake transform
In this section, we construct the factorization categories that appear on the geometric (a.k.a. automorphic) side. We then construct functors to the spectral side, and formulate our main results as Theorems 6.6.1 and 6.12.3.
### Canonical twists
Let \(\rho\) denote the half-sum of the positive coroots of \(G\). Choose a square root \((\Omega^{1}_{X})^{\otimes\frac{1}{2}}\) of the canonical line bundle \(\Omega^{1}_{X}\) on \(X\), and put
\[\rho(\Omega^{1}_{X}):=2\rho((\Omega^{1}_{X})^{\otimes\frac{1}{2}}),\]
which is a well-defined \(T\)-bundle on \(X\) because \(2\rho\in\Lambda\) is an integral coweight.
In what follows, we replace \(G\) by its _canonical twist_ by \(\rho(\Omega^{1}_{X})\). By definition, this is the group scheme of automorphisms of the \(G\)-bundle on \(X\) defined by
\[G\mathop{\times}^{T}\rho(\Omega^{1}_{X}).\]
Similarly, we replace \(P\) and \(P^{-}\) by their canonical twists, defined in the same way (the corresponding twist of \(T\) itself is trivial because \(T\) is commutative). These are subgroup schemes of the canonical twist of \(G\). We remark that by construction, all of these twists are pure inner forms (cf. §2.15 of [10]) of the corresponding constant group schemes \(G\), \(P\), and \(P^{-}\). In particular, replacing each of these groups with its canonical twist does not affect the corresponding space of principal bundles.
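As a plausibility check on the construction (a standard example, recorded here only for orientation and not taken from the references above), consider \(G=\mathrm{SL}_{2}\). Then \(2\rho\) is the simple coroot, the \(T\)-bundle \(\rho(\Omega^{1}_{X})\) corresponds to the line bundle \((\Omega^{1}_{X})^{\otimes\frac{1}{2}}\), and the canonical twist of \(G\) is the group scheme
\[\mathrm{SL}\big((\Omega^{1}_{X})^{\otimes\frac{1}{2}}\oplus(\Omega^{1}_{X})^{\otimes-\frac{1}{2}}\big)\]
of determinant-one automorphisms of the displayed rank-two bundle.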
The canonical twist of \(N\) is defined as
\[N\mathop{\times}^{T}\rho(\Omega^{1}_{X})\]
using the adjoint action of \(T\) on \(N\), and similarly for \(N^{-}\). These are normal subgroup schemes of the canonical twists of \(B\) and \(B^{-}\) respectively, with the quotient in either case being the constant group scheme \(T\). The canonical twists of \(N\) and \(N^{-}\) are _not_ pure inner forms of the constant group schemes, and their spaces of principal bundles generally differ. One can similarly define the canonical twist of any subgroup of \(N\) stable under the adjoint action of \(T\), such as \(N_{P}\) or \(N_{M}:=N\cap M\).
These twists will be suppressed below in order to simplify the notation. Their presence can be safely ignored in most situations, with the notable exception of defining the Whittaker condition.
### The spherical Hecke stack
Write \(\mathcal{H}_{G}:=\mathfrak{L}^{+}G\backslash\mathfrak{L}G/\mathfrak{L}^{+}G\) for the spherical Hecke stack, a groupoid in corr-unital factorization spaces (i.e. factorization spaces with a unit correspondence rather than a single morphism, cf. [11] §10). The groupoid structure on \(\mathcal{H}_{G}\) induces a monoidal structure on
\[\mathsf{D}(\mathcal{H}_{G,X^{I}}):=\mathsf{D}((\mathfrak{L}G)_{X^{I}})^{ \mathfrak{L}^{+}G\times\mathfrak{L}^{+}G}\]
for each finite set \(I\), and moreover these assemble into an object \(\mathsf{D}(\mathcal{H}_{G})\) of \(\mathsf{AssocAlg}(\mathsf{FactCat})\) satisfying
\[\mathsf{D}(\mathcal{H}_{G})_{X^{I}_{\mathrm{dR}}}=\mathsf{D}(\mathcal{H}_{G, X^{I}}).\]
For any finite set \(I\), the Beilinson-Drinfeld affine Grassmannian
\[\mathrm{Gr}_{G,X^{I}}=(\mathfrak{L}G/\mathfrak{L}^{+}G)_{X^{I}}\]
is an ind-scheme locally of finite type, and in particular \(\mathsf{D}(\mathrm{Gr}_{G,X^{I}})\) is equipped with a natural t-structure. We give \(\mathsf{D}(\mathcal{H}_{G,X^{I}})\) the unique t-structure such that the forgetful functor
\[\mathsf{D}(\mathcal{H}_{G,X^{I}})=\mathsf{D}(\mathrm{Gr}_{G,X^{I}})^{ \mathfrak{L}^{+}G}\longrightarrow\mathsf{D}(\mathrm{Gr}_{G,X^{I}})\]
is t-exact.
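For orientation, we recall the standard moduli description of the Beilinson-Drinfeld affine Grassmannian (included here only as a reminder, and stated up to the usual Beauville-Laszlo identifications): an \(S\)-point of \(\operatorname{Gr}_{G,X^{I}}\) consists of a map \(x_{I}:S\to X^{I}\), a \(G\)-bundle \(\mathcal{P}\) on \(X\times S\), and a trivialization
\[\alpha:\mathcal{P}|_{(X\times S)\setminus\Gamma_{x_{I}}}\cong\mathcal{P}^{0}|_{(X\times S)\setminus\Gamma_{x_{I}}},\]
where \(\Gamma_{x_{I}}\) denotes the union of the graphs of the components of \(x_{I}\) and \(\mathcal{P}^{0}\) is the trivial bundle.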
### Construction of the spherical Hecke category
The naive Satake transform, as constructed in §6 of [14], is a morphism
\[\operatorname{Sat}_{G}^{\operatorname{naive}}:\operatorname{\mathsf{Rep}}( \check{G})\longrightarrow\mathsf{D}(\mathcal{H}_{G})\]
in \(\operatorname{\mathsf{AssocAlg}}(\operatorname{\mathsf{FactCat}})\). We recall from _loc. cit._ that for any finite set \(I\), the corresponding functor
\[\operatorname{\mathsf{Rep}}(\check{G})_{X^{I}_{\operatorname{dR}}} \longrightarrow\mathsf{D}(\mathcal{H}_{G,X^{I}})\]
is t-exact and induces an equivalence
\[\mathsf{Rep}(\check{G})_{X^{I}_{\mathrm{dR}}}^{\heartsuit}\tilde{\longrightarrow}\mathsf{D}(\mathcal{H}_{G,X^{I}})^{\heartsuit}.\]
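To see what this equivalence amounts to in the simplest case (a standard computation, included here only for orientation), take \(G=T\) a torus and restrict to a point \(x\in X(k)\). Then \(\operatorname{Gr}_{T,x}\) has underlying reduced ind-scheme the discrete set \(\Lambda\) of coweights, and
\[\mathsf{D}(\mathcal{H}_{T,x})^{\heartsuit}\cong\{\Lambda\text{-graded vector spaces}\}\cong\mathsf{Rep}(\check{T})^{\heartsuit},\]
with convolution of delta sheaves \(\delta_{\lambda}\star\delta_{\mu}\cong\delta_{\lambda+\mu}\) matching the tensor product of characters of \(\check{T}\).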
**Lemma 6.3.1**.: _For any finite set \(I\), the functor_
\[\operatorname{Sat}_{G}^{\operatorname{naive}}:\operatorname{\mathsf{Rep}}( \check{G})_{X^{I}_{\operatorname{dR}}}\longrightarrow\mathsf{D}(\mathcal{H}_ {G,X^{I}})\]
_preserves almost ULA objects._
Proof.: It suffices to show that the \(\operatorname{\mathsf{IndCoh}}(X^{I})\)-linear functor
\[\operatorname{\mathsf{Rep}}(\check{G})_{X^{I}}\longrightarrow\mathsf{D}( \mathcal{H}_{G})_{X^{I}}\]
preserves almost compact objects. By Proposition 4.1.2, almost compact objects in \(\operatorname{\mathsf{Rep}}(\check{G})_{X^{I}}\) are compact. Moreover, the symmetric monoidal structure is rigid, which means that compactness in \(\operatorname{\mathsf{Rep}}(\check{G})_{X^{I}}\) is equivalent to dualizability. Since \(\operatorname{Sat}_{G}^{\operatorname{naive}}\) is monoidal, it preserves dualizable objects, so it remains only to show that the unit object \(\delta_{\mathfrak{L}^{+}G,X^{I}}\) is almost compact in \(\mathsf{D}(\mathcal{H}_{G})_{X^{I}}\). For this, recall that the forgetful functor
\[\operatorname{oblv}_{\mathfrak{L}^{+}G}:\mathsf{D}(\mathcal{H}_{G})_{X^{I}} \longrightarrow\mathsf{D}(\operatorname{Gr}_{G})_{X^{I}}\]
is t-exact, conservative, and admits an \(\operatorname{\mathsf{IndCoh}}(X^{I})\)-linear right adjoint, namely \(*\)-averaging with respect to \(\mathfrak{L}^{+}G\). The image of \(\delta_{\mathfrak{L}^{+}G,X^{I}}\) under this functor is the direct image of \(\omega_{X^{I}}\) along the unit section
\[X^{I}\longrightarrow X^{I}\times_{X^{I}_{\operatorname{dR}}}(\operatorname{ Gr}_{G,X^{I}})_{\operatorname{dR}},\]
which is ind-proper. It follows that \(\operatorname{oblv}_{\mathfrak{L}^{+}G}(\delta_{\mathfrak{L}^{+}G,X^{I}})\) is compact in \(\mathsf{D}(\operatorname{Gr}_{G})_{X^{I}}\), and hence \(\delta_{\mathfrak{L}^{+}G,X^{I}}\) is almost compact in \(\mathsf{D}(\mathcal{H}_{G})_{X^{I}}\) by Lemma 2.4.1.
**Proposition 6.3.2**.: _The t-structure on \(\mathsf{D}(\mathcal{H}_{G,X^{I}})\) satisfies the hypotheses of Proposition 3.9.2. Moreover, the object_
\[\operatorname{\mathsf{Sph}}_{G}:=\mathsf{D}(\mathcal{H}_{G})^{\operatorname{ ren}}\]
_belongs to \(\operatorname{\mathsf{FactCat}}\) and admits a unique associative algebra structure compatible with_
\[\operatorname{\mathsf{Sph}}_{G}\longrightarrow\mathsf{D}(\mathcal{H}_{G}).\]
Proof.: We first show that the t-structure on \(\mathsf{D}(\mathcal{H}_{G,X^{I}})\) is coherent, or equivalently that any object compact in \(\mathsf{D}(\mathcal{H}_{G,X^{I}})^{\geq 0}\) is almost compact in \(\mathsf{D}(\mathcal{H}_{G,X^{I}})\). The t-structure on \(\mathsf{D}(\operatorname{Gr}_{G,X^{I}})\) is coherent because \(\operatorname{Gr}_{G,X^{I}}\) is an ind-scheme locally of finite type. Thus the claim follows from Lemma 2.4.1 applied to the functor
\[\operatorname{oblv}_{\mathfrak{L}^{+}G}:\mathsf{D}(\mathcal{H}_{G,X^{I}}) \longrightarrow\mathsf{D}(\operatorname{Gr}_{G,X^{I}}),\]
which is t-exact, conservative, and admits a continuous right adjoint.
The functor
\[\operatorname{Sat}_{G}^{\operatorname{naive}}:\operatorname{\mathsf{Rep}}( \check{G})_{X^{I}_{\operatorname{dR}}}\longrightarrow\mathsf{D}(\mathcal{H}_ {G,X^{I}})\]
induces an equivalence on the hearts and preserves almost ULA objects by Lemma 6.3.1, and \(\operatorname{\mathsf{Rep}}(\check{G})_{X^{I}_{\operatorname{dR}}}\) is ULA generated by Proposition 4.1.2. It follows readily that \(\mathsf{D}(\mathcal{H}_{G,X^{I}})\) is almost ULA
generated. Conditions (ii-iii) of Proposition 3.9.1, as well as the hypothesis of Proposition 3.9.2, can be deduced from the corresponding properties of \(\mathsf{D}(\operatorname{Gr}_{G,X^{I}})\).
By Proposition 4.1.2, renormalizing \(\operatorname{Sat}_{G}^{\operatorname{naive}}\) yields a morphism
\[\mathsf{Rep}(\check{G})\longrightarrow\mathsf{Sph}_{G}\]
in \(\operatorname{AssocAlg}(\mathsf{FactCat}^{\mathrm{lax-fact}})\). Moreover, by Lemma 6.3.1, this morphism admits a right adjoint in \(\mathsf{FactCat}^{\mathrm{lax-fact}}_{\mathrm{lax-untl}}\). Since \(\operatorname{Sat}_{G}^{\operatorname{naive}}\) is an equivalence on the hearts, this right adjoint is conservative and hence monadic over each \(X^{I}_{\mathrm{dR}}\). It follows that \(\mathsf{Sph}_{G}\) belongs to \(\mathsf{FactCat}\), i.e. factorizes strictly, since \(\mathsf{Rep}(\check{G})\) has this property.
For the last claim, by Proposition 3.9.3 it suffices to show that \(\mathsf{D}(\mathcal{H}_{G})\) is an associative algebra in the pseudo-tensor category \((\mathsf{FactCat}^{\mathrm{lax-fact}})^{\otimes}_{\mathrm{aULA}}\). This follows from right completeness and the corresponding property of \(\mathsf{Rep}(\check{G})\), once more using the fact that the monoidal functor
\[\operatorname{Sat}_{G}^{\operatorname{naive}}:\mathsf{Rep}(\check{G})_{X^{ I}_{\operatorname{dR}}}\longrightarrow\mathsf{Sph}_{G,X^{I}_{\operatorname{dR}}}\]
is t-exact and induces an equivalence on the hearts for any finite set \(I\).
_Remark 6.3.3_.: The above result may be compared with [13] Corollary 1.2 in the Betti setting.
### Whittaker (co)invariants
We have an isomorphism
\[N^{-}/[N^{-},N^{-}]\cong\prod_{i\in\mathcal{I}_{G}}\mathbb{G}_{a}^{\Omega^{ \mathsf{1}}_{X}}\]
of group schemes over \(X\), where
\[\mathbb{G}_{a}^{\Omega^{\mathsf{1}}_{X}}:=\mathbb{G}_{a}\overset{\mathbb{G}_{ m}}{\times}\Omega^{\mathsf{1}}_{X}\]
is the canonical twist of \(\mathbb{G}_{a}\). Summing over \(\mathcal{I}_{G}\), we obtain a homomorphism \(N^{-}\to\mathbb{G}_{a}^{\Omega^{\mathsf{1}}_{X}}\) and hence a homomorphism of factorizable group ind-schemes
\[\mathfrak{L}N^{-}\longrightarrow\mathfrak{L}(\mathbb{G}_{a}^{\Omega^{\mathsf{ 1}}_{X}}).\]
Finally, we compose with the canonical residue homomorphism
\[\mathfrak{L}(\mathbb{G}_{a}^{\Omega^{\mathsf{1}}_{X}})\longrightarrow\mathbb{ G}_{a}\]
to obtain
\[\psi:\mathfrak{L}N^{-}\longrightarrow\mathbb{G}_{a}.\]
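To unwind \(\psi\) in the simplest case (an illustration; the choice of group and local coordinate below is made only for concreteness), let \(G=\mathrm{SL}_{2}\) and fix a point \(x\in X(k)\) with local coordinate \(t\). The isomorphism above identifies the abelianization of the twisted \(\mathfrak{L}N^{-}\) at \(x\) with the additive group of \(k((t))\,dt\), and the character is the usual residue:
\[\psi(f(t)\,dt)=\operatorname{Res}_{t=0}\big(f(t)\,dt\big).\]
For general \(G\), \(\psi\) is the sum over \(i\in\mathcal{I}_{G}\) of the residues of the components of the image of a loop in \(\mathfrak{L}(N^{-}/[N^{-},N^{-}])\).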
In particular, we obtain a multiplicative D-module (a.k.a. character D-module of rank 1) \(\psi^{!}\exp[1]\) on \(\mathfrak{L}N^{-}\). For any category \(\mathcal{C}\) acted on by \(\mathsf{D}((\mathfrak{L}G)_{X^{I}})\), we can consider the associated Whittaker invariants
\[\mathcal{C}^{\mathfrak{L}N^{-},\psi}:=\mathcal{C}^{\mathfrak{L}N^{-},\psi^{!} \exp[1]},\]
and similarly for coinvariants (we suppress \(X^{I}\) from the notation here for simplicity). Theorem 2.2.1 of [11] asserts that there is a canonical equivalence
\[\mathcal{C}_{\mathfrak{L}N^{-},\psi}\tilde{\longrightarrow}\mathcal{C}^{\mathfrak{L}N^{-},\psi}.\]
### The spherical Whittaker category
As shown in [14] Corollary 2.31.2, the \(\mathsf{D}(X^{I})\)-module categories
\[\mathsf{D}(\operatorname{Gr}_{G,X^{I}})^{\mathfrak{L}N^{-},\psi}\]
assemble into an object
\[\mathsf{D}(\operatorname{Gr}_{G})^{\mathfrak{L}N^{-},\psi}\]
of \(\mathsf{FactCat}\), with unit given by the so-called Whittaker vacuum object. Convolution on the right defines an action of \(\mathsf{D}(\mathcal{H}_{G})\) on this category, and in particular, we obtain a morphism
\[\mathsf{D}(\mathcal{H}_{G})\longrightarrow\mathsf{D}(\operatorname{Gr}_{G}) ^{\mathfrak{L}N^{-},\psi}\]
in \(\mathsf{FactCat}\) by acting on the unit.
**Theorem 6.5.1**.: _The composite functor_
\[\mathsf{Rep}(\check{G})\xrightarrow{\operatorname{Sat}_{G}^{\operatorname{ naive}}}\mathsf{D}(\mathcal{H}_{G})\longrightarrow\mathsf{D}(\operatorname{Gr}_{G})^{ \mathfrak{L}N^{-},\psi}\]
_is an isomorphism in \(\mathsf{FactCat}\)._
Proof.: This is Theorem 6.36.1 of [14].
In particular, the equivalence in the theorem induces an action of \(\mathsf{D}(\mathcal{H}_{G})\) on
\[\mathsf{Rep}(\check{G})\cong\mathsf{D}(\operatorname{Gr}_{G})^{\mathfrak{L} N^{-},\psi}\]
in \(\mathsf{FactCat}\). By Proposition 3.4.1, this action corresponds to a morphism
\[\mathsf{D}(\mathcal{H}_{G})\longrightarrow\underline{\mathsf{End}}_{\mathsf{FactCat}_{\mathrm{lax-untl}}}(\mathsf{Rep}(\check{G}))\cong\mathsf{Rep}(\check{G}\times\check{G}) \tag{6.5.1}\]
in \(\mathsf{AssocAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\).
We will need the following observation.
**Lemma 6.5.2**.: _For any finite set \(I\), the functor_
\[\mathsf{D}(\mathcal{H}_{G})_{X^{I}_{\mathrm{dR}}}\xrightarrow{(6.5.1)}\mathsf{Rep}(\check{G}\times\check{G})_{X^{I}_{\mathrm{dR}}}\]
_is t-exact._
### Derived Satake transform
Note that the functor (6.5.1) is not strictly unital with respect to the factorization structure. Namely, it preserves monoidal units, but the monoidal and factorization units in \(\mathsf{Rep}(\check{G}\times\check{G})\) do not coincide.
Applying the functor (3.7.2), we obtain a morphism
\[\mathsf{D}(\mathcal{H}_{G})\longrightarrow\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G}\times\check{G})) \tag{6.6.1}\]
in \(\mathsf{AssocAlg}(\mathsf{FactCat}^{\mathrm{lax-fact}})\).
Observe that the monoidal functor
\[\mathsf{D}(\mathcal{H}_{G})_{X^{I}_{\mathrm{dR}}}\xrightarrow{(6.6.1)}\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G}\times\check{G}))_{X^{I}_{\mathrm{dR}}}\]
is t-exact for every finite set \(I\), using Lemma 6.5.2, and takes values in the full subcategory
\[\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}_{0}(\mathsf{Rep}(\check{G}\times\check{G}))\subset\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G}\times\check{G})).\]
It therefore renormalizes to a morphism
\[\operatorname{Sat}_{G}:\mathsf{Sph}_{G}\longrightarrow\mathsf{Sph}^{\mathrm{spec}}_{\check{G}}\]
in \(\operatorname{AssocAlg}(\mathsf{FactCat})\), which we call the derived Satake transform.

The following is our first main theorem.

**Theorem 6.6.1**.: _The morphism \(\operatorname{Sat}_{G}\) is an isomorphism._

We will deduce Theorem 6.6.1 as the special case \(P=G\) of Theorem 6.12.3 below.

### The Whittaker Jacquet functor

In this subsection we construct a morphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi}\longrightarrow\mathsf{Rep}(\check{M}) \tag{6.7.1}\]
in \(\mathsf{FactCat}_{\mathrm{lax-untl}}\). First, we have a morphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi}\longrightarrow\mathsf{D}(\operatorname{Gr}_{M})^{\mathfrak{L}N_{M}^{-},\psi}.\]
Next, we will need a certain automorphism of \(\mathsf{D}(\operatorname{Gr}_{M})\), which accounts for the discrepancy between the canonical twists for \(G\) and for \(M\).
Since \(\mathfrak{L}N_{M}^{-}\) is connected, this automorphism of \(\mathsf{D}(\operatorname{Gr}_{M})\) preserves the Whittaker condition. Compose this automorphism with the previously constructed morphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N ^{-},\psi}\longrightarrow\mathsf{D}(\operatorname{Gr}_{M})^{\mathfrak{L}N_{ M}^{-},\psi}\]
to obtain another such morphism. Finally, applying Theorem 6.5.1 to \(M\), we have
\[\mathsf{D}(\operatorname{Gr}_{M})^{\mathfrak{L}N_{M}^{-},\psi}\tilde{ \longrightarrow}\mathsf{Rep}(\tilde{M}).\]
This completes the construction of the morphism (6.7.1).
**Theorem 6.7.1**.: _The morphism (6.7.1) sends the unit object to the factorization algebra \(\Upsilon(\tilde{\mathfrak{n}}_{\check{P}})\) in \(\mathsf{Rep}(\tilde{M})\)._
Proof.: In the case \(P=G\), the morphism (6.7.1) is inverse to the equivalence of Theorem 6.5.1, and \(\Upsilon(\tilde{\mathfrak{n}}_{\check{P}})\) is the unit object in \(\mathsf{Rep}(\tilde{M})=\mathsf{Rep}(\check{G})\), so there is nothing to show.
The case \(P=B\) is Theorem 4.4.1 of [10].
The general case will be proved in forthcoming work by Faergeman and Hayashi.
In particular, applying the construction (3.5.1) to (6.7.1), we obtain a morphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L} N^{-},\psi}\longrightarrow\Upsilon(\tilde{\mathfrak{n}}_{\check{P}})\text{-} \mathsf{mod}^{\operatorname{fact}}(\mathsf{Rep}(\tilde{M})) \tag{6.7.2}\]
in \(\mathsf{FactCat}\).
### The accessible Whittaker category
The morphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi}\xrightarrow{\operatorname{oblv}_{\mathfrak{L}N^{-},\psi}}\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\xrightarrow{\operatorname{Av}_{*}^{\mathfrak{L}^{+}G}}\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\]
in \(\mathsf{FactCat}_{\text{lax-untl}}\) admits a left adjoint (see [10] Proposition 5.4.1), which we denote by \(\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}\). Moreover, by the construction of unital structures in _loc. cit._, the morphism \(\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}\) is strictly unital.
Define the factorization subcategory
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L} N^{-},\psi,\operatorname{acc}}\subset(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi}\]
to be generated under colimits by the essential image of
\[\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}:\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\longrightarrow( \mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L }N^{-},\psi}.\]
As explained in _loc. cit._, this subcategory admits a unique unital factorization structure which makes the inclusion a morphism in \(\mathsf{FactCat}\).
**Theorem 6.8.1**.: _The morphism (6.7.2) restricts to an equivalence_
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L} N^{-},\psi,\operatorname{acc}}\tilde{\longrightarrow}\Upsilon(\tilde{ \mathfrak{n}}_{\check{P}})\text{-}\mathsf{mod}_{0}^{\operatorname{fact}}( \mathsf{Rep}(\tilde{M})).\]
Proof.: In the case \(P=G\), the accessibility condition is vacuous, and this is inverse to the equivalence of Theorem 6.5.1.
In the case \(P=B\), Theorem 5.7.1 of [10] says that this functor is fully faithful. To prove essential surjectivity, we claim that for arbitrary \(P\), the image of the functor generates
\[\Upsilon(\tilde{\mathfrak{n}}_{\check{P}})\text{-}\mathsf{mod}_{0}^{ \operatorname{fact}}(\mathsf{Rep}(\tilde{M}))\]
under colimits. The functor in question is \(\mathsf{Rep}(\tilde{M})\)-linear, where \(\mathsf{Rep}(\tilde{M})\) acts on
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L} N^{-},\psi,\operatorname{acc}}\]
via \(\operatorname{Sat}_{M}^{\operatorname{naive}}\). We have a commutative triangle
in \(\operatorname{\mathsf{FactCat}}\), where both diagonal morphisms are given by acting on the unit for the factorization structure. Similarly, the triangle
commutes because both circuits are \(\operatorname{\mathsf{Rep}}(\tilde{M})\)-linear morphisms in \(\operatorname{\mathsf{FactCat}}\). Since the images of \(\operatorname{\operatorname{triv}}_{\tilde{N}_{\tilde{P}}}\) and (5.3.1) generate their targets under colimits, the claim follows.
For the general case, by the argument above it suffices to show full faithfulness. This can be deduced from Theorem 6.7.1 by adapting the argument of [14] to the parabolic case. One also needs a parabolic version of the Arkhipov-Bezrukavnikov equivalence [1], which can be deduced from the case \(P=B\), as explained in §6 of [1] (cf. also [1]).
### The t-structure on the semi-infinite spherical category
We now prepare to define the spherical Hecke category \(\mathsf{Sph}_{G,P}\) attached to \(P\). Appealing again to [14] §2, we recall that the categories
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{\operatorname{\mathcal{X}}_{\operatorname{dR}}^{I}}\]
assemble into an object of \(\operatorname{\mathsf{FactCat}}\). We will denote the unit object by \(\Delta^{0}\).
Consider the correspondence of corr-unital factorization spaces
The composite morphism
\[\mathsf{D}(\mathcal{H}_{M})\xrightarrow{\mathfrak{q}^{!}}\mathsf{D}(\mathfrak{ L}^{+}P\backslash\mathfrak{L}P/\mathfrak{L}^{+}M)\longrightarrow\mathsf{D}( \mathfrak{L}^{+}P\backslash\mathfrak{L}P)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\]
is an isomorphism in \(\operatorname{\mathsf{FactCat}}\). We will abuse notation and denote it simply by \(\mathfrak{q}^{!}\), and its inverse by \(\mathfrak{q}_{!}\).
The morphism
\[\mathfrak{p}^{!}:\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\longrightarrow\mathsf{D}(\mathfrak{L}^{+}P\backslash\mathfrak{L}P)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\]
admits a left adjoint \(\mathfrak{p}_{!}\). Explicitly, the composite
\[\mathsf{D}(\mathcal{H}_{M})\xrightarrow{\mathfrak{q}^{!}}\mathsf{D}(\mathfrak{ L}^{+}P\backslash\mathfrak{L}P)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M} \xrightarrow{\mathfrak{p}_{!}}\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\]
is given by acting on the unit object.
Now consider the adjunction
\[\mathfrak{p}_{!}\mathfrak{q}^{!}[-\ell_{P}(\deg_{M})]:\mathsf{D}(\mathcal{H}_ {M})\rightleftarrows\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{ \mathfrak{L}N_{P}\mathfrak{L}^{+}M}:\mathfrak{q}_{!}\mathfrak{p}^{!}[\ell_{P}( \deg_{M})], \tag{6.9.1}\]
which takes place in \(\mathsf{FactCat}_{\mathrm{lax-untl}}\).
We equip \((\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X^{I}_{\mathrm{dR}}}\) with the t-structure characterized by the requirement that
\[\mathfrak{q}_{!}\mathfrak{p}^{!}[\ell_{P}(\mathrm{deg}_{M})]:(\mathsf{D}( \mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M })_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathsf{D}(\mathcal{H}_{M})_{X^{I}_{ \mathrm{dR}}}\]
be left t-exact. Equivalently, the connective objects are generated by the image of \(\mathsf{D}(\mathcal{H}_{M,X^{I}})^{\leqslant 0}\) under \(\mathfrak{p}_{!}\mathfrak{q}^{!}[-\ell_{P}(\mathrm{deg}_{M})]\).
We remark that this does _not_ agree with the t-structure on this category defined (in the case \(P=B\)) by Gaitsgory in [1]. Namely, unlike the t-structure introduced in _loc. cit._, our t-structure is local on the curve \(X\) (since the t-structure on \(\mathsf{D}(\mathcal{H}_{M,X^{I}})\) has this feature). However, the two t-structures do agree when restricted to objects supported at a single point \(x\in X(k)\).
**Lemma 6.9.1**.: _For any finite set \(I\), the functor_
\[\mathfrak{p}_{!}\mathfrak{q}^{!}[-\ell_{P}(\mathrm{deg}_{M})]:\mathsf{D}( \mathcal{H}_{M})_{X^{I}_{\mathrm{dR}}}\longrightarrow(\mathsf{D}(\mathfrak{L }^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{X^{I}_{ \mathrm{dR}}}\]
_is t-exact._
Proof.: This functor is right t-exact by definition, so it suffices to show left t-exactness. Since its right adjoint is conservative, it is enough to prove that the resulting monad on \(\mathsf{D}(\mathcal{H}_{M,X^{I}})\) is left t-exact. This monad is \(\mathsf{D}(\mathcal{H}_{M,X^{I}})\)-linear, so its value on an object \(\mathcal{M}\) in \(\mathsf{D}(\mathcal{H}_{M,X^{I}})\) is
\[(\mathfrak{q}_{!}\mathfrak{p}^{!}\Delta^{0}_{X^{I}})[\ell_{P}(\mathrm{deg}_{M })]\star\mathcal{M}.\]
Using \(\mathsf{D}(X^{I})\)-linearity, we can reduce to the case that \(I\) has one element. So we must show that if \(\mathcal{M}\) belongs to \(\mathsf{D}(\mathcal{H}_{M,X})^{\geqslant 0}\), then
\[(\mathfrak{q}_{!}\mathfrak{p}^{!}\Delta^{0}_{X})[\ell_{P}(\mathrm{deg}_{M})] \star\mathcal{M}\]
is coconnective. Since \(\mathsf{D}(\mathcal{H}_{M,X})\) is right complete, we can even assume that \(\mathcal{M}\) belongs to
\[\mathsf{D}(\mathcal{H}_{M,X})^{\heartsuit}\cong\mathsf{Rep}(\check{M})^{\heartsuit}_{X_{\mathrm{dR}}}\cong(\mathsf{Rep}(\check{M})\otimes\mathsf{D}(X))^{\heartsuit}.\]
Choose an etale coordinate on \(X\), so that \(\Delta^{0}_{X}\cong\Delta^{0}_{x}\boxtimes\omega_{X}\) for a fixed point \(x\in X(k)\). Thus it suffices to show that for any \(V\in\mathsf{Rep}(\check{M})^{\heartsuit}\), the object
\[(\mathfrak{q}_{!}\mathfrak{p}^{!}\Delta^{0}_{x})[\ell_{P}(\mathrm{deg}_{M})] \star\mathrm{Sat}^{\mathrm{naive}}_{M,x}(V)\]
belongs to \(\mathsf{D}(\mathcal{H}_{M,x})^{\geqslant 0}\). Theorem 1.5.5 in [1] says that
\[(\mathfrak{q}_{!}\mathfrak{p}^{!}\Delta^{0}_{x})[\ell_{P}(\mathrm{deg}_{M})]\]
is coconnective when \(P=B\), and the general case can be proved similarly. Now we are done by t-exactness of the convolution product on \(\mathsf{D}(\mathcal{H}_{M,x})\).
### The t-structure on the semi-infinite Whittaker category
Consider the morphism
\[\mathsf{Rep}(\check{M})\longrightarrow(\mathsf{D}(\mathfrak{L}N^{-},\psi \backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{\mathrm{acc}} \tag{6.10.1}\]
in \(\mathsf{FactCat}\) given by acting on the unit of the factorization structure via \(\mathrm{Sat}^{\mathrm{naive}}_{M}\). For any finite set \(I\), we equip the category
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L} ^{N-},\psi,\mathrm{acc}}_{X^{I}_{\mathrm{dR}}}\]
with the t-structure whose connective objects are generated under colimits by the image of \(\mathsf{Rep}(\check{M})^{\leqslant 0}\) under (6.10.1).
**Proposition 6.10.1**.: _The equivalence of Theorem 6.8.1 is t-exact over \(X^{I}_{\mathrm{dR}}\) for every finite set \(I\)._
Proof.: By definition, the connective objects in
\[\Upsilon(\tilde{\mathfrak{n}}_{\tilde{P}})\text{--mod}_{0}^{\text{fact}}(\mathsf{ Rep}(\tilde{M}))_{X^{I}_{\text{dR}}}\]
are generated by the image of \(\mathsf{Rep}(\tilde{P})_{X^{I}_{\text{dR}}}^{\leqslant 0}\) under the left adjoint in (5.3.1). But the latter is generated by the image of \(\mathsf{Rep}(\tilde{M})_{X^{I}_{\text{dR}}}^{\leqslant 0}\) under
\[\operatorname{triv}_{\tilde{N}}:\mathsf{Rep}(\tilde{M})_{X^{I}_{\text{dR}}} \longrightarrow\mathsf{Rep}(\tilde{P})_{X^{I}_{\text{dR}}},\]
so it suffices to show that the triangle
\[(\mathsf{D}(\mathfrak{L}N^{-},\psi\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{\mathrm{acc}}\]
formed by the morphism (6.10.1), the equivalence of Theorem 6.8.1, and the composite of \(\operatorname{triv}_{\check{N}_{\check{P}}}\) with the left adjoint in (5.3.1) commutes. This holds because both circuits are \(\mathsf{Rep}(\check{M})\)-linear morphisms in \(\mathsf{FactCat}\), and hence are determined by the image of the unit object.

**Proposition 6.10.2**.: _For any finite set \(I\), the morphism (6.10.1) is t-exact over \(X^{I}_{\mathrm{dR}}\)._

Proof.: Right t-exactness holds by the definition of the t-structure. Left t-exactness follows from Proposition 6.10.1 and Lemma 5.5.1 via the commutative triangle above.

### Exactness of Whittaker averaging

**Proposition 6.11.1**.: _For any finite set \(I\), the functor_
\[\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}:(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{X^{I}_{\mathrm{dR}}}\longrightarrow(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi,\mathrm{acc}}_{X^{I}_{\mathrm{dR}}}\]
_is t-exact and conservative on eventually coconnective objects._

Proof.: Right t-exactness follows readily from the definitions of the two t-structures. As for left t-exactness, using factorization we can assume that \(I\) has a single element. After etale localization on \(X\), we have
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{X_{\mathrm{dR}}}\cong(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}\otimes\mathsf{D}(X)\]
for a fixed \(x\in X(k)\), and similarly for
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^ {-},\psi,\mathrm{acc}}_{X_{\mathrm{dR}}}.\]
Thus, by Lemma 2.2.1 we can even reduce to proving the claim at a single point \(x\). Since the t-structure on \((\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{x}\) is right complete, it is enough to prove that the functor is left t-exact when restricted to the heart. By (parabolic variants of) Proposition 1.5.7 and Theorem 4.2.3 in [1], there is a functor
\[\mathsf{Rep}(\check{P})\longrightarrow(\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x} \tag{6.11.1}\]
which induces an equivalence
\[\mathsf{Rep}(\check{P})^{\heartsuit}\tilde{\longrightarrow}(\mathsf{D}( \mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M })^{\heartsuit}_{x}.\]
Since \(\mathsf{Rep}(\check{P})^{\heartsuit}\) is generated under extensions and direct sums by the image of
\[\operatorname{triv}_{\check{N}_{P}}:\mathsf{Rep}(\check{M})^{\heartsuit} \longrightarrow\mathsf{Rep}(\check{P})^{\heartsuit},\]
the claim follows from Proposition 6.10.2.
Finally, we prove conservativity on eventually coconnective objects. Again, using factorization we can reduce to the case that \(I\) has a single element. The functor \(\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}\) is left adjoint to \(\operatorname{Av}_{*}^{\mathfrak{L}^{+}G}\), and by the t-exactness statements proved above, the resulting monad on
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X_{\mathrm{dR}}}\]
is left t-exact. Thus it suffices to show that this monad is conservative on eventually coconnective objects. Observe that the monad is given by the action of the object
\[\operatorname{Av}_{*}^{\mathfrak{L}^{+}G}\operatorname{Av}_{!}^{\mathfrak{L} N^{-},\psi}\delta_{1,X}\]
in \(\mathsf{D}(\mathcal{H}_{G,X})\). After etale localization on \(X\) we have \((\mathfrak{L}G)_{X}\cong(\mathfrak{L}G)_{x}\times X\) for a fixed \(x\in X(k)\), from which it follows that the object
\[\operatorname{Av}_{*}^{\mathfrak{L}^{+}G}\operatorname{Av}_{!}^{\mathfrak{L} N^{-},\psi}\delta_{1,X}\]
has the form \(\mathcal{M}\boxtimes\omega_{X}\) with respect to the decomposition
\[\mathsf{D}(\mathcal{H}_{G,X})\cong\mathsf{D}(\mathcal{H}_{G,x})\otimes \mathsf{D}(X).\]
The naive Satake equivalence implies that for any \(i\in\mathbb{Z}\), we have \(H^{i}\mathcal{M}\cong\operatorname{Sat}_{G,x}^{\operatorname{naive}}V\) for some \(V\) in \(\mathsf{Rep}(\check{G})^{\heartsuit}\). Since the monad in question is left t-exact, we have \(H^{i}\mathcal{M}=0\) for \(i<0\), and Theorem 6.5.1 implies that \(H^{0}\mathcal{M}=\delta_{1,x}\). Thus it suffices to show that the action of the symmetric monoidal category \(\mathsf{Rep}(\check{G})\) on \((\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}\) is left t-exact. This follows from the fact that the functor (6.11.1) is \(\mathsf{Rep}(\check{G})\)-linear and induces an equivalence on the hearts.
**Lemma 6.11.2**.: _For any finite set \(I\), the t-structure on \((\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X_{\mathrm{dR}}^{I}}\) satisfies the conditions of Proposition 3.9.2._
Proof.: Most of the conditions can be easily deduced from the corresponding properties of the t-structure on \(\mathsf{D}(\mathcal{H}_{M})_{X_{\mathrm{dR}}^{I}}\), with the exception of coherence.
For coherence, we consider the adjunction of \(\mathsf{D}(X^{I})\)-module categories
\[\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}:(\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{X_{\mathrm{dR} }^{I}}\rightleftarrows(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{ L}^{+}M})^{\mathfrak{L}N^{-},\psi,\mathrm{acc}}_{X_{\mathrm{dR}}^{I}}: \operatorname{Av}_{*}^{\mathfrak{L}^{+}G}.\]
Using Theorem 6.8.1, we can replace the right side by its spectral counterpart:
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X_{\mathrm{dR}}^{I}}\rightleftarrows\Upsilon(\tilde{ \mathfrak{n}}_{\check{P}})\mathsf{-mod}_{0}^{\mathrm{fact}}(\mathsf{Rep}( \check{M}))_{X_{\mathrm{dR}}^{I}}.\]
Proposition 6.11.1 says that the left adjoint is t-exact, and also conservative on eventually coconnective objects. By Lemma 2.4.1, an object in \((\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X_{\mathrm{dR}}^{I}}\) is almost compact if and
only if it is so after applying the left adjoint. Now coherence of the left side follows from coherence of the right side, which was proved in Proposition 5.3.1.
Thus we can define
\[\mathsf{Sph}_{G,P}:=\left(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G) _{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\right)^{\mathrm{ren}},\]
which is _a priori_ an object of \(\mathsf{FactCat}^{\mathrm{lax-fact}}\). Note that the adjunction (6.9.1) renormalizes to a monadic adjunction
\[\mathsf{Sph}_{M}\rightleftarrows\mathsf{Sph}_{G,P},\]
and in particular \(\mathsf{Sph}_{G,P}\) belongs to \(\mathsf{FactCat}\) because \(\mathsf{Sph}_{M}\) does.
**Proposition 6.11.3**.: _The object \(\mathsf{Sph}_{G,P}\) admits a unique structure of \((\mathsf{Sph}_{G},\mathsf{Sph}_{M})\)-bimodule in \(\mathsf{FactCat}\) compatible with the morphism_
\[\mathsf{Sph}_{G,P}\longrightarrow\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}.\]
Proof.: By Proposition 3.9.3, it suffices to show that the \((\mathsf{D}(\mathcal{H}_{G}),\mathsf{D}(\mathcal{H}_{M}))\)-bimodule structure on \(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\) takes place in the pseudo-tensor category \(\mathsf{FactCat}^{\otimes}_{\mathrm{aULA}}\). For \(\mathsf{D}(\mathcal{H}_{M})\), this follows from the existence of the right adjoint in (6.9.1), which is \(\mathsf{D}(\mathcal{H}_{M})\)-linear as well as conservative and left t-exact over \(X^{I}_{\mathrm{dR}}\) for every finite set \(I\). As for \(\mathsf{D}(\mathcal{H}_{G})\), by right completeness of the t-structure and the naive Satake equivalence, it is enough to show that any object of \(\mathsf{Rep}(\check{G})^{\heartsuit}_{X^{I}_{\mathrm{dR}}}\) acts by a left t-exact functor. Using factorization, this immediately reduces to the corresponding claim at a point \(x\in X(k)\), which was established in the proof of Proposition 6.11.1.
### Derived Satake transform for a parabolic
The first step is to construct a morphism
\[\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M}\longrightarrow\mathsf{Rep}(\tilde{G}\times\tilde{M}) \tag{6.12.1}\]
in \(\mathsf{FactCat}_{\mathrm{lax-untl}}\). By Proposition 3.4.1, this is the same datum as a morphism
\[\mathsf{Rep}(\tilde{G})\otimes\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\longrightarrow\mathsf{ Rep}(\tilde{M}).\]
By Theorem 6.5.1, we can identify
\[\mathsf{Rep}(\tilde{G})\tilde{\longrightarrow}\mathsf{D}(\mathrm{Gr}_{G})^{ \mathfrak{L}N^{-},\psi},\]
so we will construct a morphism
\[\mathsf{D}(\mathrm{Gr}_{G})^{\mathfrak{L}N^{-},\psi}\otimes\mathsf{D}( \mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M }\longrightarrow\mathsf{Rep}(\tilde{M}).\]
First, convolve in the middle to obtain a morphism
\[\mathsf{D}(\mathrm{Gr}_{G})^{\mathfrak{L}N^{-},\psi}\otimes\mathsf{D}( \mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+} M}\longrightarrow\left(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+} M}\right)^{\mathfrak{L}N^{-},\psi}.\]
Then compose with (6.7.1) to obtain the desired morphism (6.12.1).
**Theorem 6.12.1**.: _The image of the unit under the morphism (6.12.1) is canonically isomorphic to \(\Upsilon(\tilde{\mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})\) as a factorization algebra._
Proof.: We have already seen this in the case \(P=G\), where the functor (6.12.1) is monoidal and \(\Upsilon(\check{\mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})=\mathcal{O}_{\check{G}}\) is the monoidal unit in \(\mathsf{Rep}(\check{G}\times\check{G})\).
For the general case, a parabolic variant of Theorem 7.9.1 in [11] says that the triangle
in \(\mathsf{FactCat}_{\text{lax-untl}}\) commutes. Here the left diagonal morphism is the composite
\[\mathsf{Rep}(\check{G})\longrightarrow\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\xrightarrow{\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}}(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi},\]
where the first functor is given by acting on the unit via \(\text{Sat}_{G}^{\text{naive}}\). Since \(\Upsilon(\tilde{\mathfrak{n}},-)\) corresponds to \(\Upsilon(\tilde{\mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})\) under the equivalence
\[\underline{\text{Hom}}_{\mathsf{FactCat}_{\text{lax-untl}}}(\mathsf{Rep}( \check{G}),\mathsf{Rep}(\check{M}))\cong\mathsf{Rep}(\check{G}\times\check{M}),\]
the claim follows.
Applying (3.5.1), we therefore obtain a morphism
\[\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M}\longrightarrow\Upsilon(\tilde{\mathfrak{n}}_{\check{P}}, \mathcal{O}_{\check{G}})\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}( \check{G}\times\check{M})). \tag{6.12.2}\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\). By construction, it is a morphism of \((\mathsf{D}(\mathcal{H}_{G}),\mathsf{D}(\mathcal{H}_{M}))\)-bimodules, where the bimodule structure on the right side comes from the action of
\[(\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}( \check{G}\times\check{G})),\mathcal{O}_{\check{M}}\text{--}\mathsf{mod}^{ \text{fact}}(\mathsf{Rep}(\check{M}\times\check{M})))\]
via (6.6.1).
**Proposition 6.12.2**.: _The image of the morphism (6.12.2) is contained in_
\[\Upsilon(\tilde{\mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})\text{--} \mathsf{mod}_{0}^{\text{fact}}(\mathsf{Rep}(\check{G}\times\check{M})).\]
_For any finite set \(I\), the resulting functor_
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{X^{I}_{\text{dR}}}\longrightarrow\Upsilon(\tilde{ \mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})\text{--}\mathsf{mod}_{0}^{ \text{fact}}(\mathsf{Rep}(\check{G}\times\check{M}))_{X^{I}_{\text{dR}}}\]
_is t-exact._
Proof.: The first claim follows from the fact that the image of the left adjoint in (6.9.1) generates the target under colimits, and hence the image of (6.12.2) is contained in the \(\mathsf{D}(\mathcal{H}_{M})\)-submodule category generated by the unit.
Right t-exactness of the functor in question follows from the commutative triangle
where the diagonal functors are given by acting on the unit objects via \(\text{Sat}_{M}^{\text{naive}}\). Namely, over each \(X^{I}_{\text{dR}}\), the diagonal functors are t-exact and generate their targets under colimits.
Using factorization, left t-exactness reduces to the corresponding claim at a point \(x\in X\). By right completeness of the t-structure, it suffices to show that (6.12.2) sends any object in
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{x}^{\heartsuit}\]
to a coconnective object. As in the proof of Proposition 6.11.1, we have an equivalence
\[(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{\mathfrak{L}N_{P} \mathfrak{L}^{+}M})_{x}^{\heartsuit}\cong\mathsf{Rep}(\check{P})^{\heartsuit},\]
and \(\mathsf{Rep}(\check{P})^{\heartsuit}\) is generated under extensions and filtered colimits by the image of
\[\text{triv}_{\check{N}_{\check{P}}}:\mathsf{Rep}(\check{M})^{\heartsuit} \longrightarrow\mathsf{Rep}(\check{P})^{\heartsuit}.\]
In view of the commutative triangle above, the claim follows from Lemma 5.5.1.
Thus (6.12.2) renormalizes to a morphism
\[\operatorname{Sat}_{G,P}:\operatorname{\mathsf{Sph}}_{G,P}\longrightarrow \operatorname{\mathsf{Sph}}_{\check{G},\check{P}}^{\operatorname{spec}}\]
in \(\operatorname{\mathsf{FactCat}}\). By construction, it is a morphism of \((\operatorname{\mathsf{Sph}}_{G},\operatorname{\mathsf{Sph}}_{M})\)-bimodules, where the bimodule structure on the target comes from \(\operatorname{Sat}_{G}\) and \(\operatorname{Sat}_{M}\).
The following is our second main theorem.
**Theorem 6.12.3**.: _The morphism \(\operatorname{Sat}_{G,P}\) is an isomorphism._
The proof of this theorem is contained in §10.
### Reduction to a point
Before proceeding, we show that in order to prove Theorem 6.12.3 (and hence its special case Theorem 6.6.1), it is enough to show that
\[\operatorname{Sat}_{G,P,x}:\operatorname{\mathsf{Sph}}_{G,P,x}\longrightarrow \operatorname{\mathsf{Sph}}_{\check{G},\check{P},x}^{\operatorname{spec}}\]
is an equivalence for any \(x\in X(k)\).
Consider the commutative triangle
in \(\operatorname{\mathsf{FactCat}}\), where the diagonal morphisms are uniquely characterized by \(\operatorname{\mathsf{Rep}}(\check{M})\)-linearity. For any finite set \(I\), the diagonal functors preserve ULA objects over \(X^{I}\) and generate their targets, which implies that the functor \(\operatorname{Sat}_{G,P}\) preserves ULA objects. Thus, by Proposition B.8.1 in [11], it suffices to show that this functor is an equivalence over each stratum in \(X^{I}\). Using factorization, we can therefore assume that \(I\) is a singleton, i.e. we must show that \(\operatorname{Sat}_{G,P,X_{\operatorname{dR}}}\) is an equivalence.
Choosing an etale coordinate on \(X\), we can moreover assume that we have \(\mathsf{D}(X)\)-linear equivalences
\[\operatorname{\mathsf{Sph}}_{G,P,X_{\operatorname{dR}}}\cong\operatorname{ \mathsf{Sph}}_{G,P,x}\otimes\mathsf{D}(X)\]
and
\[\operatorname{\mathsf{Sph}}_{\check{G},\check{P},X_{\operatorname{dR}}}^{\operatorname{spec}}\cong\operatorname{\mathsf{Sph}}_{\check{G},\check{P},x}^{\operatorname{spec}}\otimes\mathsf{D}(X)\]
for a fixed \(x\in X(k)\), and that the \(\mathsf{D}(X)\)-linear functor \(\operatorname{Sat}_{G,P,X_{\operatorname{dR}}}\) is induced from \(\operatorname{Sat}_{G,P,x}\). Thus it suffices to show that
\[\operatorname{Sat}_{G,P,x}:\operatorname{\mathsf{Sph}}_{G,P,x}\longrightarrow \operatorname{\mathsf{Sph}}_{\check{G},\check{P},x}^{\operatorname{spec}}\]
is an equivalence.
## 7. The case of a torus
The goal of this section is to formulate and prove Theorem 7.3.1, a factorizable version of local geometric class field theory in the de Rham setting, then deduce Theorem 6.6.1 in the case \(G=T\).
### Gauge forms
Let \(H\) denote an arbitrary algebraic group. For any finite set \(I\), the space \((\mathfrak{h}\otimes\Omega^{1}_{D})_{X^{I}_{\operatorname{dR}}}\) is defined as follows: a point \(S\rightarrow(\mathfrak{h}\otimes\Omega^{1}_{D})_{X^{I}_{\operatorname{dR}}}\) consists of \(x_{I}:S\to X^{I}_{\operatorname{dR}}\) together with a section of the vector bundle \((\mathfrak{h}\otimes\Omega^{1}_{X})|_{D_{x_{I}}}\). The space \((\mathfrak{h}\otimes\Omega^{1}_{D})_{X^{I}_{\operatorname{dR}}}^{x\text{-ram}}\) is defined similarly, but the section is allowed to have a pole along \(\{x\}\times S\).
These assemble into factorization spaces \(\mathfrak{h}\otimes\Omega^{1}_{D}\) and \((\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}\) which are co-unital and corr-unital, respectively. The inclusion
\[\mathfrak{h}\otimes\Omega^{1}_{D}\longrightarrow(\mathfrak{h}\otimes\Omega^ {1}_{D})^{x\text{-ram}}\]
is a morphism of corr-unital factorization spaces.
Fix a point \(x\in X(k)\). We let \((\mathfrak{L}^{+}H)^{x\text{-ram}}_{X^{I}_{\text{dR}}}\) be the group classifying maps \(D_{x_{I}}\to H\) which are possibly meromorphic along \(\{x\}\times S\). It also forms a corr-unital factorization space, and receives a homomorphism
\[\mathfrak{L}^{+}H\longrightarrow(\mathfrak{L}^{+}H)^{x\text{-ram}}\]
compatible with this structure.
The group \((\mathfrak{L}^{+}H)^{x\text{-ram}}\) acts on \((\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}\), compatibly with the factorization structure, via the gauge action (cf. §1.12 of [14]). The subgroup \(\mathfrak{L}^{+}H\) preserves the subspace \(\mathfrak{h}\otimes\Omega^{1}_{D}\). Define
\[\text{LS}_{H}(D):=(\mathfrak{h}\otimes\Omega^{1}_{D})/\mathfrak{L}^{+}H\]
and
\[\text{LS}_{H}(D)^{x\text{-ram}}:=(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{ -ram}}/(\mathfrak{L}^{+}H)^{x\text{-ram}},\]
a co-unital and corr-unital factorization space respectively. We have a canonical morphism of corr-unital factorization spaces
\[\iota:\text{LS}_{H}(D)\longrightarrow\text{LS}_{H}(D)^{x\text{-ram}}.\]
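For orientation, we recall the shape of the gauge action used above (a standard formula, stated in one common sign convention and not relied upon later): a point \(g\) of \((\mathfrak{L}^{+}H)^{x\text{-ram}}\) acts on a gauge form \(A\) by
\[g\cdot A\;=\;\mathrm{Ad}_{g}(A)\;-\;(dg)g^{-1}.\]
In particular, if \(g\) has no pole along \(\{x\}\times S\), then \(g\cdot A\) has no pole whenever \(A\) does not, which is the statement that \(\mathfrak{L}^{+}H\) preserves \(\mathfrak{h}\otimes\Omega^{1}_{D}\).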
The co-unital factorization structure on \(\text{LS}_{H}(D)\) makes \(\mathsf{QCoh}(\text{LS}_{H}(D))\) into an object of \(\mathsf{FactCat}^{\text{lax-fact}}\). We recall from [14], Lemma 9.8.1 that we have a canonical equivalence
\[\mathsf{QCoh}(\text{LS}_{H}(D))\tilde{\longrightarrow}\mathsf{Rep}(H)\]
in \(\mathsf{FactCat}^{\text{lax-fact}}\), and in particular \(\mathsf{QCoh}(\text{LS}_{H}(D))\) actually belongs to \(\mathsf{FactCat}\).
### Unital structures
We would like to upgrade \(\mathsf{QCoh}(\text{LS}_{H}(D)^{x\text{-ram}})\) to a unital factorization category. Given an injection of finite sets \(J\to I\), the corr-unital structure on \(\text{LS}_{H}(D)^{x\text{-ram}}\) determines a correspondence
The morphism \(\beta\) is ind-schematic, but generally not schematic unless \(H\) is unipotent. In particular, the usual direct image of quasicoherent sheaves along this map is poorly behaved.
**Proposition 7.2.1**.: _If \(H\) is reductive, then for any injection of finite sets \(J\to I\), the functor_
\[\beta^{*}:\mathsf{QCoh}(\text{LS}_{H}(D)^{x\text{-ram}}_{X^{I}_{\text{dR}}})\longrightarrow\mathsf{QCoh}(\text{LS}_{H}(D)^{x\text{-ram}}_{J\to I})\]
_admits a left adjoint._
Proof.: Writing \(\text{LS}_{H}(D)^{x\text{-ram},\log}:=(\mathfrak{h}\otimes\Omega^{1}_{D})^{x \text{-ram}}/\mathfrak{L}^{+}H\), we can factorize \(\beta\) as
\[\text{LS}_{H}(D)^{x\text{-ram}}_{J\to I}\xrightarrow{\beta_{1}}\text{LS}_{H}(D)^{x\text{-ram},\log}_{X^{I}_{\text{dR}}}\xrightarrow{\beta_{2}}\text{LS}_{H}(D)^{x\text{-ram}}_{X^{I}_{\text{dR}}}.\]
To see that \(\beta_{1}^{*}\) admits a left adjoint, it suffices to show the same for inverse image along the inclusion
\[(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}_{J\to I}\longrightarrow(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}_{X^{I}_{\text{dR}}}.\]
For any \(n\geqslant 0\), let
\[(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram},\leqslant n}_{J\to I} \subset(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}_{J\to I}\]
be the closed subscheme where the \(1\)-form is allowed to have poles of order at most \(n\) at \(x\). Then we have
\[(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram}}_{J\to I}=\underset{n}{\text {colim}}\ (\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram},\leqslant n}_{J\to I},\]
and the transition maps are regular closed embeddings. Moreover, the map
\[(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram},\leqslant n}_{J\to I}\longrightarrow(\mathfrak{h}\otimes\Omega^{1}_{D})^{x\text{-ram},\leqslant n}_{X^{I}_{\text{dR}}}\]
is a regular closed embedding for any \(n\geq 0\), from which the claim follows.
The existence of \((\beta_{2}^{*})^{\mathrm{L}}\) follows from the fact that \(\mathrm{LS}_{H}(D)^{x\text{-ram}}_{X^{I}_{\mathrm{dR}}}\) is the quotient of \(\mathrm{LS}_{H}(D)^{x\text{-ram},\log}_{X^{I}_{\mathrm{dR}}}\) by a certain Hecke groupoid, which is ind-proper because \(H\) is reductive.
As a consequence of the proposition, we can endow the object \(\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}})\) with a unital structure and hence view it as an object of \(\mathsf{FactCat}^{\mathrm{lax-fact}}\). Explicitly, the structure morphism attached to an injection \(J\to I\) is given by
\[(\beta^{*})^{\mathrm{L}}\alpha^{*}:\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\text{-ram}}_{X^{J}_{\mathrm{dR}}})\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\text{-ram}}_{X^{I}_{\mathrm{dR}}}).\]
As explained in §9.9 of [11], the \(1\)-affineness of \(\mathrm{LS}_{H}(D)_{X^{I}_{\mathrm{dR}}}\) implies that \(\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}})\) factorizes strictly, i.e. belongs to \(\mathsf{FactCat}\).
It follows that
\[(\iota^{*})^{\mathrm{L}}:\mathsf{QCoh}(\mathrm{LS}_{H}(D))\longrightarrow \mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}})\]
is a morphism in \(\mathsf{FactCat}\). Moreover, the pointwise tensor product on \(\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}})\) upgrades it to an object of \(\mathsf{ComAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\). The structure morphisms for this commutative algebra structure are generally only lax unital with respect to factorization, i.e. they do not take place in \(\mathsf{FactCat}\). For instance, the morphism
\[\mathrm{Vect}\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ ram}})\]
corresponding to the monoidal unit \(\mathcal{O}_{\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}}}\) is not strictly unital with respect to factorization, since the map
\[(\iota^{*})^{\mathrm{L}}\mathcal{O}_{\mathrm{LS}_{H}(D)}\longrightarrow \mathcal{O}_{\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}}}\]
from the factorization unit to the monoidal unit is of course not an isomorphism.
### Local geometric class field theory in the de Rham setting
Let \(\mathrm{Gr}_{T}^{\infty\cdot x}\) denote the corr-unital factorization space classifying a point of \(\mathrm{Gr}_{T}\) together with a full level structure at \(x\). In particular, the fiber of \(\mathrm{Gr}_{T}^{\infty\cdot x}\) at \(x\) identifies with \((\mathfrak{L}T)_{x}\), while its restriction to \(X\backslash\{x\}\) identifies with \(\mathrm{Gr}_{T}\). We will write
\[p:\mathrm{Gr}_{T}^{\infty\cdot x}\longrightarrow\mathrm{Gr}_{T}\]
for the projection.
The commutative group structure on \(\mathrm{Gr}_{T}^{\infty\cdot x}\) endows \(\mathsf{D}(\mathrm{Gr}_{T}^{\infty\cdot x})\) with a convolution symmetric monoidal structure, which makes it an object of \(\mathsf{ComAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\). This commutative algebra structure takes place in the lax unital setting, corresponding to the fact that the natural morphism
\[\delta_{1}\longrightarrow\omega_{\mathfrak{L}^{+}T}\]
in \(\mathsf{D}(\mathfrak{L}T)_{x}\) is not an isomorphism.
**Theorem 7.3.1**.: _There is a canonical equivalence_
\[\mathbb{L}_{T}:\mathsf{D}(\mathrm{Gr}_{T}^{\infty\cdot x})\tilde{\longrightarrow} \mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram}})\]
_in \(\mathsf{ComAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\) fitting into a commutative square_
\[\begin{CD}\mathsf{D}(\mathrm{Gr}_{T}^{\infty\cdot x})\xrightarrow{\mathbb{L}_{T}}\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram}})\\ @V{}V{p_{\mathrm{dR},*}}V@V{}V{\iota^{*}}V\\ \mathsf{D}(\mathrm{Gr}_{T})\xrightarrow{\sim}\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)).\end{CD}\]
For any \(n\geq 0\), let \(\mathrm{Gr}_{T}^{n\cdot x}\) be the quotient of \(\mathrm{Gr}_{T}^{\infty\cdot x}\) classifying a point of \(\mathrm{Gr}_{T}\) together with a structure of level \(n\) at \(x\). To prove the theorem, we will construct a system of isomorphisms
\[\mathbb{L}_{T}^{(n)}:\mathsf{D}(\mathrm{Gr}_{T}^{n\cdot x})\widetilde{\longrightarrow} \mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n})\]
in \(\mathsf{ComAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\), and obtain \(\mathbb{L}_{T}\) by passing to the limit.
### Duality for quasicoherent sheaves on local systems
Fix a finite set \(I\) and \(n\geq 0\). Write
\[\pi:\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{ \mathrm{dR}}}\longrightarrow X^{I}_{\mathrm{dR}}\]
for the projection. We define a \(\mathsf{D}(X^{I})\)-linear functor
\[\pi_{*,\mathrm{ren}}:\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{D}(X^{I})\]
as follows. Letting
\[\rho:\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_ {\mathrm{dR}}}\longrightarrow X^{I}_{\mathrm{dR}}\]
denote the projection, the usual direct image functor
\[\rho_{*}:\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{D}(X^{I})\]
is continuous and \(\mathsf{D}(X^{I})\)-linear by Corollary 3.12.2 of [11] (the proof there is given at \(x\) rather than over \(X^{I}_{\mathrm{dR}}\), but adapts to the latter setting). The map
\[\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{ \mathrm{dR}}}\longrightarrow\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram}, \leqslant n}_{X^{I}_{\mathrm{dR}}}\]
realizes the latter as the quotient of the former by the formal group
\[\mathcal{G}:=((\mathfrak{L}^{+}\tilde{T})^{x\cdot\mathrm{ram},\leqslant n}/ \mathfrak{L}^{+}T)_{X^{I}_{\mathrm{dR}}}.\]
The \(\mathsf{D}(X^{I})\)-module category \(\mathsf{IndCoh}(\mathcal{G})\) admits a natural symmetric monoidal structure with respect to convolution, which is in fact rigid because \(\mathcal{G}\) has finite-dimensional tangent space at the identity. Since \(\rho\) is equivariant for the action of \(\mathcal{G}\), the functor \(\rho_{*}\) therefore factors through
\[\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})_{\mathsf{IndCoh}(\mathcal{G})}\cong\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{\mathrm{dR}}}),\]
which yields the desired functor \(\pi_{*,\mathrm{ren}}\).
**Lemma 7.4.1**.: _The \(\mathsf{D}(X^{I})\)-linear pairing_
\[\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{\mathrm{dR}}})\underset{\mathsf{D}(X^{I})}{\otimes}\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{D}(X^{I})\]
_is perfect. This self-duality makes the following square commute:_
Proof.: Since \(\rho_{*}\) is continuous and
\[\Delta:\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}}\times\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}}\]
is affine, it follows that
\[\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})\underset{\mathsf{D}(X^{I})}{\otimes}\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{\tilde{T}}(D)^{x\cdot\mathrm{ram},\log,\leqslant n}_{X^{I}_{\mathrm{dR}}})\longrightarrow\mathsf{D}(X^{I})\]
is a perfect pairing. Now the fact that the pairing in the lemma is perfect follows formally from the rigidity of \(\mathsf{IndCoh}(\mathcal{G})\).
As for the commutativity of the square, note that \(\rho_{*}\) and \(\rho^{*}\) are dual by the construction of the self-duality on
\[\operatorname{\mathsf{QCoh}}(\operatorname{LS}_{\tilde{T}}(D)^{x\operatorname{- ram},\operatorname{log},\leqslant n}_{X^{I}_{\operatorname{dR}}}).\]
The claim then follows formally from the construction of \(\pi_{*,\operatorname{ren}}\).
### Proof of Theorem 7.3.1
For any \(n\geq 0\) and any finite set \(I\), the Contou-Carrere pairing gives rise to a bimultiplicative line bundle
\[(\operatorname{Gr}_{T,X^{I}}^{n\cdot x})_{\operatorname{dR}}\underset{X^{I}_{ \operatorname{dR}}}{\times}\operatorname{LS}_{\tilde{T}}(D)^{x\operatorname{- ram},\leqslant n}_{X^{I}_{\operatorname{dR}}}\longrightarrow\operatorname{B} \operatorname{\mathbb{G}}_{m} \tag{7.5.1}\]
(cf. §6.3 of [13]). Since \(\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\) is an ind-scheme locally of finite type, Verdier duality defines a perfect pairing
\[\mathsf{D}(\operatorname{Gr}_{T,X^{I}}^{n\cdot x})\underset{\mathsf{D}(X^{I} )}{\otimes}\mathsf{D}(\operatorname{Gr}_{T,X^{I}}^{n\cdot x})\longrightarrow \mathsf{D}(X^{I}).\]
Thus the line bundle (7.5.1) determines an object of
\[\operatorname{\mathsf{QCoh}}((\operatorname{Gr}_{T,X^{I}}^{n\cdot x})_{ \operatorname{dR}}\underset{X^{I}_{\operatorname{dR}}}{\times}\operatorname{ LS}_{\tilde{T}}(D)^{x\operatorname{-ram},\leqslant n}_{X^{I}_{\operatorname{dR}}}) \tilde{\longrightarrow}\operatorname{Fun}_{\mathsf{D}(X^{I})}(\mathsf{D}( \operatorname{Gr}_{T,X^{I}}^{n\cdot x}),\operatorname{\mathsf{QCoh}}( \operatorname{LS}_{\tilde{T}}(D)^{x\operatorname{-ram},\leqslant n}_{X^{I}_{ \operatorname{dR}}})),\]
which is the desired \(\mathsf{D}(X^{I})\)-linear functor
\[\mathbb{L}_{T}^{(n)}:\mathsf{D}(\operatorname{Gr}_{T,X^{I}}^{n\cdot x}) \longrightarrow\operatorname{\mathsf{QCoh}}(\operatorname{LS}_{\tilde{T}}(D) ^{x\operatorname{-ram},\leqslant n}_{X^{I}_{\operatorname{dR}}}).\]
The bimultiplicativity of (7.5.1) implies that \(\mathbb{L}_{T}^{(n)}\) is symmetric monoidal, and the compatibility of (7.5.1) with factorization implies that \(\mathbb{L}_{T}^{(n)}\) defines a morphism in \(\mathsf{ComAlg}(\mathsf{FactCat}_{\mathrm{lax-untl}})\) as \(I\) varies.
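For orientation only (a standard recollection, not used in the argument): in the basic case \(T=\mathbb{G}_{m}\), the Contou-Carrère symbol is a bimultiplicative pairing on the loop group of \(\mathbb{G}_{m}\) which, on field-valued points, is computed by the tame symbol
\[(f,g)\;\longmapsto\;(-1)^{v(f)v(g)}\,\frac{f^{v(g)}}{g^{v(f)}}\Big|_{x},\qquad f,g\in k((t))^{\times},\]
where \(t\) is a coordinate at \(x\) and \(v\) is the \(t\)-adic valuation; roughly speaking, the pairing (7.5.1) for a general torus is assembled from this case using the character and cocharacter lattices.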
On the other hand, by Lemma 7.4.1, the line bundle (7.5.1) determines a \(\mathsf{D}(X^{I})\)-linear functor
\[\operatorname{\mathsf{QCoh}}(\operatorname{LS}_{\tilde{T}}(D)^{x\operatorname {-ram},\leqslant n}_{X^{I}_{\operatorname{dR}}})\longrightarrow\mathsf{D}( \operatorname{Gr}_{T,X^{I}}^{n\cdot x}).\]
Define \({}^{\prime}\mathbb{L}_{T}^{(n)}\) to be the composition of this functor with \(\operatorname{inv}^{!}\), where
\[\operatorname{inv}:\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\longrightarrow \operatorname{Gr}_{T,X^{I}}^{n\cdot x}\]
is inversion of the group law. We claim that \({}^{\prime}\mathbb{L}_{T}^{(n)}\) is inverse to \(\mathbb{L}_{T}^{(n)}\).
Denote by \(\mathcal{L}\) the line bundle (7.5.1), viewed as an object of
\[\operatorname{\mathsf{QCoh}}((\operatorname{Gr}_{T,X^{I}}^{n\cdot x})_{ \operatorname{dR}}\underset{X^{I}_{\operatorname{dR}}}{\times}\operatorname{ LS}_{\tilde{T}}(D)^{x\operatorname{-ram},\leqslant n}_{X^{I}_{\operatorname{dR}}}).\]
Write
\[\upsilon:\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\longrightarrow X^{I}\]
for the projection.
**Lemma 7.5.1**.: _There exist canonical isomorphisms_
\[(\upsilon_{\operatorname{dR},*}\otimes\operatorname{id})(\mathcal{L})\cong \delta_{1,X^{I}}\quad\text{and}\quad(\operatorname{id}\otimes\pi_{*, \operatorname{ren}})(\mathcal{L})\cong(\iota^{*})^{\mathsf{L}}\mathcal{O}_{ \tilde{T},X^{I}}.\]
_Here \(\mathcal{O}_{\tilde{T},X^{I}}\) denotes the image of \(\omega_{X^{I}}\) under the functor_
\[\operatorname{coind}_{1}^{\tilde{T}}:\mathsf{D}(X^{I})\longrightarrow \operatorname{\mathsf{Rep}}(\tilde{T})_{X^{I}_{\operatorname{dR}}}\cong \operatorname{\mathsf{QCoh}}(\operatorname{LS}_{\tilde{T}}(D))_{X^{I}_{ \operatorname{dR}}}.\]
Before proving the lemma, let us explain how it implies that \(\mathbb{L}_{T}^{(n)}\) and \({}^{\prime}\mathbb{L}_{T}^{(n)}\) are mutually inverse. Using the self-duality of \(\mathsf{D}(\operatorname{Gr}_{T,X^{I}}^{n\cdot x})\), we see that the composition
\[{}^{\prime}\mathbb{L}_{T}^{(n)}\circ\mathbb{L}_{T}^{(n)}:\mathsf{D}( \operatorname{Gr}_{T,X^{I}}^{n\cdot x})\longrightarrow\mathsf{D}( \operatorname{Gr}_{T,X^{I}}^{n\cdot x})\]
is given by a kernel \(\mathcal{K}\) in
\[\mathsf{D}(\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\underset{X^{I}_{\mathrm{dR}} }{\times}\operatorname{Gr}_{T,X^{I}}^{n\cdot x}).\]
The bimultiplicativity of (7.5.1) implies that
\[\mathcal{K}\cong(\operatorname{id}\otimes\operatorname{inv}^{!})\operatorname{mult}^{!}(\upsilon_{\mathrm{dR},*}\otimes\operatorname{id})(\mathcal{L}),\]
where
\[\operatorname{mult}:\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\underset{X^{I}_{ \mathrm{dR}}}{\times}\operatorname{Gr}_{T,X^{I}}^{n\cdot x}\longrightarrow \operatorname{Gr}_{T,X^{I}}^{n\cdot x}\]
denotes the multiplication map. The lemma therefore implies that
\[\mathcal{K}\cong\Delta_{\mathrm{dR},*}\omega_{\operatorname{Gr}_{T,X^{I}}^{n \cdot x}}\]
is isomorphic to the kernel corresponding to the identity functor, and hence \({}^{\prime}\mathbb{L}_{T}^{(n)}\circ\mathbb{L}_{T}^{(n)}\) is isomorphic to the identity functor. A symmetric argument shows that \(\mathbb{L}_{T}^{(n)}\circ{}^{\prime}\mathbb{L}_{T}^{(n)}\) is isomorphic to the identity.
Proof of Lemma 7.5.1.: We begin by observing that \(\mathbb{L}_{T}^{(n)}\) and \({}^{\prime}\mathbb{L}_{T}^{(n)}\) are mutually inverse over \(X\backslash\{x\}\), where they agree with the standard equivalence
\[\mathsf{D}(\operatorname{Gr}_{T})\tilde{\longrightarrow}\mathsf{Rep}(\check{ T})=\mathsf{QCoh}(\operatorname{LS}_{\check{T}}(D))\]
and its inverse respectively.
It follows that \((\upsilon_{\mathrm{dR},*}\otimes\operatorname{id})(\mathcal{L})\) identifies with \(\delta_{1}\) away from \(x\). The factorization structure on \((\upsilon_{\mathrm{dR},*}\otimes\operatorname{id})(\mathcal{L})\) therefore upgrades it to an object of
\[\delta_{1}\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{D}(\operatorname{Gr}_ {T}^{n\cdot x}))_{x}.\]
Since \(\delta_{1}\) is the unit object for the factorization structure on \(\mathsf{D}(\operatorname{Gr}_{T})\), restriction along the inclusion
\[\operatorname{Gr}_{T,x}^{n\cdot x}\longrightarrow\operatorname{Gr}_{T,X^{I}}^{n \cdot x}\]
determines an equivalence
\[\delta_{1}\text{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{D}(\operatorname{Gr}_ {T}^{n\cdot x}))_{x}\cong\mathsf{D}(\operatorname{Gr}_{T,x}^{n\cdot x}).\]
It therefore suffices to show that
\[(\upsilon_{\mathrm{dR},*}\otimes\operatorname{id})(\mathcal{L})|_{\operatorname {Gr}_{T,x}^{n\cdot x}}^{!}\cong\delta_{1}.\]
Using the fact that \(\mathcal{O}_{\check{T}}\) is the unit for the factorization structure on \(\mathsf{Rep}(\check{T})\), a symmetric argument reduces the construction of the other isomorphism to the claim that
\[(\operatorname{id}\otimes\!\pi_{*,\mathrm{ren}})(\mathcal{L})|_{\operatorname{LS}_{\check{T}}(\mathring{D}_{x})^{\leqslant n}}\cong(\iota^{*})^{\mathrm{L}}\mathcal{O}_{\check{T}}.\]
Thus we have reduced to checking that \(\mathbb{L}_{T}^{(n)}\) and \({}^{\prime}\mathbb{L}_{T}^{(n)}\) are mutually inverse over \(x\). This is well-known (cf. [10] §6.3): a choice of coordinate at \(x\) allows one to decompose \(\operatorname{Gr}_{T,x}^{n\cdot x}\) and \(\operatorname{LS}_{\check{T}}(\mathring{D}_{x})^{\leqslant n}\) into direct factors which can be analyzed explicitly.
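To illustrate the last sentence in the simplest case (a sketch, assuming \(T=\mathbb{G}_{m}\), and not needed elsewhere): a coordinate \(t\) at \(x\) identifies the \(k\)-points of the loop group with \(k((t))^{\times}\), which decomposes as
\[f\;=\;t^{m}\cdot a_{0}\cdot(1+a_{1}t+a_{2}t^{2}+\cdots),\qquad m\in\mathbb{Z},\ a_{0}\in k^{\times},\]
i.e. into the factors \(\mathbb{Z}\times\mathbb{G}_{m}\times(1+t\,k[[t]])\); the spectral side admits a parallel decomposition, and the two functors can then be compared factor by factor.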
### Proof of Theorem 6.6.1 for \(G=T\)
As explained in §6.13, it suffices to prove that \(\operatorname{Sat}_{T,x}\) is an equivalence. Let us reinterpret the monoidal functor
\[\mathsf{D}(\mathcal{H}_{T,x})\longrightarrow\mathcal{O}_{\tilde{T}}\text{\sf-mod}^{\operatorname{fact}}(\mathsf{Rep}(\tilde{T}\times\tilde{T}))_{x}.\]
Being the fiber of the factorization category \(\mathsf{D}(\operatorname{Gr}_{T}^{\infty\cdot x})\) at \(x\), the category \(\mathsf{D}(\mathfrak{L}T)_{x}\) admits commuting structures of \(\mathsf{D}(\mathfrak{L}T)_{x}\)-module and factorization \(\mathsf{D}(\operatorname{Gr}_{T})\)-module at \(x\). In particular, it defines a functor
\[\mathsf{D}(\mathfrak{L}T)_{x}\text{\sf-mod}\longrightarrow\mathsf{D}(\operatorname{Gr}_{T})\text{\sf-mod}_{x}^{\operatorname{fact}}.\]
Similarly, the category \(\mathsf{QCoh}(\operatorname{LS}_{\tilde{T}}(\mathring{D}_{x}))\) admits commuting structures of \(\mathsf{QCoh}(\operatorname{LS}_{\tilde{T}}(\mathring{D}_{x}))\)-module and factorization \(\mathsf{Rep}(\tilde{T})\)-module at \(x\), hence it defines a functor
\[\mathsf{QCoh}(\operatorname{LS}_{\tilde{T}}(\mathring{D}_{x}))\text{\sf-mod }\longrightarrow\mathsf{Rep}(\tilde{T})\text{\sf-mod}_{x}^{\operatorname{ fact}}.\]
Taking the fiber of \(\mathbb{L}_{T}\) at \(x\), we obtain an equivalence
\[\mathbb{L}_{T,x}:\mathsf{D}(\mathfrak{L}T)_{x}\tilde{\longrightarrow}\mathsf{QCoh}(\operatorname{LS}_{\tilde{T}}(\mathring{D}_{x}))\]
of symmetric monoidal categories, which moreover respects the factorization module structures for \(\mathsf{D}(\operatorname{Gr}_{T})\cong\mathsf{Rep}(\tilde{T})\). It therefore gives rise to a commutative square
in \(\mathsf{DGCat}\text{\sf-mod}\).
We have a monoidal equivalence
\[\mathsf{D}(\mathcal{H}_{T,x})\tilde{\longrightarrow}\underline{\operatorname{End}_{\mathsf{D}(\mathfrak{L}T)_{x}\text{\sf-mod}}}(\mathsf{D}(\operatorname{Gr}_{T,x})),\]
where the right side means relative inner endomorphisms with respect to the action of \(\mathsf{DGCat}\) on \(\mathsf{D}(\mathfrak{L}T)_{x}\text{\sf-mod}\). On the other hand, there is a monoidal equivalence
\[\mathcal{O}_{\tilde{T}}\text{\sf-mod}^{\operatorname{fact}}(\mathsf{Rep}( \tilde{T}\times\tilde{T}))_{x}\tilde{\longrightarrow}\underline{ \operatorname{End}_{\mathsf{Rep}(\tilde{T})\text{\sf-mod}_{x}^{\operatorname{ fact}}}}(\mathsf{Rep}(\tilde{T}))\]
by [11] §9.22.
Tracing through the constructions, we see that the previously constructed monoidal functor
\[\mathsf{D}(\mathcal{H}_{T,x})\longrightarrow\mathcal{O}_{\tilde{T}}\text{ \sf-mod}^{\operatorname{fact}}(\mathsf{Rep}(\tilde{T}\times\tilde{T}))_{x}\]
corresponds under the above identifications with the monoidal functor
\[\underline{\operatorname{End}_{\mathsf{D}(\mathfrak{L}T)_{x}\text{\sf-mod}}}(\mathsf{D}(\operatorname{Gr}_{T,x}))\longrightarrow\underline{\operatorname{End}_{\mathsf{Rep}(\tilde{T})\text{\sf-mod}_{x}^{\operatorname{fact}}}}(\mathsf{Rep}(\tilde{T}))\]
induced by the upper-right circuit of the above commutative square. Using the commutativity of the square (and hence Theorem 7.3.1), it follows that this agrees with the monoidal functor induced by the lower-left circuit of the square. But this is an equivalence by Theorem 9.13.1 in [11] (in fact, the theorem is proved after a series of reductions by establishing precisely this equivalence).
This completes the proof of Theorem 6.6.1 for \(G=T\).
## 8. Factorization modules at a point
In this section, we will give an explicit description of the category of factorization modules
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\text{\sf-mod}^{\operatorname{fact}} (\mathsf{Rep}(G)\otimes\mathsf{Rep}(M))_{x}\]
at a fixed point \(x\in X(k)\).
### Lie-! coalgebras
Recall that a _Lie-! coalgebra_ in \(\mathsf{Rep}(H)\) is a Lie coalgebra in the symmetric monoidal category \(\mathsf{Rep}(H)\otimes\mathsf{D}(X)\). One can give an alternative definition as follows. The category \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\) carries a non-unital symmetric monoidal structure denoted by \(\otimes^{*}\) (cf. [10] §7.17). Note that the restriction functor
\[\Delta^{!}:\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\longrightarrow\mathsf{ Rep}(H)_{X_{\mathrm{dR}}}\cong\mathsf{Rep}(H)\otimes\mathsf{D}(X)\]
carries \(\otimes^{*}\) to \(\otimes^{!}\), and in particular lifts to a functor
\[(\Delta^{!})^{\mathrm{enh}}:\mathsf{LieCoalg}^{\otimes^{*}}(\mathsf{Rep}(H)_{ \mathrm{Ran}_{\mathrm{dR}}})\longrightarrow\mathsf{LieCoalg}^{\otimes^{!}}( \mathsf{Rep}(H)\otimes\mathsf{D}(X)).\]
**Proposition 8.1.1**.: _The functor \((\Delta^{!})^{\mathrm{enh}}\) restricts to an equivalence on the full subcategory of \(\mathsf{LieCoalg}^{\otimes^{*}}(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}})\) consisting of objects whose underlying D-module on \(\mathrm{Ran}\) is supported on the main diagonal._
Proof.: This follows from the observation that the left adjoint
\[\Delta_{\mathrm{dR},*}:\mathsf{Rep}(H)\otimes\mathsf{D}(X)\longrightarrow \mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\]
of \(\Delta^{!}\) is fully faithful and left-lax symmetric monoidal.
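For orientation, we spell out the underived notion being dualized here (a standard definition, recorded only for the reader's convenience): a Lie coalgebra in a symmetric monoidal abelian category is an object \(L\) equipped with a cobracket
\[\delta:L\longrightarrow L\otimes L,\qquad \mathrm{sw}\circ\delta=-\delta,\qquad (\mathrm{id}+\tau+\tau^{2})\circ(\delta\otimes\mathrm{id})\circ\delta=0,\]
where \(\mathrm{sw}\) is the symmetry and \(\tau\) cyclically permutes the three tensor factors; the homotopy-coherent version used in the text is encoded by the Lie co-operad.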
There is another non-unital symmetric monoidal structure \(\oplus^{*}\) on \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\), which is defined in the same way as \(\otimes^{*}\) but with \(\boxtimes\) replaced by \(\boxplus\). Note that unlike \(\otimes^{*}\), the operation \(\oplus^{*}\) does not preserve colimits in each variable. The restriction functor
\[\Delta^{!}:\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\longrightarrow\mathsf{ Rep}(H)_{X_{\mathrm{dR}}}\cong\mathsf{Rep}(H)\otimes\mathsf{D}(X)\]
carries \(\oplus^{*}\) to \(\oplus\), and hence lifts to a functor
\[(\Delta^{!})^{\mathrm{enh},\oplus}:\mathsf{ComAlg}^{\oplus^{*}}(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}})\longrightarrow\mathsf{ComAlg}^{\oplus}(\mathsf{Rep}(H)\otimes\mathsf{D}(X))\cong\mathsf{Rep}(H)\otimes\mathsf{D}(X).\]
The following result first appeared in Rozenblyum's thesis.
**Proposition 8.1.2**.: _The functor \((\Delta^{!})^{\mathrm{enh},\oplus}\) admits a fully faithful left adjoint_
\[\mathrm{Fact}^{\oplus}:\mathsf{Rep}(H)\otimes\mathsf{D}(X)\longrightarrow\mathsf{ComAlg}^{\oplus^{*}}(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}).\]
_The essential image consists of those objects \(\mathcal{M}\) such that the structure map \(\mathcal{M}\oplus^{*}\mathcal{M}\to\mathcal{M}\) induces an equivalence_
\[(\mathcal{M}\boxtimes\mathcal{M})|_{(\mathrm{Ran}_{\mathrm{dR}}\times\mathrm{Ran}_{\mathrm{dR}})_{\mathrm{disj}}}\tilde{\longrightarrow}(\mathrm{add}^{!}\,\mathcal{M})|_{(\mathrm{Ran}_{\mathrm{dR}}\times\mathrm{Ran}_{\mathrm{dR}})_{\mathrm{disj}}}.\]
The category \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\) is also equipped with the "pointwise" tensor product \(\otimes^{!}\). Note that for any \(\mathcal{M}\) in \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\), the functor \((-)\otimes^{!}\mathcal{M}\) is lax symmetric monoidal with respect to \(\oplus^{*}\). In particular \(\otimes^{!}\) naturally lifts to a symmetric monoidal structure on \(\mathsf{ComAlg}^{\oplus^{*}}(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}})\). The functor \((\Delta^{!})^{\mathrm{enh},\oplus}\) is symmetric monoidal with respect to \(\otimes^{!}\), and hence \(\mathrm{Fact}^{\oplus}\) is left-lax symmetric monoidal. It therefore lifts to a functor
\[\mathsf{LieCoalg}^{\otimes^{!}}(\mathsf{Rep}(H)\otimes\mathsf{D}(X))\longrightarrow\mathsf{LieCoalg}^{\otimes^{!}}(\mathsf{ComAlg}^{\oplus^{*}}(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}})).\]
If \(L\) is a Lie-! coalgebra in \(\mathsf{Rep}(H)\), we have seen that the object \(\Delta_{\mathrm{dR},*}L\) in \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\) has the structure of Lie coalgebra with respect to \(\otimes^{*}\). Given a finite set \(I\), we can consider \(\mathsf{Rep}(H)_{\mathrm{Ran}_{I},\mathrm{dR}}\), where \(\mathrm{Ran}_{I}\) is the version of \(\mathrm{Ran}\) space with marked points indexed by \(I\). This is acted on by \(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}}\) with its convolution product \(\otimes^{*}\). In particular, we can define _Lie-! comodules_ for \(L\) by
\[L\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))_{X_{\mathrm{dR}}^{I}}:=\Delta_{ \mathrm{dR},*}L\text{-}\mathsf{comod}^{\otimes^{*}}(\mathsf{Rep}(H)_{\mathrm{ Ran}_{I},\mathrm{dR}})\times_{\mathsf{Rep}(H)_{\mathrm{Ran}_{I},\mathrm{dR}}}\mathsf{ Rep}(H)_{X_{\mathrm{dR}}^{I}}.\]
That is, an object of \(\Delta_{\mathrm{dR},*}L\text{-}\mathsf{comod}^{\otimes^{*}}(\mathsf{Rep}(H)_{ \mathrm{Ran}_{I},\mathrm{dR}})\) is a Lie-! comodule if its underlying object of \(D(\mathrm{Ran}_{I})\) is supported on the main diagonal \(X^{I}\to\mathrm{Ran}_{I}\).
The assignment \(I\mapsto L\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\) can be promoted to an object \(L\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))\) in \(\mathsf{FactCat}^{\text{\rm lax-fact}}\), similarly to the case of Lie-\(*\) modules.
On the other hand, we have the object \(\mathrm{Fact}^{\oplus}(L)\) in \(\mathrm{LieCoalg}^{\otimes^{!}}(\mathsf{ComAlg}^{\oplus^{*}}(\mathsf{Rep}(H)_{ \mathrm{Ran}_{\mathrm{dR}}}))\). For any finite set \(I\), we obtain \(\mathrm{Fact}^{\oplus}(L)_{X^{I}_{\mathrm{dR}}}\), an object of \(\mathrm{LieCoalg}^{\otimes^{!}}(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}})\). Thus we can consider
\[\mathrm{Fact}^{\oplus}(L)_{X^{I}_{\mathrm{dR}}}\text{-}\mathsf{comod}(\mathsf{ Rep}(H)_{X^{I}_{\mathrm{dR}}}),\]
which is acted on by \(\mathsf{D}(X^{I})\). Using the commutative \(\oplus^{*}\)-algebra structure on \(\mathrm{Fact}^{\oplus}(L)\), one can assemble these into an object \(\mathrm{Fact}^{\oplus}(L)\text{-}\mathsf{comod}(\mathsf{Rep}(H))\) of \(\mathsf{FactCat}^{\text{\rm lax-fact}}\).
**Proposition 8.2.1**.: _There is a canonical equivalence_
\[L\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))\tilde{\longrightarrow}\mathrm{ Fact}^{\oplus}(L)\text{-}\mathsf{comod}(\mathsf{Rep}(H))\]
_in \(\mathsf{FactCat}^{\text{\rm lax-fact}}\), which commutes with the forgetful functors to \(\mathsf{Rep}(H)\)._
Proof.: For any finite set \(I\), we construct an equivalence
\[L\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))_{X^{I}_{\mathrm{dR}}}\tilde{ \longrightarrow}\mathrm{Fact}^{\oplus}(L)_{X^{I}_{\mathrm{dR}}}\text{-} \mathsf{comod}(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}})\]
which lifts the identity functor on \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). The compatibility with factorization structures will be manifest.
Since \(\mathrm{Fact}^{\oplus}\) is oplax symmetric monoidal with respect to \(\otimes^{!}\), so is
\[\mathsf{Rep}(H)_{X_{\mathrm{dR}}} \longrightarrow\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\] \[\mathcal{M} \mapsto\mathrm{Fact}^{\oplus}(\mathcal{M})_{X^{I}_{\mathrm{dR}}}.\]
This defines an oplax action of \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\) on \(\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\). On the other hand, we have the oplax symmetric monoidal functor
\[\Delta_{\mathrm{dR},*}:(\mathsf{Rep}(H)_{X_{\mathrm{dR}}},\otimes^{!}) \longrightarrow(\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}},\otimes^{*}).\]
The action of \((\mathsf{Rep}(H)_{\mathrm{Ran}_{\mathrm{dR}}},\otimes^{*})\) on \(\mathsf{Rep}(H)_{\mathrm{Ran}_{I,\mathrm{dR}}}\) therefore restricts to an oplax action of \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\). Thus it suffices to show that
\[\Delta_{I,*}:\mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathsf{Rep} (H)_{\mathrm{Ran}_{I,\mathrm{dR}}}\]
is a morphism of oplax \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\)-module categories with respect to the above actions. This follows from the observation that the functor \(\mathcal{M}\mapsto\mathrm{Fact}^{\oplus}(\mathcal{M})_{X^{I}_{\mathrm{dR}}}\) is isomorphic to the composition of
\[\mathsf{Rep}(H)_{X_{\mathrm{dR}}} \longrightarrow\mathsf{Rep}(H)_{\mathrm{Ran}_{I,\mathrm{dR}}}\] \[\mathcal{M} \mapsto\Delta_{\mathrm{dR},*}(\mathcal{M})\otimes^{*}\Delta_{I,*} \omega_{X^{I}}\]
with
\[\Delta^{!}_{I}:\mathsf{Rep}(H)_{\mathrm{Ran}_{I,\mathrm{dR}}}\longrightarrow \mathsf{Rep}(H)_{X^{I}_{\mathrm{dR}}}.\]
### Commutative Hopf algebras
Let \(\mathbb{O}\) be a symmetric monoidal DG category. The functor
\[\mathrm{triv}_{\mathsf{ComAlg}}:\mathbb{O}\longrightarrow\mathsf{ComAlg}^{ \mathrm{aug}}(\mathbb{O})\]
admits a left adjoint \(\mathrm{coPrim}_{\mathsf{ComAlg}}\), and similarly
\[\mathrm{triv}_{\mathsf{LieCoalg}}:\mathbb{O}\longrightarrow\mathsf{LieCoalg}( \mathbb{O})\]
admits a right adjoint \(\mathrm{Prim}_{\mathsf{LieCoalg}}\). We define
\[\mathrm{coChev}:=[1]\circ\mathrm{coPrim}_{\mathsf{ComAlg}}\quad\text{and}\quad \mathrm{Chev}:=[-1]\circ\mathrm{Prim}_{\mathsf{LieCoalg}}\,.\]
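As a sanity check on these conventions (a standard computation, not needed in what follows): for the free augmented commutative algebra \(\mathrm{Sym}(V)\) on an object \(V\) of \(\mathbb{O}\), the universal property of the left adjoint gives
\[\mathrm{coPrim}_{\mathsf{ComAlg}}(\mathrm{Sym}(V))\cong V,\qquad\text{and hence}\qquad\mathrm{coChev}(\mathrm{Sym}(V))\cong V[1].\]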
The Koszul duality between the commutative operad and the Lie co-operad means that these functors lift to adjoint functors
**Proposition 8.3.1**.: _The adjoint functors_
_are equivalences._
Proof.: This is proved in much the same way as Proposition 1.6.4 in Chapter 6 of [10], the key difference being that the commutativity of the square
is less obvious here. To prove this commutativity, we first rewrite
We remark that \(\mathsf{LieAlg}(\mathbb{O}^{\mathrm{op}})\) does not quite fit into the framework of _loc. cit._, since the tensor product on \(\mathbb{O}^{\mathrm{op}}\) does not generally preserve colimits, but nonetheless we can consider Lie algebras in this category. Given \(L\) in \(\mathsf{Grp}(\mathsf{LieAlg}(\mathbb{O}^{\mathrm{op}}))\), we note that the image of the simplicial object \((\mathsf{B}_{\mathsf{LieAlg}(\mathbb{O}^{\mathrm{op}})}\,L)^{\bullet}\) under the forgetful functor
\[\mathsf{Grp}(\mathsf{LieAlg}(\mathbb{O}^{\mathrm{op}}))\longrightarrow \mathsf{Grp}(\mathbb{O}^{\mathrm{op}})\cong\mathbb{O}^{\mathrm{op}}\]
(the equivalence here is due to the stability of \(\mathbb{O}^{\mathrm{op}}\)) is simply the Cech nerve of \(L\to 0\). The tensor product on \(\mathbb{O}^{\mathrm{op}}\) certainly preserves this particular sifted colimit in each variable, since the geometric realization of the Cech nerve in \(\mathbb{O}^{\mathrm{op}}\) computes the shift \(L[1]\). The claim now follows.
The category of commutative Hopf algebras in \(\mathbb{O}\) is defined by
\[\mathsf{ComHopfAlg}(\mathbb{O}):=\mathsf{Grp}(\mathsf{ComAlg}^{\mathrm{aug} }(\mathbb{O})^{\mathrm{op}})^{\mathrm{op}}.\]
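For orientation, the classical prototype (recorded here only as a reminder): at the level of ordinary vector spaces, an affine group scheme \(H\) gives rise to the commutative Hopf algebra \(\mathcal{O}(H)\), with
\[\Delta(f)(h_{1},h_{2})=f(h_{1}h_{2}),\qquad\epsilon(f)=f(e),\qquad S(f)(h)=f(h^{-1}),\]
the group structure on \(\operatorname{Spec}\mathcal{O}(H)\) being exactly the datum required by the definition above.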
By Proposition 8.3.1 and its proof, the functor
fits into a commutative square
and hence we denote it by
\[\mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg}}:=(\mathsf{B}_{\mathsf{ LieCoalg}(\mathbb{O})^{\mathrm{op}}})^{\mathrm{op}}\circ\mathsf{Grp}(\mathrm{coChev}^{ \mathrm{enh,op}})^{\mathrm{op}}.\]
Note that \(\mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg}}\) admits the right adjoint
\[\mathrm{U}^{\mathrm{co}}:=\mathsf{Grp}(\mathrm{Chev}^{\mathrm{enh,op}})^{\mathrm{op}}\circ(\Omega_{\mathsf{LieCoalg}(\mathbb{O})^{\mathrm{op}}})^{\mathrm{op}}:\mathsf{LieCoalg}(\mathbb{O})\longrightarrow\mathsf{ComHopfAlg}(\mathbb{O}).\]
This choice of notation is justified by the following observation.
**Proposition 8.3.2**.: _The composition_
\[\mathsf{LieCoalg}(\mathbb{O})\stackrel{{\mathrm{U^{co}}}}{{ \longrightarrow}}\mathsf{ComHopfAlg}(\mathbb{O})\stackrel{{ \mathrm{oblv}}}{{\longrightarrow}}\mathsf{AssocCoalg}^{\mathrm{aug}}( \mathbb{O})\]
_is right adjoint to the functor_
\[\mathrm{cores}^{\mathsf{AssocCoalg}\to\mathsf{LieCoalg}}:\mathsf{AssocCoalg}^{ \mathrm{aug}}(\mathbb{O})\longrightarrow\mathsf{LieCoalg}(\mathbb{O})\]
_of corestriction along the morphism of co-operads \(\mathrm{Assoc}^{*}\to\mathrm{Lie}^{*}\)._
Proof.: The proof is analogous to that of Theorem 6.1.2 in Chapter 6 of [10].
**Corollary 8.3.2.1**.: _For any \(A\) in \(\mathsf{ComHopfAlg}(\mathbb{O})\), there is a canonical morphism_
\[\mathrm{cores}^{\mathsf{AssocCoalg}\to\mathsf{LieCoalg}}(A)\longrightarrow \mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg}}(A)\]
_in \(\mathsf{LieCoalg}(\mathbb{O})\), with the same underlying morphism in \(\mathbb{O}\) as the unit map_
\[A\longrightarrow\mathrm{triv}_{\mathsf{ComAlg}}\,\mathrm{coPrim}_{\mathsf{ ComAlg}}(A).\]
_In particular, the forgetful functor_
\[A\text{-}\mathsf{comod}(\mathbb{O})\longrightarrow\mathbb{O}\]
_naturally lifts to a functor_
\[A\text{-}\mathsf{comod}(\mathbb{O})\longrightarrow\mathrm{coPrim}^{\mathrm{enh }}_{\mathsf{ComHopfAlg}}(A)\text{-}\mathsf{comod}(\mathbb{O}).\]
Proof.: The morphism in question can be obtained as the composition
\[\mathrm{cores}^{\mathsf{AssocCoalg}\to\mathsf{LieCoalg}}(A) \longrightarrow\mathrm{cores}^{\mathsf{AssocCoalg}\to\mathsf{ LieCoalg}}(\mathrm{U^{co}}(\mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg}}(A)))\] \[\longrightarrow\mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg }}(A).\]
### Deformation theory
Let \(\pi:Z\to Y\) be a morphism of prestacks equipped with a section \(\sigma:Y\to Z\) which is affine.
**Proposition 8.4.1**.: _The cotangent complex \(T^{*}(Y/Z)\) relative to \(\sigma\) has a natural structure of Lie coalgebra in \(\mathsf{QCoh}(Y)\), and \(\sigma^{*}\) naturally lifts to a functor_
\[(\sigma^{*})^{\mathrm{enh}}:\mathsf{QCoh}(Z)\longrightarrow T^{*}(Y/Z)\text{-}\mathsf{comod}(\mathsf{QCoh}(Y)).\]
Proof.: Let \(\mathcal{G}:=Y\times_{Z}Y\), a group in prestacks affine over \(Y\). Without loss of generality, we can replace \(Z\) by the classifying prestack \(\mathrm{B}_{Y}\,\mathcal{G}\). Let \(A\in\mathsf{ComHopfAlg}(\mathsf{QCoh}(Y)^{\leqslant 0})\) be the direct image of \(\mathcal{O}_{\mathcal{G}}\) under the projection \(\mathcal{G}\to Y\), so we have a canonical equivalence
\[A\text{-}\mathsf{comod}(\mathsf{QCoh}(Y))\tilde{\longrightarrow}\mathsf{QCoh }(\mathrm{B}_{Y}\,\mathcal{G})\]
which fits into a commutative triangle
Writing \(e:Y\to\mathcal{G}\) for the unit section, it follows from the definitions that we can identify
\[\mathrm{coPrim}_{\mathsf{ComAlg}}(A)\cong e^{*}T^{*}(\mathcal{G}/Y)\cong T^{*} (Y/B_{Y}\mathcal{G}).\]
Using the functor \(\mathrm{coPrim}^{\mathrm{enh}}_{\mathsf{ComHopfAlg}}\) constructed in the previous section, we obtain a Lie coalgebra structure on \(T^{*}(Y/B_{Y}\mathcal{G})\). The second claim follows from Corollary 8.3.2.1.
Now suppose that \(\pi:Z\to Y\) is a morphism of corr-unital commutative factorization spaces, again equipped with an affine section \(\sigma:Y\to Z\). We assume that for any injection of finite sets \(J\to I\), the structure map
\[Y_{I,J}\longrightarrow Y_{X^{I}_{\mathrm{dR}}}\]
is a regular closed embedding, and likewise for \(Z\). This allows us to equip \(\mathsf{QCoh}(Y)\) and \(\mathsf{QCoh}(Z)\) with unital structures, and hence upgrade them to objects of \(\mathsf{FactCat}^{\mathrm{lax\text{-}fact}}\). Namely, if
is the correspondence defining the corr-unital structure on \(Y\), then the corresponding unital structure on \(\mathsf{QCoh}(Y)\) is given by
\[(\beta^{*})^{\mathrm{L}}\alpha^{*}:\mathsf{QCoh}(Y)_{X^{J}_{\mathrm{dR}}} \longrightarrow\mathsf{QCoh}(Y)_{X^{I}_{\mathrm{dR}}}.\]
**Lemma 8.5.1**.: _There is a canonical isomorphism of Lie coalgebras_
\[\mathrm{Fact}^{\oplus}(T^{*}(Y_{X_{\mathrm{dR}}}/Z_{X_{\mathrm{dR}}}))\tilde{\longrightarrow}T^{*}(Y_{\mathrm{Ran}_{\mathrm{dR}}}/Z_{\mathrm{Ran}_{\mathrm{dR}}})\]
_in \(\mathsf{QCoh}(Y_{\mathrm{Ran}_{\mathrm{dR}}})\)._
Proof.: The commutative factorization structures on \(Y\) and \(Z\) induce a structure of commutative \(\oplus^{*}\)-algebra on \(T^{*}(Y_{\mathrm{Ran}_{\mathrm{dR}}}/Z_{\mathrm{Ran}_{\mathrm{dR}}})\), which moreover lies in the essential image of \(\mathrm{Fact}^{\oplus}\) by Proposition 8.1.2. It therefore suffices to produce an isomorphism of Lie-\(\wr\) coalgebras
\[T^{*}(Y_{X_{\mathrm{dR}}}/Z_{X_{\mathrm{dR}}})\tilde{\longrightarrow}(\Delta^{!})^{\mathrm{enh}}T^{*}(Y_{\mathrm{Ran}_{\mathrm{dR}}}/Z_{\mathrm{Ran}_{\mathrm{dR}}}),\]
which is immediate from the definitions.
**Proposition 8.5.2**.: _The functor \(\sigma^{*}\) lifts to a morphism_
\[\mathsf{QCoh}(Z)\longrightarrow T^{*}(Y_{X_{\mathrm{dR}}}/Z_{X_{\mathrm{dR}}})\text{-}\mathsf{comod}^{!}(\mathsf{QCoh}(Y))\]
_in \(\mathsf{FactCat}^{\mathrm{lax\text{-}fact}}\)._
Proof.: This follows from Propositions 8.4.1 and 8.2.1 in view of Lemma 8.5.1.
### Deformations of local systems
Let us specialize to the situation
\[\pi:\mathrm{LS}_{P}(D)^{x\cdot\mathrm{ram}}\underset{\mathrm{LS}_{M}(D)^{x \cdot\mathrm{ram}}}{\times}\mathrm{LS}_{M}(D)\rightleftarrows\mathrm{LS}_{M}( D):\sigma.\]
We begin with a geometric observation.
**Proposition 8.6.1**.: _For any injection of finite sets \(J\to I\), the corr-unital structure map_
\[\mathrm{LS}_{P}(D)^{x\cdot\mathrm{ram}}_{J\to I}\underset{\mathrm{LS}_{M}(D)^{x \cdot\mathrm{ram}}_{J\to I}}{\times}\mathrm{LS}_{M}(D)_{X^{I}_{\mathrm{dR}}} \longrightarrow\mathrm{LS}_{P}(D)^{x\cdot\mathrm{ram}}_{X^{I}_{\mathrm{dR}}} \underset{\mathrm{LS}_{M}(D)^{x\cdot\mathrm{ram}}_{X^{\mathrm{dR}}_{\mathrm{ dR}}}}{\times}\mathrm{LS}_{M}(D)_{X^{I}_{\mathrm{dR}}}\]
_is a regular closed embedding. Moreover, the section_
\[\sigma:\mathrm{LS}_{M}(D)_{X^{I}_{\mathrm{dR}}}\longrightarrow\mathrm{LS}_{P} (D)^{x\cdot\mathrm{ram}}_{X^{I}_{\mathrm{dR}}}\underset{\mathrm{LS}_{M}(D)^{x \cdot\mathrm{ram}}_{X^{\mathrm{dR}}_{\mathrm{dR}}}}{\times}\mathrm{LS}_{M}(D)_ {X^{I}_{\mathrm{dR}}}\]
_is affine._
Proof.: Since \(N_{P}\) is unipotent, the canonical map
\[((\mathfrak{p}\otimes\Omega^{1}_{D})_{J\to I}^{x\text{-ram},\leqslant 1}\underset{( \mathfrak{m}\otimes\Omega^{1}_{D})_{J\to I}^{x\text{-ram},\leqslant 1}}{\times}( \mathfrak{m}\otimes\Omega^{1}_{D})_{X^{I}_{\text{dR}}})/(\mathfrak{L}^{+}P)_{ X^{I}_{\text{dR}}}\longrightarrow\operatorname{LS}_{P}(D)_{J\to I}^{x\text{-ram}} \underset{\operatorname{LS}_{M}(D)_{J\to I}^{x\text{-ram}}}{\times} \operatorname{LS}_{M}(D)_{X^{I}_{\text{dR}}}\]
is an isomorphism. Both assertions follow readily from this presentation.
The proposition allows us to endow
\[\operatorname{\mathsf{QCoh}}(\operatorname{LS}_{P}(D)^{x\text{-ram}} \underset{\operatorname{LS}_{M}(D)^{x\text{-ram}}}{\times}\operatorname{LS}_{M }(D))\]
with a unital structure, and hence view it as an object of \(\operatorname{\mathsf{FactCat}^{\text{\rm lax-fact}}}\). As shown in §9.9 of [11], the \(1\)-affineness of \(\operatorname{LS}_{P}(D)_{X^{I}_{\text{dR}}}\) implies that the factorization structure is strict, i.e. this object belongs to \(\operatorname{\mathsf{FactCat}}\).
The object \(j_{!}j^{!}(\mathfrak{n}_{P}^{*}\otimes\omega_{X})\) inherits a structure of Lie-\(!\) coalgebra in \(\operatorname{\mathsf{Rep}}(M)\) from \(\mathfrak{n}_{P}^{*}\otimes\omega_{X}\). It is Verdier dual to the Lie-\(*\) algebra \(j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\) which appears in §8 of [11].
**Proposition 8.6.2**.: _The relative cotangent complex of_
\[\sigma:\operatorname{LS}_{M}(D)_{X_{\text{dR}}}\longrightarrow\operatorname{LS}_{P}(D)^{x\text{-ram}}_{X_{\text{dR}}}\underset{\operatorname{LS}_{M}(D)^{x\text{-ram}}_{X_{\text{dR}}}}{\times}\operatorname{LS}_{M}(D)_{X_{\text{dR}}}\]
_is canonically isomorphic to \(j_{!}j^{!}(\mathfrak{n}_{P}^{*}\otimes\omega_{X})\) as a Lie-\(!\) coalgebra in \(\operatorname{\mathsf{Rep}}(M)\)._
Proof.: Let \(U:=X\backslash\{x\}\). Since
\[\operatorname{LS}_{H}(D)^{x\text{-ram}}_{X_{\text{dR}}}\underset{X_{\text{dR}}}{\times}U_{\text{dR}}=\operatorname{LS}_{H}(D)_{U_{\text{dR}}}=\operatorname{\mathbb{B}}H\times U_{\text{dR}},\]
we have
\[j^{!}T^{*}(\sigma)\cong T^{*}(\operatorname{LS}_{M}(D)_{U_{\text{dR}}}/ \operatorname{LS}_{P}(D)_{U_{\text{dR}}})\cong\mathfrak{n}_{P}^{*}\otimes \omega_{U}\cong j^{!}(\mathfrak{n}_{P}^{*}\otimes\omega_{X}),\]
from which we obtain a morphism of Lie-\(!\) coalgebras
\[j_{!}j^{!}(\mathfrak{n}_{P}^{*}\otimes\omega_{X})\longrightarrow T^{*}(\sigma).\]
It is an isomorphism over \(U\) by construction, so it suffices to prove that
\[i^{!}j_{!}j^{!}(\mathfrak{n}_{P}^{*}\otimes\omega_{X})\longrightarrow i^{!}T^ {*}(\sigma)\]
is an isomorphism, where \(i:\{x\}\to X\) is the inclusion.
Observe that
\[i^{!}T^{*}(\sigma)\cong T^{*}(\operatorname{LS}_{M}(D_{x})/\operatorname{LS}_{ P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{ \times}\operatorname{LS}_{M}(D_{x}))\cong\mathfrak{n}_{P}^{*}\otimes \operatorname{C}^{\text{dR}}(\mathring{D}_{x}).\]
By construction, the morphism in question is given by tensoring the canonical isomorphism
\[i^{!}j_{!}\omega_{U}\tilde{\longrightarrow}\operatorname{C}^{\text{dR}}( \mathring{D}_{x})\]
with \(\mathfrak{n}_{P}^{*}\).
**Corollary 8.6.2.1**.: _The functor \(\sigma^{*}\) lifts to a morphism_
\[\operatorname{\mathsf{QCoh}}(\operatorname{LS}_{P}(D)^{x\text{-ram}} \underset{\operatorname{LS}_{M}(D)^{x\text{-ram}}}{\times}\operatorname{LS}_{M }(D))\longrightarrow j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\text{-mod}^{*}( \operatorname{\mathsf{Rep}}(M))\]
_in \(\operatorname{\mathsf{FactCat}}\)._
Proof.: Observe that
\[\mathsf{QCoh}(\mathrm{LS}_{H}(D)_{X_{\mathrm{dR}}})=\mathsf{QCoh}(\mathbb{B}H \times X_{\mathrm{dR}})=\mathsf{Rep}(H)\otimes\mathsf{D}(X)\]
is canonically self-dual via the self-duality on \(\mathsf{Rep}(H)\) and \(\mathsf{D}(X)\). The resulting equivalence
\[\mathsf{Rep}(H)^{\mathrm{c,op}}_{X}\tilde{\longrightarrow}\mathsf{Rep}(H)^{ \mathrm{c}}_{X}\]
lifts to a contravariant equivalence \(L\mapsto\mathbb{D}(L)\) between Lie coalgebras in \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\) (a.k.a. Lie-! coalgebras in \(\mathsf{Rep}(H)\)) whose underlying object of \(\mathsf{Rep}(H)_{X_{\mathrm{dR}}}\) is compact and Lie-\(*\) algebras in \(\mathsf{Rep}(H)\) satisfying the same condition. If \(L\) is such a Lie-\(*\) algebra, then there is a tautological equivalence
\[L\text{-}\mathsf{mod}^{*}(\mathsf{Rep}(H))_{X_{\mathrm{dR}}^{I}}\tilde{ \longrightarrow}\mathbb{D}(L)\text{-}\mathsf{comod}^{!}(\mathsf{Rep}(H))_{X_{ \mathrm{dR}}^{I}}\]
which upgrades to an isomorphism in \(\mathsf{Fact}\mathsf{Cat}^{\mathrm{lax\text{-}fact}}\).
The fact that \(j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{*}(\mathsf{ Rep}(M))\) belongs to \(\mathsf{Fact}\mathsf{Cat}\) (as opposed to merely \(\mathsf{Fact}\mathsf{Cat}^{\mathrm{lax\text{-}fact}}\)) follows from the existence of the monadic adjunction
\[\mathrm{ind}:\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\rightleftarrows j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{*}(\mathsf{Rep}(M)):\mathrm{oblv}\]
in \(\mathsf{Fact}\mathsf{Cat}^{\mathrm{lax\text{-}fact}}_{\mathrm{lax\text{-} untl}}\) and the strictness of the factorization structure on \(\mathfrak{n}_{P}\text{-}\mathsf{mod}(\mathsf{Rep}(M))\).
Consider the canonical morphism of factorization spaces
\[\iota:\mathrm{LS}_{P}(D)\longrightarrow\mathrm{LS}_{P}(D)^{x\cdot\mathrm{ram}} \underset{\mathrm{LS}_{M}(D)^{x\cdot\mathrm{ram}}}{\times}\mathrm{LS}_{M}(D).\]
Proposition 8.6.1 implies that \(\iota^{*}\) admits a left adjoint \((\iota^{*})^{\mathrm{L}}\).
**Proposition 8.7.1**.: _The functor \((\iota^{*})^{\mathrm{L}}\) fits into a commutative square_
_in \(\mathsf{Fact}\mathsf{Cat}\), where the right vertical functor is given by induction along the morphism of Lie-\(*\) algebras \(\mathfrak{n}_{P}\otimes k_{X}\to j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\)._
Proof.: The square in question tautologically commutes if the vertical functors are replaced with their right adjoints. Thus we must show that the resulting natural transformation
\[\mathrm{ind}^{j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})}_{\mathfrak{n}_{P}\otimes k_{X}}\operatorname{res}^{N_{P}}_{\mathfrak{n}_{P}}\longrightarrow(\sigma^{*})^{\mathrm{enh}}(\iota^{*})^{\mathrm{L}}\]
is an isomorphism. For any finite set \(I\), both composite functors are \(\mathsf{Rep}(M)_{X_{\mathrm{dR}}^{I}}\)-linear, and since \(\mathsf{Rep}(P)_{X_{\mathrm{dR}}^{I}}\) is generated as a \(\mathsf{Rep}(M)_{X_{\mathrm{dR}}^{I}}\)-module by \(\operatorname{triv}_{P}(\omega_{X^{I}})\), it suffices to check that this natural transformation is an isomorphism after evaluating on this object. Applying the conservative functor
\[\mathrm{oblv}:j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\text{-}\mathsf{mod}^{ *}(\mathsf{Rep}(M))_{X_{\mathrm{dR}}^{I}}\longrightarrow\mathsf{Rep}(M)_{X_{ \mathrm{dR}}^{I}},\]
we have reduced to proving that
\[\mathrm{ind}^{j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})}_{\mathfrak{n}_{P}\otimes k_{X}}\operatorname{triv}_{\mathfrak{n}_{P}}(\omega_{X^{I}})\longrightarrow\sigma^{*}(\iota^{*})^{\mathrm{L}}\operatorname{triv}_{P}(\omega_{X^{I}})\]
is an isomorphism in \(\mathsf{Rep}(M)_{X_{\mathrm{dR}}^{I}}\). Using factorization, we can assume that \(I\) is a singleton. Over \(U=X\backslash\{x\}\), this morphism is simply the identity map on \(\operatorname{triv}_{M}(\omega_{U_{\mathrm{dR}}})\), so it suffices to check that the induced map on the fiber at \(x\) is an isomorphism.
In particular, we have reduced to proving the commutativity of the square in question at the point \(x\). Then we are in the realm of finite-dimensional deformation theory, and it can be identified with the commutative square
where the horizontal morphisms are given by inverse image and
\[\widehat{\iota}:\operatorname{LS}_{P}(D_{x})^{\wedge}_{\operatorname{LS}_{M}(D_{x})}\longrightarrow(\operatorname{LS}_{P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x}))^{\wedge}_{\operatorname{LS}_{M}(D_{x})}\]
is obtained from \(\iota\) by formal completion along \(\operatorname{LS}_{M}(D_{x})\). All of the stacks appearing here are formally smooth, and for such stacks we have the equivalence
\[\Upsilon_{Y}:\operatorname{QCoh}(Y)\tilde{\longrightarrow}\mathsf{IndCoh}(Y),\]
which intertwines inverse image for \(\operatorname{QCoh}\) with \(!\)-inverse image for \(\mathsf{IndCoh}\). Since the maps \(\iota\) and \(\widehat{\iota}\) are proper, the above square can be identified with
which commutes by base change for the standard functors on \(\mathsf{IndCoh}\).
In particular, it follows that \((\sigma^{*})^{\mathrm{enh}}\) induces a morphism
\[\operatorname{QCoh}(\operatorname{LS}_{P}(\mathring{D}_{x})\underset{ \operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x})) \longrightarrow j_{\ast}j^{\ast}(\mathfrak{n}_{P}\otimes k_{X})\text{-} \mathsf{mod}^{\ast}(\mathsf{Rep}(M))_{x} \tag{8.7.1}\]
of factorization \(\mathsf{Rep}(P)\)-module categories at \(x\).
**Proposition 8.7.2**.: _The morphism (8.7.1) restricts to an equivalence on the full subcategory_
\[\operatorname{QCoh}((\operatorname{LS}_{P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x}))^{\wedge}_{\operatorname{LS}_{M}(D_{x})}),\]
_embedded via the left adjoint of the restriction functor_
\[\operatorname{QCoh}(\operatorname{LS}_{P}(\mathring{D}_{x})\underset{ \operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x})) \longrightarrow\operatorname{QCoh}((\operatorname{LS}_{P}(\mathring{D}_{x}) \underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M }(D_{x}))^{\wedge}_{\operatorname{LS}_{M}(D_{x})}).\]
Proof.: Identify (8.7.1) with the composition
\[\operatorname{QCoh}(\operatorname{LS}_{P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x}))\overset{\Upsilon}{\longrightarrow}\mathsf{IndCoh}(\operatorname{LS}_{P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x}))\] \[\longrightarrow\mathsf{IndCoh}((\operatorname{LS}_{P}(\mathring{D}_{x})\underset{\operatorname{LS}_{M}(\mathring{D}_{x})}{\times}\operatorname{LS}_{M}(D_{x}))^{\wedge}_{\operatorname{LS}_{M}(D_{x})})\]
as in the proof of Proposition 8.7.1.
### Factorization modules for \(\Upsilon(\mathfrak{n}_{P})\)
The lax factorization category \(\mathsf{QCoh}(\mathrm{LS}_{H}(D)^{x\cdot\mathrm{ram}})\) makes \(\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\) a factorization \(\mathsf{Rep}(H)\)-module at \(x\). The action of \(\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\) on itself by tensor product of quasicoherent sheaves commutes with the factorization \(\mathsf{Rep}(H)\)-module structure. This yields a functor
\[\Phi_{H}:\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\text{--}\mathsf{mod}\longrightarrow\mathsf{Rep}(H)\text{--}\mathsf{mod}_{x}^{\mathrm{fact}}\]
which commutes with the forgetful functors to \(\mathsf{DGCat}\). On the level of objects, this means that there is a natural factorization \(\mathsf{Rep}(H)\)-module structure on any \(\mathcal{C}\) in \(\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\text{--}\mathsf{mod}\) coming from the factorization module structure on the second factor of
\[\mathcal{C}\otimes_{\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))}\mathsf{ QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))=\mathcal{C}.\]
The restriction functor
\[\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x})_{\mathrm{LS}_{H}(D_{x})}^{\wedge})\]
admits a fully faithful \(\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{\mathring{D}}_{x}))\)-linear left adjoint. It follows that the restriction of scalars functor
\[\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x})_{\mathrm{LS}_{H}(D_{x})}^{\wedge})\text{--}\mathsf{mod}\longrightarrow\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x}))\text{--}\mathsf{mod}\]
is fully faithful.
**Theorem 8.8.1**.: _The restriction of \(\Phi_{H}\) to the full subcategory \(\mathsf{QCoh}(\mathrm{LS}_{H}(\mathring{D}_{x})_{\mathrm{LS}_{H}(D_{x})}^{ \wedge})\text{--}\mathsf{mod}\) is fully faithful._
Proof.: This is Theorem 9.13.1 of [10].
**Proposition 8.8.2**.: _There is a canonical isomorphism_
\[\Phi_{P}(\mathsf{QCoh}((\mathrm{LS}_{P}(\mathring{D}_{x})\underset{\mathrm{LS}_{M}(\mathring{D}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))_{\mathrm{LS}_{M}(D_{x})}^{\wedge}))\tilde{\longrightarrow}\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M))_{x}\]
_in \(\mathsf{Rep}(P)\text{--}\mathsf{mod}_{x}^{\mathrm{fact}}\), where \(\mathsf{QCoh}(\mathrm{LS}_{P}(\mathring{D}_{x})_{\mathrm{LS}_{P}(D_{x})}^{ \wedge})\) acts on_
\[\mathsf{QCoh}((\mathrm{LS}_{P}(\mathring{\mathring{D}}_{x})\underset{\mathrm{ LS}_{M}(\mathring{\mathring{D}}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))_{\mathrm{ LS}_{M}(D_{x})}^{\wedge})\]
_via restriction along the composition_
\[(\mathrm{LS}_{P}(\mathring{\mathring{D}}_{x})\underset{\mathrm{ LS}_{M}(\mathring{\mathring{D}}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))_{\mathrm{ LS}_{M}(D_{x})}^{\wedge} \longrightarrow(\mathrm{LS}_{P}(\mathring{\mathring{D}}_{x})\underset{ \mathrm{LS}_{M}(\mathring{\mathring{D}}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))_{ \mathrm{LS}_{P}(D_{x})}^{\wedge}\] \[\longrightarrow\mathrm{LS}_{P}(\mathring{\mathring{D}}_{x})_{ \mathrm{LS}_{P}(D_{x})}^{\wedge},\]
_and the factorization \(\mathsf{Rep}(P)\)-module structure on \(\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M ))_{x}\) arises from the morphism_
\[\mathsf{Rep}(P)\longrightarrow\mathfrak{n}_{P}\text{--}\mathsf{mod}(\mathsf{ Rep}(M))\xrightarrow{\mathrm{ind}^{\mathfrak{n}\twoheadrightarrow\mathrm{fact}}} \Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M ))_{x}\]
_in \(\mathsf{FactCat}\)._
Proof.: Lemma 8.16.1 of [10] says that
\[\mathrm{ind}^{*\twoheadrightarrow\mathrm{ch}}:j_{*}j^{*}(\mathfrak{n}_{P} \otimes k_{X})\text{--}\mathsf{mod}^{*}(\mathsf{Rep}(M))_{x}\longrightarrow\mathrm{U}^{\mathrm{ch}}(j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X}))\text{--}\mathsf{mod}^{ \mathrm{ch}}(\mathsf{Rep}(M))_{x}\]
is an equivalence. Identifying
\[j^{*}\,\mathrm{U}^{\mathrm{ch}}(j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X}))\tilde{\longrightarrow}j^{*}\,\mathrm{U}^{\mathrm{ch}}(\mathfrak{n}_{P}\otimes k_{X})\]
as chiral algebras on \(X\backslash\{x\}\) and applying Proposition 5.1.1, we obtain an equivalence
\[j_{*}j^{*}(\mathfrak{n}_{P}\otimes k_{X})\text{--}\mathsf{mod}^{*}(\mathsf{ Rep}(M))_{x}\tilde{\longrightarrow}\Upsilon(\mathfrak{n}_{P})\text{--}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(M))_{x}.\]
By construction, this equivalence respects the factorization \(\mathsf{Rep}(P)\)-module structures.
We can apply \(\Phi_{P}\) to the \(\mathsf{QCoh}(\mathrm{LS}_{P}(\mathring{D}_{x}))\)-linear functor
\[(\iota_{x}^{*})^{\mathrm{L}}:\mathsf{Rep}(P)=\mathsf{QCoh}(\mathrm{LS}_{P}(D_{x} ))\longrightarrow\mathsf{QCoh}((\mathrm{LS}_{P}(\mathring{D}_{x})\underset{ \mathrm{LS}_{M}(\mathring{D}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))\overset{ \wedge}{\mathrm{LS}_{M}(D_{x})})\]
to obtain a morphism in \(\mathsf{Rep}(P)\)-\(\mathsf{mod}_{x}^{\mathrm{fact}}\). Inspecting the constructions, we see that taking the fiber at \(x\) of the morphism \((\iota^{*})^{\mathrm{L}}\) in \(\mathsf{FactCat}^{\mathrm{lax-fact}}\) from Proposition 8.7.1 yields the same morphism. Now apply Proposition 8.7.2.
### Spectral Hecke category at a point
Finally, we deduce the promised description of factorization \(\Upsilon(\mathfrak{n},\mathcal{O}_{G})\)-modules at \(x\).
**Theorem 8.9.1**.: _There is a canonical equivalence_
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\mbox{-}\mathsf{mod}^{\mathrm{fact}} (\mathsf{Rep}(G\times M))_{x}\tilde{\longrightarrow}\mathsf{QCoh}((\mathrm{ LS}_{P}(\mathring{D}_{x})\underset{\mathrm{LS}_{G\times M}(\mathring{D}_{x})}{ \times}\mathrm{LS}_{G\times M}(D_{x}))\overset{\wedge}{\mathrm{LS}_{M}(D_{x}) }).\]
Proof.: As explained in [11] §§9.22–24, for any factorization \(\mathsf{Rep}(G)\)-module category \(\mathcal{C}\) at \(x\), we can identify
\[\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G)\otimes \mathcal{C})_{x}\tilde{\longrightarrow}\operatorname{Fun}_{\mathsf{Rep}(G) \mbox{-}\mathsf{mod}_{x}^{\mathrm{fact}}}(\mathsf{Rep}(G),\mathcal{C}).\]
Here we view the diagonal bimodule \(\mathcal{O}_{G}\) as a factorization algebra in \(\mathsf{Rep}(G\times G)\), and \(\mathsf{Rep}(G)\otimes\mathcal{C}\) as a factorization \(\mathsf{Rep}(G\times G)\)-module at \(x\). Moreover, if \(\mathcal{C}\) lies in the essential image of
\[\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D}_{x})\overset{\wedge}{\mathrm{LS}_{ G}(D_{x})})\mbox{-}\mathsf{mod}\]
under \(\Phi_{G}\), it follows that
\[\mathsf{Rep}(G)\underset{\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D}_{x}) \overset{\wedge}{\mathrm{LS}_{G}(D_{x})})}{\otimes}\mathcal{C}\tilde{ \longrightarrow}\operatorname{Fun}_{\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D} _{x})\overset{\wedge}{\mathrm{LS}_{G}(D_{x})})}(\mathsf{Rep}(G),\mathcal{C})\]
where we used Theorem 8.8.1 for the second equivalence. The first equivalence uses the observation that \(\mathsf{Rep}(G)\) is dualizable and self-dual as an object of \(\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D}_{x})\overset{\wedge}{\mathrm{LS}_ {G}(D_{x})})\mbox{-}\mathsf{mod}\), cf. _loc. cit._ §9.18.
We now apply the above in the case
\[\mathcal{C}=\mathsf{QCoh}((\mathrm{LS}_{P}(\mathring{D}_{x})\underset{\mathrm{ LS}_{M}(\mathring{D}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))\overset{\wedge}{ \mathrm{LS}_{M}(D_{x})}),\]
viewed as a \(\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D}_{x})\overset{\wedge}{\mathrm{LS}_ {G}(D_{x})})\)-module via inverse image along the projection
\[\mathrm{LS}_{P}(\mathring{D}_{x})\overset{\wedge}{\mathrm{LS}_{P}(D_{x})} \longrightarrow\mathrm{LS}_{G}(\mathring{D}_{x})\overset{\wedge}{\mathrm{LS}_ {G}(D_{x})}.\]
Note that by Lemma 3.6.1, we have an equivalence
\[\mathcal{O}_{G}\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G)\otimes \Upsilon(\mathfrak{n}_{P})\mbox{-}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(M) ))_{x}\tilde{\longrightarrow}\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\mbox{ -}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(G\times M))_{x}.\]
Applying Proposition 8.8.2 and the equivalence in the previous paragraph identifies the right hand side with
\[\mathsf{Rep}(G)\underset{\mathsf{QCoh}(\mathrm{LS}_{G}(\mathring{D}_{x}) \overset{\wedge}{\mathrm{LS}_{G}(D_{x})})}{\otimes}\mathsf{QCoh}((\mathrm{LS}_ {P}(\mathring{D}_{x})\underset{\mathrm{LS}_{M}(\mathring{D}_{x})}{\times} \mathrm{LS}_{M}(D_{x}))\overset{\wedge}{\mathrm{LS}_{M}(D_{x})}).\]
The morphism
\[\mathrm{LS}_{G}(D_{x})\longrightarrow\mathrm{LS}_{G}(\mathring{D}_{x})\overset{ \wedge}{\mathrm{LS}_{G}(D_{x})},\]
being isomorphic to \(\{0\}/G\to\widehat{\mathfrak{g}}_{0}/G\), is affine, which implies that the above tensor product identifies with
\[\mathsf{QCoh}(\mathrm{LS}_{G}(D_{x})\underset{\mathrm{LS}_{G}(\hat{D}_{x})_{ \mathrm{LS}_{G}(D_{x})}^{\wedge}}{\times}(\mathrm{LS}_{P}(\mathring{\hat{D}_{x }})\underset{\mathrm{LS}_{M}(\hat{D}_{x})}{\times}\mathrm{LS}_{M}(D_{x}))_{ \mathrm{LS}_{M}(D_{x})}^{\wedge}).\]
Finally, using the identification
\[\mathrm{LS}_{H}(\mathring{\hat{D}_{x}})_{\mathrm{LS}_{H}(D_{x})}^{\wedge} \widetilde{\longrightarrow}\widehat{\mathfrak{h}}_{0}/H,\]
it is easy to see that the canonical map from the above fiber product to
\[\mathrm{LS}_{G}(D_{x})\underset{\mathrm{LS}_{G}(\hat{D}_{x})}{\times}\mathrm{ LS}_{P}(\mathring{\hat{D}_{x}})\underset{\mathrm{LS}_{M}(\hat{D}_{x})}{\times} \mathrm{LS}_{M}(D_{x})=\mathrm{LS}_{P}(\mathring{\hat{D}_{x}})\underset{ \mathrm{LS}_{G\times M}(\hat{D}_{x})}{\times}\mathrm{LS}_{G\times M}(D_{x})\]
factors through an isomorphism with the formal completion of the latter along \(\mathrm{LS}_{M}(D_{x})\).
**Corollary 8.9.1.1**.: _The monad on \(\mathsf{Rep}(P)\) induced by the adjunction_
\[\mathsf{Rep}(P)\rightleftarrows\mathsf{Sph}_{G,P,x}^{\mathrm{spec}},\]
_where the left adjoint is (5.6.2), is isomorphic to the monad given by the associative algebra \(\mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P})[-2])\)._
Proof.: Observe that the equivalence of Theorem 8.9.1 restricts to an equivalence
\[\Upsilon(\mathfrak{n}_{P},\mathcal{O}_{G})\mathsf{-mod}_{0}^{\mathrm{fact}}( \mathsf{Rep}(G\times M))_{x}\widetilde{\longrightarrow}\mathsf{QCoh}(( \mathrm{LS}_{P}(\hat{\hat{D}_{x}})\underset{\mathrm{LS}_{G\times M}(\hat{D}_{ x})}{\times}\mathrm{LS}_{G\times M}(D_{x}))_{\mathrm{LS}_{P}(D_{x})}^{\wedge}).\]
We claim that this equivalence is t-exact with respect to the usual t-structure on quasicoherent sheaves: recall that
\[(\mathrm{LS}_{P}(\mathring{\hat{D}_{x}})\underset{\mathrm{LS}_{G\times M}( \hat{D}_{x})}{\times}\mathrm{LS}_{G\times M}(D_{x}))_{\mathrm{LS}_{P}(D_{x}) }^{\wedge}\cong(\mathfrak{n}_{P}\underset{\mathfrak{g}}{\times}\{0\})/P,\]
and in particular this is an algebraic stack of finite type. The t-structure on
\[\mathsf{QCoh}((\mathfrak{n}_{P}\underset{\mathfrak{g}}{\times}\{0\})/P)\]
is uniquely characterized by the fact that direct image along
\[\{0\}/P\longrightarrow(\mathfrak{n}_{P}\underset{\mathfrak{g}}{\times}\{0\})/P\]
is t-exact, whence the claim.
It follows that the above equivalence preserves coherence, and hence renormalizes to
\[\mathsf{Sph}_{G,P,x}^{\mathrm{spec}}\widetilde{\longrightarrow}\mathsf{IndCoh}(( \mathrm{LS}_{P}(\mathring{\hat{D}_{x}})\underset{\mathrm{LS}_{G\times M}(\hat{ D}_{x})}{\times}\mathrm{LS}_{G\times M}(D_{x}))_{\mathrm{LS}_{P}(D_{x})}^{ \wedge}).\]
The functor (5.6.2) corresponds to direct image of ind-coherent sheaves along
\[\iota:\mathrm{LS}_{P}(D_{x})\longrightarrow(\mathrm{LS}_{P}(\mathring{\hat{D}_ {x}})\underset{\mathrm{LS}_{G\times M}(\hat{D}_{x})}{\times}\mathrm{LS}_{G \times M}(D_{x}))_{\mathrm{LS}_{P}(D_{x})}^{\wedge},\]
where we identify
\[\mathsf{IndCoh}(\mathrm{LS}_{P}(D_{x}))\cong\mathsf{QCoh}(\mathrm{LS}_{P}(D_{x }))\cong\mathsf{Rep}(P)\]
using the t-exact functor \(\Psi\). After a choice of formal coordinate at \(x\), this becomes the map
\[\{0\}/P\longrightarrow(\mathfrak{n}_{P}\underset{\mathfrak{g}}{\times}\{0\})/P.\]
If we identify
\[\Gamma(\mathfrak{n}_{P}\underset{\mathfrak{g}}{\times}\{0\},\mathcal{O})= \mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P})^{*}[1]),\]
then we have
\[\iota^{!}\iota_{*}V=\mathrm{Hom}_{\mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P} )^{*}[1])}(k,V)=\mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P})[-2])\otimes V\]
for any \(V\) in \(\mathsf{Rep}(P)\), i.e. the monad \(\iota^{!}\iota_{*}\) is given by the algebra \(\mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P})[-2])\) Koszul dual to \(\mathrm{Sym}((\mathfrak{g}/\mathfrak{n}_{P})^{*}[1])\), as desired.
## 9. The equivalence for \(P=G\)
In this section, we complete the proof of the factorizable derived Satake equivalence, as formulated in Theorem 6.6.1.
### The derived Satake transform as a morphism of monads
By the construction of the morphism (6.6.1), we have a commutative triangle
in \(\mathsf{FactCat}^{\mathrm{lax-fact}}\). After renormalization, we obtain a commutative triangle
in \(\mathsf{FactCat}\). By Lemma 6.3.1 and Theorem 4.6.1 respectively, the diagonal functors preserve ULA objects over each \(X^{I}\), hence admit factorizable right adjoints. Moreover, their right adjoints are conservative and hence monadic. The above triangle therefore corresponds to a morphism
\[\Psi_{G}\longrightarrow\Psi_{\check{G}}^{\mathrm{spec}} \tag{9.1.1}\]
between the corresponding monads on \(\mathsf{Rep}(\check{G})\) in \(\mathsf{FactCat}_{\mathrm{lax-untl}}\), which we will prove is an isomorphism. As previously explained, Theorem 6.6.1 will follow if we prove that the morphism of monads
\[\Psi_{G,x}\longrightarrow\Psi_{\check{G},x}^{\mathrm{spec}}\]
is an isomorphism over each \(x\in X(k)\).
### Explicit description of the monads at a point
We will need the following lemma.
**Lemma 9.2.1**.: _Let \(H\) be an affine group scheme and suppose that \(\mathsf{Rep}(H)\) acts on a DG category \(\mathcal{C}\). For any objects \(c,d\) in \(\mathcal{C}\), we have a canonical isomorphism_
\[\underline{\mathrm{Hom}}_{\mathsf{Rep}(H)}(c,d)\tilde{\longrightarrow}\mathrm{Hom}_{\mathcal{C}}(c,\mathcal{O}_{H}\star d)\]
_in \(\mathsf{Rep}(H)\), where the \(H\)-action on the right side arises by functoriality from the \(H\)-action on \(\mathcal{O}_{H}\) by right translations._
Now we deduce an explicit description of the monad \(\Psi_{G,x}\) from a well-known equivariant cohomology calculation originally due to Ginzburg.
**Proposition 9.2.2**.: _The monad_
\[\Psi_{G,x}:\mathsf{Rep}(\check{G})\longrightarrow\mathsf{Sph}_{G,x} \longrightarrow\mathsf{Rep}(\check{G})\]
_is isomorphic to the monad given by the associative algebra \(\mathrm{Sym}(\check{\mathfrak{g}}[-2])\)._
Proof.: The functor
\[\mathsf{Rep}(\check{G})\longrightarrow\mathsf{Sph}_{G,x}\]
obtained by renormalizing \(\operatorname{Sat}_{G,x}^{\operatorname{naive}}\) is monoidal, and in particular can be viewed as a \(\mathsf{Rep}(\check{G})\)-linear functor. It follows that \(\Psi_{G,x}\) is given by the associative algebra \(\underline{\operatorname{End}}_{\mathsf{Rep}(\check{G})}(\delta_{\mathfrak{L} ^{+}G})\). Lemma 9.2.1 yields a \(\check{G}\)-equivariant isomorphism
\[\underline{\operatorname{End}}_{\mathsf{Rep}(\check{G})}(\delta_{\mathfrak{L} ^{+}G})\tilde{\longrightarrow}\operatorname{Hom}_{\mathsf{Sph}_{G,x}}(\delta_ {\mathfrak{L}^{+}G},\mathcal{O}_{\check{G}}\star\delta_{\mathfrak{L}^{+}G}).\]
Note that this is an isomorphism of associative algebras in \(\mathsf{Rep}(\check{G})\), where the algebra structure on the right side is induced by the algebra structure on \(\mathcal{O}_{\check{G}}\) and composition of morphisms in \(\mathsf{Sph}_{G,x}\).
Then apply Theorem 7.6.1 from [1], which supplies an isomorphism of associative algebras
\[\operatorname{Hom}_{\mathsf{Sph}_{G,x}}(\delta_{1},\mathcal{O}_{\check{G}} \star\delta_{\mathfrak{L}^{+}G})\tilde{\longrightarrow}\operatorname{Sym}( \check{\mathfrak{g}}[-2]).\]
Next, we show that the monad \(\Psi_{\check{G},x}^{\operatorname{spec}}\) is abstractly isomorphic to the same associative algebra as \(\Psi_{G}\).
**Proposition 9.2.3**.: _The monad_
\[\Psi_{\check{G},x}^{\operatorname{spec}}:\mathsf{Rep}(\check{G}) \longrightarrow\mathsf{Sph}_{\check{G},x}^{\operatorname{spec}}\longrightarrow \mathsf{Rep}(\check{G})\]
_is isomorphic to the monad given by the associative algebra \(\operatorname{Sym}(\check{\mathfrak{g}}[-2])\)._
Proof.: This is Corollary 8.9.1.1, applied in the case \(\check{P}=\check{G}\).
### Another reduction step
We claim that if \(\operatorname{Sat}_{G,x}\) induces an isomorphism
\[\operatorname{End}_{\mathsf{Sph}_{G,x}}(\delta_{\mathfrak{L}^{+}G})\tilde{ \longrightarrow}\operatorname{End}_{\mathsf{Sph}_{G,x}^{\operatorname{spec}}}( \operatorname{Vac}_{\mathcal{O}_{\check{G}}})\]
on endomorphisms of the unit objects, then it is an equivalence.
We have previously shown that if the morphism of monads (9.1.1) is an isomorphism over \(x\), then \(\operatorname{Sat}_{G,x}\) is an equivalence. Propositions 9.2.2 and 9.2.3 show that both of these monads are abstractly isomorphic to the associative algebra \(\operatorname{Sym}(\check{\mathfrak{g}}[-2])\) in \(\mathsf{Rep}(\check{G})\), so we must prove that the endomorphism
\[\varphi_{G}:\operatorname{Sym}(\check{\mathfrak{g}}[-2])\longrightarrow \operatorname{Sym}(\check{\mathfrak{g}}[-2])\]
corresponding to (9.1.1) is an isomorphism.
It suffices to show that \(\varphi_{G}\) is an isomorphism at the level of cohomology. We view the cohomology of the associative DG algebra \(\operatorname{Sym}(\check{\mathfrak{g}}[-2])\) as a classical graded commutative algebra, so we need only prove that \(\varphi_{G}\) restricts to an isomorphism on the generators \(\check{\mathfrak{g}}\). On the other hand, the subspace
\[H^{2}(\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}})\subset\check {\mathfrak{g}}\]
generates the adjoint representation, so it is enough to show that \(\varphi_{G}\) restricts to an isomorphism on the algebra of invariants \(\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\).
From the proof of Proposition 9.2.2 we obtain a commutative square
where the left vertical isomorphism is the composition
\[\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\tilde{ \longrightarrow}\operatorname{Sym}(\mathfrak{g}[-2])^{G}\tilde{ \longrightarrow}\operatorname{C}_{\operatorname{dR}}(\mathbb{B}G)\tilde{ \longrightarrow}\operatorname{End}_{\mathsf{Sph}_{G,x}}(\delta_{\mathfrak{L} ^{+}G}).\]
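For orientation only (this illustration is not part of the argument and assumes the ground field has characteristic zero), the rank-one case makes these graded algebras explicit:
\[\operatorname{Sym}(\mathfrak{sl}_{2}[-2])^{\mathrm{SL}_{2}}\cong k[c],\qquad\operatorname{C}_{\operatorname{dR}}(\mathbb{B}\mathrm{SL}_{2})\cong k[c_{2}],\qquad\operatorname{Sym}(\mathfrak{t}[-2])^{W}\cong k[t^{2}],\]
where \(c\) corresponds to the quadratic Casimir, \(c_{2}\) to the second Chern class, and \(W=\mathbb{Z}/2\) acts on the one-dimensional \(\mathfrak{t}\) by \(t\mapsto-t\); all three generators sit in cohomological degree \(4\).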
Similarly, Proposition 9.2.3 yields a commutative square
The claim now follows.
### Conclusion of the proof
Finally, to prove that \(\operatorname{Sat}_{G,x}\) induces an isomorphism on endomorphisms of the unit objects, we reduce to the case \(G=T\) using \(\operatorname{Sat}_{G,B}\) as an intermediary. Consider the commutative diagram in \(\operatorname{\mathsf{FactCat}}\)
where \(\mathsf{Sph}_{G}\to\mathsf{Sph}_{G,B}\) is the unique morphism of \(\mathsf{Sph}_{G}\)-modules in \(\operatorname{\mathsf{FactCat}}\) (uniqueness is a consequence of unitality), and similarly for the other vertical functors.
Taking the fiber at \(x\) and passing to endomorphisms of the unit objects, we obtain a commutative diagram of associative algebras
Let us describe the algebras in the middle row: first, we have
\[\operatorname{Sym}(\check{\mathfrak{t}}[-2])\tilde{\longrightarrow}\operatorname{ Sym}(\mathfrak{t}^{*}[-2])\tilde{\longrightarrow}\operatorname{C}_{ \operatorname{dR}}(\mathbb{B}T)\tilde{\longrightarrow}\operatorname{End}_{ \mathsf{Sph}_{G,B,x}}(\Delta^{0}).\]
By Corollary 8.9.1.1, we have an equivalence
\[\mathsf{Sph}^{\operatorname{spec}}_{\check{G},\check{B},x}\tilde{ \longrightarrow}\operatorname{Sym}((\check{\mathfrak{g}}/\check{\mathfrak{n }})[-2])\text{--}\mathsf{mod}(\mathsf{Rep}(\check{B}))\]
which sends
\[\operatorname{Vac}_{\Upsilon(\check{\mathfrak{n}},\mathcal{O}_{ \check{G}})}\mapsto\operatorname{Sym}((\check{\mathfrak{g}}/\check{\mathfrak{ n}})[-2]).\]
In particular, we have isomorphisms
\[\operatorname{End}_{\mathsf{Sph}^{\operatorname{spec}}_{\check{G},\check{B},x}}( \operatorname{Vac}_{\Upsilon(\check{\mathfrak{n}},\mathcal{O}_{\check{G}})})\tilde{\longrightarrow}\operatorname{End}_{\operatorname{Sym}(( \check{\mathfrak{g}}/\check{\mathfrak{n}})[-2])\text{--}\mathsf{mod}(\mathsf{Rep}(\check{B}))}(\operatorname{Sym}(( \check{\mathfrak{g}}/\check{\mathfrak{n}})[-2]))\tilde{\longrightarrow}\operatorname{Sym}((\check{\mathfrak{g}}/\check{ \mathfrak{n}})[-2])^{\check{B}}=\operatorname{Sym}(\check{\mathfrak{t}}[-2]).\]
To summarize, under these identifications the above commutative diagram of associative algebras becomes
Here the downward vertical arrows are given by the Chevalley homomorphism
\[\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\tilde{\longrightarrow} \operatorname{Sym}(\check{\mathfrak{t}}[-2])^{W}\longrightarrow\operatorname{ Sym}(\check{\mathfrak{t}}[-2]).\]
We have previously proved that Theorem 6.6.1 holds in the case \(G=T\), which implies that \(\varphi_{T}\) is an isomorphism. It follows that
\[\varphi_{G}:\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\longrightarrow \operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\]
is injective at the level of cohomology, hence an isomorphism, since \(\operatorname{Sym}(\check{\mathfrak{g}}[-2])^{\check{G}}\) has finite-dimensional cohomologies.
## 10. The equivalence for a proper parabolic
In this section, we deduce Theorem 6.12.3 from Theorem 6.6.1 and the main theorem of [10].
### Recovering spherical objects from Whittaker invariants
Consider the pairing in \(\operatorname{\mathsf{FactCat}}\) defined as the composite
\[\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)\tilde{\longrightarrow}\mathsf{D}(\operatorname{Gr}_{G})^{ \mathfrak{L}N^{-},\psi}\otimes\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)\longrightarrow\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-}, \psi},\]
where the first functor is the equivalence of Theorem 6.5.1 and the second functor is given by convolution. By Proposition 3.4.1, this corresponds to a morphism
\[\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)\longrightarrow\mathsf{ Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-},\psi}. \tag{10.1.1}\]
in \(\operatorname{\mathsf{FactCat}}_{\text{\rm{lax-untl}}}\). By construction, this morphism is equivariant for the right action of \(\mathfrak{L}G\).
**Proposition 10.1.1**.: _The morphism (10.1.1) factors through the inclusion_
\[\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-}, \psi,\operatorname{acc}}\longrightarrow\mathsf{Rep}(\check{G})\otimes \mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-},\psi},\]
_and moreover sends the unit \(\delta_{\mathfrak{L}^{+}G}\) to the image of the regular bimodule \(\mathcal{O}_{\check{G}}\) under the functor_
\[\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check{G})\tilde{ \longrightarrow}\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\operatorname{Gr}_ {G})^{\mathfrak{L}N^{-},\psi}\] \[\xrightarrow{\operatorname{oblv}_{ \mathfrak{L}^{+}G}}\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}G)^{ \mathfrak{L}N^{-},\psi,\operatorname{acc}}.\]
Proof.: For the first assertion, note that (10.1.1) can also be obtained from the composite
\[\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)\longrightarrow\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)\tilde{\longrightarrow}\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L} N^{-},\psi},\]
where the first morphism is the action of \(\mathsf{Rep}(\check{G})\) coming from \(\operatorname{Sat}_{G}^{\text{naive}}\).
For the second claim, apply the functor of \(\mathfrak{L}^{+}G\)-invariants to (10.1.1) to obtain a morphism
\[\mathsf{D}(\mathcal{H}_{G})\longrightarrow\mathsf{Rep}(\check{G})\otimes \mathsf{D}(\operatorname{Gr}_{G})^{\mathfrak{L}N^{-},\psi}\tilde{ \longrightarrow}\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check{G}).\]
By construction, this agrees with the morphism of associative algebras (6.5.1) and in particular preserves the monoidal unit. The claim follows.
Observe that for a fixed \(x\in X(k)\), the category
\[\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}N^{-}, \psi,\mathrm{acc}}\]
has a canonical structure of factorization module at \(x\) for the factorization category
\[\mathsf{Rep}(\check{G})\otimes\mathsf{D}(\mathrm{Gr}_{G})^{\mathfrak{L}N^{-}, \psi}\cong\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check{G}),\]
compatible with the functor
\[\mathrm{oblv}_{\mathfrak{L}^{+}G}:\mathsf{Rep}(\check{G})\otimes\mathsf{D}( \mathrm{Gr}_{G})_{x}^{\mathfrak{L}N^{-},\psi}\longrightarrow\mathsf{Rep}( \check{G})\otimes\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}N^{-},\psi, \mathrm{acc}}.\]
The previous proposition implies that (10.1.1) induces a morphism
\[\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\longrightarrow \mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G} )\otimes\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-},\psi,\mathrm{acc}})_{x},\]
equivariant for the right action of \((\mathfrak{L}G)_{x}\).
If \(\mathcal{C}\) is a left \(\mathsf{D}(\mathfrak{L}G)_{x}\)-module, we obtain a functor
\[\mathcal{C}^{\mathfrak{L}^{+}G}\widetilde{\longrightarrow}\mathsf{ D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\underset{\mathsf{D}( \mathfrak{L}G)_{x}}{\otimes}\mathcal{C}\longrightarrow\mathcal{O}_{\check{G}}\text{--} \mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G})\otimes\mathsf{D}( \mathfrak{L}G)^{\mathfrak{L}N^{-},\psi,\mathrm{acc}})_{x}\underset{\mathsf{D}( \mathfrak{L}G)_{x}}{\otimes}\mathcal{C}.\]
Note that since the \(\mathsf{D}(\mathfrak{L}G)_{x}\)-module and factorization \(\mathsf{Rep}(\check{G})\)-module structures on the category
\[\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}N^{-},\psi,\mathrm{acc}}\]
commute, the category
\[\mathcal{C}^{\mathfrak{L}N^{-},\psi,\mathrm{acc}}\cong\mathsf{D}(\mathfrak{L }G)_{x}^{\mathfrak{L}N^{-},\psi,\mathrm{acc}}\underset{\mathsf{D}(\mathfrak{ L}G)_{x}}{\otimes}\mathcal{C}\]
automatically inherits a factorization \(\mathsf{Rep}(\check{G})\)-module structure, and hence we can identify
\[\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{ G})\otimes\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-},\psi,\mathrm{acc}})_{x} \underset{\mathsf{D}(\mathfrak{L}G)_{x}}{\otimes}\mathcal{C}\widetilde{ \longrightarrow}\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(\check{G})\otimes\mathcal{C}^{\mathfrak{L}N^{-},\psi,\mathrm{ acc}})_{x}.\]
We will show that the resulting functor
\[\mathcal{C}^{\mathfrak{L}^{+}G}\longrightarrow\mathcal{O}_{\check{G}}\text{--} \mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G})\otimes\mathcal{C}^{ \mathfrak{L}N^{-},\psi,\mathrm{acc}})_{x}. \tag{10.1.2}\]
is "nearly" an equivalence.
### Tempered geometric Satake equivalence
Given a \(\mathsf{D}(\mathfrak{L}G)_{x}\)-module category \(\mathcal{C}\), we define the _anti-tempered_ subcategory of \(\mathcal{C}^{\mathfrak{L}^{+}G}\) by
\[\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{anti-temp}}:=\ker(\mathcal{C}^{ \mathfrak{L}^{+}G}\xrightarrow{\mathrm{Av}_{!}^{\mathfrak{L}N^{-}, \psi}}\mathcal{C}^{\mathfrak{L}N^{-},\psi}).\]
The _tempered_ subcategory \(\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{temp}}\) is defined to be the right complement of \(\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{anti-temp}}\). The inclusion
\[\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{temp}}\longrightarrow\mathcal{C}^{ \mathfrak{L}^{+}G}\]
admits a right adjoint.
**Proposition 10.2.1**.: _The composite functor_
\[\mathsf{D}(\mathrm{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\longrightarrow \mathsf{D}(\mathrm{Gr}_{G})_{x}^{\mathfrak{L}^{+}G}=\mathsf{D}(\mathcal{H}_{G})_{ x}\longrightarrow\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}( \mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check{G}))_{x}\]
_is an equivalence._
Proof.: Recall that (6.6.1) is t-exact, and that by Theorem 6.6.1 it becomes an equivalence after renormalization. By Corollary 2.3.7 of [1], the subcategory
\[\mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\text{anti-temp}}\subset \mathsf{D}(\mathcal{H}_{G})_{x}\]
consists precisely of the infinitely connective objects, i.e. those with vanishing cohomology objects in all degrees. It follows that the functor
\[\mathsf{D}(\mathcal{H}_{G})_{x}\longrightarrow\mathsf{D}(\operatorname{Gr}_{ G})_{x}^{\mathfrak{L}^{+}G,\text{temp}}\]
right adjoint to the inclusion realizes the latter as the left completion of the t-structure on the former. On the other hand, the proof of Corollary 8.9.1.1 shows that we have a t-exact equivalence
\[\mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\text{fact}}(\mathsf{Rep}( \check{G})\otimes\mathsf{Rep}(\check{G}))_{x}\cong\mathsf{QCoh}((\{0\}\underset{\check{\mathfrak{g}}}{\times}\{0\})/\check{G}),\]
and in particular the t-structure on the source is left complete. Thus
\[\mathsf{Sph}_{\check{G},x}^{\text{spec}}=\mathcal{O}_{\check{G}}\text{--} \mathsf{mod}^{\text{fact}}(\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}( \check{G}))_{x}^{\text{ren}}\longrightarrow\mathcal{O}_{\check{G}}\text{--} \mathsf{mod}^{\text{fact}}(\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check {G}))_{x}\]
realizes the target as the left completion of the source. It follows that the functor in the proposition is obtained by applying left completion to the equivalence of Theorem 6.6.1, hence is an equivalence.
We will need a couple of lemmas.
**Lemma 10.3.1**.: _The convolution functor_
\[\mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}N^{-},\psi}\underset{ \mathsf{D}(\mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)_{x}\longrightarrow\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L} N^{-},\psi}\]
_is an equivalence onto_
\[\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}N^{-},\psi,\text{acc}}.\]
Proof.: This functor is fully faithful because \(\mathcal{H}_{G,x}\) is ind-proper (see Theorem 3.1.5 of [1]), so it suffices to show that the image generates the target under colimits. Tensor the functor
\[\operatorname{Av}_{\mathfrak{l}}^{\mathfrak{L}N^{-},\psi}:\mathsf{D}( \mathcal{H}_{G})_{x}\longrightarrow\mathsf{D}(\operatorname{Gr}_{G})_{x}^{ \mathfrak{L}N^{-},\psi}\]
with \(\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\) over \(\mathsf{D}(\mathcal{H}_{G})_{x}\) to obtain a functor
\[\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\longrightarrow \mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}N^{-},\psi}\underset{ \mathsf{D}(\mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{x}.\]
Its composition with the functor in question identifies with
\[\operatorname{Av}_{\mathfrak{l}}^{\mathfrak{L}N^{-},\psi}:\mathsf{D}( \mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\longrightarrow\mathsf{D}( \mathfrak{L}G)_{x}^{\mathfrak{L}N^{-},\psi},\]
whose image generates the accessible subcategory by definition.
**Lemma 10.3.2**.: _The functor_
\[\mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\text{temp}} \underset{\mathsf{D}(\mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^ {+}G\backslash\mathfrak{L}G)_{x}\longrightarrow\mathsf{D}(\operatorname{Gr}_ {G})_{x}^{\mathfrak{L}^{+}G}\underset{\mathsf{D}(\mathcal{H}_{G})_{x}}{ \otimes}\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}=\mathsf{D}( \mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G}\]
_is an equivalence onto \(\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G,\text{temp}}\)._
Proof.: Recall that the inclusion
\[\mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\text{temp}} \longrightarrow\mathsf{D}(\operatorname{Gr}_{G})_{x}^{\mathfrak{L}^{+}G}\]
admits a right adjoint, which is automatically \(\mathsf{D}(\mathcal{H}_{G})_{x}\)-linear because \(\mathsf{D}(\mathcal{H}_{G})_{x}\) is compactly generated by left and right dualizable objects. It follows that the functor in question is fully faithful.
To see that the image of this functor is \(\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\), it suffices to show that the kernel of the right adjoint
\[\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G}\longrightarrow\mathsf{D}( \mathrm{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\underset{\mathsf{D}( \mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{ L}G)_{x}\]
is equal to \(\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G,\mathrm{anti-temp}}\). This follows from the commutative square
\[\begin{CD}\mathsf{D}(\mathrm{Gr}_{G})_{x}^{\mathfrak{L}^{+}G}\underset{ \mathsf{D}(\mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{x}@>{\sim}>{}>\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{x}\\ @V{}V{\mathrm{Av}_{!}^{\mathfrak{L}N^{-},\psi}\otimes\mathrm{id}}V@V{}V{ \mathrm{Av}_{!}^{\mathfrak{L}N^{-},\psi}}V\\ \mathsf{D}(\mathrm{Gr}_{G})_{x}^{\mathfrak{L}N^{-},\psi}\underset{\mathsf{D}( \mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{ L}G)_{x}@>{\sim}>{}>\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}N^{-},\psi, \mathrm{acc}}.\end{CD}\]
Restrict (10.1.2) to the tempered subcategory to obtain a functor
\[\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{temp}}\longrightarrow\mathcal{O}_{ \check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G})\otimes \mathcal{C}^{\mathfrak{L}N^{-},\psi,\mathrm{acc}})_{x}. \tag{10.3.1}\]
**Proposition 10.3.3**.: _The functor (10.3.1) is an equivalence for any \(\mathsf{D}(\mathfrak{L}G)_{x}\)-module category \(\mathcal{C}\)._
Proof.: First, we can immediately reduce to the universal case \(\mathcal{C}=\mathsf{D}(\mathfrak{L}G)_{x}\). Namely, we have
\[\mathcal{C}^{\mathfrak{L}^{+}G,\mathrm{temp}}\cong\mathsf{D}(\mathfrak{L}G) _{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\underset{\mathsf{D}(\mathfrak{L}G)_{ x}}{\otimes}\mathcal{C},\]
and the functor in question is given by tensoring
\[\mathsf{D}(\mathfrak{L}G)_{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\longrightarrow \mathcal{O}_{\check{G}}\text{--}\mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}( \check{G})\otimes\mathsf{D}(\mathfrak{L}G)^{\mathfrak{L}N^{-},\psi,\mathrm{ acc}})_{x} \tag{10.3.2}\]
with \(\mathcal{C}\) over \(\mathsf{D}(\mathfrak{L}G)_{x}\).
Next, we reduce to the case \(\mathcal{C}=\mathsf{D}(\mathrm{Gr}_{G})_{x}\). First, observe that
\[\mathsf{D}(\mathrm{Gr}_{G})_{x}^{\mathfrak{L}^{+}G,\mathrm{temp}}\underset{ \mathsf{D}(\mathcal{H}_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G \backslash\mathfrak{L}G)_{x}\longrightarrow\mathcal{O}_{\check{G}}\text{--} \mathsf{mod}^{\mathrm{fact}}(\mathsf{Rep}(\check{G})\otimes\mathsf{D}( \mathrm{Gr}_{G})^{\mathfrak{L}N^{-},\psi})_{x}\underset{\mathsf{D}(\mathcal{H }_{G})_{x}}{\otimes}\mathsf{D}(\mathfrak{L}^{+}G\backslash\mathfrak{L}G)_{x}\]
is an equivalence by Proposition 10.2.1.
### Proof of Theorem 6.12.3
As shown in SS6.13, it is enough to prove that \(\operatorname{Sat}_{G,P,x}\) is an equivalence for a fixed \(x\in X(k)\).
Apply Proposition 10.3.3 in the case \(\mathcal{C}=(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}\), which yields the equivalence
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{temp}}\tilde{\longrightarrow}\mathcal{O}_{\tilde{G} }\mathsf{-mod}^{\text{fact}}(\mathsf{Rep}(\tilde{G})\otimes(\mathsf{D}( \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-}, \psi,\text{acc}})_{x}.\]
Theorem 6.8.1 supplies an isomorphism
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{ L}N^{-},\psi,\text{acc}}\tilde{\longrightarrow}\Upsilon(\tilde{\mathfrak{n}}_{ \tilde{P}})\mathsf{-mod}_{0}^{\text{fact}}(\mathsf{Rep}(\tilde{M}))\]
in \(\mathsf{FactCat}\), and the commutative triangle in the proof of Theorem 6.12.1 yields a commutative triangle
Here the left diagonal morphism is the composite
\[\mathsf{Rep}(\tilde{G})\longrightarrow\mathsf{D}(\mathfrak{L}^{+}G\backslash \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M}\xrightarrow{\operatorname {Av}_{!}^{\mathfrak{L}N^{-},\psi}}(\mathsf{D}(\mathfrak{L}G)_{ \mathfrak{L}N_{P}\mathfrak{L}^{+}M})^{\mathfrak{L}N^{-},\psi,\text{acc}},\]
where the first morphism is given by acting on the unit via \(\operatorname{Sat}_{G}^{\text{naive}}\). We therefore obtain an equivalence
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{temp}}\tilde{\longrightarrow}\Upsilon(\tilde{ \mathfrak{n}}_{\tilde{P}},\mathcal{O}_{\tilde{G}})\mathsf{-mod}^{\text{fact}}( \mathsf{Rep}(\tilde{G})\otimes\Upsilon(\tilde{\mathfrak{n}})\mathsf{-mod}_{0} ^{\text{fact}}(\mathsf{Rep}(\tilde{M})))_{x},\]
and by definition the latter category is equivalent to
\[\Upsilon(\tilde{\mathfrak{n}}_{\tilde{P}},\mathcal{O}_{\tilde{G}})\mathsf{- mod}_{0}^{\text{fact}}(\mathsf{Rep}(\tilde{G})\otimes\mathsf{Rep}(\tilde{M}))_{x}.\]
Equip the category
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{temp}}\]
with the t-structure characterized by the requirement that the functor
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G}\longrightarrow(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P }\mathfrak{L}^{+}M})_{x}^{\mathfrak{L}^{+}G,\text{temp}} \tag{10.4.1}\]
right adjoint to the inclusion is left t-exact. In fact, this functor is t-exact: its kernel
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{anti-temp}}\]
equals the kernel of \(\operatorname{Av}_{\mathfrak{l}}^{\mathfrak{L}N^{-},\psi}\) by definition, and by Proposition 6.11.1 the latter consists of infinitely connective objects, i.e. those with vanishing cohomology objects in all degrees. In particular this kernel is stable under truncation functors, which implies that (10.4.1) is t-exact as claimed.
Combining the above equivalences, we obtain
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{temp}}\tilde{\longrightarrow}\Upsilon(\tilde{ \mathfrak{n}}_{\tilde{P}},\mathcal{O}_{\tilde{G}})\mathsf{-mod}_{0}^{\text{fact }}(\mathsf{Rep}(\tilde{G})\otimes\mathsf{Rep}(\tilde{M}))_{x}. \tag{10.4.2}\]
Tracing through the constructions, we see that this is the restriction of (6.12.2) to the tempered subcategory. We claim that the triangle
commutes, which amounts to the assertion that (6.12.2) vanishes on
\[(\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{\mathfrak{L} ^{+}G,\text{anti-temp}}.\]
As noted above, this subcategory consists of infinitely connective objects, and the t-structure on
\[\Upsilon(\tilde{\mathfrak{n}}_{\check{P}},\mathcal{O}_{\check{G}})\text{-- mod}_{0}^{\text{fact}}(\mathsf{Rep}(\check{G})\otimes\mathsf{Rep}(\check{M}))_{x} \cong\mathsf{QCoh}((\tilde{\mathfrak{n}}_{\check{P}}\underset{\check{\mathfrak{ g}}}{\times}\{0\})/\check{P})\]
is left complete, so the claim follows.
It therefore suffices to show that (10.4.1) restricts to an equivalence on eventually coconnective objects, or equivalently on coconnective objects. The restriction of this functor to coconnective objects admits the left adjoint
\[((\mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}^{+}G,\text{temp}})^{\geqslant 0}\longrightarrow(\mathsf{D}( \mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{\mathfrak{L}^{+}G} \overset{\tau^{\geqslant 0}}{\longrightarrow}((\mathsf{D}(\mathfrak{L}G)_{ \mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{\mathfrak{L}^{+}G})^{\geqslant 0},\]
which is fully faithful because (10.4.1) is t-exact. So it is enough to show that (10.4.1) is conservative on eventually coconnective objects. But
\[\operatorname{Av}_{!}^{\mathfrak{L}N^{-},\psi}:(\mathsf{D}(\mathfrak{L}G)_{ \mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{\mathfrak{L}^{+}G}\longrightarrow( \mathsf{D}(\mathfrak{L}G)_{\mathfrak{L}N_{P}\mathfrak{L}^{+}M})_{x}^{ \mathfrak{L}N^{-},\psi}\]
factors through (10.4.1), and the former is conservative on eventually coconnective objects by Proposition 6.11.1.
|
2308.03526 | AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning | StarCraft II is one of the most challenging simulated reinforcement learning
environments; it is partially observable, stochastic, multi-agent, and
mastering StarCraft II requires strategic planning over long time horizons with
real-time low-level execution. It also has an active professional competitive
scene. StarCraft II is uniquely suited for advancing offline RL algorithms,
both because of its challenging nature and because Blizzard has released a
massive dataset of millions of StarCraft II games played by human players. This
paper leverages that and establishes a benchmark, called AlphaStar Unplugged,
introducing unprecedented challenges for offline reinforcement learning. We
define a dataset (a subset of Blizzard's release), tools standardizing an API
for machine learning methods, and an evaluation protocol. We also present
baseline agents, including behavior cloning, offline variants of actor-critic
and MuZero. We improve the state of the art of agents using only offline data,
and we achieve 90% win rate against previously published AlphaStar behavior
cloning agent. | Michaël Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Richard Powell, Konrad Żołna, Julian Schrittwieser, David Choi, Petko Georgiev, Daniel Toyama, Aja Huang, Roman Ring, Igor Babuschkin, Timo Ewalds, Mahyar Bordbar, Sarah Henderson, Sergio Gómez Colmenarejo, Aäron van den Oord, Wojciech Marian Czarnecki, Nando de Freitas, Oriol Vinyals | 2023-08-07T12:21:37Z | http://arxiv.org/abs/2308.03526v1 | # AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning
###### Abstract
StarCraft II is one of the most challenging simulated reinforcement learning environments; it is partially observable, stochastic, multi-agent, and mastering StarCraft II requires strategic planning over long time horizons with real-time low-level execution. It also has an active professional competitive scene. StarCraft II is uniquely suited for advancing offline RL algorithms, both because of its challenging nature and because Blizzard has released a massive dataset of millions of StarCraft II games played by human players. This paper leverages that and establishes a benchmark, called _AlphaStar Unplugged_, introducing unprecedented challenges for offline reinforcement learning. We define a dataset (a subset of Blizzard's release), tools standardizing an API for machine learning methods, and an evaluation protocol. We also present baseline agents, including behavior cloning, offline variants of actor-critic and MuZero. We improve the state of the art of agents using only offline data, and we achieve 90% win rate against previously published AlphaStar behavior cloning agent.
Starcraft II, Offline RL, Large-scale learning 2023-8-8
## 1 Introduction
Deep Reinforcement Learning is dominated by online Reinforcement Learning (RL) algorithms, where agents must interact with the environment to explore and learn. The online RL paradigm achieved considerable success on Atari (Mnih et al., 2015), Go (Silver et al., 2017), StarCraft II (Vinyals et al., 2019), DOTA 2 (Berner et al., 2019), and robotics (Andrychowicz et al., 2020). However, the requirements of extensive interaction and exploration make these algorithms unsuitable and unsafe for many real-world applications. In contrast, in the offline setting (Fu et al., 2020; Fujimoto et al., 2019; Gulcehre et al., 2020), agents learn from a fixed dataset previously logged by humans or other agents. While the offline setting would enable RL in real-world applications, most offline RL benchmarks such as D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020) have mostly focused on simple environments with data produced by RL agents. More challenging benchmarks are needed to make progress towards more ambitious real-world applications.
To rise to this challenge, we introduce _AlphaStar Unplugged_, an offline RL benchmark, which uses a dataset derived from replays of millions of humans playing the multi-player competitive game of StarCraft II. StarCraft II continues to be one of the most complex simulated environments, with partial observability, stochasticity, large action and observation spaces, delayed rewards, and multi-agent dynamics. Additionally, mastering the game requires strategic planning over long time horizons, and real-time low-level execution. Given these difficulties, breakthroughs in AlphaStar Unplugged will likely translate to many other offline RL settings, potentially transforming the field.
Additionally, unlike most RL domains, StarCraft II has an independent leaderboard of competitive
human players over a wide range of skills. It constitutes a rich and abundant source of data to train and evaluate offline RL agents.
With this paper, we release the most challenging large-scale offline RL benchmark to date, including the code of canonical agents and data processing software. We note that removing the environment interactions from the training loop significantly lowers the compute demands of StarCraft II, making this environment accessible to far more researchers in the AI community.
Our experiments on this benchmark suggest that families of algorithms that are state-of-the-art on small scale benchmarks do not perform well here, e.g. Return Conditioned Behavior Cloning (Emmons et al., 2021; Srivastava et al., 2019), Q-function based approaches (Fujimoto et al., 2019; Gulcehre et al., 2021; Wang et al., 2020), and algorithms that perform off-policy evaluation during learning (Schrittwieser et al., 2021). These approaches sometimes fail to win a single game against our weakest opponent, and all fail to outperform our unconditional behavior cloning baseline.
However, it has also provided insights into how to design successful agents. So far, all of our successful approaches are so-called one-step offline RL approaches (Brandfonbrener et al., 2021; Gulcehre et al., 2021). Generally, our best performing agents follow a two-step recipe: first, train a model to estimate the behavior policy and behavior value function. Then, use the behavior value function to improve the policy, either during training or during inference. We believe sharing these insights will be valuable to anyone interested in offline RL, especially at large scale.
## 2 StarCraft II for Offline Reinforcement Learning
StarCraft is a real-time strategy game in which players compete to control a shared map by gathering resources and building units and structures. The game has several modes, such as team games or custom maps. For instance, the StarCraft Multi-Agent Challenge (Samvelyan et al., 2019) is an increasingly popular benchmark for Multi-Agent Reinforcement Learning and includes a collection of specific tasks.
In this paper, we consider StarCraft II as a two-player game, which is the primary setting for StarCraft II. This mode is played at all levels, from casual online games to professional esport. It combines high-level reasoning over long horizons with fast-paced unit management. There are numerous strategies for StarCraft II with challenging properties presenting cycles and non-transitivity, especially since players start the game by selecting one of three alien _races_, each having fundamentally different mechanics, strengths and weaknesses. Each game is played on one of the several _maps_, which have different terrain and can affect strategies.
StarCraft II has many properties making it a great environment to develop and benchmark offline reinforcement learning algorithms. It has been played online for many years, and millions of the games were recorded as _replays_, which can be used to train agents. On the other hand, evaluation of the agents can be done by playing against humans -- including professional players -- the built-in bots, scripted bots from the community, or even the stronger online RL agents such as AlphaStar (Vinyals et al., 2019) or TStarBot (Han et al., 2021). Finally, we highlight a few properties of StarCraft II that make it particularly challenging from an offline RL perspective.
**Action space.** When learning from offline data, the performance of algorithms depends greatly on the availability of different state-action pairs in the data. We call this _coverage_ -- the more state-action pairs are absent, _i.e._ the lower the coverage, the more challenging the problem is. StarCraft II has a highly structured action space. The agent must select an action type, select a subset of its units to apply the action to, select a target for the action (either a map location or a visible unit), and decide when to observe and act next. In our API, we can consider there are approximately \(10^{26}\) possible
actions per game step. In comparison, Atari has only 18 possible actions per step. This makes it almost impossible to attain high state-action coverage for StarCraft II.
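To make the structure of this action space concrete, the following Python sketch shows the kind of compound action an agent must emit at every observed step. It is purely illustrative: the field names are hypothetical and do not correspond to the actual AlphaStar Unplugged API.

```python
# Illustrative only -- hypothetical field names, not the released API.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StructuredAction:
    function_id: int                          # which action type / ability to issue
    selected_units: Tuple[int, ...]           # subset of the player's own unit tags
    target_unit: Optional[int]                # a visible unit tag, if the action targets a unit
    target_point: Optional[Tuple[int, int]]   # a map coordinate, if the action targets a location
    delay: int                                # internal game steps until the next observation

# Even a crude count over these factors (hundreds of action types, up to 2^N unit
# subsets for N controllable units, a large grid of target locations, and a range
# of delays) dwarfs Atari's 18 discrete actions, which is why high state-action
# coverage is effectively unattainable.
```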
**Stochastic environment.** Stochastic environments may need many more trajectories to obtain high state-action coverage. The game engine has a small amount of stochasticity itself, but the main source of randomness is the unknown opponent policy, which is typically not deterministic. In contrast, in the Atari environment, stochasticity arises only from sticky actions (Machado et al., 2018).
**Partial Observability.** StarCraft II is an imperfect information game. Players only have information about opponent units that are within the field of view of the player's own units. As a result, players need to scout, _i.e._ send their units around the map to gather information about the current state of the game, and may need it at a later point in the game. On the other hand, a memory of the 3 previous frames is usually considered sufficient for Atari.
**Data.** For StarCraft II, we have access to a dataset of millions of human replays. These replays display a wide and diverse range of exploration and exploitation strategies. In comparison, the existing benchmarks (Agarwal et al., 2020; Gulcehre et al., 2020) have a bias toward datasets generated by RL agents.
## 3 AlphaStar Unplugged
We propose AlphaStar Unplugged as a benchmark for offline learning on StarCraft II. This work builds on top of the StarCraft II Learning Environment and associated replay dataset (Vinyals et al., 2017), and the AlphaStar agents described in Vinyals et al. (2019), by providing a few key components necessary for an offline RL benchmark:
* **Training setup.** We fix a dataset and a set of rules for training in order to have fair comparison between methods.
* **Evaluation metric.** We propose a set of metrics to measure performance of agents.
* **Baseline agents.** We provide a number of well tuned baseline agents.
* **Open source code.** Building an agent that performs well on StarCraft II is a massive engineering endeavor. We provide a well-tuned behavior cloning agent which forms the backbone for all agents presented in this paper1.
Footnote 1: We open-sourced our architecture, data pipeline, dataset generation scripts and supervised learning agent in [https://github.com/deepmind/alphastar](https://github.com/deepmind/alphastar)
### Dataset
About 20 million StarCraft II games are publicly available through the replay packs2. For technical reasons, we restrict the data to StarCraft II versions 4.8.2 to 4.9.2, which leaves nearly 5 million games. They come from the StarCraft II _ladder_, the official matchmaking mechanism. Each player is rated by their _MMR_, a ranking mechanism similar to _Elo_ (Elo, 1978). The MMR ranges roughly from 0 to 7000. Figure 1 shows the distribution of MMR among the episodes. In order to get quality training data, we only use games played by players with MMR greater than 3500, which corresponds to the top 22% of players. This leaves us with approximately 1.4 million games. Each game forms two episodes from a machine learning point of view -- one for each side, since we consider two-player games -- so there are 2.8 million episodes in the dataset. This represents a total of more than 30 years of game play. These replays span two different balance patches, introducing some subtle differences in the rules of StarCraft II between the older and the more recent games, which are small enough to be ignored3. In addition, the map pool changed once during this period, so the games are played on a total of 10 different maps4.
Figure 1: Histogram of player MMR from replays used for training.
Footnote 3: However, the version of the game is available for each episode, so one could decide to condition the agent on the version.
Footnote 4: Acropolis, Automaton, Cyber Forest, Kairos Junction, King’s Cove, New Repugnancy, Port Aleksander, Thunderbird, Turbo Cruise ’84, Year Zero.
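A minimal sketch of the filtering and episode-splitting described above is given below. The replay metadata schema here is hypothetical (the released preprocessing scripts define the real one), and whether the MMR threshold is applied per game or per player-perspective episode is a detail of those scripts; the sketch applies it per episode.

```python
# Hedged sketch of dataset construction; field names are hypothetical.
MIN_MMR = 3500

def episodes_from_replays(replays):
    """Split each two-player game into two episodes and keep high-MMR ones."""
    episodes = []
    for replay in replays:  # replay: dict with per-player metadata
        for player in (0, 1):
            if replay["mmr"][player] > MIN_MMR:
                episodes.append({
                    "replay_id": replay["id"],
                    "player": player,
                    "mmr": replay["mmr"][player],
                    "outcome": replay["outcome"][player],  # 1 win, 0 draw, -1 loss
                })
    return episodes
```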
The average two-player game is about 11 minutes long, which corresponds to approximately \(15,000\) internal game steps in total. This poses a significant modeling challenge, making training harder and slower. Therefore, we shorten trajectories by only observing the steps on which the player took an action. We augment each observation by adding the _delay_, which contains the number of internal game steps until the next action, and we discard the internal steps in-between. This shortens episodes by roughly a factor of 12, and is similar to what was done in Vinyals et al. (2019).
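The trajectory-shortening step can be sketched as follows; the per-step record layout is hypothetical and only meant to illustrate how the delay is attached to each kept observation.

```python
# Sketch: keep only internal steps on which the player acted, and attach the
# delay (number of internal game steps) until the next action. Hypothetical layout.
def compress_trajectory(internal_steps):
    """internal_steps: list of (game_loop, observation, action_or_None) tuples."""
    acted = [(loop, obs, act) for loop, obs, act in internal_steps if act is not None]
    compressed = []
    for i, (loop, obs, act) in enumerate(acted):
        if i + 1 < len(acted):
            delay = acted[i + 1][0] - loop        # steps until the player's next action
        else:
            delay = internal_steps[-1][0] - loop  # no further action: steps to episode end
        compressed.append({"observation": obs, "action": act, "delay": delay})
    return compressed
```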
Each episode also contains metadata, the most important ones being the outcome, which can be 1 for a victory, 0 for a draw5 and -1 for a defeat, as well as the MMR of each player. The games were played online using Blizzard's matchmaking system which ensures that in the vast majority of games, both players have a similar MMR.
Footnote 5: Draws are rare in StarCraft II, but can happen if no player can fulfill the win-condition.
The replays are provided by Blizzard and hosted on their servers. The data is anonymized, and does not contain personal information about the players. The full dataset represents over 30 years of game play time, in the form of 21 billion internal game steps. This corresponds to 3.5 billion training observations.
### Training restrictions
During training, we do not allow algorithms to use data beyond the dataset described in Section 3.1. In particular, the environment cannot be used to collect more data. However, online policy evaluation is authorized, _i.e._ policies can be run in the environment to measure their performance. This may be useful for hyperparameter tuning.
Unlike the original AlphaStar agents, agents are trained to play all three races of StarCraft II. This is more challenging, as agents are typically better when they are trained on a single race. They are also trained to play on all 10 maps available in the dataset.
In our experiments, we tried to use the same number of training inputs whenever possible -- of the order of \(k_{max}=10^{10}\) observations in total -- to make results easier to compare. However, this should be used as a guideline and not as a hard constraint. The final performance reached once each method saturates is a meaningful comparison metric, assuming each method was given enough compute budget.
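For a rough sense of scale (an illustrative calculation, not a prescription), the suggested budget corresponds to roughly three passes over the dataset size quoted in Section 3.1:

```python
k_max = 1e10                   # suggested total number of training observations
dataset_observations = 3.5e9   # approximate dataset size (Section 3.1)
print(k_max / dataset_observations)  # ~2.9 epochs over the dataset
```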
Figure 2: Training procedure.
### Evaluation protocol
Numerous metrics can be used to evaluate the agents. On one hand, the easiest to compute -- and least informative -- is simply to look at the value of the loss function. On the other hand, perhaps the most informative -- and most difficult to compute -- metric would be to evaluate the agent against a wide panel of human players, including professional players. In this paper, we propose a compromise between these two extremes. We evaluate our agents by playing repeated games against a fixed selection of 7 opponents: the very_hard built-in bot6, as well as a set of 6 reference agents presented below.
Footnote 6: The very_hard bot is not the strongest built-in bot in StarCraft II, but it is the strongest whose strength does not come from unfair advantages which break the game rules.
During training, we only evaluate the agents against the very_hard bot, since it is significantly less expensive, and we mostly use that as a validation metric, to tune hyper-parameters and discard non-promising experiments.
Fully trained agents are evaluated against the full set of opponents presented above, on all maps. We combine these win rates into two aggregated metrics while uniformly sampling the races of any pair of these agents: _Elo rating_ (Elo, 1978), and _robustness_. Robustness is computed as one minus the minimum win rate over the set of reference agents. See details of the metrics computation in Appendix A.2.
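The robustness aggregate is simple enough to transcribe directly; the sketch below follows the definition above (Elo is computed separately, as detailed in Appendix A.2), and the opponent names and numbers are placeholders.

```python
def robustness(win_rates):
    """win_rates: dict mapping reference opponent -> this agent's win rate in [0, 1]."""
    return 1.0 - min(win_rates.values())

# Placeholder numbers for illustration only.
print(robustness({"very_hard": 0.9, "reference_1": 0.8, "reference_2": 0.35}))  # 0.65
```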
## 4 Reference Agents
As explained in Section 3, we provide a set of 6 reference agents, which can be used both as baselines and for evaluation metrics. In this section, we detail the methodology and algorithms used to train them. The implementation details can be found in Appendix A.3, and results in Section 5.11.
### Definitions
The underlying system dynamics of StarCraft II can be described by a _Markov Decision Process7 (MDP)_(Bellman, 1957). An MDP, \((\mathcal{S},\mathcal{A},P,r,\mathcal{I})\), consists of finite sets of states \(\mathcal{S}\) and actions \(\mathcal{A}\), a transition distribution \(P(s^{\prime}|s,a)\) for all \((s,a,s^{\prime})\in\mathcal{S}\times\mathcal{A}\times\mathcal{S}\), a reward function8\(r:\mathcal{S}\rightarrow\mathbb{R}\), and an initial state distribution \(\mathcal{I}:\mathcal{S}\rightarrow[0,1]\). In the offline setting, the agent does not interact with the MDP but learns only from a dataset \(\mathcal{D}\) containing _episodes_, made of sequences of state and actions \((s_{t},a_{t})\). We denote \(\mathbf{s}\) the sequence of all states in the episode, and \(len(\mathbf{s})\) its length. A _policy_ is a probability distribution over the actions given a state. The dataset \(\mathcal{D}\) is assumed to have been generated by following an unknown _behavior policy_\(\mu\), such that \(a_{t}\sim\mu(\cdot|s_{t})\) for all \(t<len(\mathbf{s})\).
Footnote 7: Strictly speaking, we have a Partially Observable MDP, but we simplify this for ease of presentation.
Footnote 8: In the usual definition of an MDP, the reward is a function of the state and the action. But in StarCraft II, the reward is 1 in a winning state, -1 in a losing state, and zero otherwise. So it does not depend on the action.
As explained in Section 3.1, observed states are a subset of the internal game steps. We call _delay_ the number of internal game steps between two observed internal game steps, which corresponds to the amount of real time elapsed between the two observations9. Given states and action \((s_{t},a_{t},s_{t+1})\), we note \(d(a_{t})\) the delay between states \(s_{t}\) and \(s_{t+1}\). Note that the delay at step \(t\) is referring to the step \(t+1\), not \(t-1\). This is needed for inference, since the environment must be provided with the number of internal steps to skip until the next observation. Therefore the delay must be part of the action.
Footnote 9: One internal game step occurs every 45ms.
Given a policy \(\pi\) and a state \(s_{t}\), we define the expected discounted return \(v^{\pi}(s_{t})\) as the expected sum of the discounted rewards obtained if we follow \(\pi\) from \(s_{t}\). The discount between two steps is
based on the delay between the steps. In the case of StarCraft II, the reward is the win-loss signal, so it can only be non-zero on the last step of the episode. Therefore we can write
\[\nu^{\pi}(s_{t})=\mathbb{E}_{s_{k+1}\sim P(\cdot|s_{k},a_{k}),\,a_{k}\sim\pi(\cdot|s_{k}),\,\forall k\geq t}\left[\gamma^{D_{t}(\mathbf{s})}r(\mathbf{s})\right]\qquad\text{with}\qquad D_{t}(\mathbf{s})=\sum_{k=t}^{len(\mathbf{s})-1}d(a_{k}), \tag{1}\]
where \(r(\mathbf{s})\) is the reward on the last step of the episode. \(D_{t}(\mathbf{s})\) is simply the remaining number of internal game steps until the end of the episode.
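To make the delay-based discounting concrete, the short sketch below computes \(D_{t}(\mathbf{s})\) and the resulting discounted return for a toy episode; the delays, per-step discount and outcome are made-up illustrative values, not quantities taken from the dataset.

```python
import numpy as np

def discounted_return(delays, outcome, gamma, t=0):
    """Return gamma**D_t(s) * r(s), where D_t(s) is the number of remaining
    internal game steps between observed step t and the end of the episode."""
    remaining_steps = np.sum(delays[t:])   # D_t(s) = sum_{k >= t} d(a_k)
    return (gamma ** remaining_steps) * outcome

# Illustrative values only: delays are in internal game steps (one every 45ms).
delays = np.array([3, 8, 12, 6, 20])
outcome = 1.0       # win-loss reward of the episode
gamma = 0.9999      # hypothetical per-internal-step discount
print(discounted_return(delays, outcome, gamma, t=0))
```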
The goal of offline RL is to find a policy \(\pi^{*}\) which maximizes \(\mathbb{E}_{s_{0}\sim\mathcal{I}}[\nu^{\pi^{*}}(s_{0})]\). During training, we refer to the policy \(\pi\) trained to estimate \(\pi^{*}\) as the _target policy_.
We use \(V^{\mu}\) and \(V^{\pi}\) to denote the value functions for the behavior and target policies \(\mu\) and \(\pi\), which are trained to estimate \(\nu^{\mu}\) and \(\nu^{\pi}\), respectively.
We typically train the agent on _rollouts_, _i.e._ sequences of up to \(K\) consecutive timesteps, assembled in a minibatch of \(M\) independent rollouts. Unless specified otherwise, the minibatches are independent from each other, such that two consecutive minibatches are not correlated.
### Architecture
All our experiments are based on the same agent architecture. It is an improved version of the model used in Vinyals et al. (2019). The full architecture is summarized on Figure 3.
Inputs of the raw StarCraft II API are structured around three modalities: _vectors_, _units_ -- a list of features for each unit present in the game -- and _feature planes_ (see A.1 for more details).
Actions are comprised of seven _arguments_, and can be organized in similar modalities: function, delay, queued and repeat as vectors, since each argument is sampled from a single vector of logits. Unit_tags and target_unit_tag refer to indices in the units inputs. Finally, the world action is a 2d point on the feature planes.
We structure the architecture around these modalities:
Figure 3: Illustration of the architecture that we used for our reference agents. Different types of data are denoted by different types of arrows (vectors, units or feature planes).
* Each of the three modalities of inputs is encoded and processed independently using a fitting architecture: MLP for the vector inputs, transformer (Vaswani et al., 2017) for the units input and residual convolutional network (He et al., 2015) for the feature planes. Some of these convolutions are strided so that most of the computation is done at a lower resolution. Arguments of the previous action are embedded as well, with the exception of the previous world argument, since we found this causes too much overfitting.
* We use special operations to add interactions between these modalities: we _scatter_ units into feature planes, _i.e._ we place the embedding of each unit in its corresponding spatial location on the feature plane. We use an averaging operation to embed the units into the embedded vectors. Feature planes are embedded into vectors using strided convolutions and reshaping, and the reverse operations to embed vectors into feature planes.
* We tried using memory in the vector modality, which can be LSTM (Hochreiter and Schmidhuber, 1997) or Transformer XL (Dai et al., 2019). Most of our results do not use memory (see Section 5.6).
* For the experiments using a value function, we add an MLP on top of the vector features to produce an estimate of the value function.
* Finally, we sample actions. The seven arguments are sampled in the following order: function, delay, queued, repeat, unit_tags, target_unit_tag and world. They are sampled autoregressively,10_i.e._ each sampled argument is embedded to sample the next one. The first four arguments are sampled from the vector modality. The next two are sampled from the vector and units modalities using pointer networks (Vinyals et al., 2017), and finally the world argument is sampled from the upsampled feature planes. Note that unit_tags is actually obtained by sampling the pointer network 64 times autoregressively, so conceptually, unit_tags represent 64 arguments. Footnote 10: With the exception of target_unit_tag and world, because no action in the API uses a target_unit_tag and a world argument at the same time.
The exact hyperparameters and details of the architecture can be found in the open-sourced code which can be accessed via [https://github.com/deepmind/alphastar](https://github.com/deepmind/alphastar).
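As a schematic and deliberately simplified illustration of the autoregressive sampling order described in the list above (not the released implementation), the sketch below samples the first three scalar arguments, embedding each sample before producing the logits of the next; all weights and dimensions are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_logits(logits, temperature=1.0):
    """Sample an index from a logits vector (softmax with a temperature)."""
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Stand-in heads and embeddings; in the real agent these are learned modules.
dim, sizes = 16, {"function": 10, "delay": 128, "queued": 2}
heads = {name: rng.normal(size=(dim, n)) for name, n in sizes.items()}
embeds = {name: rng.normal(size=(n, dim)) for name, n in sizes.items()}

def sample_vector_arguments(vector_features):
    """Sample function, delay and queued autoregressively from the vector features."""
    action, h = {}, vector_features
    for name in ["function", "delay", "queued"]:
        logits = h @ heads[name]            # logits of the current argument
        action[name] = sample_from_logits(logits)
        h = h + embeds[name][action[name]]  # condition the next argument on this sample
    return action

print(sample_vector_arguments(rng.normal(size=dim)))
```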
MMR conditioning.At training time, the MMR of the player who generated the trajectory is passed as a vector input. During inference, we can control the quality of the game played by the agent by changing the MMR input. In practice, we set the MMR to the highest value to ensure the agent plays its best. This is similar to Return-Conditioned Behavior Cloning (Srivastava et al., 2019) with the MMR as the reward.
MuZero latent model.For the MuZero experiments, detailed in Section 4.5, we define the latent space \(\mathcal{L}\) as the space of vectors before the function MLP. We split the model presented above into an encoder \(E:\mathcal{S}\rightarrow\mathcal{L}\) and two decoders: a policy decoder \(D_{\pi}\), which maps latent states to distributions over actions, and a value function decoder \(D_{\nu^{\pi}}:\mathcal{L}\rightarrow\mathbb{R}\), such that \(\pi(\cdot|s)=D_{\pi}(E(s))\) and \(V^{\pi}(s)=D_{\nu^{\pi}}(E(s))\). Note that in our implementation, the decoder \(D_{\pi}\) actually produces distributions for the function and delay only. The other arguments are obtained from the estimated behavior policy \(\hat{\mu}\). Finally, we add a latent model \(L:\mathcal{L}\times\mathcal{A}\rightarrow\mathcal{L}\). Given a rollout \(((s_{0},a_{0}),...(s_{K},a_{K}))\), we compute:
\[h_{0}=E(s_{0}) h_{k+1}=L(h_{k},a_{k}) \pi(\cdot|s_{k})=D_{\pi}(h_{k}) V^{\pi}(s_{k})=D_{\nu^{\pi}}(h_{k}) \tag{2}\]
for all \(k<K\). Note that \(s_{0}\) is only the first state of the rollout, but not necessarily the first state of an episode. See Figure 4 for an illustration.
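The unroll of Equation 2 amounts to a short loop over the rollout; the sketch below uses toy placeholder callables for \(E\), \(L\), \(D_{\pi}\) and \(D_{V^{\pi}}\), so only the control flow reflects the description above.

```python
def muzero_unroll(encoder, latent_model, policy_decoder, value_decoder, s0, actions):
    """Unroll the latent model from the first state of a rollout:
    h_0 = E(s_0), h_{k+1} = L(h_k, a_k), with pi and V decoded from each h_k."""
    h = encoder(s0)
    policies, values = [], []
    for a in actions:
        policies.append(policy_decoder(h))
        values.append(value_decoder(h))
        h = latent_model(h, a)   # advance the latent state with the action
    return policies, values

# Toy placeholders (illustrative only): latent states and actions are scalars.
E = lambda s: float(s)
L = lambda h, a: h + a
D_pi = lambda h: {"left": 0.5, "right": 0.5}   # dummy action distribution
D_v = lambda h: 0.1 * h                        # dummy value estimate
print(muzero_unroll(E, L, D_pi, D_v, s0=1.0, actions=[0.0, 1.0, -1.0]))
```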
### Behavior cloning
Behavior Cloning (BC) agent.Our first reference agent is trained using _behavior cloning_, the process of estimating the behavior policy \(\mu\). We learned an estimate \(\hat{\mu}\) by minimizing the negative log-likelihood of the action \(a_{t}\) under the policy \(\hat{\mu}(\cdot|s_{t})\). Given a rollout \(\mathbf{s}\), we write
\[L^{BC}(\mathbf{s})=-\sum_{t=0}^{len(\mathbf{s})-1}\log\left(\hat{\mu}(a_{t}|s_{ t})\right). \tag{3}\]
This is similar to training a language model. The procedure is detailed in Algorithm 1 in the Appendix. It is the same procedure that was used by the AlphaStar Supervised agent in Vinyals et al. (2019). In practice, since each action is comprised of seven arguments, there is one loss per argument.
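A minimal sketch of Equation 3 with one loss term per argument is given below; it assumes the policy network returns, for each argument, a matrix of log-probabilities over the rollout, and the names and shapes are illustrative only.

```python
import numpy as np

def behavior_cloning_loss(log_probs, actions):
    """Sum of negative log-likelihoods over the rollout and over the action arguments.

    log_probs: dict argument name -> array [T, n_choices] of log mu_hat(.|s_t)
    actions:   dict argument name -> int array [T] of the choices taken in the data
    """
    loss = 0.0
    for name, lp in log_probs.items():
        taken = lp[np.arange(lp.shape[0]), actions[name]]   # log-prob of the taken choice
        loss -= np.sum(taken)
    return loss

# Illustrative example with two arguments and a rollout of length 3.
rng = np.random.default_rng(0)
log_probs = {"function": np.log(rng.dirichlet(np.ones(4), size=3)),
             "delay": np.log(rng.dirichlet(np.ones(5), size=3))}
actions = {"function": np.array([0, 2, 1]), "delay": np.array([4, 0, 3])}
print(behavior_cloning_loss(log_probs, actions))
```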
In order to avoid overfitting during behavior cloning, we also used a weight decay loss which is defined as the sum of the square of the network parameters.
Fine-tuned Behavior Cloning (FT-BC) agent.Behavior Cloning mimics the training data, so higher quality data should lead to better performance. Unfortunately, since filtering the data also decreases the number of episodes, generalization is affected (see Section 5.5). In order to get the best of both worlds, we used a method called _fine tuning_. It is a secondary training phase after running behavior cloning on the whole dataset. In this phase, we reduced the learning rate and filtered the data to top-tier games. This generalizes better than training only on either set of data, and was already used in Vinyals et al. (2019).
### Offline Actor-Critic
Actor-critic (Barto et al., 1983; Witten, 1977) algorithms learn a target policy \(\pi\) and the value function \(\nu^{\pi}\). In off-policy settings, where the target policy \(\pi\) differs from the behavior policy \(\mu\), we compute _importance sampling_ ratios \(\rho_{t}(a_{t}|s_{t})=\pi(a_{t}|s_{t})/\mu(a_{t}|s_{t})\) where \((s_{t},a_{t})\) come from the data, _i.e._ follow the behavior policy. There are many variants of the loss in the literature. The simplest version is called \(1\)-Step Temporal Differences, or TD(0), defined as:
\[L^{TD(0)}(\mathbf{s})=-\sum_{t=0}^{len(\mathbf{s})-2}\odot\left[\rho_{t}(a_{t}|s_{t})\left(\gamma V^{\pi}(s_{t+1})-V^{\pi}(s_{t})+r(s_{t+1})\right)\right]\log(\pi(a_{t}|s_{t})) \tag{4}\]
Figure 4: Illustration of the architecture used for MuZero. \(E\) is the encoder, \(L\) is the latent model, and \(D_{\pi}\) and \(D_{V^{\pi}}\) are the policy and value function decoders, respectively.
where the \(\odot\) symbol corresponds to the stop-gradient operation. In this equation, \(V^{\pi}\) is called the _critic_. The loss can be modified to use N-Step Temporal Differences (Sutton and Barto, 2018) by adding more terms to Equation 4, and it can be further improved by using V-Trace (Espeholt et al., 2018) in order to reduce variance. Note that for simplicity of implementation, we only applied this loss for some of the arguments, namely function and delay, and we use the behavior policy for the other ones.
We learned the estimated behavior value \(V^{\mu}\) by minimizing the _Mean-Squared Error (MSE)_ loss:
\[L^{MSE}(\mathbf{s})=\frac{1}{2}\sum_{t=0}^{len(\mathbf{s})-1}||V^{\mu}(s_{t})- r(\mathbf{s})||_{2}^{2}. \tag{5}\]
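A simplified sketch of these two losses is given below; it uses a scalar discount instead of the delay-based one, omits V-Trace, multi-step corrections and the per-argument structure, and in an autodiff framework the bracketed advantage of Equation 4 would be wrapped in a stop-gradient (here it is simply a constant coefficient).

```python
import numpy as np

def offline_actor_critic_losses(log_pi, rho, values, rewards, gamma):
    """Simplified TD(0) policy loss (Eq. 4) and MSE value loss (Eq. 5).

    log_pi:  [T-1] log pi(a_t|s_t) under the target policy
    rho:     [T-1] (clipped) importance ratios pi/mu_hat
    values:  [T]   critic estimates V(s_t)
    rewards: [T]   rewards, non-zero only on the terminal step
    """
    advantage = rho * (gamma * values[1:] - values[:-1] + rewards[1:])  # treated as constant
    policy_loss = -np.sum(advantage * log_pi)
    value_loss = 0.5 * np.sum((values - rewards[-1]) ** 2)  # regress the critic to the outcome
    return policy_loss, value_loss

# Illustrative numbers only.
log_pi = np.log(np.array([0.5, 0.4, 0.6]))
rho = np.array([1.0, 0.9, 1.1])
values = np.array([0.1, 0.2, 0.4, 0.8])
rewards = np.array([0.0, 0.0, 0.0, 1.0])
print(offline_actor_critic_losses(log_pi, rho, values, rewards, gamma=0.999))
```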
Offline Actor-Critic (OAC) agent.Although actor-critic has an off-policy correction term, it was not enough to make it work without adjustments to the pure offline setting.
The behavior policy \(\mu\) appears in the denominator of \(\rho\), but we do not have access to the behavior policy used by the players; we can only observe their actions. Fortunately, the Behavior Cloning agent learns an estimate \(\hat{\mu}\), which we used to compute the estimated ratio \(\hat{\rho}=\pi/\hat{\mu}\).
The Behavior Cloning policy \(\hat{\mu}\) can be used as the starting point for \(\pi\) (_i.e._ used to initialize the weights). This way, the estimated importance sampling \(\hat{\rho}\) equals \(1\) at the beginning of training.
Equation 4 uses \(V^{\pi}\) as the critic, which is standard with actor-critic methods. This can be done even in offline settings, by using a Temporal Differences loss for the value function (Espeholt et al., 2018). Unfortunately, this can lead to divergence during offline training, which is a known problem (van Hasselt et al., 2018). One solution could be early stopping: the policy \(\pi\) improves at first before deteriorating, therefore we could stop training early and obtain an improved policy \(\pi\). However, this method requires running the environment to detect when to stop, which is contrary to the rules of AlphaStar Unplugged. Instead, we used \(V^{\mu}\) as the critic and kept it fixed, instead of \(V^{\pi}\).
Emphatic Offline Actor-Critic (E-OAC) agent._N-step Emphatic Traces (NETD)_(Jiang et al., 2021) avoids divergence in off-policy learning under some conditions, by weighting the updates beyond the importance sampling ratios. We refer to Jiang et al. (2021) for details about the computation of the emphatic traces.
### MuZero
MuZero Unplugged (Schrittwieser et al., 2021) adapts _Monte Carlo Tree Search (MCTS)_ to the offline setting. It has been successful on a variety of benchmarks (Dulac-Arnold et al., 2019; Gulcehre et al., 2020). In order to handle the large action space of StarCraft II, we sample multiple actions from the policy and restrict the search to these actions only, as introduced in Hubert et al. (2021). We used the latent model presented in Section 4.2, and similarly to the offline actor-critic agents, we only improved the function and delay arguments from the behavior cloning policy.
MuZero Supervised (MZS) agent.Similarly to the Offline Actor-Critic case, training the target policy \(\pi\) and estimated value function \(V^{\pi}\) jointly can diverge. In an analog approach, a workaround is to only train the policy to estimate the behavior policy, and use the value and latent model to run MCTS at inference time only. This results in only using the losses for the policy and the value function
for MuZero. In other words, the total loss is simply:
\[L^{MuZero}(s)=L^{BC}(s)+L^{MSE}(s) \tag{6}\]
where the policy and value function are computed using the latent model for all steps except the first one, as shown on Equation 2. Although the loss is similar to standard behavior cloning, using this method can lead to improved performance thanks to the regularization effects of the value function training and the latent model.
MuZero Supervised with MCTS at inference time (MZS-MCTS) agent.The MuZero Unplugged algorithm uses MCTS at training time and inference time. As explained above, policy improvement at training time can lead to divergence. Using MCTS at inference time, on the other hand, is stable and leads to better policies. We use the approach detailed in Hubert et al. (2021) for the inference.
## 5 Experiments
In this section, we measure the influence of several parameters. For simplicity, we use the win rate against the very_hard bot as the metric for these experiments. Most experiments are run in the behavior cloning setting. Due to the cost of running such experiments, we could only train a single model per set of parameters, but the consistency of the conclusions leads us to believe that the results are significant.
Moreover, Section 5.11 presents the performance of the reference agents on all AlphaStar Unplugged metrics, as well as against the original AlphaStar agents from Vinyals et al. (2019).
In this section, we call number of learner _steps_ the number of updates of the weights on minibatches of rollouts of size \(M\times K\). We call number of learner _frames_ the total number of observations used by the learner, _i.e._ the number of steps multiplied by \(M\times K\).
### Minibatch and rollout sizes
The minibatch size \(M\) and rollout size \(K\) influence the final performance of the models. Table 1 compares some settings in the case of behavior cloning. In all these experiments, the total number of training frames is \(10^{10}\). We found that more data per step -- _i.e._ larger \(M\times K\) -- leads to better final performance.
There are unfortunately a few constraints to respect. \(M\times K\) cannot be increased indefinitely because of the memory usage. The largest value we could use was \(16,384\) or \(32,768\), depending on the method. Besides, the Offline Actor-Critic and MuZero methods require \(K>1\), and larger values of \(K\) stabilize the training.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Minibatch size \(M\) & Rollout length \(K\) & \(M\times K\) & win rate vs. very_hard \\ \hline
8,192 & 1 & 8,192 & 70\% \\
16,384 & 1 & 16,384 & 79\% \\
256 & 64 & 16,384 & 79\% \\
32,768 & 1 & 32,768 & 84\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Behavior cloning performance with different minibatch sizes \(M\) and rollout lengths \(K\).
Figure 5: Win rate against the very_hard bot for different learning rate schedules, on behavior cloning.
### Learning rate
The learning rate \(\lambda\) has a significant influence on the final performance of the agents. We used a cosine learning rate schedule (Loshchilov and Hutter, 2016), parameterized by the initial learning rate \(\lambda_{0}\). Some experiments use a ramp-in period over \(N_{\text{ramp-in}}\) frames. At frame \(k\), the learning rate is given by
\[\lambda(k)=\min\left(1,\frac{k}{N_{\text{ramp-in}}}\right)\cdot\frac{\lambda_{0}}{2}\cdot\left(\cos\left(\pi\cdot\frac{k}{k_{max}}\right)+1\right) \tag{7}\]
where \(k_{max}\) is the total number of training frames. We compared this schedule to a constant learning rate on Figure 4(a).
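A direct transcription of this schedule, assuming the cosine-decay form of Equation 7 written above (decay from \(\lambda_{0}\) to 0 with a linear ramp-in), is given below; the example values of \(\lambda_{0}\), \(k_{max}\) and the ramp-in length are illustrative.

```python
import math

def learning_rate(k, lr_init, k_max, n_ramp_in=0):
    """Cosine learning rate schedule with an optional linear ramp-in (Equation 7)."""
    ramp = min(1.0, k / n_ramp_in) if n_ramp_in > 0 else 1.0
    return ramp * 0.5 * lr_init * (math.cos(math.pi * k / k_max) + 1.0)

# Example: lr_init = 5e-4 decayed over 1e10 frames, with a hypothetical 1e8-frame ramp-in.
for k in [0, 1e8, 5e9, 1e10]:
    print(int(k), learning_rate(k, lr_init=5e-4, k_max=1e10, n_ramp_in=1e8))
```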
Figure 4(b) shows the final performance for different values of \(\lambda_{0}\) and different minibatch sizes \(M\). Since these experiments are slow, it is common to look at the win rate before the experiment is over and compare the results before convergence. This should be avoided when searching for the optimal \(\lambda_{0}\): we observed that after only \(10^{9}\) steps, the best performance is obtained with \(\lambda_{0}=10^{-3}\), but after the full training this is no longer the case.
In the following experiments, we used \(\lambda_{0}=5\cdot 10^{-4}\) unless specified otherwise. The learning rate schedules used to train the reference agents are detailed in Appendix A.3.
### Number of training frames
As mentioned in Section 3.2, we trained most of our agents over \(k_{max}=10^{10}\) input frames. We measured the behavior cloning performance of the agents trained on fewer frames, as shown on Figure 5(a). The performance increases logarithmically with the number of training frames.
Note that the fine-tuned (FT-BC) and offline actor-critic (OAC and E-OAC) reference agents were trained on \(10^{9}\) frames, restarting from the behavior cloning agent. Therefore, they were trained on a total of 11 billion frames, whereas the BC, MZS and MZS-MCTS were only trained on 10 billion frames.
### Dataset size
Figure 5(b) shows the behavior cloning performance for different dataset sizes, _i.e._ number of unique episodes used for training. For all the points on this curve, we trained the model on the full \(k_{max}=10^{10}\) frames, which means that episodes are repeated more often with smaller sized datasets. Unlike most experiments, here we used minibatch size \(M=16,384\) and a learning rate of \(10^{-3}\).
It is noteworthy that the win rate with only \(10\%\) of the episodes in the dataset is close to the best one. This can be used to save storage at very little cost, if it is a concern. However, further reducing the dataset size significantly alters the performance.
### Data filtering
Filtering the data has a large influence on the final performance of the models. Table 2 shows that, for behavior cloning, restricting the training set to fewer, higher quality episodes results in poorer performance. However, training using the full dataset followed by a fine-tuning phase on high quality data works best (FT-BC reference agent).
\begin{table}
\begin{tabular}{c c c|c c c|c} \hline \hline \multicolumn{3}{c|}{Main training} & \multicolumn{3}{c|}{Fine-tuning} & \\ \hline MMR & filter & \#episodes & MMR & filter & \#episodes & win rate vs. very\_hard \\ \hline \(>\)3500 & win\(+\)loss & 2,776,466 & & & & 84\% \\ \(>\)6000 & win\(+\)loss & 64,894 & & & & 65\% \\ \(>\)6000 & win & 32,447 & & & & 51\% \\ \(>\)3500 & win\(+\)loss & 2,776,466 & \(>\)6200 & win & 21,836 & **89\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of behavior cloning when using different MMR filtering schemes. Higher quality data also means fewer episodes, therefore worse performance. High quality data for fine-tuning gives the best results.
Figure 6: Win rate against the very_hard bot when scaling the data.
### Memory
The AlphaStar agent of Vinyals et al. (2019) uses an LSTM module to implement memory. We have tried using LSTM, Transformers and no memory. Surprisingly, we found that no memory performs better than LSTM for behavior cloning, although the final values of the losses are higher.
Results with transformers are more ambivalent. The transformer agent performs similarly to the memory-less agent on \(10^{10}\) training frames. However, although the performance of the memory-less agent saturates beyond \(k_{max}=10^{10}\) frames, transformers do not, and they outperform the memory-less agent if trained for \(2\cdot 10^{10}\) frames. Table 3 summarizes the performance of these agents versus the very_hard bot.
Transformers require extensive hyperparameter tuning and longer training times. Therefore, all agents presented in the main experiments are memory-less. Using transformers for other Offline RL baselines may result in more pronounced benefits and is an interesting future research direction.
### Model size
Because of the complexity of the model, many parts could be scaled individually, but this would be prohibitive. We chose our standard model size as the largest model which can fit in memory without significant slowdown. Scaling down the width of the model by half leads to significant decrease of the performance, from 83% to 76% win rate against the very_hard bot, however scaling down the depth by half (rounding up) barely changes the win rate (82%). In the setup used for our experiments, the training speed does not significantly increase when decreasing the depth, but potential speed gains could be obtained in the future by using smaller models.
### Temperature and sampling
During inference, we sample from the policy \(\pi(\cdot|s_{t})\) given a state \(s_{t}\). In practice, the policy is characterized by a _logits_ vector \(y\) such that:
\[\pi(a|s_{t})=\frac{\exp\left(y_{a}/\beta\right)}{\sum_{a^{\prime}=1}^{|\mathcal{A}|}\exp\left(y_{a^{\prime}}/\beta\right)} \tag{8}\]
where \(\beta\) is the _temperature_. During training, the temperature is \(\beta=1\) but it can be changed at inference time, in order to make the policy more peaked.
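The effect of the inference-time temperature can be reproduced with a few lines; the logits below are arbitrary illustrative values.

```python
import numpy as np

def policy_probs(logits, beta):
    """Softmax of Equation 8 with temperature beta; beta < 1 makes the policy more peaked."""
    z = logits / beta
    z -= z.max()          # for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
for beta in [1.0, 0.8]:
    print(beta, np.round(policy_probs(logits, beta), 3))
```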
We found that \(\beta=0.8\) is a good value for the temperature during inference, as shown on Table 4.
\begin{table}
\begin{tabular}{c|c} \hline \hline Memory & Win rate vs. very_hard \\ \hline LSTM & 70\% \\ No Memory & 84\% \\ Transformer, \(k_{max}=10^{10}\) frames & 85\% \\ Transformer, \(k_{max}=2\cdot 10^{10}\) frames & 89\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of behavior cloning performance against the very_hard built-in bot with different implementations of memory.
### Critic of offline actor-critic
For the offline actor-critic, we experimented with using the value function of the target policy, \(V^{\pi}\), as the critic, instead of using the fixed value function of the behavior policy, \(V^{\mu}\). Figure 6(a) shows the divergence observed when using \(V^{\pi}\). Indeed, although the win rate first increases in both cases, it stays high with \(V^{\mu}\) but deteriorates with \(V^{\pi}\). On Figure 6(b), we can see that the importance sampling \(\rho\) (clipped by the V-Trace algorithm) decayed much faster and lower when using \(V^{\pi}\). This means that the policies \(\pi\) and \(\mu\) got further and further apart on the training set and eventually diverged.
### MCTS during training and inference
Our preliminary experiments on using the full MuZero Unplugged algorithm, _i.e._ training with MCTS targets, were not successful. We found that the policy would collapse quickly to a few actions with high (over-)estimated value. While MCTS at inference time improves performance, using MCTS at training time leads to a collapsed policy. To investigate this further, we evaluated the performance of repeated applications of MCTS policy improvement on the behavior policy \(\hat{\mu}\) and value \(V^{\mu}\). We do this by training a new MuZero model on the MCTS actions of the behavior policy, i.e. \(\hat{\nu}=MCTS(\hat{\mu},V^{\mu})\). We found that the MCTS performance of this policy \(MCTS(\hat{\nu},V^{\mu})\) is worse than the performance of \(\hat{\nu}\) or \(MCTS(\hat{\mu},V^{\mu})\). Thus, repeated applications of MCTS do not continue to improve the policy. We believe this is likely due to the MCTS policy distribution generating out-of-distribution action samples with over-estimated values.
Figure 8 compares using MCTS or not during inference. We can see that using MCTS always outperforms not using it, even at the beginning of training.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Temperature \(\beta\) & Behavior Cloning & Fine-Tuning & Offline Actor Critic &
\begin{tabular}{c} Emphatic \\ Offline Actor-Critic \\ \end{tabular} \\ \hline
1 & 84\% & 90\% & 93\% & 93\% \\
0.8 & 88\% & 95\% & 98\% & 97\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Win rate of different agents versus the very_hard bot with two different sampling temperatures.
### Evaluation of the reference agents
Table 5 shows the performance of our six reference agents using our three metrics: robustness, Elo and win rate versus the very_hard built-in bot. These metrics are explained in Section 3.3. The three best agents utilize offline RL algorithms (highlighted in pale blue).
The full win rate matrix of the reference agents can be seen in Figure 9. A more detailed matrix, split by race, is displayed in Figure 10 in Appendix A.4.
We observe that the MuZero Supervised with MCTS at inference time (MZS-MCTS) reference agent performs best, although at the cost of slower inference. Generally, we see that the three offline RL methods are ranked closely and significantly higher than behavior cloning. For completeness, we compare with the original AlphaStar agents. The AlphaStar Supervised was trained as three race-specific agents, which is different from the rules of our benchmark (agents should play all races). Therefore, we also compare our agents to a version of AlphaStar Supervised trained to play all races. The win rates of the MZS-MCTS, E-OAC and OAC against this agent are 90%, 93% and 90%, respectively (see Figure 9). We also note that although the offline RL agents improve upon the behavior cloning baseline, they are far from the online RL performance of AlphaStar Final, which was trained using several orders of magnitude more computing power.
Figure 8: Comparison of the win rates of the MZS and MZS-MCTS agents over the course of training. Using MCTS outperforms not using it throughout training.
Figure 9: Win rate matrix of the reference agents, normalized between 0 and 100. Note that because of draws, the win rates do not always sum to 100 across the diagonal. AS-SUP is the original AlphaStar Supervised agent (not race specific).
### Additional offline RL baselines
We evaluated several typical off-policy and offline RL baselines such as action-value based methods like deep offline Q-Learning (Agarwal et al., 2020), SARSA (Rummery and Niranjan, 1994), Critic Regularized Regression (CRR) (Wang et al., 2020), Batch-Constrained Deep Q-Learning (BCQ) (Fujimoto et al., 2019), Regularized Behavior Value Estimation (R-BVE) (Gulcehre et al., 2021), Critic-Weighted Policy (CWP) (Wang et al., 2020) and Return Conditioned Behavior Cloning (RCBC) (Srivastava et al., 2019) on AlphaStar Unplugged. We also tried Advantage-Weighted Regression (AWR) (Peng et al., 2019), and Proximal Policy Optimization (PPO) (Schulman et al., 2017). None of those approaches could achieve better results than the agents such as BC and FT-BC. In this section, we will highlight some of those approaches and challenges we faced when we scaled them up to StarCraft II.
Deep offline Q-Learning.We trained offline Q-learning agents based on DQN (Mnih et al., 2015), which predict Q-values and the policy with the same output layer, for the function argument only. However, the training of those offline Q-learning agents was very unstable, and they achieved a 0% win rate against the very_hard bot. Moreover, typical approaches to improve Q-learning, such as N-step returns, the dueling network architecture (Wang et al., 2016) and double Q-learning (Hasselt et al., 2016), did not improve the performance of our Q-learning agents. Besides the policies themselves, the accuracy of the action-values in predicting the returns was poor.
Offline RL methods using action values.We trained CRR, BCQ, CWP, and R-BVE agents with an action-value Q-head on the function argument. CRR and R-BVE achieved very similar results, and neither could provide significant improvements over the BC agent. BVE and R-BVE were very stable in terms of training. For CRR, we also used BVE to learn the Q-values. Overall, CRR, R-BVE, CWP, and BCQ all achieved around 83-84% win rate against the very_hard bot.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Agent & Robustness & Elo & vs very\_hard \\ \hline MuZero Supervised with MCTS at inference time & 50\% & 1578 & 95\% \\ Emphatic Offline Actor-Critic & 43\% & 1563 & 97\% \\ Offline Actor-Critic & 43\% & 1548 & 98\% \\ Fine-tuned Behavior Cloning & 36\% & 1485 & 95\% \\ MuZero Supervised & 30\% & 1425 & 92\% \\ Behavior Cloning & 25\% & 1380 & 88\% \\ \hline very\_hard built-in bot & 3\% & 1000 & 50\% \\ AlphaStar Supervised & 8\% & 1171 & 75\% \\ AlphaStar Supervised (Race specific networks) & 17\% & 1280 & 82\% \\ AlphaStar Supervised (Race specific networks + FT) & 44\% & 1545 & 94\% \\ AlphaStar Final (Race specific networks + FT + Online Learning) & 100\% & 2968 & 100\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of the 6 reference agents with the proposed metrics. Agents highlighted in pale blue utilize offline RL algorithms, whereas the other 3 rely on behavior cloning. In the bottom portion of this table we show performance of agents from Vinyals et al. (2019). Our BC agent is most comparable to AlphaStar Supervised but performs better due to significant tuning improvements. The other AlphaStar agents highlighted in grey have differences which make their performance not directly comparable to ours.
Return Conditioned Behavior Cloning (RCBC).We trained a BC agent conditioned on the win-loss return. During inference, we conditioned it on winning returns only, to make it model the behavior policy used in winning games. We did not notice any difference; in fact, the agent seemed to ignore the return conditioning. We attribute this to the two well-known failure points of RCBC approaches: stochasticity arising from the noisy opponents, and the inability to do trajectory stitching (Brandfonbrener et al., 2022).
## 6 Discussion
Behavior cloning is the foundation of all agents in this work. The offline RL agents start by estimating the behavior policy using behavior cloning, then improve upon it using the reward signal. This allows them to perform significantly better than the behavior cloning results. Indeed, although the agents are conditioned on the MMR during training, the behavior cloning agents are still fundamentally limited to estimating the behavior policy, ignorant about rewards. As a result, the policy they learn is a smoothed version of all the policies that generated the dataset. In contrast, offline RL methods use rewards to improve learned policies in different ways. Offline Actor-Critic methods use policy-gradient. MCTS at inference time aims at maximizing the estimated return. Even the MuZero Supervised without MCTS and the fine-tuned behavior cloning make use of the reward, and outperform the BC baseline.
We have observed that algorithms originally designed for online learning -- even with off-policy corrections -- do not work well when applied directly to the full offline RL setting. We attribute this in part to the problem of the _deadly triad_(Sutton and Barto, 2018; Tsitsiklis and Van Roy, 1997; van Hasselt et al., 2018). However, many recent works have found these algorithms can be made more effective simply by making modifications that ensure the target policy stays close to the behavior policy \(\mu\), that the value function stays close to \(V^{\mu}\), or both. Our results with Actor-Critic and MuZero are in accordance with these findings.
Among all the methods we tried, the reference agents are the ones which led to improved performance. However, we have tried several other methods without success, listed in Section 5.12. We may have failed to find the modifications which would have made these methods perform well on this dataset. However, AlphaStar Unplugged is fundamentally difficult:
* **Limited coverage.** The action space is very large, the state-action coverage in the dataset is low, and the environment is highly partially observable. This makes it challenging for the value function to extrapolate to unseen states and actions (Fujimoto et al., 2019; Gulcehre et al., 2022). This is particularly impactful for Q-value based methods, since there are significantly fewer states than state-action pairs, so it is easier to learn state value functions. Approaches like R-BVE mitigate extrapolation errors during training, but the agent still has to extrapolate during inference.
* **Weak learning signal.** The win-loss reward is a weak learning signal to learn a good policy because the dataset has games with a wide range of qualities and the win-loss reward ignores this. For example, the winner of a game between two low-skilled players would consistently lose to the loser of a game between two professional players. Thus, purely relying on the win-loss signal in the offline RL case is problematic.
* **Credit assignment** is difficult due to the large action space, sparse rewards, long horizons, and partial observability. This exacerbates the problems of the offline RL based agents.
* **Autoregressive action space** requires learning autoregressive Q-values which is challenging and understudied in the literature. In this paper, we side-stepped this by just learning a Q-function only for the function argument.
## 7 Related work
Online RL has been very impactful for building agents to play computer games. RL agents can outperform professional human players in many games such as StarCraft II (Vinyals et al., 2019), DOTA (Berner et al., 2019) or Atari (Badia et al., 2020; Mnih et al., 2015). Similar levels of progression have been observed on board games, including chess and Go (Silver et al., 2016, 2017). Although offline RL approaches have shown promising results on Atari recently (Schrittwieser et al., 2021), they have not been previously applied on complex partially observable games using data derived from human experts.
RL Unplugged (Gulcehre et al., 2020) introduces a suite of benchmarks for Offline RL with a diverse set of task domains with a unified API and evaluation protocol. D4RL (Fu et al., 2020) is an offline RL benchmark suite focusing only on mixed data sources. However, both RL Unplugged and D4RL lack high-dimensional, partially observable tasks. This paper fills that gap by introducing a benchmark for StarCraft II.
Offline RL has become an active research area, as it enables us to leverage fixed datasets to learn policies to deploy in the real-world. Offline RL methods include 1) policy-constraint approaches that regularize the learned policy to stay close to the behavior policy (Fujimoto et al., 2019; Wang et al., 2020), 2) value-based approaches that encourage more conservative value estimates, either through a pessimistic regularization or uncertainty (Gulcehre et al., 2021; Kumar et al., 2020), 3) model-based approaches (Kidambi et al., 2020; Schrittwieser et al., 2021; Yu et al., 2020), and 4) adaptations of standard off-policy RL methods such as DQN (Agarwal et al., 2020) or D4PG (Wang et al., 2020). Recently methods using only one-step of policy improvement has been proven to be very effective on offline reinforcement learning (Brandfonbrener et al., 2021; Gulcehre et al., 2021).
## 8 Conclusions
Offline RL has enabled the deployment of RL ideas to the real world. Academic interest in this area has grown and several benchmarks have been proposed, including RL-Unplugged (Gulcehre et al., 2020), D4RL (Fu et al., 2020), and RWRL (Dulac-Arnold et al., 2019). However, because of the relatively small-scale and synthetic nature of these benchmarks, they don't capture the challenges of real-world offline RL.
In this paper, we introduced AlphaStar Unplugged, a benchmark to evaluate agents which play StarCraft II by learning only from _offline_ data. This data is comprised of over a million games, mostly played by amateur human StarCraft II players on Blizzard's Battle.Net.11 Thus, the benchmark more accurately captures the challenges of offline RL where an agent must learn from logged data, generated by a diverse group of weak experts, and where the data doesn't exhaust the full state and action space of the environment.
Footnote 11: [https://en.wikipedia.org/wiki/Battle.net](https://en.wikipedia.org/wiki/Battle.net)
We showed that offline RL algorithms can exceed 90% win rate against the all-races version of the previously published AlphaStar Supervised agent (trained using behavior cloning). However, the gap between online and offline methods still exists and we hope the benchmark will serve as a testbed to advance the state of art in offline RL algorithms.
#### Acknowledgments
We would like to thank Alistair Muldal for helping with several aspects of the open-sourcing which went a long way in making the repository user-friendly. We would like to thank Scott Reed and
David Silver for reviewing the manuscript, the AlphaStar team (Vinyals et al., 2019) for sharing their knowledge and experience about the game. We would like to thank the authors of MuZero Unplugged (Schrittwieser et al., 2021) and Sampled MuZero (Hubert et al., 2021) for advising on the development of the MuZero Supervised agent. We also thank the wider DeepMind research, engineering, and environment teams for the technical and intellectual infrastructure upon which this work is built. We are grateful to the developers of tools and frameworks such as JAX (Babuschkin et al., 2020), Haiku (Hennigan et al., 2020) and Acme (Hoffman et al., 2020) that enabled this research.
|
2306.13299 | Homotopy continuation methods for coupled-cluster theory in quantum
chemistry | Homotopy methods have proven to be a powerful tool for understanding the
multitude of solutions provided by the coupled-cluster polynomial equations.
This endeavor has been pioneered by quantum chemists that have undertaken both
elaborate numerical as well as mathematical investigations. Recently, from the
perspective of applied mathematics, new interest in these approaches has
emerged using both topological degree theory and algebraically oriented tools.
This article provides an overview of describing the latter development. | Fabian M. Faulstich, Andre Laestadius | 2023-06-23T05:25:45Z | http://arxiv.org/abs/2306.13299v1 | # Homotopy continuation methods for coupled-cluster theory in quantum chemistry
###### Abstract
Homotopy methods have proven to be a powerful tool for understanding the multitude of solutions provided by the coupled-cluster polynomial equations. This endeavor has been pioneered by quantum chemists who have undertaken both elaborate numerical as well as mathematical investigations. Recently, from the perspective of applied mathematics, new interest in these approaches has emerged using both topological degree theory and algebraically oriented tools. This article provides an overview of the latter development.
## I Introduction
Coupled-cluster (CC) theory is a widely acclaimed, high-precision wavefunction approach that is used in quantum chemistry and is of great interest to both practitioners as well as theoreticians [1]. The origin of CC theory dates back to 1958 when Coester proposed to use an exponential parametrization of the wave function [2]. This parametrization was independently derived by Hubbard [3] and Hugenholtz [4] in 1957 as an alternative to summing many-body perturbation theory (MBPT) contributions order by order. A milestone of CC theory is the work by Cizek from 1966 [5]. In this work, Cizek discussed the foundational concepts of second quantization (as applied to many-fermion systems), normal ordering, contractions, Wick's theorem, normal-ordered Hamiltonians (which was a novelty at that time), Goldstone-style diagrammatic techniques, and the origin of the exponential wave function ansatz. He moreover derived the connected cluster form of the Schrodinger equation and proposed a general recipe for how to produce the energy and amplitude equations through projections of the connected cluster form of the Schrodinger equation on the reference and excited determinants, which was illustrated using the CC doubles (CCD) approximation. This work also reports the very first CC computations, using full and linearized forms of CCD, for nitrogen (treated fully at the _ab initio_ level) and benzene (treated with a PPP model Hamiltonian). For a more detailed history of CC theory, several reviews have been written by, e.g., Kummel [6] and Cizek [7]. Other articles that provide insight into the history and development of CC theory include those by Bartlett [8], Paldus [9], Arponen [10], and Bishop [11].
The CC equations are a set of nonlinear algebraic equations that are typically solved using (quasi) Newton-type methods [12]. Each equation in the set corresponds to a projection on a specific excitation of the reference determinant, and the number of equations increases rapidly with the size of the system and allowed excitations under consideration. Since the CC equations are a set of nonlinear algebraic equations, there are multiple roots to the CC equations. The existence of multiple roots to the CC equations can present challenges for the practical implementation of the method. For example, the roots can be difficult to adequately converge to, and the convergence properties of the iterative solution methods can strongly depend on the employed initial guess. Furthermore, excited states can also be targeted with equation-of-motion (EOM)-CC [13; 14; 15; 16; 17], where an initial "ground-state" CC calculation is the starting point of a non-Hermitian diagonalization problem.
Although great progress has been made in the fundamental and mathematical study of the root structure of the CC equations, which includes homotopy methods applied to the CC methodology, progress and widespread applications have been hampered by the high dimensionality and non-linearity of the CC equations, as well as the steep scaling of _algebro-computational_ methods. The first study on this topic dates back to 1978, when Zivkovic and Monkhorst investigated the singularities and multiple solutions of the single-reference CC equations, revealing conditions for reality and the maximum multiplicity of solutions [18]. In 1998, Kowalski and Jankowski revived homotopy methods in connection with CC theory and used them to solve a CCD system [19]. This was followed by a fruitful collaboration with Piecuch, who worked on multiple solutions to the single reference CC and state-universal MRCC equations [20; 21]. Piecuch and Kowalski extended the application of homotopy methods to CC singles and doubles (CCSD), CC singles, doubles and triples (CCSDT), and CC singles, doubles, triples and quadruples (CCSDTQ) equations [22] for a 4-electron system described by a minimum basis set. They also introduced the formalism of \(\beta\)-nested equations and proved the _Fundamental Theorem of the \(\beta\)-NE Formalism_, which enabled them to explain the behavior of the curves connecting multiple solutions of the various CC polynomial systems, i.e., from
CCSD to CCSDT, CCSDT to CCSDTQ, etc. In [23], Piecuch and Kowalski used homotopy methods to determine all solutions of nonlinear state-universal multireference CCSD equations based on the Jeziorski-Monkhorst ansatz, proving two theorems that provided an explanation for the observed intruder solution problem. In a sequel work [24], they used homotopy methods to obtain all solutions of the generalized Bloch equation, which is nonlinear even in a CI parametrization. Further articles utilizing homotopy methods arose in the late '90s where, amongst other things, symmetry breaking processes in non-linear CC formulation are explained [25; 26; 27; 28; 29].
In addition to examining the latest developments in homotopy techniques in CC theory, this article aims to present new findings that result from the application of homotopy approaches using both topological degree theory and algebraic geometry tools, from an applied mathematics perspective. These approaches are essential for expanding the scope of mathematical investigations beyond the ground state. While local analyses using strong monotonicity [30; 31; 32; 33; 34; 35; 36; 37; 38] have been useful since Schneider's seminal work [30], they only provide a limited perspective compared to the broader study of the CC equations using topological degree theory or algebraic geometry [39; 40]. By adopting these tools, we can gain a more comprehensive understanding of the mathematical structure of the CC equations and their solutions, providing new avenues for further explorations and refinements of the theory.
This article is organized as follows: In Section II we briefly introduce the CC theory including concepts like the CC variety as well as the equivalence between a wavefunction being an eigenstate and the CC parametrization (standard- and EOM-CC). In Section III we recall the concept of homotopy continuation techniques, highlighting important edge-cases to consider. In Section IV, we elaborate on the root structure of polynomial systems and the challenges involved with using (quasi) Newton-type methods. In Section V we then present upper bounds on the number of roots of the CCSD equations based on Bezout's theorem and outline how to establish improved bounds using the Bernstein-Khovanskii-Kushnirenko theorem. In Section VI, we discuss the mathematical existence of solution curves connecting CC roots at different truncation levels. We also present an energy error estimate valid for any approximate eigenstate of exponential (CC) form. Both these results have appeared previously in the mathematical literature [39]. We conclude in Section VII by summing up the main points as well as presenting a brief outlook on future work.
## II Coupled-cluster theory
The underlying idea of coupled-cluster (CC) theory is the exponential parametrization of the targeted wavefunction \(|\Psi\rangle\), i.e, for a given reference determinant \(|\Phi_{0}\rangle\), we make the ansatz
\[|\Psi\rangle=e^{\hat{T}}|\Phi_{0}\rangle, \tag{1}\]
where \(\hat{T}\) is the new unknown, called the _cluster operator_, which we will define shortly. Using this ansatz, we find
\[\mathcal{H}|\Psi\rangle=E|\Psi\rangle\ \Leftrightarrow\ e^{-\hat{T}}\mathcal{H}e^{ \hat{T}}|\Phi_{0}\rangle=E|\Phi_{0}\rangle, \tag{2}\]
where \(\mathcal{H}\) is the considered Hamiltonian. Projecting Eq. 2 yields
\[\begin{cases}E=\langle\Phi_{0}\mid e^{-\hat{T}}\mathcal{H}e^{\hat{T}}\mid\Phi _{0}\rangle\\ 0=\langle\Phi\mid e^{-\hat{T}}\mathcal{H}e^{\hat{T}}\mid\Phi_{0}\rangle,\ \ \ \forall\ |\Phi\rangle\perp|\Phi_{0}\rangle\end{cases} \tag{3}\]
of which the latter is used to compute the cluster operator. The cluster operator is a linear combination of elementary particle-hole excitation operators (vide infra), and we shall henceforth highlight the dependence of \(\hat{T}\) to its expansion coefficients by writing \(\hat{T}(\mathbf{t})\). Elementary particle-hole excitation operators are merely compositions of projections onto a subset of single-particle basis elements (the occupied orbitals) and wedge products with another subset of single-particle basis elements (the virtual orbital). It is, therefore, highly convenient to label the elementary particle-hole excitation operators using multi-indices \(\mu\) that clarify the projections and wedge products involved (see, e.g., [30]). Employing this notation yields the following expression for the cluster operator
\[\hat{T}(\mathbf{t})=\sum_{\mu}t_{\mu}\hat{X}_{\mu}. \tag{4}\]
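For concreteness, in the familiar CCSD truncation the multi-index sum runs over single and double excitations only; in standard second-quantized spin-orbital notation (with \(i,j\) labeling occupied and \(a,b\) virtual orbitals), the cluster operator then reads

\[\hat{T}(\mathbf{t})=\hat{T}_{1}+\hat{T}_{2}=\sum_{i,a}t_{i}^{a}\,\hat{a}_{a}^{\dagger}\hat{a}_{i}+\frac{1}{4}\sum_{i,j,a,b}t_{ij}^{ab}\,\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{j}\hat{a}_{i},\]

so that each multi-index \(\mu\) collects the occupied and virtual labels of one excitation and \(t_{\mu}\) is the corresponding amplitude.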
As is customary in the quantum chemistry community, we will use upper case letters to describe cluster operators, and lower case letters to describe cluster amplitudes, e.g., \(\hat{T}(\mathbf{t})\), \(\hat{C}(\mathbf{c})\), \(\hat{R}(\mathbf{r})\) or \(\hat{S}(\mathbf{s})\). By construction, acting on the reference state \(|\Phi_{0}\rangle\) the elementary particle-hole excitation operators define the Hilbert space of functions that are \(L^{2}\)-orthogonal to \(|\Phi_{0}\rangle\). We henceforth set \(\mathcal{V}=\{|\Phi_{0}\rangle\}^{\perp}\) and note that \(\hat{T}(\mathbf{t})|\Phi_{0}\rangle\in\mathcal{V}\) for all amplitudes. Hence, using the standard Galerkin projection approach, we can express the second equation and the orthogonality constraint in Eq. (3) as
\[0=\langle\Phi_{\mu}\mid\hat{\mathcal{H}}(\mathbf{t})\mid\Phi_{0}\rangle,\ \ \ \ \forall\mu \tag{5}\]
where \(|\Phi_{\mu}\rangle=\hat{X}_{\mu}|\Phi_{0}\rangle\) and \(\hat{\mathcal{H}}(\mathbf{t})=e^{-\hat{T}(\mathbf{t})}\hat{\mathcal{H}}e^{ \hat{T}(\mathbf{t})}\). Since the cluster operator is defined by the cluster amplitudes \(\mathbf{t}\), we can define the CC energy as a function of \(\mathbf{t}\), i.e.,
\[\mathcal{E}_{\text{CC}}(\mathbf{t})=\langle\Phi_{0}\mid\hat{\mathcal{H}}( \mathbf{t})\mid\Phi_{0}\rangle. \tag{6}\]
Note that the similarity transformed Hamiltonian \(\mathcal{H}(\mathbf{t})\) for all amplitudes \(\mathbf{t}\) has the same eigenvalues as the original Hamiltonian \(\mathcal{H}\). However, the CC equations in (5) do not arise from diagonalizing the similarity-transformed Hamiltonian.
We emphasize that the orthogonality conditions in Eq. (5) yields a square system of polynomial equations, see e.g. [1; 40; 41]. Hence, a key object in CC theory is the CC variety:
\[\mathcal{S}=\{\mathbf{t}\in\mathbb{F}^{K}\ |\ \langle\Phi_{\mu}\mid\hat{\mathcal{H}}( \mathbf{t})\mid\Phi_{0}\rangle=0\quad\forall\mu\}\subset\mathbb{V}, \tag{7}\]
where \(\mathbb{F}\) is the considered number field (either \(\mathbb{R}\) or \(\mathbb{C}\)), \(K\) is the "system size" (given by the number of correlated electrons, the size of one-particle basis functions as well as further selection rules) and \(\mathbb{V}\) is the cluster amplitude space, i.e., \(\mathbf{t}\in\mathbb{V}\) if and only if \(\hat{T}(\mathbf{t})|\Phi_{0}\rangle\in\mathcal{V}\). Note that in this work, we always assume a finite set of one-particle basis functions (i.e., orbitals), in particular, \(\mathbb{V}=\mathbb{F}^{K}\). Note that \(\mathbb{F}\) can be \(\mathbb{R}\) or \(\mathbb{C}\) determining if we are seeking real or complex valued amplitudes. We emphasize that although the CC polynomial coefficients are real, the roots to the polynomial system may not be. Mathematically, the more general theory describing complex valued solutions is simpler than the theory describing real valued solutions. Therefore, it is easier to consider \(\mathbb{F}=\mathbb{C}\) for mathematical considerations, however, solving the truncated CC equations for \(\mathbb{F}=\mathbb{C}\) may yield complex valued energies, e.g. see [40]. Moreover, the majority of quantum chemistry implementations seek real valued solutions, i.e., the CC equations are solved for \(\mathbb{F}=\mathbb{R}\).
All possible determinants that can be generated given the basis set and number of (correlated) electrons can be represented in an excitation graph, \(G^{\text{full}}\), where the vertices are the determinants \(|\Phi_{\mu}\rangle\) and the edges are the operators \(\hat{X}_{\mu}\)[42]. However, we might not always want to consider all possible determinants that can be generated but rather a subset \(G\subset G^{\text{full}}\). For example, we might just consider excitations up to a certain order or "rank", such as CCSD (\(\hat{X}_{\mu}\) contains excitation of at most two electrons), CCSDT (\(\hat{X}_{\mu}\) contains excitations of at most three electrons), etc. Thus, we will sometimes write \(\mathcal{S}=\mathcal{S}(G)\), \(\mathbb{V}=\mathbb{V}(G)\), etc. to highlight that we consider a truncated CC scheme as dictated by \(G\subset G^{\text{full}}\).
In the untruncated case, \(G^{\text{full}}\), we note that for given \(\mathbf{t}_{*}\in\mathcal{S}\) the wavefunction \(|\Psi\rangle=(c_{0}\hat{I}+\hat{C}(\mathbf{c}))|\Phi_{0}\rangle\) is an eigenfunction of \(\mathcal{H}\), i.e., there exists a constant \(\mathcal{E}\) such that \(\mathcal{H}|\Psi\rangle=\mathcal{E}|\Psi\rangle\), if and only if \(e^{-\hat{T}(\mathbf{t}_{*})}(c_{0}\hat{I}+\hat{C}(\mathbf{c}))=r_{0}\hat{I}+\hat{R}(\mathbf{r})\), where
\[\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})r_{0}+\langle\Phi_{0} \mid\hat{\mathcal{H}}(\mathbf{t}_{*})\hat{R}(\mathbf{r})\mid\Phi_{0}\rangle =\mathcal{E}r_{0}\\ \Pi_{\mathcal{V}}\hat{\mathcal{H}}(\mathbf{t}_{*})\hat{R}(\mathbf{ r})|\Phi_{0}\rangle =\mathcal{E}\hat{R}(\mathbf{r})|\Phi_{0}\rangle\Bigg{\}} \tag{8}\]
with \(\Pi_{\mathcal{V}}\) being the orthogonal projection onto \(\mathcal{V}\) and \({c_{0}=r_{0}}\). For a proof of this statement we refer to Lemma 4.1 in [39].
To shed some light on the above description of eigenstates using the CC framework, we will give a few examples (all familiar to the quantum-chemistry setting):
1. We first note that \(\mathcal{E}=\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})\) is equivalent to \(\mathbf{r}_{0}\neq 0\), \(\hat{R}(\mathbf{r})=0\), and then \(|\Psi\rangle=e^{\hat{T}(\mathbf{t}_{*})}|\Phi_{0}\rangle\).
2. If \(\mathcal{E}=\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})\) and there is \(\hat{R}(\mathbf{r})\neq 0\), then \(|\Psi\rangle=(r_{0}\hat{I}+\hat{R}(\mathbf{r}))e^{\hat{T}(\mathbf{t}_{*})}| \Phi_{0}\rangle\) is another eigenstate, i.e., \(\mathcal{E}\) is a degenerate energy level.
3. If \(\mathcal{E}\neq\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})\) and \(\hat{R}(\mathbf{r})\neq 0\), then \(|\Psi\rangle=(r_{0}\hat{I}+\hat{R}(\mathbf{r}))e^{\hat{T}(\mathbf{t}_{*})}|\Phi_{0}\rangle\) is an "excited" eigenstate with respect to the energy level \(\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})\).
If we let
\[|\Psi_{(k)}\rangle=e^{\hat{T}(\mathbf{t}_{(k)})}|\Phi_{0}\rangle \tag{9}\]
denote the \(k\)th eigenstate to \(\hat{\mathcal{H}}\), the EOM-CC method [13; 14; 15; 16; 17] gives with \(\hat{R}(\mathbf{r}_{(1)})=\hat{I}\) that the solutions (under certain assumptions) can be written
\[|\Psi_{(k)}\rangle=\hat{R}(\mathbf{r}_{(k)})e^{\hat{T}(\mathbf{t}_{(1)})}| \Phi_{0}\rangle. \tag{10}\]
We will in the remaining part only consider the case (i) above.
The ground state (intermediately normalized) moreover corresponds to \(|\Psi_{(1)}\rangle=e^{\hat{T}(\mathbf{t}_{(1)})}|\Phi_{0}\rangle\), where
\[\mathbf{t}_{(1)}=\operatorname*{arg\,min}_{\mathbf{t}\in\mathcal{S}}\ \mathcal{E}_{\text{CC}}(\mathbf{t}). \tag{11}\]
We emphasize that in the untruncated case, the algebraic variety \(\mathcal{S}\) (given in Eq. (7)) describes all solutions to the electronic Schrodinger equation, given that the eigenstates have non-zero overlap with the reference determinant \(|\Phi_{0}\rangle\). Hence, the \(k\)th eigenstate (that has non-zero overlap with the reference determinant \(|\Phi_{0}\rangle\)) can be characterized using the min-max principle, i.e.,
\[|\Psi_{(k)}\rangle=e^{\hat{T}(\mathbf{t}_{(k)})}|\Phi_{0}\rangle \tag{12}\]
where \(\mathbf{t}_{(k)}\) corresponds to the argument that solves the min-max problem
\[\min\Big\{\max_{\mathbf{t}\in\mathcal{S}_{k}}\ \mathcal{E}_{\text{CC}}(\mathbf{t})\ \Big|\ \mathcal{S}_{k}\subset\mathcal{S},\,|\mathcal{S}_{k}|=k\Big\}.\]
In the case of untruncated CC (FCC), amplitudes that solve the FCC equations describe an eigenstate of the Hamiltonian. Therefore, the cardinality of \(\mathcal{S}\) is equal to the number of eigenstates of \(\mathcal{H}\) that are intermediately normalized, hence, the number of roots is bounded by the number of Slater determinants. However, when truncations are imposed, the cardinality of \(\mathcal{S}=\mathcal{S}(G)\) increases, i.e., truncated CC theory yields (some) unphysical solutions. Therefore, it becomes less clear what the different elements in \(\mathcal{S}\) describe. Understanding the variety \(\mathcal{S}\) and characterizing (some of) its elements are the subject of this manuscript. To that end, we employ two homotopy continuation perspectives: The first is to compute \(\mathcal{S}\) in its entirety, where a homotopy is used to connect \(\mathcal{S}\) to solutions of a simpler system of polynomial equations. The second is to characterize the physical solutions in \(\mathcal{S}\) corresponding to truncated CC equations, where a homotopy is used to connect \(\mathcal{S}\) to the FCI solutions (or at least some "higher" truncation scheme, i.e., less truncated CC equations).
## III Homotopy continuation
Homotopy continuation methods are well studied mathematically; we refer the interested reader to Refs. [43; 44; 45; 46]. Polynomial homotopy continuation is a numerical method to compute solutions to systems of polynomial equations,
\[F\left(x_{1},\ldots,x_{n}\right)=\left[\begin{array}{c}f_{1}\left(x_{1}, \ldots,x_{n}\right)\\ \vdots\\ f_{m}\left(x_{1},\ldots,x_{n}\right)\end{array}\right]=0. \tag{13}\]
Note that in our case, \(F\) corresponds to the CC equations. In the general case we require \(m\geq n\); however, the CC equations form a square system, i.e., \(m=n\). The underlying idea is straightforward: to solve \(F(\mathbf{x})=0\), we construct an auxiliary system of polynomial equations, \(G(\mathbf{x})=0\), with known zeros, together with a homotopy that connects both systems. More precisely, we define a family of systems \(H(\mathbf{x},\lambda)\) for \(\lambda\in\mathbb{R}\) interpolating between \(F\) and \(G\), i.e., \(H(\mathbf{x},0)=F(\mathbf{x})\) and \(H(\mathbf{x},1)=G(\mathbf{x})\). Considering one zero, \(\mathbf{y}\), of \(G(\mathbf{x})\) and restricting to \(\lambda\in[0,1]\), \(H(\mathbf{x},\lambda)=0\) defines a solution path \(\mathbf{x}(\lambda)\subset\mathbb{C}^{n}\) such that \(H(\mathbf{x}(\lambda),\lambda)=0\) for \(\lambda\in[0,1]\) and \(\mathbf{x}(1)=\mathbf{y}\). The path is followed from \(\lambda=1\) to \(\lambda=0\) to compute the solution \(\mathbf{z}=\mathbf{x}(0)\). This is equivalent to solving the initial value problem
\[\frac{\partial}{\partial\mathbf{x}}H(\mathbf{x},\lambda)\left(\frac{\mathrm{d }}{\mathrm{d}\lambda}\mathbf{x}(\lambda)\right)+\frac{\partial}{\partial \lambda}H(\mathbf{x},\lambda)=0,\quad\mathbf{x}(1)=\mathbf{y}.\]
This is known as the Davidenko differential equation [47; 48]. We say that \(\mathbf{x}(1)=\mathbf{y}\) gets tracked towards \(\mathbf{x}(0)\). For this to work, \(\mathbf{x}(\lambda)\) must be a regular zero of \(H(\mathbf{x},\lambda)=0\) for every \(\lambda\in(0,1]\). In the case of nonregular solutions at \(\lambda=0\), special numerical methods, so-called endgames, are employed [49].
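To make the path-tracking idea concrete, the following minimal Python sketch tracks the three known roots of a start system \(G(x)=x^{3}-1\) to the roots of an arbitrary illustrative target polynomial \(F(x)=x^{3}-2x-5\) along a straight-line homotopy, using an Euler predictor based on the Davidenko equation and a Newton corrector. It is a univariate toy for illustration only, not one of the software packages cited below; production codes additionally use adaptive step sizes and a random-phase "gamma trick" to avoid singular paths.

```python
# Minimal homotopy-continuation sketch (illustrative toy, not PHCpack/Bertini/etc.):
# straight-line homotopy H(x, lam) = (1 - lam) F(x) + lam G(x), tracked from lam = 1 to 0.
import numpy as np

F  = lambda x: x**3 - 2*x - 5          # target polynomial (arbitrary example)
dF = lambda x: 3*x**2 - 2
G  = lambda x: x**3 - 1                # start system with known roots exp(2*pi*i*k/3)
dG = lambda x: 3*x**2

H      = lambda x, lam: (1 - lam)*F(x) + lam*G(x)
dHdx   = lambda x, lam: (1 - lam)*dF(x) + lam*dG(x)
dHdlam = lambda x, lam: G(x) - F(x)

def track(x_start, steps=200, newton_iters=5):
    """Follow one solution path x(lam) from lam = 1 down to lam = 0."""
    x = complex(x_start)
    lams = np.linspace(1.0, 0.0, steps + 1)
    for lam_prev, lam in zip(lams[:-1], lams[1:]):
        # Euler predictor from the Davidenko equation: dx/dlam = -H_lam / H_x
        x += -dHdlam(x, lam_prev) / dHdx(x, lam_prev) * (lam - lam_prev)
        # Newton corrector at the new value of lam
        for _ in range(newton_iters):
            x -= H(x, lam) / dHdx(x, lam)
    return x

start_roots = [np.exp(2j*np.pi*k/3) for k in range(3)]
roots = [track(x0) for x0 in start_roots]
print("tracked roots of F:", roots)
print("residuals |F(root)|:", [abs(F(r)) for r in roots])
```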
When tracking the solutions described by the homotopy, we may encounter different scenarios, see FIG. 1. As discussed in [43], one path (solid line) has no finite limit as \(\lambda\to 0\), while the other three have limits. One path (dotted-dashed line) has a unique limit, i.e., its endpoint at \(\lambda=0\) is a regular zero of the target system \(F(x)\). Two paths (dashed lines) have the same limit, and their common endpoint is an isolated zero of \(F(x)\) of multiplicity two.
We moreover wish to highlight that great progress has been made in the development of homotopy-continuation software exploiting parallel implementations, which can significantly extend its application areas in the coming years. In particular, we here want to highlight PHCpack [50], Bertini [51], HOM4PS [52; 53], NAG4M2 [43], and HomotopyContinuation.jl [54].
## IV Root structure of polynomial systems
In this section, we will elaborate on the fundamental and practical importance of understanding the root structure of the CC equations. We emphasize that it is hard to describe the root structure of the CC equations in general because it is a high-dimensional and non-linear system. Although this seems to be a daunting task, the CC equations show a number of symmetries, which allow some results regarding the system's root structure [18; 19; 22; 23; 24; 34].
The root structure of a polynomial system is of fundamental importance as it reveals, e.g., the multiplicity of the roots or whether the roots are real or complex [55]. Having information about the root structure is also of practical importance when using approximate methods. Most commonly, (quasi) Newton type methods are employed to find and approximate _one_ root of the CC equations. However, the convergence behavior of (quasi) Newton type methods is merely locally well understood; its global convergence behavior can be highly complicated [55; 56]. This can be illustrated through Newton fractals. Newton fractals are graphical representations of the iterative process used to find the roots of a given polynomial system using (quasi) Newton type methods. To create a Newton fractal, one assigns to each initialization the root to which it converged. In the case of one polynomial, one can color each point in the complex plane according to the root to which it converges under the considered (quasi) Newton type method, see e.g. FIG. 2 where we show the Newton fractal for \(p(z)=z^{3}-1\).
Clearly, FIG. 2 is a simplified perspective, since it shows the root structure of merely one polynomial. However, it already illustrates the (potential) issues that the application of (quasi) Newton type methods to the CC equations can have. Figure 2 shows the three roots of the polynomial \(p(z)=z^{3}-1\), namely \(x_{1}=1+i\cdot 0\) and \(x_{2,3}=-1/2\pm i\sqrt{3}/2\), as well as the color-coded basins of attraction that correspond to the respective roots. We can clearly see that if the initial guess is _close_ to one of the solutions, (quasi) Newton type methods will stably
Figure 1: Sketch of possible homotopy paths. The solid line shows a path with no finite limit as \(\lambda\to 0\), the dashed lines have the same limit, and the dotted-dashed line has a unique limit.
converge to that solution. However, if the initial guess is not close to a solution but lies in one of the "fractal branches", the convergence of (quasi) Newton type methods becomes very unstable [57]. In fact, the numerical stability can become so poor that differences at the level of machine precision will change the converged result. Since the CC equations are so high-dimensional, a brute-force visualization is impractical even for the smallest systems. Yet, since the highly intricate global convergence behavior of (quasi) Newton type methods is rather common, one must be aware of this phenomenon, especially in scenarios where perturbative initializations are not justifiable. In fact, if the initial guess lies in a "fractal branch", the employed (quasi) Newton type method may yield a suboptimal root, even though the used CC ansatz (e.g. CCSD) is well capable of describing the targeted state. To summarize this section: when using (quasi) Newton type methods to approximate a root of the CC equations that corresponds to a targeted state, one must ensure that the initial guess lies in the correct basin of convergence. Otherwise, it is unclear whether the obtained CC result is an approximation to the targeted state.
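As an illustration of how such basin plots are generated, the following minimal Python sketch reproduces a Newton fractal of the type shown in FIG. 2 for \(p(z)=z^{3}-1\); the grid resolution and iteration count are arbitrary choices.

```python
# Minimal sketch of a Newton fractal for p(z) = z^3 - 1: iterate Newton's map on a
# grid of complex starting points and color each point by the root it converges to.
import numpy as np
import matplotlib.pyplot as plt

roots = np.array([1.0, -0.5 + 1j*np.sqrt(3)/2, -0.5 - 1j*np.sqrt(3)/2])

# 800 points so that z = 0 (where p'(z) = 0) is not hit exactly
x = np.linspace(-2.0, 2.0, 800)
z = x[None, :] + 1j*x[:, None]

for _ in range(40):                       # Newton iteration z -> z - p(z)/p'(z)
    z = z - (z**3 - 1.0) / (3.0*z**2)

# label each pixel by the closest root, i.e. its basin of attraction
basin = np.argmin(np.abs(z[..., None] - roots[None, None, :]), axis=-1)

plt.imshow(basin, extent=(-2, 2, -2, 2), origin="lower", cmap="brg")
plt.scatter(roots.real, roots.imag, c="white", s=12)
plt.xlabel("Re z"); plt.ylabel("Im z")
plt.show()
```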
## V Homotopy I
In this section, we discuss a path toward numerical methods that aim to find all solutions to the CC equations. Due to the complicated structure of the basins of attraction for (quasi) Newton type methods described in the previous section, employing this approach for finding a CC solution heavily relies on certain local assumptions of the initial guess, which mathematically reduces to a perturbative picture. This importance of perturbation theory related to CC theory is well-known in the computational chemistry community, and we refer the interested reader to [1] for a thorough and text-book-like presentation. However, this perturbative argument is also well-known to fail. In particular, we wish to highlight important failure modes presented in [21] revealing the existence of algebraic branch points as a perturbation is turned on. The existence of such branch points strongly indicates that the commonly employed procedure of approximating _one_ root via (quasi) Newton-type methods starting from the Hartree-Fock reference cannot be generally applied. Therefore, in order to reliably extend the CC theory to the non-perturbative regime, accessing all roots of the CC equations appears to be inevitable. Here, homotopy continuation methods are one potential way to achieve this, but the high dimensionality of the equations makes a direct application of these types of procedures challenging [40; 22].
In fact, establishing a good enough bound on the number of roots is already challenging! This bound is important since it is used to initialize an auxiliary system \(G\); the employed homotopy continuation method will then track all solutions of \(G\). Clearly, when the number of roots is over-estimated, spurious roots will collapse along the path \(\lambda\to 0\); yet, if the estimated number of roots is too large, the procedure will simply become numerically intractable. A trivial bound can be obtained by observing that Hadamard's lemma, i.e., the expansion of \(\hat{\mathcal{H}}(\mathbf{t})\), together with commutator considerations (see e.g., [1; 41]) yields a polynomial system of order at most four. The corresponding Bezout bound is then
\[\mathcal{N}\leq 4^{n_{\mathcal{K}}}, \tag{14}\]
where \(n_{\mathcal{K}}\) denotes the number of projected equations. However, this number grossly overestimates the number of roots, since no further structure of the CC equations is taken into account. A first dramatic reduction can be obtained by noticing that the projections onto the singly excited Slater determinants are of order at most three (see e.g. [1; 41]). Since our main objective is to incorporate homotopy methods for CCSD, we will for the remainder of this section only consider this truncation level. The above consideration then yields
\[\mathcal{N}\leq 3^{n_{s}}4^{n_{d}} \tag{15}\]
where \(n_{s}\) and \(n_{d}\) are the number of projected single and double equations, respectively [22]. Note that Eq. (15) is specific to the CCSD scheme, and generalizations to this bound for higher-order CC schemes, such as CCSDT and CCSDTQ are also discussed in [22].
Although this bound includes the further structure of the CCSD equations, it is still overestimating the true number of roots. As has been recently shown, this bound can be significantly improved by rewriting the CCSD equations as a quadratic system [40]. The key in this reformulation is to notice that the anti-symmetry property of Slater determinants allows for factorizing disconnected
Figure 2: Newton fractal of \(p(z)=z^{3}-1\). The white dots correspond to the roots \(x_{1}=1+i\cdot 0\) and \(x_{2,3}=-1/2\pm i\sqrt{3}/2\). The different colored regions, red, blue, and green, correspond to the basins of attraction of the roots \(x_{1}\), \(x_{2}\), and \(x_{3}\), respectively.
doubles. To that end, we define a variety \(\tilde{\mathcal{S}}\subseteq\mathbb{F}^{n_{s}+2n_{d}}\) (where \(\mathbb{F}\) is either \(\mathbb{R}\) or \(\mathbb{C}\)) over which we seek to solve the CCSD equations.
Formally, we define an index map \(\iota\) that flattens the tuple \((i,j,a,b)\) and off-sets this compound index by \(n_{s}+n_{d}\). Note that since \(i\), \(j\) are occupied indices, and \(a\), \(b\) are virtual indices, there are exactly \(n_{d}\) auxiliary indices that are obtained by \(\iota\). We then define the variety \(\tilde{\mathcal{S}}\) as
\[\tilde{\mathcal{S}}=\{x\in\mathbb{F}^{n}|x_{k}-x_{i}^{a}x_{j}^{b}+x_{i}^{b}x_{ j}^{a}=0,\,\forall k=\iota(i,j,a,b)\},\]
where we introduced the short-hand notation \(n=n_{s}+2n_{d}\). On this variety, every projected equation \(f_{\mu}\) takes the form
\[f_{\mu}(\Pi_{\mathbb{V}}\mathbf{x})=\sum_{q,r}h_{q,r}^{\mu}x_{q}x_{r} \tag{16}\]
where \(h_{q,r}^{\mu}\) is a matrix (see [40]), \(\Pi_{\mathbb{V}}\) is the projection onto \(\mathbb{V}\) and \(\mathbf{x}\in\tilde{\mathcal{S}}\). Note that the l.h.s. in Eq. (16) is a potentially fourth-order polynomial in \(n\) variables, whereas the r.h.s. in Eq. (16) is a second order polynomial in \(n+n_{d}\) variables. This order reduction yields the improved bound of
\[\mathcal{N}\leq 2^{n_{s}+2n_{d}}, \tag{17}\]
which is exponentially better than the existing Bezout bound (15).
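To illustrate how drastically these estimates differ, the short sketch below evaluates the naive Bezout bound of Eq. (14) (taking \(n_{\mathcal{K}}=n_{s}+n_{d}\)), the refined bound of Eq. (15), and the quadratic-reformulation bound of Eq. (17) for a few arbitrary, purely illustrative values of \(n_{s}\) and \(n_{d}\), not corresponding to any specific molecule.

```python
# Minimal sketch comparing the root-count bounds discussed in the text;
# the (n_s, n_d) pairs are arbitrary illustrative sizes, not specific molecules.
for n_s, n_d in [(4, 6), (8, 28), (16, 120)]:
    naive     = 4 ** (n_s + n_d)        # Eq. (14) with n_K = n_s + n_d
    refined   = 3 ** n_s * 4 ** n_d     # Eq. (15)
    quadratic = 2 ** (n_s + 2 * n_d)    # Eq. (17)
    print(f"n_s={n_s:2d}, n_d={n_d:3d}: "
          f"4^(ns+nd)={naive:.2e}, 3^ns 4^nd={refined:.2e}, "
          f"2^(ns+2nd)={quadratic:.2e}")
```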
Although this reduction is already quite significant, we expect that for special cases further symmetry considerations can be incorporated yielding even better bounds. However, Bezout type bounds are always worst-case estimates. A better bound is expected when employing the Bernstein-Khovanskii-Kushnirenko (BKK) theorem. Bezout type bounds arise from the idea that the polynomials in the considered system are independent of each other, in which case the number of roots corresponds to the product of the individual number of roots; hence, they are worst-case bounds. BKK-type bounds are less intuitive since the BKK theorem relates the root counting problem for a system of polynomial equations with the theory of convex bodies. More precisely, the BKK theorem shows that the generic number of isolated solutions to a system of (Laurent) polynomial equations equals the mixed volume of the Newton polytopes of the (Laurent) polynomials. The general challenge when establishing a BKK-type bound is to compute the volume of convex bodies, i.e., the Newton polytopes. However, general algorithms for computing the mixed volume are exponential in the dimension. Since even computing the volume is known to be #P-hard, a brute force approach seems doomed. That being said, the CC equations are very structured, and a surrogate system of Newton polytopes can be established [40].
## VI Homotopy II
As already mentioned above, the second form of homotopy approaches we wish to discuss is the one used to characterize the physical solutions in \(\mathcal{S}=\mathcal{S}(G)\) corresponding to truncated CC equations (as characterized by the choice of excitation graph \(G\subset G^{\text{full}}\)). A homotopy scheme can then be used to connect \(\mathcal{S}(G)\) to some "higher" truncation scheme, in particular the FCC (FCI) regime (i.e., CC method with \(G^{\text{full}}\)).
Before continuing further, we note that \(\mathbf{t}\in\mathcal{S}(G)\) is equivalent to
\[\langle\hat{S}(\mathbf{s})\Phi_{0}\mid\hat{\mathcal{H}}(\mathbf{t})\mid\Phi_{ 0}\rangle=0 \tag{18}\]
for all \(\mathbf{s}\in\mathbb{V}(G)\). Let \(\mathbb{V}^{*}\) denote the dual space of \(\mathbb{V}\), we then define the CC mapping \(\mathcal{A}(\mathbf{t}):\mathbb{V}\rightarrow\mathbb{V}^{*}\) via
\[\langle\mathbf{s},\mathcal{A}(\mathbf{t})\rangle=\langle\hat{S}(\mathbf{s}) \Phi_{0}\mid\hat{\mathcal{H}}(\mathbf{t})\mid\Phi_{0}\rangle. \tag{19}\]
Here \(\langle\cdot,\cdot\rangle\) is also used to denote the dual pairing between amplitudes and elements in the dual amplitude space. Note that, by definition, \(\mathcal{S}\) is the set of zeros to \(\mathcal{A}\). We will assume for \(\mathbf{t}_{*}\in\mathcal{S}\) that \(\det\mathcal{A}^{\prime}(\mathbf{t}_{*})\neq 0\), i.e., there are no non-trivial right eigenvectors to \(\hat{\mathcal{H}}(\mathbf{t}_{*})\) corresponding to the eigenvalue \(\mathcal{E}_{\text{CC}}(\mathbf{t}_{*})\) (in short, \(\mathbf{t}_{*}\) is a non-degenerate zero).
Let \(\mathbb{V}^{1}\) be the amplitude space corresponding to the excitation graph \(G^{1}\). This can be thought of as the untruncated CC case, i.e., the full graph with all possible determinants (vertices) included. Similarly, \(\mathbb{V}^{0}\) is the amplitude space corresponding to \(G^{0}\), which contains amplitudes with rank \(\leq\rho\) (\(2\leq\rho<N\)). We then decompose \(\mathbb{V}^{1}=\mathbb{V}(G^{1})\) as \(\mathbb{V}^{1}=\mathbb{V}^{0}\oplus\mathbb{V}^{\perp}\). Here, \(\mathbb{V}^{\perp}\) contains amplitudes with rank strictly greater than \(\rho\). It holds that \(\langle\mathbf{s}^{0},\mathbf{s}^{\perp}\rangle=0\) for \(\mathbf{s}^{0}\in\mathbb{V}^{0}\) and \(\mathbf{s}^{\perp}\in\mathbb{V}^{\perp}\).
### Existence for Kowalski-Piecuch homotopy
We will next discuss (from a more mathematical perspective) how to make sense of a truncated CC solution by finding a trajectory that connects it to an FCC (FCI) solution. We define the _Kowalski-Piecuch homotopy_ \(\mathcal{K}_{\text{KP}}:\mathbb{V}^{1}\times[0,1]\rightarrow(\mathbb{V}^{1})^{*}\) via the prescription
\[\begin{split}\langle\mathcal{K}_{\text{KP}}(\mathbf{t}^{1}, \lambda),\mathbf{s}^{1}\rangle=\langle\hat{S}(\mathbf{s}^{0})\Phi_{0}\mid \hat{\mathcal{H}}(\mathbf{t}^{0})\mid\Phi_{0}\rangle\\ +\langle\hat{S}(\mathbf{s}^{\perp})\Phi_{0}\mid\hat{\mathcal{H}}( \mathbf{t}^{1})\mid\Phi_{0}\rangle\\ +\lambda\langle\hat{S}(\mathbf{s}^{0})\Phi_{0}\mid\hat{\mathcal{H }}(\mathbf{t}^{0})\mid(e^{\hat{T}(\mathbf{t}^{\perp})}-\hat{I})\Phi_{0}\rangle \end{split} \tag{20}\]
for all \(\mathbf{t}^{1},\mathbf{s}^{1}\in\mathbb{V}^{1}\) and \(\lambda\in[0,1]\). We can note that by construction, the Kowalski-Piecuch homotopy satisfies \(\mathcal{K}_{\text{KP}}(\mathbf{t}^{1},1)=\mathcal{A}(\mathbf{t}^{1})\). Furthermore, \(\mathcal{K}_{\text{KP}}(\mathbf{t}^{1}_{**},0)=0\) is equivalent to
\[\langle\hat{S}(\mathbf{s}^{0})\Phi_{0}\mid\hat{\mathcal{H}}(\mathbf{t }^{0}_{**})\mid\Phi_{0}\rangle =0, \tag{21}\] \[\langle\hat{S}(\mathbf{s}^{\perp})\Phi_{0}\mid\hat{\mathcal{H}}( \mathbf{t}^{0}_{**}+\mathbf{t}^{\perp}_{**})\mid\Phi_{0}\rangle =0. \tag{22}\]
Equation (21) is the usual truncated CC equation (associated with \(G^{0}\)) and Eq. (22) is the Kowalski-Piecuch-auxiliary equation.
The Kowalski-Piecuch homotopy has been used to establish an existence result that connects a truncated solution to a corresponding untruncated ("full") solution [39]:
Let \(\mathbf{t}^{1}_{*}\in\mathbb{V}^{1}\) be a non-degenerate zero of \(\mathcal{A}\). Under technical assumptions (see Theorem 4.34 in [39]), we can find \(\varepsilon>0\) such that for any \(\lambda\in[0,1)\) there exists \(\mathbf{t}^{1}_{**}(\lambda)\in\mathcal{D}_{\varepsilon}\) fulfilling \(\mathcal{K}_{\mathrm{KP}}(\mathbf{t}^{1}_{**}(\lambda),\lambda)=0\), where
\[\mathcal{D}_{\varepsilon}=\{\mathbf{t}^{1}_{*}+\mathbf{r}^{1}\in\mathbb{V}^{1 }:\|\mathbf{r}^{0}\|_{\mathbb{V}}^{2}+\|\mathbf{r}^{\perp}\|_{\mathbb{V}}^{2}<\varepsilon\}.\]
In particular, there exists \(\mathbf{t}^{1}_{**}\in\mathcal{D}_{\varepsilon}\) such that \(\mathcal{A}(\mathbf{t}^{0}_{**})=0\). (See FIG. 3 for an illustration.)
For the interested reader, the technical assumptions are discussed in [39] (in Theorem 4.34 and directly afterward). In essence, these assumptions concern the fluctuation potential (and how it couples the truncated space to the "rest", which can be controlled by the truncation level \(\rho\)) and the size of \(\mathbf{t}^{\perp}_{**}\) (which cannot be too large). One of the assumptions can also be interpreted as a perturbative assumption related to the size of the second derivative of the CC mapping (i.e., \(\mathcal{A}^{\prime\prime}\)) at \(\mathbf{t}^{1}_{*}\).
### Error estimate for Kowalski-Piecuch homotopy
The _Fundamental theorem of the formalism of \(\beta\)-nested equations_ of Kowalski and Piecuch [22] can be cast into an _a posteriori_ error estimate [39]. Let \(\mathbb{V}^{1}=\mathbb{V}(G^{\mathrm{full}})\). Suppose that \(\mathbf{t}^{1}_{*}\in\mathbb{V}^{1}\) is a zero of \(\mathcal{A}\) and \(\mathbf{t}^{1}_{**}\in\mathbb{V}^{1}\) a zero of \(\mathcal{K}_{\mathrm{KP}}(\cdot,0)=0\). We then have the energy error estimate valid for any (approximate) eigenstate (and not just the ground state) parameterized by the truncated amplitudes \(\mathbf{t}^{0}_{**}\) that solve Eq. (21):
Let \(\rho\geq 2\) and assume that the nonorthogonality condition \(\langle e^{\widehat{T}(\mathbf{t}^{0}_{**})}\Phi_{0}\mid e^{\widehat{T}( \mathbf{t}^{1}_{*})}\Phi_{0}\rangle\neq 0\) holds. Then
\[|\mathcal{E}_{\mathrm{CC}}(\mathbf{t}^{0}_{**})-\mathcal{E}_{\mathrm{CC}}( \mathbf{t}^{1}_{*})|\leq C(\mathbf{t}^{1}_{**},\mathbf{t}^{1}_{*})\ \|\mathbf{t}^{\perp}_{**}\|_{\mathbb{V}}, \tag{23}\]
where \(\mathbf{t}^{\perp}_{**}\) solves Kowalski-Piecuch-auxiliary equation (22).
Here the constant in the rhs. of Eq. (23) is given by
\[C(\mathbf{t}^{1}_{**},\mathbf{t}^{1}_{*})=\tilde{C}(\mathbf{t}^{1}_{**}, \mathbf{t}^{1}_{*})\,|\langle e^{\widehat{T}(\mathbf{t}^{0}_{**})}\Phi_{0}\mid e ^{\widehat{T}(\mathbf{t}^{1}_{*})}\Phi_{0}\rangle|^{-1}. \tag{24}\]
The constant \(\tilde{C}\) is a multifaceted quantity that comprises several contributing factors. These factors include a norm equivalence constant, the similarity transform of the fluctuation operator (where the Hamiltonian is a sum of the Fock and fluctuation operator), and the size of the orthogonal space \(\mathcal{V}^{\perp}\). We emphasize that the constant becomes smaller in the perturbative regime of the CC method when the fluctuation potential is small and \(\mathcal{V}^{\perp}\) is less physically relevant. For more details, we refer the interested reader to Theorem 4.40 in [39].
If the nonorthogonality condition holds and \(\mathbf{t}^{\perp}_{**}=0\), then \(\mathbf{t}^{0}_{**}\) is an FCC (FCI) solution and the energy error estimate implies that the error is zero, i.e., \(\mathcal{E}_{\mathrm{CC}}(\mathbf{t}^{0}_{**})=\mathcal{E}_{\mathrm{CC}}( \mathbf{t}^{1}_{*})\). Consequently, \(\|\mathbf{t}^{\perp}_{**}\|_{\mathbb{V}}\) allows us to view Eq. (22) as providing an _a posteriori_ error estimate for a truncated CC calculation (that gives us \(\mathbf{t}^{0}_{**}\)), which could then have practical use if the Kowalski-Piecuch-auxiliary equation can be solved (at least approximately, due to its FCI complexity).
If \(\langle e^{\widehat{T}(\mathbf{t}^{0}_{**})}\Phi_{0}\mid e^{\widehat{T}( \mathbf{t}^{1}_{*})}\Phi_{0}\rangle=0\) and \(\mathbf{t}^{1}_{*}\) is assumed to be non-degenerate, then \(e^{\widehat{T}(\mathbf{t}^{0}_{**})}|\Phi_{0}\rangle\) and \(e^{\widehat{T}(\mathbf{t}^{1}_{*})}|\Phi_{0}\rangle\) represent different eigenstates. This means that \(e^{\widehat{T}(\mathbf{t}^{0}_{**})}|\Phi_{0}\rangle\) is an approximation to an eigenstate _different_ from \(e^{\widehat{T}(\mathbf{t}^{1}_{*})}|\Phi_{0}\rangle\). In this case, it does not make sense to try to connect these solutions and a comparison between the energies in terms of an energy error estimate is not meaningful (in agreement with a divergent constant \(C\)). Conversely, if the energy difference \(|\mathcal{E}_{\mathrm{CC}}(\mathbf{t}^{0}_{**})-\mathcal{E}_{\mathrm{CC}}( \mathbf{t}^{1}_{*})|\) diverges for finite cluster amplitudes, then \(\langle e^{\widehat{T}(\mathbf{t}^{0}_{**})}\Phi_{0}\mid e^{\widehat{T}( \mathbf{t}^{1}_{*})}\Phi_{0}\rangle\to 0\) since we are trying to connect solutions at different truncation levels that are orthogonal (without degeneracies this means solutions belonging to different eigenvalues).
At the beginning of this section we traced the origin of the above energy error estimate to the Fundamental theorem of the formalism of \(\beta\)-nested equations due to Kowalski and Piecuch [22]. This theorem offers an explicit formula for the noniterative correction required to supplement the energy obtained from an (approximate) SRCC calculation, e.g., on the CCSD level, in order to retrieve the FCI result (see Eq. (6) in [58]). The approach of addressing the many-electron correlation problem based on this formula is called the _method of moments of coupled-cluster equations_ (MMCC) [22; 58]. In this regard the Kowalski-Piecuch homotopy has also helped the development of the completely renormalized (CR) CC and EOM-CC methods and generalizations that exist under the name \(\mathrm{CC}(P;Q)\) (see [59] and references therein).
Figure 3: Visualization of the existence result of the Kowalski–Piecuch homotopy. Under certain assumptions there is a “tube” in amplitude space that connects a truncated CC solution to a FCC (FCI) solution. In principle, such trajectory allows us to select which solutions of a truncated CC calculation are “physical”.
## VII Discussion
In this manuscript, we elaborated on recent advances in applying homotopy methods to CC theory. We introduced the concept of homotopy continuation methods, which have proven successful in solving polynomial systems in the wider applied mathematics community. We then delved into the challenges faced by state-of-the-art approximation techniques when solving the CC equations in a non-perturbative regime, and we discussed how homotopy continuation methods can potentially overcome these obstacles.
Despite the promising benefits and previous research by pioneers such as Zivkovic and Monkhorst [18], as well as Kowalski and Piecuch [22, 23, 24, 19], the high dimensionality, non-linearity, and steep scaling of algebro-computational methods have hindered widespread adoption of homotopy continuation methods in CC theory. However, significant progress has been made in scaling algebro-computational methods [50, 51, 52, 53, 54]. We are convinced that in the near future, these approaches will be adopted and further developed to be applied to CC theory. In this manuscript, we present two exciting use cases where homotopy methods can make a significant impact.
The first application involves using homotopy methods to compute all roots of the CC equations [40]. We outline the advantages of having access to all roots in CC theory, and how this can extend CC methods beyond an application in the perturbative regime. The main challenge is that, despite the great progress in the field, the scaling of algebro-computational methods is still unfavorable for high-dimensional problems like the CC equations. However, computational algebraic tools are developed to find the roots of polynomial systems in the most general sense, in particular, they do not exploit the structured nature and symmetries of the CC equations. These provide opportunities to overcome unfavorable scaling, which is the subject of current investigations. In this article, we outlined the first step for homotopy approaches, which is an accurate estimate of the number of roots. We here build upon results by Kowalski and Piecuch [22] that reduced the estimated number of roots to \(3^{n_{s}}4^{n_{d}}\). We elaborate on how to reduce the order of the CC polynomials to be of merely second order by extending the CC equations to an algebraic variety \(\tilde{\mathcal{S}}\). The obtained bound is \(2^{n_{s}+2n_{d}}\), which provides a significant improvement to the existing bound.
The second use case involves homotopy methods to continue CC solutions from one level of accuracy to another, in particular, connecting solutions from a truncated CC scheme to the FCI regime. This specific use of homotopy is, in principle, a very important tool to justify what elements of \(\mathcal{S}(G)\) constitute physical solutions that approximate eigenstates. The Kowalski-Piecuch homotopy (with parameter \(\lambda\in[0,1]\)) allows us to create trajectories that connect solutions at different levels of accuracy and we have in this article discussed a mathematical existence result that guarantees such curves as a function of \(\lambda\) (under very technical assumptions that we have here omitted). Furthermore, building on the \(\beta\)-nested equations of Kowalski and Piecuch, an _a posteriori_ energy error estimate has previously been derived. This result is not restricted to the ground state. It would be interesting if future work could reveal more about the role of the Kowalski-Piecuch-auxiliary equations that might be of practical use. It is noteworthy how these auxiliary equations bear a resemblance to the tailored CC method [60, 61] that might provide further insights.
## VIII Acknowledgment
AL acknowledges funding through ERC StG REGAL under agreement No. 101041487 and RCN (the Research Council of Norway) under agreement 287906 (CCerror) and 262695 (Hylleraas Centre for Quantum Molecular Sciences). F.M.F acknowledges the funding from the Air Force Office of Scientific Research under award number FA9550-18-1-0095 and from the Simons Targeted Grants in Mathematics and Physical Sciences on Moire Materials Magic.
|
2308.02638 | Nature of even and odd magic angles in helical twisted trilayer graphene | Helical twisted trilayer graphene exhibits zero-energy flat bands with large
degeneracy in the chiral limit. The flat bands emerge at a discrete set of
magic twist angles and feature properties intrinsically distinct from those
realized in twisted bilayer graphene. Their degeneracy and the associated band
Chern numbers depend on the parity of the magic angles. Two degenerate flat
bands with Chern numbers $C_A=2$ and $C_B=-1$ arise at odd magic angles,
whereas even magic angles display four flat bands, with Chern number
$C_{A/B}=\pm1$, together with a Dirac cone crossing at zero energy. All bands
are sublattice polarized. We demonstrate the structure behind these flat bands
and obtain analytical expressions for the wavefunctions in all cases. Each
magic angle is identified with the vanishing of a zero-mode wavefunction at
high-symmetry position and momentum. The whole analytical structure results
from whether the vanishing is linear or quadratic for the, respectively, odd
and even magic angle. The $C_{3z}$ and $C_{2y}T$ symmetries are shown to play a
key role in establishing the flat bands. In contrast, the particle-hole
symmetry is not essential, except from gapping out the crossing Dirac cone at
even magic angles. | Daniele Guerci, Yuncheng Mao, Christophe Mora | 2023-08-04T18:00:13Z | http://arxiv.org/abs/2308.02638v1 | # Nature of even and odd magic angles in helical twisted trilayer graphene
###### Abstract
Helical twisted trilayer graphene exhibits zero-energy flat bands with large degeneracy in the chiral limit. The flat bands emerge at a discrete set of magic twist angles and feature properties intrinsically distinct from those realized in twisted bilayer graphene. Their degeneracy and the associated band Chern numbers depend on the parity of the magic angles. Two degenerate flat bands with Chern numbers \(C_{A}=2\) and \(C_{B}=-1\) arise at odd magic angles, whereas even magic angles display four flat bands, with Chern number \(C_{A/B}=\pm 1\), together with a Dirac cone crossing at zero energy. All bands are sublattice polarized. We demonstrate the structure behind these flat bands and obtain analytical expressions for the wavefunctions in all cases. Each magic angle is identified with the vanishing of a zero-mode wavefunction at high-symmetry position and momentum. The whole analytical structure results from whether the vanishing is linear or quadratic for the, respectively, odd and even magic angle. The \(C_{3z}\) and \(C_{2y}T\) symmetries are shown to play a key role in establishing the flat bands. In contrast, the particle-hole symmetry is not essential, except from gapping out the crossing Dirac cone at even magic angles.
## I Introduction
Twisted trilayer graphene (TTG) has recently received significant attention due to its structural similarities to twisted bilayer graphene while exhibiting distinct features. Different configurations of TTG have been considered based on the relative twist directions of the top and bottom layers: (1) mirror symmetrical TTG [1; 2; 3; 4; 5; 6; 7; 8], where the top and bottom layers are rotated in the same direction by the same angle relative to the middle layer, (2) twisted monolayer-bilayer graphene [9; 10; 11; 12; 13; 14], where Bernal stacked bilayer graphene is rotated with a small angle relative to the third layer, and (3) helical TTG [15; 16; 17; 18; 19; 20; 21; 22], where the top and bottom layers are rotated in opposite directions. By adding an additional graphene layer on top of the bilayer, TTG offers additional "knobs" to manipulate the system's physical properties. Besides the twist angle, the layer shifting and displacement field have also been identified as key factors for altering the physical properties of TTG. These features open up promising avenues for studying and controlling the unique electronic properties of TTG systems.
Recent theoretical studies [18; 19; 20; 21; 22; 23; 24; 25] and experimental findings [15] have focused on the helical configuration of TTG [16; 17]. This specific structure is of interest due to the presence of two non-commensurate moire interference patterns, resulting in a moire quasiperiodic crystal [15]. In the case of a small twist angle, the separation between length scales leads to the introduction of an effective moire lattice, defined as the closest commensurate ratio [18], along with a small deviation that gives rise to the supermoire lengthscale [18; 19; 25]. The slow supermoire periodicity can be seen as a relative shift between the two different moire scales and parametrized with a displacement vector [18; 19].
Theoretical studies on helical trilayer graphene [20; 21; 22] have revealed the significant impact of lattice relaxation effects on the structure at the supermoire scale. These studies have shown that energetically favorable ABA and BAB stacking regions expand at the expense of AAA regions. This leads to a real-space pattern characterized by triangular domains, where a sizeable gap separates the two central low-energy bands from the remote ones. The ABA regions are characterized by a total Chern number \(C_{\rm tot}=1\) for the central bands, while the BAB regions have \(C_{\rm tot}=-1\), resulting in a real-space Chern mosaic [19]. This mosaic is separated by a network of chiral gapless regions. In the chiral limit [26], the electronic bandstructure exhibits perfectly flat bands at a discrete series of magic angles. These flat bands possess interesting properties distinct from those observed in twisted bilayer graphene [26; 27; 28; 29; 30; 31; 32].
The study of flat bands, or zero modes, in the chiral limit consists in finding the kernel of an operator with a purely holomorphic derivative and an abelian or non-abelian periodic potential [33; 34; 35; 36; 37; 38; 39]. Investigating the mathematical structure of this operator for helical trilayer graphene, Popov and Tarnopolsky [25] have recently identified two possible scenarios of flat bands for symmetric stackings (AAA, ABA, or BAB). The first scenario, originally discussed in Ref. [19] (see also Ref. [20]), consists of two flat bands: a color-entangled Chern 2 band and a Chern -1 band. The color-entangled band [39; 40; 41; 42; 43; 44; 45] is particularly interesting as it cannot be simply reduced to a single Landau level. The second scenario, originally proposed in Ref. [24], features a fourfold degenerate flatband manifold with an additional Dirac cone crossing the flat bands at the \(\Gamma\) point.
In this work, we demonstrate that helical trilayer graphene with ABA (or BAB) stackings and equal twist angles displays a systematic series of magic angles, where the two scenarios are alternatively realized. Odd magic angles feature twofold degenerate flat bands following the first scenario, while even magic angles have four degenerate flat bands and a Dirac cone as per the second scenario. In each case, we prove the emergence of the flat
band structure and derive analytical expressions for the zero-energy wavefunctions and the resulting Chern numbers.
Our theory reveals interesting relations between the dimension of the vector space spanned by the zero-energy modes and the total Chern number of the band. It provides an orthogonality relation between chiral and anti-chiral zero modes and demonstrates how the \(C_{3z}\) and \(C_{2y}T\) symmetries constrain the wavefunction asymptotics and ultimately protect the emergence of the flat bands, in contrast with particle-hole symmetry which proves inessential. Finally, we briefly investigate the fate of the flat bands when breaking both the particle-hole and chiral symmetries.
The plan of the paper is as follows. In Sec. II we introduce the model and the zero-mode equation in the chiral limit. Then, we discuss the symmetries of the model protecting three zero-energy Dirac cones at \(K\), \(K^{\prime}\) and \(\Gamma\) of the mini Brillouin zone in Sec. II.2. Finally, we give in Sec. II.3 the geometrical relations connecting the chiral and anti-chiral flatband sectors. In Sec. III we move to the investigation of the zero modes, discussing the different properties realized at even and odd magic angles. We provide the analytical solution for the wavefunctions of the different flat bands and explain the different properties of odd (Sec. III.1) and even (Sec. III.2) magic angles employing symmetry arguments as well as analytical results. In Sec. IV we discuss the effect of the breaking of the particle-hole symmetry \(P\). Finally, we give a summary of the main results of our work in Sec. V.
## II The chiral model for helical trilayer graphene
Helical trilayer graphene is formed by stacking three graphene layers in a staircase configuration with equal twist angles, as depicted in Fig. 1. It shows two incommensurate moire interference patterns, resulting in a superimposed modulation of the relative spatial shift between the two patterns. This long distance periodic modulation defines a triangular supermoire, or moire-of-moire lattice. It can be seen as a slow variation of the atomic registry, first at the moire scale and then at the supermoire scale as shown in Fig. 1. The details of the model [19] we use for the single-particle Hamiltonian are given in Appendix A. It is a continuum model [46; 47; 48; 49] parametrized by the local relative shift between the two moires. The parametrization evolves continuously between AAA, ABA and BAB local stacking configurations over the supermoire unit cell.
The atomic relaxation in helical trilayer graphene has been investigated in recent works [20; 21; 22] where rearrangements of the atomic registry were demonstrated. Theoretical calculations [20] (see also [21; 22]) performed at a small twist \(\approx 1.5^{\circ}\) have revealed that relaxation favours the formation of a triangular lattice, with linear size \(\approx 400\) nm, of large domains characterized by the energetically favourable ABA/BAB stacking, which expand at the expense of AAA regions and are separated by a network of domain walls hosting chiral edge modes [21].
In this work, we shall focus on the spatially hegemonic ABA stacking shown in Fig. 1b, which is composed of the periodic modulation of the atomic configurations in Fig. 1c. The BAB stacking is simply the \(C_{2z}T\) image of ABA. Furthermore, we aim for simplicity and a deeper analytical understanding by considering the chiral limit [26; 50; 51; 52; 53] of our continuum model, see Appendix A. Away from the chiral limit, the flat bands become dispersive but their bandwidth is still much smaller than the energy gap to the remote bands [19; 20], and they keep their topological features.
### Zero modes
As discussed in Appendix A, the Hamiltonian for ABA stacking in the chiral limit takes the form
\[\mathcal{H}_{\rm ABA}(\mathbf{r})=\begin{pmatrix}0&\mathcal{D}(\mathbf{r})\\ \mathcal{D}^{\dagger}(\mathbf{r})&0\end{pmatrix}, \tag{1}\]
for a single graphene valley (\(K\)), in the "sublattice-Chern" basis \((\psi_{1},\psi_{2},\psi_{3},\chi_{1},\chi_{2},\chi_{3})\)[29; 51; 52; 53], where \(\psi_{\ell}\) and \(\chi_{\ell}\) refer respectively to the \(A\) and \(B\) sublattice, and \(\ell=1,2,3\) labels the three distinct layers from top to bottom. We have introduced the differential operator
\[\mathcal{D}(\mathbf{r})=-i\sqrt{2}I_{3\times 3}\partial+\mathcal{A}(\mathbf{r}), \tag{2}\]
Figure 1: (a) Long wavelength periodicity at the supermoiré lattice. Each blue and orange point corresponds to a position where a precise AA stacking occurs between, respectively, the top-middle and the middle-bottom layers. The blue and orange sets of points show the two incommensurate moiré patterns. (b) ABA configuration characterized by a relative shift of \(\mathbf{r}_{0}=(\mathbf{a}_{1}-\mathbf{a}_{2})/3\) between the two moiré patterns. (c) The resulting moiré pattern involves the atomic scale configurations ABA, BAA and AAB where the atom of one layer lie at the center of two overlying hexagons. (d) Momentum space Brillouin zone with Dirac zero modes at \(K\), \(K^{\prime}\) and \(\Gamma\), \(\mathbf{q}_{j}\) shows the vectors defining the periodicity at the moiré scale.
with the derivative \(\partial=(\partial_{x}-i\partial_{y})/\sqrt{2}\) associated to the complex coordinate \(z=(x+iy)/\sqrt{2}\), and the non-abelian traceless gauge potential
\[\mathcal{A}(\mathbf{r})=\begin{pmatrix}0&\alpha U_{\omega}(\mathbf{r})&0\\ \alpha U_{0}(-\mathbf{r})&0&\alpha U_{0}(\mathbf{r})\\ 0&\alpha U_{\omega}(-\mathbf{r})&0\end{pmatrix}, \tag{3}\]
resulting from electron tunneling between the layers with the dimensionless strength \(\alpha\). Given the wavevector modulation \(\mathbf{q}_{j+1}=ie^{2i\pi j/3}\) shown in Fig. 1c, for which we use a complex notation [16], the moire potentials are given by
\[U_{0}(\mathbf{r})=\sum_{j=1}^{3}e^{-i\mathbf{q}_{j}\cdot\mathbf{r}}, \tag{4}\]
\(U_{\omega}(\mathbf{r})=U_{0}(\mathbf{r}+\mathbf{r}_{0})\) and \(U_{\omega^{*}}(\mathbf{r})=U_{0}(\mathbf{r}-\mathbf{r}_{0})\) with \(\mathbf{r}_{0}=(\mathbf{a}_{1}-\mathbf{a}_{2})/3\). The moire lattice vectors are \(\mathbf{a}_{j}=4\pi e^{i\pi/6}e^{i2\pi(j-1)/3}/3\) with \(j=1,2\). In the expressions above, all energy scales are expressed in units of \(v_{F}k_{\theta}\), and all momentum (length) scales in units of \(k_{\theta}\) (\(1/k_{\theta}\)) with the moire momentum \(k_{\theta}=\theta K_{D}\), \(K_{D}=4\pi/3a_{\text{G}}\) and graphene lattice constant \(a_{\text{G}}\approx 2.46\text{\AA}\). \(\theta\) is the twist angle between consecutive layers and \(\alpha=w_{\text{AB}}/v_{F}k_{\theta}\).
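For readers who wish to visualize these quantities, the short Python sketch below evaluates \(U_{0}(\mathbf{r})\) of Eq. (4), its shifted copies \(U_{\omega}\), \(U_{\omega^{*}}\), and the non-abelian potential \(\mathcal{A}(\mathbf{r})\) of Eq. (3), with momenta in units of \(k_{\theta}\) and lengths in units of \(1/k_{\theta}\) as in the text. The sampled points are arbitrary illustrative choices, and \(\alpha\) is set to the first-magic-angle value quoted later in the text.

```python
# Minimal sketch of the moire potentials of Eqs. (3)-(4) in the units of the text
# (momenta in k_theta, lengths in 1/k_theta); sampled points are arbitrary.
import numpy as np

# q_{j+1} = i * exp(2*pi*i*j/3), written as real 2D vectors (Re, Im)
q = np.array([[0.0, 1.0],
              [-np.sqrt(3)/2, -0.5],
              [ np.sqrt(3)/2, -0.5]])

# moire lattice vectors a_j = (4*pi/3) exp(i*pi/6) exp(2*pi*i*(j-1)/3), j = 1, 2
a1 = 4*np.pi/3 * np.array([np.cos(np.pi/6), np.sin(np.pi/6)])
a2 = 4*np.pi/3 * np.array([np.cos(5*np.pi/6), np.sin(5*np.pi/6)])
r0 = (a1 - a2) / 3

def U0(r):
    """U_0(r) = sum_j exp(-i q_j . r); r has shape (..., 2)."""
    phases = np.tensordot(r, q, axes=([-1], [1]))
    return np.exp(-1j * phases).sum(axis=-1)

U_omega      = lambda r: U0(r + r0)   # U_omega(r)   = U_0(r + r_0)
U_omega_star = lambda r: U0(r - r0)   # U_omega*(r)  = U_0(r - r_0)

alpha = 0.377  # dimensionless coupling, roughly the first magic angle value

def A(r):
    """Non-abelian gauge potential of Eq. (3) at position r (3x3 complex matrix)."""
    return alpha * np.array([[0.0,        U_omega(r),  0.0   ],
                             [U0(-r),     0.0,         U0(r) ],
                             [0.0,        U_omega(-r), 0.0   ]])

print(U0(np.zeros(2)))   # equals 3 at the AA point r = 0
print(U0(r0))            # numerically ~0, since 1 + omega + omega* = 0
print(A(r0).shape)       # (3, 3)
```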
An inspiring mathematical structure emerges in the chiral limit [19; 26]: \(\mathcal{H}_{\rm ABA}\) anticommutes, \(\{\mathcal{H}_{\rm ABA},\Lambda^{z}\}=0\), with the chiral operator
\[\Lambda^{z}=\begin{pmatrix}I_{3\times 3}&0\\ 0&-I_{3\times 3}\end{pmatrix}, \tag{5}\]
and the search for zero-energy modes decomposes into
\[\mathcal{D}(\mathbf{r})\mathbf{\chi}_{\mathbf{k}}(\mathbf{r})=0,\quad\mathcal{D}^{\dagger}( \mathbf{r})\mathbf{\psi}_{\mathbf{k}}(\mathbf{r})=0, \tag{6}\]
where \(\mathbf{\psi}\) and \(\mathbf{\chi}\) are eigenstates of the chiral operator \(\Lambda^{z}\) with positive \(\Lambda^{z}=1\) (chiral sector) and negative \(\Lambda^{z}=-1\) (anti-chiral sector) eigenvalue, respectively. The chiral and anti-chiral sectors also refer to the \(A\)-sublattice and \(B\)-sublattice polarized states, respectively. The zero modes must also satisfy the Bloch periodic boundary conditions
\[\begin{split}\mathbf{\psi}_{\mathbf{k}}(\mathbf{r}+\mathbf{a}_{1/2})& =e^{i\mathbf{k}\cdot\mathbf{a}_{1/2}}U_{\varphi}\mathbf{\psi}_{\mathbf{k}}(\mathbf{r} ),\\ \mathbf{\chi}_{\mathbf{k}}(\mathbf{r}+\mathbf{a}_{1/2})&=e^{i\mathbf{k} \cdot\mathbf{a}_{1/2}}U_{\varphi}\mathbf{\chi}_{\mathbf{k}}(\mathbf{r}),\end{split} \tag{7}\]
inherited from Eq. 11, with \(U_{\varphi}=\text{diag}[\omega^{*},1,\omega]\). The remainder of this paper will be devoted to analyzing the properties of the zero modes, which are solutions of Eq.(6) under the boundary conditions specified in Eq.(7).
### Symmetries and Dirac cones
The Hamiltonian \(\mathcal{H}_{\text{ABA}}\) (1) is invariant under the spatial symmetries \(C_{3z}\) and \(C_{2y}T\) forming the space group \(P32^{\prime}1\) (\(\#150.27\) in the BNS notation [54; 55]). In the sublattice basis we have:
\[C_{3z}\mathcal{H}_{\text{ABA}}(\mathbf{r})C_{3z}^{-1}=\mathcal{H}_{\text{ABA}}( C_{3z}\mathbf{r}), \tag{8}\]
where
\[C_{3z}=\begin{pmatrix}\omega^{*}&0&0\\ 0&1&0\\ 0&0&\omega^{*}\end{pmatrix}\otimes\begin{pmatrix}\omega&0\\ 0&\omega^{*}\end{pmatrix}. \tag{9}\]
\(C_{2y}T\) is the composition of a spinless time-reversal symmetry and a two-fold rotation around the \(y\) axis:
\[C_{2y}T\mathcal{H}_{\text{ABA}}(\mathbf{r})\left(C_{2y}T\right)^{-1}=\mathcal{H}_{ \text{ABA}}(C_{2y}\mathbf{r}), \tag{10}\]
where the transformation exchanges top and bottom layer and includes the complex conjugation operator \(\mathcal{K}\):
\[C_{2y}T=\begin{pmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{pmatrix}\otimes I_{2\times 2}\mathcal{K}. \tag{11}\]
In addition to these spatial symmetries, the model also exhibits an emerging particle-hole symmetry [16; 18; 19]
\[P=\begin{pmatrix}0&0&1\\ 0&-1&0\\ 1&0&0\end{pmatrix}\otimes\sigma^{0}, \tag{12}\]
anticommuting with \(\mathcal{H}_{\text{ABA}}\)
\[P\mathcal{H}_{\text{ABA}}(\mathbf{r})P^{-1}=-\mathcal{H}_{\text{ABA}}(-\mathbf{r}). \tag{13}\]
In general, _i.e._ for all twist angles \(\theta\), the Hamiltonian \(\mathcal{H}_{\text{ABA}}\) possesses three pairs (chiral and anti-chiral) of zero modes from which three Dirac cones emerge. They are located at \(\Gamma\), \(K\) and \(K^{\prime}\) of the mini Brillouin zone (Fig. 1d), corresponding to \(\mathbf{k}=0,\mathbf{q}_{1},-\mathbf{q}_{1}\), and originate from the Dirac cones of the three individual graphene layers. The zero modes at \(\Gamma\) are protected by particle-hole symmetry \(P\) whereas those at \(K\) and \(K^{\prime}\) are stabilized by the anti-unitary particle-hole operator \(PC_{2y}T\), as further discussed in Appendix B. We remark that our symmetry analysis is also valid away from the chiral limit [16; 18].
Magic angles are specific values of the twist angle \(\theta\) (or \(\alpha\) as they are related to each other) at which the two central bands of \(\mathcal{H}_{\text{ABA}}\) become perfectly flat. At these angles, the zero modes at \(\Gamma\), \(K\), and \(K^{\prime}\) are no longer unique, and an extensive degenerate set of zero modes emerges, forming flat bands.
### Geometrical relations
Irrespective of the twist angle and value of \(\alpha\), we can derive a set of identities which shows an interesting structure for the zero modes. Chiral and anti-chiral zero-energy modes satisfy the relation:
\[v(\mathbf{r})=\bar{\mathbf{\chi}}_{\mathbf{k}_{1}}(\mathbf{r})\cdot\mathbf{\psi}_{\mathbf{k}_{2}}(\mathbf{r} )=0, \tag{14}\]
for \(\mathbf{k}_{1}\neq\mathbf{k}_{2}\) and arbitrary \(\mathbf{r}\) with \(\bar{\mathbf{\chi}}_{\mathbf{k}_{2}}(\mathbf{r})\equiv\mathbf{\chi}_{\mathbf{k}_{2}}^{*}(\mathbf{r})\). The proof of Eq. (14) is straightforward. A direct computation shows that \(\bar{\partial}v(\mathbf{r})=0\implies v(\mathbf{r})=v(z)\) by simply using that \(\mathbf{\chi}_{\mathbf{k}_{1}}(\mathbf{r})\) and \(\mathbf{\psi}_{\mathbf{k}_{2}}(\mathbf{r})\) are zero modes of \(\mathcal{D}(\mathbf{r})\) and \(\mathcal{D}^{\dagger}(\mathbf{r})\), respectively. \(v(z)=0\) then follows from Liouville's theorem and periodicity over the moire unit cell. Eq. (14) tells us that the chiral \(\mathbf{\psi}\) and anti-chiral \(\bar{\mathbf{\chi}}\) solutions of different momenta at a given \(\mathbf{r}\) are orthogonal to each other. In addition, we also find that a chiral zero mode can be generated from a pair of anti-chiral solutions. Namely,
\[\mathbf{\psi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}(\mathbf{r})=\bar{\mathbf{\chi}}_{\mathbf{k}_{1}}(\mathbf{r} )\times\bar{\mathbf{\chi}}_{\mathbf{k}_{2}}(\mathbf{r}) \tag{15}\]
solves \(\mathcal{D}^{\dagger}(\mathbf{r})\mathbf{\psi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}(\mathbf{r})=0\). Similarly,
\[\mathbf{\chi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}(\mathbf{r})=\bar{\mathbf{\psi}}_{\mathbf{k}_{1}}(\mathbf{r })\times\bar{\mathbf{\psi}}_{\mathbf{k}_{2}}(\mathbf{r}), \tag{16}\]
solves \(\mathcal{D}(\mathbf{r})\mathbf{\chi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}(\mathbf{r})=0\). Expressions similar to Eq. (14) and Eq. (15) have been derived in Ref. [25] but with a different choice of gauge.
We introduce the Wronskian [25; 19] of the Dirac spinors in the chiral sector:
\[W_{A}(\mathbf{r})=\mathbf{\psi}_{\Gamma}(\mathbf{r})\cdot\left[\mathbf{\psi}_{K}(\mathbf{r}) \times\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\right], \tag{17}\]
which obeys the relation \(\bar{\partial}W_{A}=0\). Thus, according to Liouville's theorem, we have \(W_{A}(\mathbf{r})=W_{A}\). \(W_{A}\neq 0\) since, unlike \(v(\mathbf{r})\), Eq. (17) is invariant under translations of moire lattice vectors. \(W_{A}\neq 0\) implies that the three vectors \(\mathbf{\psi}_{\Gamma}\), \(\mathbf{\psi}_{K}\), and \(\mathbf{\psi}_{K^{\prime}}\) are linearly independent. Similarly, we can define the Wronskian in the anti-chiral sector
\[W_{B}(\mathbf{r})=\mathbf{\chi}_{\Gamma}(\mathbf{r})\cdot\left[\mathbf{\chi}_{K}(\mathbf{r}) \times\mathbf{\chi}_{K^{\prime}}(\mathbf{r})\right] \tag{18}\]
satisfying the condition \(\partial W_{B}=0\), which implies \(W_{B}(\mathbf{r})=W_{B}\). Combining the definitions of the Wronskians with Eqs.(15) and(16), we find that \(W_{A}\) and \(W_{B}\) are both proportional, up to a phase, to the scalar products
\[\bar{\chi}_{\Gamma}(\mathbf{r})\cdot\mathbf{\psi}_{\Gamma}(\mathbf{r}),\quad\bar{\chi}_{K }(\mathbf{r})\cdot\mathbf{\psi}_{K}(\mathbf{r}),\quad\bar{\chi}_{K^{\prime}}(\mathbf{r})\cdot \mathbf{\psi}_{K^{\prime}}(\mathbf{r}), \tag{19}\]
between the chiral and anti-chiral zero modes at \(\Gamma\), \(K\), and \(K^{\prime}\). Consequently, \(W_{A}\) and \(W_{B}\) vanish simultaneously with these scalar products. As illustrated in the top panel of Fig. 2, the simultaneous vanishing of \(|W_{A}|\) and \(|W_{B}|\) defines the positions of the magic angles [25; 19].
From these different relations, an intuitive picture emerges. At non-magic angles, the set of chiral vectors \(\mathcal{C}_{A}\equiv\{\mathbf{\psi}_{\Gamma},\mathbf{\psi}_{K},\mathbf{\psi}_{K^{\prime}}\}\) is linearly independent and spans the full three-dimensional space of layers at each \(\mathbf{r}\). The same holds for the anti-chiral set \(\mathcal{C}_{B}\equiv\{\mathbf{\chi}_{\Gamma},\mathbf{\chi}_{K},\mathbf{\chi}_{K^{\prime}}\}\), whereas chiral and anti-chiral vectors are mutually orthogonal when their momenta are different, as a result of Eq. (14). This is depicted in Fig. 2a. The situation is quite different at magic angles because the scalar products of Eq. (19) all vanish. The chiral \(\mathcal{C}_{A}\) and anti-chiral \(\mathcal{C}_{B}\) sets then form two distinct subspaces which are orthogonal to each other within the three-dimensional layer space. There are three possible cases, illustrated in Fig. 2b and c and corresponding to the two scenarios of Ref. [25]: (i) the chiral set forms a subspace of dimension 2 and the anti-chiral set is forced to be one-dimensional, _i.e._ all three anti-chiral vectors are collinear; (ii) same as (i) but with the roles of chiral and anti-chiral exchanged; (iii) both chiral and anti-chiral sets have dimension 1.
As we will demonstrate in Sec. III, the chiral and anti-chiral flat bands that emerge at the magic angle are generated by the two sets \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\), leaving the subspace decomposition invariant: \(2+1\) for (i), \(1+2\) for (ii), and \(1+1\) for (iii). Additionally, for case (iii), we will show that the third dimension is filled by a pair of additional zero modes at the \(\Gamma\) point, corresponding to a crossing of the flat bands by a Dirac cone.
By decreasing the twist angle or increasing the value of \(\alpha\), we will uncover in Sec. III a series of magic angles alternating between cases (i) and (iii) with an even/odd effect. Interestingly, we will also observe that the dimension or rank of the flat band coincides with the absolute value of the Chern number, highlighting the relationship between the rank of the flat band and the number of lowest Landau levels comprising it [42; 43; 44; 45].
## III Even and odd series of magic angles
The sequence of consecutive magic angles is showcased by computing the renormalized Fermi velocity at \(K\) as a function of \(\alpha\), see Fig. 3a, in agreement with the Wronskians shown in Fig. 2. The odd and even magic angles are highlighted by red and green vertical lines and correspond, respectively, to the cases (i) and (iii) discussed
Figure 2: Top panel: Wronskian \(|W_{A}|\) (orange) and \(|W_{B}|\) (blue) for the chiral and anti-chiral sector as a function of \(\alpha\). \(|W_{A/B}|\) vanishes at the magic angles denoted as red and green vertical lines. Bottom panel: Figs. (a)-(c) show a schematic representation of the zero mode spinors \(\{K,K^{\prime},\Gamma\}\) in the chiral and anti-chiral sectors. Panel (a) represents a generic configuration away from the magic angle where the spinors are linearly independent and chiral and anti-chiral sectors satisfy Eq. (14), while (b) and (c) sketch the two scenario realized at the magic angle in the helical trilayer graphene. In panel (c) \(\mathbf{\phi}_{\Gamma A}\) and \(\bar{\mathbf{\phi}}_{\Gamma B}\) are the two additional zero modes degenerate with the flat bands at \(\Gamma\).
in Sec. II.3. For odd magic angles \(\alpha_{2n-1}^{*}\), the first being \(\alpha_{1}^{*}\approx 0.377\) (\(\theta_{1}^{*}\approx 1.687^{\circ}\)), the spectrum features two degenerate flat bands, as displayed in Fig. 3b. For even magic angles \(\alpha_{2n}^{*}\), however, the first one being \(\alpha_{2}^{*}\approx 1.197\) (\(\theta_{2}^{*}\approx 0.532^{\circ}\)), four degenerate flat bands arise, coexisting with a Dirac cone crossing them at \(\Gamma\), as shown in Fig. 3c. The structure and degeneracy of the zero-energy bands thus repeat periodically and depend on the parity of the magic angle label.
Furthermore, inspecting more closely the sequence of magic angles, we find that the difference between consecutive values rapidly approaches a constant value \(\alpha_{2n+1}^{*}-\alpha_{2n-1}^{*}\approx\alpha_{2n+2}^{*}-\alpha_{2n}^{*} \simeq 1.214\), as shown in Fig. 4, similarly to the twisted bilayer case [26, 56, 57, 58]. In the following, we demonstrate that the distinct nature of the even and odd magic angles arises from symmetry considerations. These constraints dictate the behavior of the zero-mode wavefunction at the high-symmetry points together with the identities presented in Sec. II.3.
### Zero modes for odd magic angles
To gain analytical insight into the origin of the odd magic angle, we study the anti-chiral zero mode \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) in the vicinity of the high-symmetry point AA (\(\mathbf{r}=0\)). The wavefunctions around AA regions are not described by pseudo Landau levels [59], reflecting a charge distribution of the flatbands different from the one realized in twisted bilayer graphene [20]. As detailed in Appendix C.1, the wavefunction \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) can be formally expanded in powers of \(z\) and \(\bar{z}\equiv z^{*}\) close to \(r=0\). Enforcing the symmetries \(C_{3z}\), \(C_{2y}T\) and \(P\) constrains and simplifies the resulting expansion. To leading order, we find
\[\mathbf{\chi}_{\Gamma}(\mathbf{r})=\begin{pmatrix}0\\ \chi_{2}\\ 0\end{pmatrix}+O(\bar{z}), \tag{20}\]
where \(C_{2y}T\) (see Eq. (11)) imposes that \(\chi_{2}\equiv\chi_{\Gamma 2}(0)\) is a real coefficient. Plotting the real \(\chi_{2}\) as a function of \(\alpha\) in Fig. 5a, we find that it vanishes for all odd magic angles. The condition \(\chi_{2}=0\), implying \(\mathbf{\chi}_{\Gamma}(0)=0\), thus defines the series of odd magic angles and yields a vanishing Wronskian from Eq. (18). The vertical red lines of Fig. 5 exactly matches the one obtained from the Wronskian in Fig. 2.
Beyond Eq. (20), the next order obeying symmetries is \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\simeq\bar{z}(\chi_{1}^{\prime},0,\chi_{1}^{\prime *})\) at the odd magic angles. Here, \(\chi_{1}^{\prime}\) is an arbitrary complex coefficient dependent on the magic angle. The presence of this simple zero is sufficient to predict and explicitly construct [19] the whole anti-chiral flat band following the seminal reasoning of Tarnopolsky et al. [26] in twisted bilayer graphene, see also Refs. [27, 29, 30]. The zero-mode wavefunctions exhibit the analytical expression:
\[\mathbf{\chi}_{\mathbf{k}}(\mathbf{r})=\bar{\eta}_{\mathbf{k}}(\bar{z})\mathbf{\chi}_{\Gamma}(\bm {r}), \tag{21}\]
where the antiholomorphic \(\bar{\eta}_{\mathbf{k}}(\bar{z})=\eta_{\mathbf{k}}^{*}(-z)\) is related to the meromorphic function describing the lowest Landau level (LLL) on a torus [60, 61]:
\[\eta_{\mathbf{k}}(z)=e^{ik_{1}z/a_{1}}\frac{\vartheta_{1}[z/a_{1}-k/b_{2},\omega] }{\vartheta_{1}[z/a_{1},\omega]} \tag{22}\]
with the notation \(k_{1}=\mathbf{k}\cdot\mathbf{a}_{1}\) and the Jacobi theta-function
\[\vartheta_{1}[z,\omega]=\sum_{n\in\mathbb{Z}}e^{i\pi\omega(n+1/2)^{2}}e^{2i \pi(z-1/2)(n+1/2)} \tag{23}\]
which vanishes at \(z=0\) and results in a Bloch periodicity (7) for Eq. (21). Momentum space boundary conditions [30] on the self-periodic part of the Bloch state \(\mathbf{u}_{\mathbf{k}}(\mathbf{r})=\mathbf{\chi}_{\mathbf{k}}(\mathbf{r})e^{-i\mathbf{k}\cdot\mathbf{r}}\) give
\[\mathbf{u}_{\mathbf{k}+\mathbf{b}_{1/2}}(\mathbf{r})=e^{-i\mathbf{b}_{1/2}\cdot\mathbf{r}}e^{i\phi_{k,b_{1/2}}}\mathbf{u}_{\mathbf{k}}(\mathbf{r}), \tag{24}\]
Figure 4: a) Distance between neighboring magic angles in the even and odd sequence. b) Sequence of magic angles \(\alpha_{n}\). Increasing the order \(n\) the distance between nearest neighbour magic angles in the even and odd sectors approaches a constant value \(\approx 1.214\) represented by the horizontal gray line in a).
Figure 3: a) Renormalized Fermi velocity at \(K\) as a function of \(\alpha\). Vertical red and green lines denote odd \(\alpha_{2n-1}^{*}\) and even \(\alpha_{2n}^{*}\) magic angles, respectively. Panel b) and c) show the bandstructure at the first magic angle \(\alpha_{1}^{*}\approx 0.377\) (\(\theta_{1}^{*}\approx 1.687^{\circ}\)) and the second magic angle \(\alpha_{2}^{*}\approx 1.197\) (\(\theta_{2}^{*}\approx 0.532^{\circ}\)), respectively. Red lines denote the flat bands, while \(\times 2\) and \(\times 4\) give the number of zero modes per \(\mathbf{k}\)-point for odd and even magic angles, respectively.
with \(\phi_{k,b_{1}}=-2\pi\bar{k}/\bar{b}_{2}+\pi-\pi\bar{b}_{1}/\bar{b}_{2}\) and \(\phi_{k,b_{2}}=\pi\) corresponding to a flat band with total Chern number \(C_{B}=-1\)[30].
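As a quick numerical cross-check of Eq. (23), the truncated sum can be evaluated directly. The minimal sketch below (assuming NumPy, with a purely illustrative value of the modular parameter \(\omega\)) confirms that \(\vartheta_{1}\) vanishes at \(z=0\); this zero is responsible for the pole of \(\eta_{\mathbf{k}}\) that is compensated by the simple zero of \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) at the odd magic angles.

```python
import numpy as np

def theta1(z, omega, nmax=30):
    """Truncated Jacobi theta function of Eq. (23); converges fast for Im(omega) > 0."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j * np.pi * omega * (n + 0.5) ** 2)
                  * np.exp(2j * np.pi * (z - 0.5) * (n + 0.5)))

omega = np.exp(2j * np.pi / 3)           # illustrative modular parameter of the moire torus
print(abs(theta1(0.0, omega)))           # ~1e-16: theta_1 has a zero at z = 0
print(abs(theta1(0.31 + 0.17j, omega)))  # generic point: finite value
```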
Thanks to the spatial symmetries, the vanishing of \(\chi_{2}\) at the odd magic angles is thus sufficient to predict a flat band (21) in the anti-chiral sector with Chern number \(-1\). Moreover, employing Eq. (16) also determines the chiral sector. Eq. (16), evaluated at the origin \(\mathbf{r}=0\) with \(\mathbf{k}_{1}=K\) and \(\mathbf{k}_{2}=K^{\prime}\), gives
\[\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)=\bar{\mathbf{\chi}}_{\Gamma}(0)=0, \tag{25}\]
resulting in the fact that \(\mathbf{\psi}_{K}(0)\) and \(\mathbf{\psi}_{K^{\prime}}(0)\) are collinear vectors, see Fig. 5a. Expanding \(\mathbf{\psi}_{K}\) and \(\mathbf{\psi}_{K^{\prime}}\) around \(\mathbf{r}=0\) and enforcing the symmetries as in Appendix C.1, we find to leading order \(\mathbf{\psi}_{K}(0)=(\psi_{1},0,\psi_{3})\) and \(\mathbf{\psi}_{K^{\prime}}(0)=(\psi_{3}^{*},0,\psi_{1}^{*})\) where \(\psi_{1/3}\) are complex coefficients. Using Eq. (25) yields
\[\chi_{2}=|\psi_{3}|^{2}-|\psi_{1}|^{2}=0\implies\psi_{1}=\psi_{3}e^{i\varphi}. \tag{26}\]
We choose a gauge with \(\varphi=0\) and find
\[\mathbf{\psi}_{K}(0)=\mathbf{\psi}_{K^{\prime}}(0). \tag{27}\]
As shown in Ref. [19], this identity is sufficient to construct the flat band wavefunctions for the chiral sector,
\[\mathbf{\psi}_{\mathbf{k}}(\mathbf{r})=a_{k}\eta_{\mathbf{k}+K^{\prime}}(z)\mathbf{\psi}_{K}(\bm {r})+a_{-k}\eta_{\mathbf{k}+K}(z)\mathbf{\psi}_{K^{\prime}}(\mathbf{r}), \tag{28}\]
satisfying the Bloch periodicity Eq. (7), with the holomorphic function defined in Eq. (22) and \(a_{k}=\vartheta_{1}[(k+K)/b_{2},\omega]\). Momentum space boundary conditions give \(C_{A}=2\) as shown in Ref. [19]. The magic relation (27), which is not associated with the vanishing of the chiral spinor \(\mathbf{\psi}\), explains the homogeneity of the charge density distribution in the Chern 2 band [20].
Remarkably, all odd magic angles feature a chiral flat band of Chern 2, generated by the two vectors \(\mathbf{\psi}_{K}(\mathbf{r})\) and \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\), alongside an anti-chiral flat band of Chern \(-1\) where all states align collinearly with \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\). Moreover, Eq. (14) reveals that the chiral and anti-chiral flat band spaces are orthogonal to each other. Specifically, \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) and all the anti-chiral wavefunctions are oriented in a direction perpendicular to the chiral plane formed by \(\mathbf{\psi}_{K}(\mathbf{r})\) and \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\). This provides a clear understanding of why a chiral Chern \(\pm 2\) band is accompanied by an anti-chiral Chern \(\mp 1\) band within a three-layer system.
Finally, we observe that the cross product \(\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)\) also vanishes for even magic angles, see the black line in Fig. 5a, but for a different reason that we explain in the next section.
### Zero-modes for even magic angles
We now turn to the characterization of the zero-energy modes and flat bands for even magic angles. In contrast to the previous case, \(\mathbf{\chi}_{\Gamma}(0)\) remains finite at \(\alpha_{2n}^{*}\) as shown in Fig. 5a. Consequently, the zero-mode construction discussed in Section III.1 does not apply for even magic angles. To make progress, we examine the behavior of the zero-mode solution \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) in the vicinity of the AB stacking point \(\mathbf{r}_{0}=(\mathbf{a}_{1}-\mathbf{a}_{2})/3\). By expanding the zero-mode equation (6) for the middle layer component \(\psi_{K^{\prime}2}\) to linear order in the deviation \(\mathbf{r}\), we obtain:
\[\bar{\partial}^{2}\psi_{K^{\prime}2}-\frac{9\alpha^{2}}{2}\left(\frac{z^{2}}{ 2}+\frac{\bar{z}}{\sqrt{2}}\right)\psi_{K^{\prime}2}=0, \tag{29}\]
while top and bottom layer amplitudes are given by
\[\begin{split}&\bar{\partial}\psi_{K^{\prime}1}=3\alpha z\psi_{K^{ \prime}2}/\sqrt{2},\\ &\psi_{K^{\prime}3}=\left(i\sqrt{2}\bar{\partial}\psi_{K^{\prime }2}-3\alpha z\psi_{K^{\prime}1}/\sqrt{2}\right)/(3\alpha).\end{split} \tag{30}\]
We obtain the general solution of Eq. (29):
\[\psi_{K^{\prime}2}(\mathbf{r}+\mathbf{r}_{0})=\gamma_{A}Ai(\zeta)+\gamma_{B}Bi(\zeta), \tag{31}\]
where \(\zeta=(2\alpha)^{2/3}\left(z^{2}/2+\bar{z}/\sqrt{2}\right)\) and we retain both Airy functions \(Ai(z)\) and \(Bi(z)\)[62]. A similar behavior of the zero-mode wavefunction has been obtained for twisted bilayer graphene around the AB/BA points in Ref. [56]. \(C_{3z}\) rotation centered at the high-symmetry point \(\mathbf{r}_{0}\) implies:
\[\mathbf{\psi}_{K^{\prime}}(C_{3z}\mathbf{r}+\mathbf{r}_{0})=\begin{pmatrix}\omega^{*}&0&0 \\ 0&\omega^{*}&0\\ 0&0&1\end{pmatrix}\mathbf{\psi}_{K^{\prime}}(\mathbf{r}+\mathbf{r}_{0}), \tag{32}\]
employing the Maclaurin expansion of the Airy functions [62], we find \(\gamma_{A}=-3^{2/3}N\), \(\gamma_{B}=3^{1/6}N\):
\[\mathbf{\psi}_{K^{\prime}}(\mathbf{r}+\mathbf{r}_{0})=\begin{pmatrix}0\\ 0\\ \psi_{3}^{(0)}\end{pmatrix}-\frac{3i\alpha\psi_{3}^{(0)}}{\sqrt{2}}\begin{pmatrix} 0\\ \bar{z}\\ 0\end{pmatrix}+O(z^{2}), \tag{33}\]
Figure 5: a) Wavefunction absolute value \(|\mathbf{\chi}_{\Gamma}(0)|\) (blue) and cross product \(|\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)|\) (black) as a function of \(\alpha\). b) Wavefunction absolute values \(|\mathbf{\chi}_{K}(\mathbf{r}_{0})|\) and \(|\mathbf{\psi}_{K^{\prime}}(\mathbf{r}_{0})|\) as a function of \(\alpha\). The grey solid line shows the renormalized Fermi velocity \(v^{*}/v_{F}\). Vertical green and red lines show the location of odd and even magic angle, respectively.
where \(\psi_{3}^{(0)}\equiv\psi_{K^{\prime}3}(\mathbf{r}_{0})\). In addition, the \(PC_{2y}T\) symmetry gives:
\[\mathbf{\psi}_{K^{\prime}}(\mathbf{r}+\mathbf{r}_{0})=\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix}\mathbf{\psi}_{K^{\prime}}^{*}(C_{2x}\mathbf{r}+\mathbf{r}_{0}), \tag{34}\]
implying \(\Re\psi_{3}^{(0)}=0\). Consequently, \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r}_{0})\) is determined by a single real number \(\Im\psi_{3}^{(0)}\). The condition for the even magic angle is, therefore, the vanishing of this coefficient, such that \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r}_{0})=0\), and the Wronskian \(W_{A}\) also becomes zero as per Eq.(17). The vertical green lines displayed in Fig. 5b align with those in Fig. 2. At even magic angles, the wavefunction from Eq. (33) further expands as \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r}_{0}+\mathbf{r})\simeq z^{2}(a,b,0)\) at small \(\mathbf{r}\) (with complex coefficients \(a\) and \(b\)), corresponding to a double zero. This structure enables the following analytical form for the chiral flat bands
\[\mathbf{\psi}_{\mathbf{k}}(\mathbf{r})=\eta_{\mathbf{k}^{\prime}}^{(0)}(z)\eta_{\mathbf{k}-\mathbf{k} ^{\prime}+K}^{(0)}(z)\mathbf{\psi}_{K^{\prime}}(\mathbf{r}), \tag{35}\]
with the meromorphic and periodic function
\[\eta_{\mathbf{k}}^{(0)}(z)=e^{ik_{1}z/a_{1}}\frac{\vartheta_{1}[(z-z_{0})/a_{1}-k/ b_{2},\omega]}{\vartheta_{1}[(z-z_{0})/a_{1},\omega]}, \tag{36}\]
where \(z_{0}\) denotes the complex representation of \(\mathbf{r}_{0}\). Despite the fact that \(\mathbf{k}^{\prime}\) is an arbitrary wavevector in Eq. (35), at most two distinct values of \(\mathbf{k}^{\prime}\) yield independent wavefunctions. Setting, for instance, \(\mathbf{k}^{\prime}=K^{\prime}\) and \(\mathbf{k}^{\prime}=0\), we obtain two degenerate flat bands described by the wavefunctions \(\mathbf{\psi}_{\mathbf{k}}^{(1)}(\mathbf{r})=\eta_{K^{\prime}}^{(0)}(z)\eta_{K^{\prime}+\mathbf{k}}^{(0)}(z)\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) and \(\mathbf{\psi}_{\mathbf{k}}^{(2)}(\mathbf{r})=\eta_{\mathbf{k}+K}^{(0)}(z)\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\).
A very similar analytical form was introduced by Popov and Tarnopolsky [24; 25] to describe the fourfold degenerate flat bands but at AAA stacking magic angles. Mathematically, the construction of Eq. (35) is possible because the poles brought by the two functions \(\eta_{\mathbf{k}}^{(0)}\) at \(z=z_{0}\) are precisely cancelled by the double zero of \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) at \(\mathbf{r}_{0}\) (\(z_{0}\)). The resulting wavefunctions are finite everywhere, obey the Bloch periodic boundary conditions Eq. (7), and solve the zero-mode Eq. (6). All chiral flat band wavefunctions are collinear to \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) and thus span a one-dimensional space. It readily explains the vanishing of the cross product \(\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)\) also at even magic angle, as shown in Fig. 5a. The above derivation relies on the double zero of \(\mathbf{\psi}_{K^{\prime}}\) at \(\mathbf{r}_{0}\). Alternatively, we can reconstruct the twofold degenerate flat band from the quadratic vanishing of \(\mathbf{\psi}_{\Gamma}^{(1)}(\mathbf{r})\) at \(\mathbf{r}=0\). As discussed in Appendix C.3, the advantage of this latter approach is that it does not require the particle-hole symmetry \(P\), only \(C_{3z}\) and \(C_{2y}T\).
Moving to the anti-chiral sector, we can repeat the same analysis for the wavefunction \(\mathbf{\chi}_{K}(\mathbf{r})\) in the vicinity of \(\mathbf{r}_{0}\). We find that \(\mathbf{\chi}_{K}(\mathbf{r}_{0})=0\) at even magic angles, as shown in Fig. 5b, and the expansion around \(\mathbf{r}_{0}\) is quadratic, indicating a double zero. The same construction thus extends to the anti-chiral sector, resulting in two degenerate flat bands. Additionally, by applying momentum space boundary conditions, we can determine that each chiral flat band possesses a Chern number \(C_{A}=+1\), whereas the anti-chiral flat bands have \(C_{B}=-1\). In total, we prove that even magic angles feature a two-fold degenerate set of bands in each sublattice sector, or four flat bands with a vanishing total Chern number.
The chiral flat bands align with \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) while the anti-chiral ones with \(\mathbf{\chi}_{K}(\mathbf{r})\), leaving room for a third direction in the layer space. By following the arguments presented in Ref. [25], one can analytically demonstrate the existence of a pair of additional zero modes at \(\Gamma\). These zero modes correspond to a Dirac cone crossing the flat bands in Fig. 3c. The proof is outlined as follows. From Eq. (14), we know that \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) and \(\mathbf{\chi}_{K}(\mathbf{r})\) are orthogonal to each other. Both wavefunctions vanish at \(\mathbf{r}_{0}\) and remain finite everywhere else. Consequently, we can define a function \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) such that:
\[\mathbf{\psi}_{K^{\prime}}(\mathbf{r})=\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})\times\bar{\bm {\chi}}_{K}(\mathbf{r}), \tag{37}\]
Here, we emphasize that this expression does not uniquely define \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\). The subscript \(\Gamma\) indicates that \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) has the Bloch periodicity of the \(\Gamma\) point, as inferred from the periodicity of the two other functions. Applying the operator \(\mathcal{D}^{\dagger}(\mathbf{r})\) to both sides of Eq. (37), we arrive at
\[0=-\left[\mathcal{D}^{*}(\mathbf{r})\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})\right]\times \bar{\mathbf{\chi}}_{K}(\mathbf{r}), \tag{38}\]
which shows that \(\mathcal{D}^{*}(\mathbf{r})\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})\) must be proportional to \(\bar{\mathbf{\chi}}_{K}(\mathbf{r})\), or
\[\mathcal{D}^{*}(\mathbf{r})\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})=f(\mathbf{r})\bar{\mathbf{\chi} }_{K}(\mathbf{r}) \tag{39}\]
with some periodic function \(f(\mathbf{r})\). We introduce the function \(g(\mathbf{r})\), solution of \(\bar{\partial}g(\mathbf{r})=f(\mathbf{r})\), and shift \(\bar{\mathbf{\phi}}_{\Gamma}\) as
\[\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})\rightarrow\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})-ig( \mathbf{r})\bar{\mathbf{\chi}}_{K}(\mathbf{r})/\sqrt{2} \tag{40}\]
to finally obtain
\[\mathcal{D}(\mathbf{r})\mathbf{\phi}_{\Gamma}(\mathbf{r})=0. \tag{41}\]
This last equation demonstrates that we have constructed an additional anti-chiral zero-energy solution. Due to its definition in Eq.(37), \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) cannot be proportional to \(\mathbf{\chi}_{K}(\mathbf{r})\) and therefore lies outside the one-dimensional anti-chiral subspace. Eq.(37) also implies that \(\bar{\mathbf{\phi}}_{\Gamma}(\mathbf{r})\cdot\mathbf{\psi}_{K^{\prime}}(\mathbf{r})=0\): \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) and \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) are orthogonal. As a result, \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) occupies the third vacant direction. As already discussed earlier, \(\mathbf{\phi}_{\Gamma}(\mathbf{r})\) satisfies the Bloch periodic condition with momentum at \(\Gamma\). A similar construction yields a second zero-energy state at \(\Gamma\) in the chiral sector, also spanning the third direction orthogonal to both the chiral and anti-chiral flat bands. This result concludes the characterization of the low-energy spectrum at even magic angles.
## IV Breaking the particle-hole symmetry \(P\)
The twist-angle dependency in the Pauli matrices obtained by replacing \(\mathbf{\sigma}\) with \(\mathbf{\sigma}_{\pm\theta}\), see Eq. 13, breaks the particle-hole symmetry \(P\) (12). The symmetry breaking is negligible in the small twist angle regime \(\theta\) and has been ignored in the previous analysis. In the following we first discuss the stability of the zero modes to this perturbation. Then, we move to consider the effect of the perturbation away from the chiral limit.
### Robustness of the flat bands
In the chiral limit the twist angle difference between top and bottom layer \(\mathbf{\sigma}_{\pm\theta}\) enters in the Hamiltonian \(H_{\rm ABA}\) (1) as:
\[\mathcal{D}(\mathbf{r})=-i\sqrt{2}M_{-\theta}\partial+\mathcal{A}(\mathbf{r}), \tag{42}\]
where \(M_{\theta}=\mathrm{diag}\left(e^{i\theta},1,e^{-i\theta}\right)\). Differently from twisted bilayer graphene [26] the layer dependent phases \(M_{\theta}\) in Eq. (42) cannot be gauged away since \([M_{\theta/2},\mathcal{A}(\mathbf{r})]\neq 0\). As mentioned before, the layer dependent phase breaks the particle-hole symmetry \(P\)[18; 20] while \(C_{3z}\) and \(C_{2y}T\) are still symmetries of \(\mathcal{H}_{\rm ABA}\). In the chiral limit \(w_{\rm AA}=0\) the Hamiltonian \(\mathcal{H}_{\rm ABA}\) is characterized by three Dirac cones at \(K\), \(K^{\prime}\) and \(\Gamma\) that are protected by \(C_{3z}\) and the chiral symmetry \(\Lambda_{z}\). From Eq. (42) we readily realize that the orthogonality relation (14) transforms into the identity:
\[v_{\theta}(\mathbf{r})=\bar{\mathbf{\chi}}_{\mathbf{k}_{1}}(\mathbf{r})\cdot[M_{\theta}\mathbf{ \psi}_{\mathbf{k}_{2}}(\mathbf{r})]=0 \tag{43}\]
for \(\mathbf{k}_{1}\neq\mathbf{k}_{2}\) and arbitrary \(\mathbf{r}\). We emphasize that this is no longer a scalar product as it is not positive definite. Relations (15) and (16) are also modified accordingly and \(\mathbf{\psi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}=[M_{\theta}\bar{\mathbf{\chi}}_{\mathbf{k}_{1}}]\times[M_{\theta}\bar{\mathbf{\chi}}_{\mathbf{k}_{2}}]\) and \(\mathbf{\chi}_{-\mathbf{k}_{1}-\mathbf{k}_{2}}=[M_{-\theta}\mathbf{\bar{\psi}}_{\mathbf{k}_{1}}]\times[M_{-\theta}\mathbf{\bar{\psi}}_{\mathbf{k}_{2}}]\), while Eqs. (17) and (18) are still satisfied.
At odd magic angles, the linear vanishing of \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) around \(\mathbf{r}=0\) still holds, protected by the symmetries \(C_{3z}\) and \(C_{2y}T\). The positions of the magic angles are only slightly shifted by the particle-hole symmetry breaking term, \(\alpha_{1}^{*}\approx 0.3772\) for instance. Moreover, since
\[\bar{\mathbf{\chi}}_{\Gamma}(0)=0=[M_{\theta}\mathbf{\psi}_{K}]\times[M_{\theta}\mathbf{ \psi}_{K^{\prime}}], \tag{44}\]
we still obtain that \(\mathbf{\psi}_{K}(0)=\mathbf{\psi}_{K^{\prime}}(0)\) at the odd magic angles, yielding the Chern 2 zero-mode chiral solution of Eq. (28) together with the anti-chiral Chern \(-1\) band. The structure of the flat bands is thus fully stable under particle-hole symmetry breaking, as confirmed by our numerical calculations.
At even magic angles, as discussed in Appendix C.3, the \(C_{3z}\) and \(C_{2y}T\) symmetries alone are sufficient to preserve a double zero in one of the two wavefunctions \(\mathbf{\psi}_{\Gamma}(\mathbf{r})\) at \(\mathbf{r}=0\), which automatically yields the two flat bands in the chiral sector. The analytical expressions for these bands are explicitly given in Eq. (135). It is worth noting that the choice of \(\mathbf{\psi}_{\Gamma}(\mathbf{r})\) vanishing at \(\mathbf{r}=0\) is not continuously connected to the zero-mode solution \(\mathbf{\psi}_{\Gamma}\) away from magic angles; instead, it requires the admixture with the first excited band, which occurs at even magic angles. In the anti-chiral sector, the two flat bands are protected by the conservation of \(\mathrm{Tr}\Lambda^{z}=0\) (see Eq. (5)) within the zero-energy manifold. This conservation condition dictates that the two chiral flat bands must be accompanied by two anti-chiral flat bands. Consequently, we find that the fourfold degenerate flat band structure is maintained at even magic angles, even in the presence of particle-hole symmetry breaking. Our numerical calculations confirm this protection. Finally, as already noted in Ref. [20], the additional Dirac cone at \(\Gamma\) is gapped by breaking \(P\). The gap is however quite small, on the order of the energy scale \(\sim\theta v_{F}k_{\theta}\).
### Away from the chiral limit
The primary emphasis of this work has been on the chiral limit, which allowed us to provide analytical wavefunctions for the flat bands. We now introduce a slight deviation from this limit by incorporating a finite corrugation factor \(w_{AA}/w_{AB}=0.05\), while still accounting for the particle-hole symmetry breaking term. In this case, all zero-energy modes are lifted and gaps open between bands as illustrated in Fig. 6 in the vicinity of the first two magic angles. At twist angles different from magic values, the Dirac cones at \(\Gamma\), \(K\) and \(K^{\prime}\) become gapped when both the chiral and particle-hole symmetries are broken. At magic angles, whether even or odd, the flat bands acquire a finite bandwidth, on the order of a few meV for \(w_{AA}/w_{AB}=0.05\), and separate from each other.
As discussed in Sec. III.1, the odd magic angles in the chiral limit exhibit a flat band with Chern number
Figure 6: Separated middle bands of the first (left panel) and the second (right panel) magic angles, labeled with the Chern number associated to each band. The separated bands are obtained by simultaneously breaking the particle-hole and the chiral symmetries. To amplify the effect of the particle-hole symmetry breaking, the twist angle in \(\mathbf{\sigma}_{\pm\theta}\) is multiplied by a factor of 20.
\(+2\) (\(-1\)) which is fully polarized on the A(B) sublattice. Remarkably, even when departing from the chiral limit, where the two sublattices become mixed, the resulting two bands still retain their Chern numbers of \(+2\) and \(-1\), as shown in Fig.6 for the first magic angle. These Chern numbers are in fact robust and persist up to significant values of the corrugation as the gap to the remote bands is large.
In contrast to this, as discussed in Sec. III.2, we identified a set of four flat bands with Chern numbers \(+1,+1,-1,-1\), and complete sublattice polarization in the chiral limit for even magic angles. The Dirac cone crossing at \(\Gamma\), as shown in the previous section, is gapped out by particle-hole symmetry breaking. When we move outside the chiral limit, the mixing of sublattices leads to the rearrangement of Chern numbers to \(+1,0,0,-1\), as depicted in Fig.6, for the four split bands at a small corrugation of \(w_{AA}/w_{AB}=0.05\). Increasing the corrugation further leads to topological transitions that involve the Dirac cone at \(\Gamma\) and change the topological properties of the flat bands. Their study is beyond the scope of this work.
## V Conclusions
This work analyses the mathematical structure of flat bands in equal-twist helical trilayer graphene in the chiral limit and for ABA stacking. We determine analytical expressions for all zero-energy wavefunctions at magic angles together with the band Chern numbers. We derive an orthogonality relation between chiral and anti-chiral zero modes, which constrains the number of generators in the zero-energy manifold and reveals a connection between the dimensionality of the vector space spanned by the zero modes and the total Chern number of the band.
In contrast to twisted bilayer graphene, helical twisted trilayer graphene exhibits an even/odd variation in the composition and features of the flat bands. At odd magic angles, we find a twofold degenerate zero-energy manifold at each \(\mathbf{k}\) point, comprising an anti-chiral flat band with Chern number \(C_{B}=-1\) and a chiral flat band with Chern number \(C_{A}=2\). The latter, being generated by two linearly independent spinors, cannot be reduced to the lowest Landau level, leading to interesting implications on the properties of the correlated ground state [19], which will be the topic of future studies.
Even magic angles, on the other hand, are distinguished by a four-fold degenerate manifold, where both chiral and anti-chiral flat bands have pairs of, or double, zeros, resulting in two zero-modes in each sector. The two sublattice-polarized bands in a given sector are all collinear to a single space-dependent spinor and carry the same Chern number, \(C_{A}=+1\) for the chiral \(A\)-polarized bands, \(C_{B}=-1\) for the anti-chiral \(B\)-polarized bands. We also demonstrate that in addition to the two flat bands, there must be two additional degenerate zero modes within the zero-energy manifold, thereby explaining the presence of a Dirac cone crossing the flat bands at \(\Gamma\).
We also investigate the stability of flat bands and zero modes at magic angles under a weak breaking of particle-hole symmetry. We find that all the features listed above remain valid, with the exception of the Dirac cone, which becomes slightly gapped by the perturbation. Only the joint breaking of particle-hole and chiral symmetries gaps out and splits all bands. Interestingly, at odd magic angles, the resulting isolated bands retain the Chern numbers \(+2,-1\) analytically determined in the chiral limit. In light of Ref. [63], it is natural to ask which features protect the emergence and properties of the flat bands. Our analysis highlights the importance of the structure of the differential operator \(\mathcal{D}=-i\sqrt{2}\partial+\mathcal{A}\), where \(\mathcal{A}\) is a non-abelian traceless SU(3) gauge potential, and there is clearly a natural generalization to the SU(N) case for multilayer stackings with \(N>3\), where Chern bands with \(C>2\) are expected. In addition, the symmetries \(C_{3z}\) and \(C_{2y}T\) appear to be crucial for maintaining exactly flat bands tuned only by the twist angle, whereas particle-hole symmetry is not needed. We emphasize that, in contrast to twisted bilayer graphene, the \(C_{2z}T\) symmetry is broken here and the model belongs to the Altland-Zirnbauer [64] class AIII instead of CI, although both classes can have protected Dirac cones on a 2D surface.
Finally, the emergence of isolated bands with non-zero Chern values in a more realistic context, where both chiral and particle-hole symmetries are broken, opens up exciting possibilities for realizing an anomalous quantum Hall effect without the need for an almost aligned hBN substrate. Moreover, the potential for fractional Hall states consisting of bands with Chern numbers of \(\pm 2\) represents a promising direction for future investigations.
###### Acknowledgements.
We acknowledge discussions with Jie Wang, Jen Cano and Eslam Khalaf. C.M. and Y.M. acknowledge support by the French National Research Agency (project TWISTGRAPH, ANR-21-CE47-0018). D.G. acknowledges support from the Flatiron Institute, a division of the Simons Foundation.
## Appendix A Local Hamiltonian
We review here some fundamental properties of the local Hamiltonian describing equal twist helical trilayer graphene. A more general formulation, which includes non-equal twist configurations, is provided in Ref [18; 25]. In the basis \(\mathbf{\Psi}=(\psi_{1},\chi_{1},\psi_{2},\chi_{2},\psi_{3},\chi_{3})\) where \(\psi\), \(\chi\) correspond to the wave function amplitude on the A and B sublattices, respectively, the Hamiltonian near valley
K reads [18; 19]
\[H_{\rm eTTG}(\mathbf{r};\mathbf{\phi})=\begin{pmatrix}v_{F}\hat{\mathbf{k}}\cdot\mathbf{\sigma}_ {\theta}&T(\mathbf{r},\mathbf{\phi})&0\\ h.c.&v_{F}\hat{\mathbf{k}}\cdot\mathbf{\sigma}&T(\mathbf{r},-\mathbf{\phi})\\ 0&h.c.&v_{F}\hat{\mathbf{k}}\cdot\mathbf{\sigma}_{-\theta}\end{pmatrix}, \tag{10}\]
where \(v_{F}\approx 10^{6}\) m/s is the graphene velocity. The set of phases \(\mathbf{\phi}=(\phi_{1},\phi_{2},\phi_{3})\) parametrizes the position on the supermoire lattice. With the choice of gauge \(\phi_{1}=0\),
\[\mathbf{R}=\frac{\phi_{2}}{\pi}\mathbf{a}_{1}^{\rm MM}+\frac{\phi_{3}}{\pi} \mathbf{a}_{2}^{\rm MM}, \tag{11}\]
with the supermoire lattice vectors \(\mathbf{a}_{1/2}^{\rm MM}=\frac{4\pi}{3\theta k_{\theta}}e^{\mp i\pi/3}\). \(\mathbf{\phi}\) also controls the relative shift between the two moire patterns and a change of gauge shifts all phases \(\phi_{j}\) by the same amount. \(\mathbf{R}=0\) defines the AAA stacking, whereas ABA and BAB correspond to \(\mathbf{R}=(\mathbf{a}_{2}^{\rm MM}-\mathbf{a}_{1}^{\rm MM})/3\) and \(\mathbf{R}=(\mathbf{a}_{1}^{\rm MM}-\mathbf{a}_{2}^{\rm MM})/3\) parametrized by \(\mathbf{\phi}=\pm(0,-\pi/3,+\pi/3)\).
\(\mathbf{\sigma}\) is the vector of Pauli matrices in the sublattice space, \(\mathbf{\sigma}_{\theta}\equiv e^{i\theta\sigma^{*}/2}\mathbf{\sigma}e^{-i\theta\sigma ^{*}/2}\) and \(\hat{\mathbf{k}}=-i\nabla_{\mathbf{r}}\). The tunneling between different layers is described by the moire potential:
\[T(\mathbf{r},\mathbf{\phi})=\sum_{j=1}^{3}T_{j}e^{-i\mathbf{r}\cdot\mathbf{q}_{j}}e^{-i\phi_{ j}}, \tag{12}\]
where \(T_{j+1}=w_{\rm AA}\sigma^{0}+w_{\rm AB}[\sigma^{x}\cos 2\pi j/3+\sigma^{y}\sin 2\pi j/3]\), \(w_{\rm AB}=110\) meV and \(w_{\rm AA}=rw_{\rm AB}\) with \(r\) a dimensionless parameter quantifying atomic corrugation, using complex notation \(\mathbf{q}_{j+1}=ik_{\theta}e^{2i\pi j/3}\) [16] with \(k_{\theta}=\theta K_{D}\), \(K_{D}=4\pi/3a_{\rm G}\) and \(a_{\rm G}\approx 2.46\) Å. The moire lattice is characterized by the reciprocal lattice vectors \(\mathbf{b}_{1/2}=\mathbf{q}_{1}-\mathbf{q}_{2/3}\) and primitive vectors \(\mathbf{a}_{1/2}\). Ignoring the twist angle dependency in the Pauli matrices \(\mathbf{\sigma}_{\pm\theta}\) the Hamiltonian (10) becomes invariant under the particle-hole symmetry:
\[PH_{\rm eTTG}(\mathbf{r};\mathbf{\phi})P^{-1}=-H_{\rm eTTG}(-\mathbf{r};\mathbf{\phi}), \tag{13}\]
with \(P\) given in Eq. (12). Under moire lattice translations we have:
\[H_{\rm eTTG}(\mathbf{r}+\mathbf{a}_{1/2};\mathbf{\phi})=U_{\varphi}H_{\rm eTTG}(\mathbf{r}; \mathbf{\phi})U_{\varphi}^{\dagger}, \tag{14}\]
where the matrix \(U_{\varphi}={\rm diag}[\omega^{*},1,\omega]\otimes\sigma^{0}\) corresponds to a layer dependent phase factor \(\omega=\exp(2\pi i/3)\).
\(H_{\rm eTTG}\) also exhibits a supermoire periodicity with \(\mathbf{R}\) (or \(\mathbf{\phi}\)). One can show [19] the following identity
\[H_{\rm eTTG}\left(\mathbf{r}+\frac{\mathbf{a}_{1}}{2};\mathbf{\phi}+\Delta\mathbf{\phi}_{l} \right)=\tilde{U}H_{\rm eTTG}(\mathbf{r};\mathbf{\phi})\tilde{U}^{\dagger}, \tag{15}\]
where \(\tilde{U}={\rm diag}(1,1,\omega)\). The phase shifts \(\Delta\mathbf{\phi}_{1}=(0,\pi,0)\), \(\Delta\mathbf{\phi}_{2}=(0,0,\pi)\) correspond respectively to \(\mathbf{R}\to\mathbf{R}+\mathbf{a}_{1}^{\rm MM}\) and \(\mathbf{R}\to\mathbf{R}+\mathbf{a}_{2}^{\rm MM}\). As a result, the AAA stacking points are periodically replicated, forming a triangular lattice generated by \(\mathbf{a}_{1/2}^{\rm MM}\) and characterized by ABA and BAB domains [19] (see also [20; 21]).
As noted above, we focus on the ABA stacking configuration by setting \(\mathbf{\phi}=(0,2\pi/3,-2\pi/3)\) and the chiral limit \(w_{AA}=0\) (suppressed tunneling between A and A orbitals). The resulting Hamiltonian is given by Eq. (1) in the sublattice-Chern basis.
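To make the structure of Eq. (12) concrete, the following minimal sketch (assuming NumPy; the twist angle, lengths in angstrom, and the stacking phases are illustrative values taken from the text) builds the interlayer tunneling matrices \(T_{j}\) and evaluates the moire potential \(T(\mathbf{r},\mathbf{\phi})\) for the ABA stacking in the chiral limit.

```python
import numpy as np

# Pauli matrices in sublattice space
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

w_ab = 110.0                        # meV, interlayer AB tunneling
w_aa = 0.0                          # chiral limit, w_AA = 0
theta = np.deg2rad(1.687)           # illustrative: first magic angle quoted in the main text
a_g = 2.46                          # graphene lattice constant in angstrom
k_theta = theta * 4.0 * np.pi / (3.0 * a_g)   # k_theta = theta * K_D

T_j = [w_aa * s0 + w_ab * (np.cos(2 * np.pi * j / 3) * sx + np.sin(2 * np.pi * j / 3) * sy)
       for j in range(3)]                                   # T_{j+1} of Eq. (12)
q_j = [1j * k_theta * np.exp(2j * np.pi * j / 3) for j in range(3)]   # complex notation

def T_moire(r, phi):
    """Moire potential T(r, phi) of Eq. (12); r = (x, y) in angstrom, phi = (phi_1, phi_2, phi_3)."""
    z = r[0] + 1j * r[1]
    out = np.zeros((2, 2), dtype=complex)
    for T, q, p in zip(T_j, q_j, phi):
        out += T * np.exp(-1j * (q.conjugate() * z).real) * np.exp(-1j * p)  # r.q via Re(q* z)
    return out

phi_aba = (0.0, 2 * np.pi / 3, -2 * np.pi / 3)   # ABA stacking, as chosen at the end of this appendix
print(np.round(T_moire((0.0, 0.0), phi_aba), 3))
```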
## Appendix B Particle-hole symmetry \(P\) protected Dirac cones
We discuss the symmetries protecting the Dirac cones at the high-symmetry points \(K\), \(K^{\prime}\) and \(\Gamma\) in the ABA stacking configuration.
The irreducible representations of the space group composed by \(C_{3z}\) and \(C_{2y}T\) at the high-symmetry momenta \(\Gamma\) and \(K\) (\(K^{\prime}\)) verify the \(C_{3}\) point group character table and are all one-dimensional. This can be understood more directly by considering an eigenstate of \(C_{3z}\), \(C_{3z}|\omega\rangle=\omega|\omega\rangle\). From the relation
\[(C_{2y}T)C_{3z}(C_{2y}T)^{-1}=C_{3z}^{-1}, \tag{16}\]
we obtain \(C_{3z}C_{2y}T|\omega\rangle=\omega\,C_{2y}T|\omega\rangle\). \(C_{2y}T\) thus does not circulate between the eigenstates of \(C_{3z}\) and cannot protect a twofold degeneracy. The three zero-energy Dirac cones at \(K\), \(K^{\prime}\) and \(\Gamma\) arising in the band spectrum of ABA trilayer graphene [18; 19] are therefore stable in the presence of the particle-hole symmetry \(P\) (12). In the chiral limit \(w_{\rm AA}=0\), the Dirac cones are protected by \(\Lambda^{z}\) and persist even in the absence of \(P\).
Since \(P\) and \(C_{3z}\) commute, if the spectrum at \(\Gamma\) hosts two states with \(C_{3z}\) eigenvalues \(\omega\), \(\omega^{*}\) near charge neutrality (and no other states), particle-hole symmetry \(P\) automatically pins these two states at zero energy. This is proven by contradiction: if we assume that the two states sit at opposite non-vanishing energies, then \(P\) permutes them. This is however impossible since \(P\) cannot change the \(C_{3z}\) eigenvalue which completes the proof. In fact, \(P\) restricted to these two states must be the identity as it commutes with \(C_{3z}\). It further shows that breaking \(C_{3z}\) does not lift the Dirac crossings as the trivial (identity) representation of \(P=I_{2\times 2}\) cannot deform continuously to the traceless \(\sigma^{x}\) matrix permuting states with opposite non-zero energies. \(K\) and \(K^{\prime}\) are however not stable under \(P\) - which permutes \(K\) and \(K^{\prime}\) - and the stability of their Dirac cones must come from a different operator. \(C_{2y}T\) sends \(\mathbf{k}\to-C_{2y}\mathbf{k}\) and therefore also permutes \(K\) and \(K^{\prime}\). Combining \(P\) and \(C_{2y}T\) we find:
\[PC_{2y}T\mathcal{H}_{\rm ABA}(\mathbf{r})\,(PC_{2y}T)^{-1}=-\mathcal{H}_{\rm ABA}(-C_ {2y}\mathbf{r}), \tag{17}\]
with
\[P^{\prime}=PC_{2y}T=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix}\otimes I_{2\times 2}\mathcal{K}, \tag{18}\]
which leaves \(K\) and \(K^{\prime}\) invariant and acts as an (anti-unitary) particle-hole operator.
Being anti-unitary, \(P^{\prime}\) cannot permute the eigenvalues \((\omega,\omega^{*})\), which implies that an isolated pair of states at \(K\) and \(K^{\prime}\) is degenerate and pinned at zero energy, in the two-dimensional subspace \(P^{\prime}=I_{2\times 2}\mathcal{K}\).
## Appendix C Symmetries of the zero-mode wavefunction
The symmetries of the Hamiltonian \(\mathcal{H}_{\rm ABA}\) discussed in Section II.2 constrain the properties of the zero-mode wavefunction and give insight into the nature of the different magic angles. Considering the action of a symmetry \(g\) on the A sublattice polarized solution \(\mathbf{\psi}\), we have:
\[\mathbf{\psi}_{\mathbf{k}}(\mathbf{r})=e^{i\Xi_{g}}M_{g}\mathbf{\psi}_{g\mathbf{k}}(g\mathbf{r}), \tag{10}\]
where the action of the symmetry \(g\) on layer, momentum and space degrees of freedom is given in table 1. Similar expressions are also obtained for the B sublattice zero mode \(\mathbf{\chi}_{\mathbf{k}}\). The phase \(\Xi_{g}\) is fixed by taking the decoupled limit \(\alpha=0\) and depends on \(\mathbf{k}\).
We will now employ the symmetry relations in Eq. (10) of the model to constrain the zero mode wavefunctions in the \(\mathcal{C}_{\mathcal{A}}\) and \(\mathcal{C}_{B}\) sectors around high-symmetry points \(\mathbf{r}\) in the moire unit cell.
### Odd magic angles
Thanks to Eq. (25) odd magic angles are determined by looking at the behavior of \(\mathbf{\chi}_{\Gamma}(\mathbf{r})\) around \(\mathbf{r}\approx 0\) where the kernel \(\mathcal{D}(\mathbf{r})\) takes the form:
\[\mathcal{D}(\mathbf{r})\simeq\begin{pmatrix}-i\sqrt{2}\partial&-3\alpha z/\sqrt{2 }&0\\ 3\alpha&-i\sqrt{2}\partial&3\alpha\\ 0&3\alpha z/\sqrt{2}&-i\sqrt{2}\partial\end{pmatrix}. \tag{11}\]
The solution of the zero-mode equation is obtained performing a Taylor expansion in \(z\) and \(\bar{z}\). Employing Eq. (10) we readily realize that \(\chi_{\Gamma 2}(C_{3z}\mathbf{r})=\chi_{\Gamma 2}(\mathbf{r})\) and \(\chi_{\Gamma 1/3}(C_{3z}\mathbf{r})=\omega^{*}\chi_{\Gamma 1/3}(\mathbf{r})\). In addition we also have \(\chi_{\Gamma 2}(C_{2y}\mathbf{r})=\chi_{\Gamma 2}^{*}(\mathbf{r})\) and \(\chi_{\Gamma 1/3}(C_{2y}\mathbf{r})=\chi_{\Gamma 3/1}^{*}(\mathbf{r})\) implying:
\[\mathbf{\chi}_{\Gamma}(\mathbf{r})\simeq\chi_{2}\begin{pmatrix}3i\alpha z^{2}/4\\ 1\\ -3i\alpha z^{2}/4\end{pmatrix}+\begin{pmatrix}\chi_{1}^{\prime}\bar{z}\\ 3\sqrt{2}\alpha\Im\chi_{1}^{\prime}z\bar{z}\\ \chi_{1}^{\prime*}\bar{z}\end{pmatrix}, \tag{12}\]
where \(\chi_{2}\equiv\chi_{\Gamma 2}(0)\in\mathbb{R}\) while \(\chi_{1}^{\prime}\equiv\partial\chi_{\Gamma 1}|_{0}\in\mathbb{C}\). Thus, \(C_{3z}\) and \(C_{2y}T\) reduce the perfect flatness of the entire band to the vanishing of a single real number \(\chi_{2}\) [63]. Notice that if we further impose the particle-hole symmetry \(P\) we have \(\Re\chi_{1}^{\prime}=0\). We readily realize that the expression (12) solves \(\mathcal{D}\mathbf{\chi}_{\Gamma}=0\) up to small terms of the order \(z^{2}\bar{z}\). At the magic angle we have \(\chi_{2}=0\) and \(\mathbf{\chi}_{\Gamma}\) has a simple zero for \(\mathbf{r}\to 0\):
\[\mathbf{\chi}_{\Gamma}(\mathbf{r}\to 0)\sim\bar{z}\left(\chi_{1}^{\prime},0,\chi_{1}^{ \prime*}\right)^{T}. \tag{13}\]
We emphasize that the simple zero at the odd magic angles where \(\chi_{2}=0\) persists also in the absence of the particle-hole symmetry \(P\). The vanishing of \(\mathbf{\chi}_{\Gamma}(0)\) implies \(\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)=0\), resulting in the fact that \(\mathbf{\psi}_{K}(0)\) and \(\mathbf{\psi}_{K^{\prime}}(0)\) are collinear. Eq. (10) for \(C_{3z}\) gives \(\psi_{K/K^{\prime}2}(C_{3z}\mathbf{r})=\omega\psi_{K/K^{\prime}2}(\mathbf{r})\) and \(\psi_{K/K^{\prime}1/3}(C_{3z}\mathbf{r})=\psi_{K/K^{\prime}1/3}(\mathbf{r})\). Furthermore, \(K\) and \(K^{\prime}\) zero modes are related by \(\psi_{K^{\prime}1/3}(\mathbf{r})=\psi_{K3/1}^{*}(C_{2y}\mathbf{r})\) and \(\bar{\psi}_{K^{\prime}2}(\mathbf{r})=\psi_{K2}^{*}(C_{2y}\mathbf{r})\). These symmetries reduce \(\mathbf{\psi}_{K}(0)\times\mathbf{\psi}_{K^{\prime}}(0)=0\) to \(\mathbf{\psi}_{K}(0)=\mathbf{\psi}_{K^{\prime}}(0)\), where the identity holds up to a phase, which gives rise to the Chern 2 zero mode wavefunction in Eq. (28).
### Even magic angles
Even magic angles realize a 1+1 decomposition corresponding to one-dimensional flat bands in both chiral and anti-chiral sectors. The four-fold degeneracy of the flat band manifold, see Fig. 3c, originates from a double zero in the spinors \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) and \(\mathbf{\chi}_{K}(\mathbf{r})\) at \(\mathbf{r}_{0}\). The origin of this double zero can be explained by symmetry reasoning. To start with, we expand \(\mathcal{D}^{\dagger}(\mathbf{r})\) around the AB stacking point \(\mathbf{r}_{0}\), finding:
\[\mathcal{D}^{\dagger}(\mathbf{r}+\mathbf{r}_{0})\simeq\begin{pmatrix}-i\sqrt{2}\bar{ \partial}&-3\alpha z/\sqrt{2}&0\\ 3\alpha z/\sqrt{2}&-i\sqrt{2}\bar{\partial}&3\alpha\\ 0&-3\alpha\bar{z}/\sqrt{2}&-i\sqrt{2}\bar{\partial}\end{pmatrix}. \tag{14}\]
Focusing on the chiral sector and fixing the center of the \(C_{3z}\) rotation around \(\mathbf{r}_{0}\) we find Eq. (32) which implies:
\[\mathbf{\psi}_{K^{\prime}}(\mathbf{r}+\mathbf{r}_{0})\simeq\psi_{3}^{(0)}\begin{pmatrix}0\\ -i3\alpha\bar{z}/\sqrt{2}\\ 1\end{pmatrix}+\frac{z^{2}}{2}\begin{pmatrix}\psi_{1}^{(0)\prime\prime}\\ \psi_{2}^{(0)\prime\prime}\\ 0\end{pmatrix}, \tag{15}\]
where \(\psi_{1/2}^{(0)\prime\prime}\equiv\partial^{2}\psi_{K^{\prime}1/2}|_{\mathbf{r}_{0}}\) and we have included terms up to second order in \(\mathbf{r}\). Imposing the \(PC_{2y}T\) symmetry (34) implies \(\Re\psi_{3}^{(0)}=0\), \(\Re\psi_{1}^{(0)\prime\prime}=0\) and \(\Im\psi_{2}^{(0)\prime\prime}=0\).
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Symmetry & \(\mathbf{k}\) & \(\mathbf{r}\) & \(M_{g}\) \\ \hline \hline \(C_{3z}\) & \(C_{3z}\mathbf{k}\) & \(C_{3z}\mathbf{r}\) & \(\mathrm{diag}(\omega,1,\omega)\) \\ \hline \(P\) & \(-\mathbf{k}\) & \(-\mathbf{r}\) & \(\begin{pmatrix}0&0&-1\\ 0&1&0\\ -1&0&0\end{pmatrix}\) \\ \hline \hline \(C_{2y}T\) & \(-C_{2y}\mathbf{k}\) & \(C_{2y}\mathbf{r}\) & \(\begin{pmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{pmatrix}\) \\ \hline \(PC_{2y}T\) & \(C_{2y}\mathbf{k}\) & \(-C_{2y}\mathbf{r}\) & \(\mathrm{diag}(-1,1,-1)\mathcal{K}\) \\ \hline \end{tabular}
\end{table}
Table 1: Action of the symmetries of \(\mathcal{H}_{\rm ABA}\) on the zero-mode \(\mathbf{\psi}\). Rows refer to the different symmetries, from top to bottom \(C_{3z}\), \(P\), \(C_{2y}T\) and \(PC_{2y}T\) the last two involving the complex conjugation \(\mathcal{K}\). Columns show the action of the symmetry on the momentum \(\mathbf{k}\), space \(\mathbf{r}\) coordinates and \(M_{g}\) is the representation of the symmetry acting on the three dimensional layer degree of freedom.
Thus, at the magic angle where \(\psi_{3}^{(0)}=0\) we have
\[\mathbf{\psi}_{K^{\prime}}(\mathbf{r}+\mathbf{r}_{0})\sim z^{2}\left(\psi_{1}^{(0)\prime\prime},\psi_{2}^{(0)\prime\prime},0\right)^{T}/2, \tag{100}\]
enabling one to attach two lowest Landau levels with a simple pole at \(\mathbf{r}_{0}\) to the zero mode spinor \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) (35). Following a similar line of reasoning, one can show that the anti-chiral zero mode \(\mathbf{\chi}_{K}\) exhibits a double zero at \(\mathbf{r}_{0}\).
### Alternative derivation for even magic angles
We provide here an alternative argument for the protection of a twofold degenerate flat band at even magic angles. The construction in Sec. III.2 relies on the quadratic vanishing of \(\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) at \(\mathbf{r}=\mathbf{r}_{0}\) derived in Sec. C.2 which uses the \(PC_{2y}T\) symmetry, and therefore \(P\). However, the zero-mode flat bands also host a specific wavefunction \(\mathbf{\psi}_{\Gamma}^{(1)}(\mathbf{r})=\eta_{K^{\prime}}^{(0)}(z)\eta_{K^{\prime}}^ {(0)}(z)\mathbf{\psi}_{K^{\prime}}(\mathbf{r})\) which exhibits a double zero at \(\mathbf{r}=0\). As we show below, this double zero is protected solely by the \(C_{3z}\) and \(C_{2y}T\) symmetries. To linear order in \(z\), the zero mode equation takes the form:
\[-i\sqrt{2}\bar{\partial}\psi_{\Gamma 2}-3\alpha\bar{z}(\psi_{\Gamma 1}-\psi_{\Gamma 3})/\sqrt{2}=0,\] \[-i\sqrt{2}\bar{\partial}(\psi_{\Gamma 1}-\psi_{\Gamma 3})=0, \tag{101}\] \[-i\sqrt{2}\bar{\partial}(\psi_{\Gamma 1}+\psi_{\Gamma 3})+6\alpha\psi_{\Gamma 2}=0.\]
Solving the system of differential equations and imposing \(C_{3z}\), Eq. (100), we find that the zero mode behaves as:
\[\mathbf{\psi}_{\Gamma}(\mathbf{r})\simeq\psi_{2}\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}-\frac{3\alpha i\bar{z}}{\sqrt{2}}\psi_{2}\begin{pmatrix}1\\ 0\\ 1\end{pmatrix}+z^{2}\begin{pmatrix}\psi_{1}^{\prime\prime}/2\\ 0\\ \psi_{3}^{\prime\prime}/2\end{pmatrix} \tag{102}\]
where \(\psi_{2}\equiv\psi_{\Gamma 2}(0)\) and \(\psi_{1/3}^{\prime\prime}=\partial^{2}\psi_{\Gamma 1/3}|_{0}\). \(C_{2y}T\) gives the additional condition \(\psi_{\Gamma 1/3}(C_{2y}\mathbf{r})=\psi_{\Gamma 3/1}^{*}(\mathbf{r})\) and \(\psi_{\Gamma 2}(C_{2y}\mathbf{r})=\psi_{\Gamma 2}^{*}(\mathbf{r})\) implying \(\psi_{2}\in\mathbb{R}\) and \(\psi_{3}^{\prime\prime}=\psi_{1}^{\prime\prime*}\). In summary, a double zero is expected,
\[\mathbf{\psi}_{\Gamma}(\mathbf{r}\to 0)\sim z^{2}\left(\psi_{1}^{\prime\prime},0,\psi_{1}^{\prime\prime*}\right)^{T}/2 \tag{103}\]
as soon as the real coefficient \(\psi_{2}\) vanishes. It corresponds to the specific solution \(\mathbf{\psi}_{\Gamma}^{(1)}(\mathbf{r})\) introduced above. Breaking \(P\) but keeping \(C_{3z}\) and \(C_{2y}T\) intact, one only moves the magic angle for which \(\psi_{2}=0\) since \(\psi_{2}\) remains a real number. The twofold degenerate flat band in the chiral sector is a direct consequence of Eq. (103), with the analytical structure
\[\mathbf{\psi}_{\mathbf{k}}(\mathbf{r})=\eta_{\mathbf{k}^{\prime}}^{(0)}(z)\eta_{\mathbf{k}-\mathbf{k}^ {\prime}}^{(0)}(z)\mathbf{\psi}_{\Gamma}(\mathbf{r}), \tag{104}\]
alternative to Eq. (35).
|
2310.17719 | Quark stars in $D_3$-$D_7$ holographic model | This work investigates static and dynamical quark star properties within a
$D_3-D_7$ holographic model. We solve the Tolman-Oppenheimer-Volkoff equations
for the quark matter equation of state obtained from the brane configuration.
We determine the mass-radius diagram for a range of model parameters and
compare with recent NICER observational data for the pulsars PSR J$0030+0451$
and PSR J$0740+6620$. Motivated by the GW170817 event detected by the
LIGO-Virgo collaboration, we also calculate the tidal deformability parameter
obtained for each component of the binary star system. We show that quark stars
composed of flavor-independent quark matter derived from the $D_3-D_7$
holographic model are not able to satisfy simultaneously the LIGO-Virgo and
NICER astrophysical bounds. | M. Aleixo, C. H. Lenzi, W. de Paula, R. da Rocha | 2023-10-26T18:26:04Z | http://arxiv.org/abs/2310.17719v3 | # Quark Stars in \(D_{3}\)-\(D_{7}\) Holographic Model
###### Abstract
This work investigates static and dynamical quark star properties within a \(D_{3}-D_{7}\) holographic model. We solve the Tolman-Oppenheimer-Volkoff equations for the quark matter equation of state obtained from the brane configuration and determine the range of model parameters in which the quark star family mass-radius diagram are compatible with recent NICER observational data for the pulsars PSR J0030+0451 and PSR J0740 + 6620. We show that the model supports stable configurations with maximum masses higher than 2 Solar masses, in line with the inferred masses of the pulsars PSR J\(1614-2230\), PSR J\(0348+0432\) and PSR J\(0740+6620\). Furthermore, we show that there is a parametrization in which the tidal deformability parameter obtained for each component of the binary star system is consistent with the GW170817 event detected by the LIGO-Virgo collaboration.
## 1 Introduction
The detection of the gravitational waves (GW) [1] and Gamma-ray burst (GRB) [2] from a binary NS merger, the GW170817 event, brought new valuable information for the description of compact star properties. In particular, the details of the NS structure become more relevant as the separation between the binary companions decreases [3]. In this context, the tidal deformability extracted from the GW170817 data [3; 4; 5] gives new dynamical constraints for NS models.
Understanding the composition of the NS interior is an important astrophysical open problem [6]. In their inner core, which is believed to achieve very high densities, a few times the nuclear saturation density, theoretical models predict the existence of hyperons [7; 8; 9] or deconfined quark matter [10; 11; 12; 13]. Indeed, there are also indirect observational pieces of evidence that open the possibility of forming stable compact stars only with quark matter, known as quark stars (QS), which can play the role of laboratories to investigate the very fundamental physics underlying systems at supranuclear densities, under strong gravitational fields [14; 15; 16; 17; 18]. Therefore, exploring the possibility of describing an NS with exotic content, either as a core of quark matter in hybrid stars [19; 20; 21; 22; 23] or as a QS [24; 25; 26; 27; 28], is an active area of study.
The AdS/CFT correspondence makes it possible to treat strongly-coupled quantum systems in terms of gravitational duals [29]. There are applications of this proposal in many areas, from condensed matter systems [30] to the description of the quark-gluon plasma (QGP) produced in heavy-ion collision experiments [31; 32]. In particular, it is worth mentioning how close the holographic prediction of the shear viscosity-to-entropy ratio of the QGP is to experimental data [33]; this ratio attains the lowest value among any kind of matter in Nature, the nearest to the Kovtun-Son-Starinets limit [34]. The original duality maps the generating functional of the correlation functions of \({\cal N}=4\) super Yang-Mills (SYM) theory in 4D flat space to partition functions of type IIB string theory in AdS\({}_{5}\times\) S\({}^{5}\)[35]. Within the holographic concept, there are many attempts to incorporate some features of quantum chromodynamics (QCD), such as confinement, chiral symmetry breaking, and the hadronic spectrum, besides the phase structure at large baryon-chemical potentials, and the equation of state governing high-density
regimes, as the ones expected to take place in the quarkyonic matter core of NS [36; 37; 38; 39; 40; 41; 42; 43].
Here we are mainly interested in the description of dense QCD matter for the analysis of the QS properties. To this end, we focus on the \(D_{3}-D_{7}\) system [44], where a configuration of \(N_{c}\)\(D_{3}\) branes and \(N_{f}\)\(D_{7}\) probe branes is considered1. By taking the 't Hooft limit, \(N_{c}\to\infty\), \(g_{s}\to 0\) with \(\lambda=g_{s}^{2}\,N_{c}\) fixed and large, in the near-horizon limit of \(D_{3}\) branes, one obtains \(AdS_{5}\times S^{5}\) with the \(N_{f}\)\(D_{7}\)-branes wrapping \(AdS_{5}\times S^{3}\)[45]. The presence of the \(D_{7}\) probe brane generates new degrees of freedom, whose low-energy dynamics are described by the Dirac-Born-Infeld (DBI) action in \(AdS_{5}\times S^{3}\), where the time component of the U(1) gauge field is dual to the chemical potential \(\mu\). These degrees of freedom correspond to open string fluctuations on the \(D_{7}\)-brane. The asymptotic distance between the \(D_{3}\) and \(D_{7}\)-branes is a mass parameter \(m\), which, in this context, is interpreted as the constituent quark mass [46]. This open-open string duality maps operators of mesonic type in the conformal field theory to \(D_{7}\)-brane fluctuations on the gravitational side, in addition to the original AdS/CFT map, whose gravity sector is governed by the near-horizon geometry of the \(D_{3}\)-branes. Gauge-invariant field theory bilinear operators are, in this way, mapped to fluctuations of the \(D_{7}\) probe brane living in the \(\text{AdS}_{5}\times S^{5}\) compactified space.
Footnote 1: \(N_{c}\) and \(N_{f}\) are the number of colors and flavors, respectively.
Considering the grand canonical ensemble, one can study the thermodynamic properties of the model, as implemented in Refs. [45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. The proposal is to obtain the zero-temperature equation of state (EOS) of this holographic model and, using the Tolman-Oppenheimer-Volkoff (TOV) equations for hydrostatic equilibrium, to analyze static and dynamical properties of QS. There is a vast literature where holographic concepts were used to discuss compact stars, as reported by Refs. [46; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65] and references therein.
In what follows, we will obtain the free energy of the flavor fields, decoupled from the adjoint fields. After determining the holographic EOS for the quark matter, we calculate the mass distribution profile and the mass-radius diagram in terms of the constituent quark mass \(m\). By varying the parameter \(m\), we compare the results with the observational data analysis of the Neutron Star Interior Composition Explorer (NICER) on the values of mass and radius of the massive pulsars PSR \(\text{J0030}+0451\)[66; 67] and PSR \(\text{J0740}+6620\)[68; 69]. Finally, we consider an NS merger and compare the tidal deformability obtained in the holographic model with the data that comes from the LIGO-VIRGO Collaboration on the event GW170817 [1].
## 2 The holographic model
In the adopted framework, one considers the 't Hooft limit for the \(D_{3}-D_{7}\) system, obtaining an \(AdS_{5}\times S^{5}\) with the \(D_{7}\)-branes wrapping the \(AdS_{5}\times S^{3}\) space [45]. The metric reads
\[ds^{2}=\frac{u^{2}}{\mathcal{R}^{2}}\eta_{\mu\nu}dx^{\mu}dx^{\nu }+\frac{\mathcal{R}^{2}}{u^{2}}\left(d\bar{\rho}^{2}+\bar{\rho}^{2}d\Omega_{3} ^{2}+dy^{2}+dz^{2}\right)\,, \tag{1}\]
where \(\eta_{\mu\nu}\) is the Minkowski metric in 4 dimensions and \(\mathcal{R}\) is the AdS radius. The holographic coordinate \(u\) is written as \(u^{2}=\bar{\rho}^{2}+y^{2}+z^{2}\) and the coordinates \(\bar{\rho}\) and \(\Omega_{3}\) belong to the \(D_{7}\) brane world volume. The DBI action has the form
\[S_{D_{7}}=-N_{f}\,T_{D_{7}}\,\int d^{8}\xi\,e^{-\phi}\,\sqrt{-\, \det(g+2\pi\,\alpha^{\prime}\,F)}\,, \tag{2}\]
where \(T_{D_{7}}\) is the tension of the \(D_{7}\)-brane, \(g\) is the induced metric on the \(D_{7}\) worldvolume, the AdS radius was set to one, \(\phi\) is the dilaton field, \(\alpha^{\prime}\) is the inverse of the string tension and \(F\) is the field strength of a \(U(1)\) gauge field \(A^{\mu}\), whose only non-vanishing component is the temporal one \(A_{t}(\bar{\rho})\).
Since we are dealing with a supersymmetric intersection, the DBI Lagrangian can be written as
\[\mathcal{L}_{DBI}=-\mathcal{N}\,\bar{\rho}^{3}\,\sqrt{1+z^{\prime 2}\,-A_{t}^{ \prime 2}}\,, \tag{3}\]
where \(\mathcal{N}=\frac{\pi^{2}}{2}\,N_{f}\,T_{D_{7}}\). The variation of the Lagrangian with respect to \(z\) and \(A_{t}\) is zero. Therefore, one has two conserved quantities, \(c\) and \(d\), respectively given by
\[c=-\frac{1}{\mathcal{N}}\,\frac{\partial\mathcal{L}_{DBI}}{ \partial z^{\prime}}=\frac{\bar{\rho}^{3}\,z^{\prime}}{\sqrt{1+z^{\prime 2}-A_{t}^{ \prime 2}}}\,, \tag{4}\] \[d=\frac{1}{\mathcal{N}}\,\frac{\partial\mathcal{L}_{DBI}}{ \partial A_{t}^{\prime}}=\frac{\bar{\rho}^{3}\,A_{t}^{\prime}}{\sqrt{1+z^{ \prime 2}-A_{t}^{\prime 2}}}\,. \tag{5}\]
The holographic dictionary relates the constituent quark mass and the chemical potential \(\mu_{q}\) with the asymptotic boundary of the fields \(A_{t}\) and \(z\), specifically, one has \(A_{t}(\bar{\rho}\to\infty)=\ \mu_{q}\) and \(z(\bar{\rho}\to\infty)=\ m\). After this identification, one can show that the conserved quantities \(c\) and \(d\) are related to the physical quantities \(\mu_{q}\) and \(m\)[45]. At zero temperature, the thermodynamic potential in the grand canonical ensemble can be obtained from the regulated on-shell action [50]. When the chemical potential is greater than the constituent quark mass, the free energy density can be written as [56]
\[\mathcal{F}=\mathcal{F}_{\mathcal{N}=4}+\mathcal{F}_{flavor}\,. \tag{6}\]
The first part of the r.h.s. in Eq. (6) is associated with the color charge and vanishes in the zero temperature
limit [47]. In this case, the flavor contribution reads [55]
\[{\cal F}_{flavor}=-\frac{3}{4\,\pi^{2}}(\mu_{q}^{2}-m^{2})^{2}\,, \tag{7}\]
where the numbers of colors and flavors are both set to three and the 't Hooft coupling constant \(\lambda\) was chosen to reproduce the Stefan-Boltzmann expression at large density.
## 3 Holographic compact stars
Considering the thermodynamic relation between the pressure and the free energy, \(p=-{\cal F}_{flavor}\), together with the expression \(\varepsilon=\mu_{q}\,\frac{\partial p}{\partial\mu_{q}}-p\), where \(\varepsilon\) is the energy density and the label \(q\) is associated to the quark, one obtains the EOS of the holographic model as [56]
\[\varepsilon=3p+\frac{2\sqrt{3}\,m^{2}}{\pi}\sqrt{p}\, \tag{8}\]
where \(p\) is the pressure. To verify that causality is respected in the model, it is useful to write the explicit expression of the sound velocity \(v_{s}\), which is given by
\[v_{s}=\sqrt{\frac{\partial p}{\partial\varepsilon}}=\sqrt{\frac{\pi\sqrt{p}} {\sqrt{3}\,m^{2}+3\pi\sqrt{p}}}\,. \tag{9}\]
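For completeness, Eq. (8) can be obtained directly from the flavor free energy (7). With \(p=-{\cal F}_{flavor}\),

\[p=\frac{3}{4\pi^{2}}\left(\mu_{q}^{2}-m^{2}\right)^{2}\,,\qquad\mu_{q}\frac{\partial p}{\partial\mu_{q}}=\frac{3}{\pi^{2}}\,\mu_{q}^{2}\left(\mu_{q}^{2}-m^{2}\right)\,,\]

and, using \(\mu_{q}^{2}-m^{2}=\frac{2\pi}{\sqrt{3}}\sqrt{p}\),

\[\varepsilon=\mu_{q}\frac{\partial p}{\partial\mu_{q}}-p=\frac{3}{\pi^{2}}\left(m^{2}+\frac{2\pi}{\sqrt{3}}\sqrt{p}\right)\frac{2\pi}{\sqrt{3}}\sqrt{p}-p=3p+\frac{2\sqrt{3}\,m^{2}}{\pi}\sqrt{p}\,,\]

which reproduces Eq. (8). Note also that Eq. (9) implies \(v_{s}^{2}\leq 1/3\) for any pressure, with the conformal value \(1/3\) reached only asymptotically as \(p\to\infty\), so the causal bound is never violated.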
To ensure the hydrostatic equilibrium for a spherically symmetric distribution of mass, one has to solve the TOV equations, written in natural units (\(G=c=1\)), given by
\[\frac{dp(r)}{dr} = -\frac{M(r)\,\rho(r)}{r^{2}}\left(1+\frac{4\pi r^{3}p(r)}{M(r)} \right)\left(1+\frac{p(r)}{\varepsilon(r)}\right) \tag{10}\] \[\times\,\left(1-\frac{2M(r)}{r}\right)^{-1},\] \[\frac{dM(r)}{dr} = 4\pi r^{2}\rho(r), \tag{11}\]
where \(M(r)\) is the Misner-Sharp mass inside the radius \(r\) and \(\rho(r)\) is the mass density, which in the units \(G=c=1\) coincides with the energy density \(\varepsilon(r)\).
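A minimal numerical sketch of this procedure is given below (assuming NumPy and SciPy; the central-pressure grid, the surface cutoff, and the chosen value of \(m\) are illustrative). It integrates Eqs. (10)-(11) outward from the center with the EOS of Eq. (8), identifying \(\rho\) with \(\varepsilon\), and returns one point of the mass-radius diagram per central pressure.

```python
import numpy as np
from scipy.integrate import solve_ivp

HBARC = 197.327                       # MeV fm
MEV4_TO_KM2 = 1.323e-6 / HBARC**3     # converts MeV^4 to 1/km^2 in G = c = 1 units

def eps_of_p(p, m):
    """Holographic EOS, Eq. (8): p and eps in MeV^4, m in MeV."""
    return 3.0 * p + (2.0 * np.sqrt(3.0) * m**2 / np.pi) * np.sqrt(p)

def tov_rhs(r, y, m):
    """Right-hand side of Eqs. (10)-(11); r in km, y = (p [MeV^4], M [km])."""
    p, M = y
    if p <= 0.0:
        return [0.0, 0.0]
    P = MEV4_TO_KM2 * p                # pressure in geometric units
    E = MEV4_TO_KM2 * eps_of_p(p, m)   # energy density in geometric units
    # Same as Eq. (10) after combining the factors and setting rho = eps
    dP_dr = -(E + P) * (M + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * M))
    return [dP_dr / MEV4_TO_KM2, 4.0 * np.pi * r**2 * E]

def star(p_central, m):
    """Integrate from the center until p(R) ~ 0; returns (R [km], M [Msun])."""
    surface = lambda r, y, m: y[0] - 1e-10 * p_central
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, [1e-6, 50.0], [p_central, 0.0], args=(m,),
                    events=surface, max_step=0.01, rtol=1e-8)
    return sol.t[-1], sol.y[1, -1] / 1.4766   # 1 Msun = 1.4766 km

if __name__ == "__main__":
    m = 300.0                                 # MeV, illustrative constituent quark mass
    for pc in np.geomspace(1e7, 1e10, 8):     # central pressures in MeV^4
        R, M = star(pc, m)
        print(f"p_c = {pc:9.3e} MeV^4 -> R = {R:5.2f} km, M = {M:5.3f} Msun")
```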
## 4 Tidal deformability
The LIGO-Virgo collaboration detected GW [1] and GRB from a binary NS merger [2], the GW170817 event. This system provides valuable information concerning the deformations due to the gravitational interaction between the two involved neutron stars [70], which can be given, to linear order, in terms of the dimensionless tidal deformability parameter \(\Lambda\)[71], reading
\[\Lambda=\frac{Q_{ij}}{\varepsilon_{ij}}\,, \tag{12}\]
where \(Q_{ij}\) is the quadrupole moment and \(\varepsilon_{ij}\) is the tidal field. The induced quadrupole moment is associated with the deformation of a spherically symmetric object with respect to the flattening of the poles. In terms of the second Love number \(k_{2}\), we have
\[\Lambda=\frac{2}{3}\,k_{2}\,C^{-5}, \tag{13}\]
where \(C=M/R\) is the compactness. On a quasi-static regime, the second Love number is given by [71]
\[k_{2} = \frac{8C^{5}}{5}(1\!-\!2C)^{2}\left(2\!+\!2C(y_{R}\!-\!1)\!-\!y_ {R}\right)\] \[\times\!\left\{2C\left(6-3y_{R}+3C(5y_{R}-8)\right)\right.\] \[\left.+4C^{3}\left(13-11y_{R}+C(3y_{R}-2)+2C^{2}(1+y_{R})\right)\right.\] \[\left.+3(1\!-\!2C)^{2}\left(2\!-\!y_{R}\!+\!2C(y_{R}\!-\!1) \right)\ln\left(1\!-\!2C\right)\right\}^{-1},\]
where \(y_{R}=y(R)\). The function \(y(r)\) is a solution of the differential equation \(r\,(dy/dr)+y^{2}+y\,F(r)+r^{2}\,Q(r)=0\), with
\[F(r) = \frac{1-4\pi r^{2}\left(\varepsilon(r)-p(r)\right)}{g(r)}\,,\] \[Q(r) = \frac{4\pi}{g(r)}\left(5\varepsilon(r)+9p(r)+\frac{\varepsilon(r)+p(r)}{v_{s}^{2}(r)}-\frac{6}{4\pi r^{2}}\right)-4\left(\frac{M(r)+4\pi r^{3}p(r)}{r^{2}g(r)}\right)^{2}\,,\] \[g(r) = 1-\frac{2M(r)}{r}\,. \tag{15}\]
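As a small illustration of how Eqs. (13)-(14) are used in practice, the helper below (a sketch assuming NumPy; the numerical inputs in the example call are purely illustrative) evaluates \(k_{2}\) and \(\Lambda\) once the compactness \(C\) and the boundary value \(y_{R}\) are known, the latter obtained by integrating the \(y(r)\) equation together with the TOV system of Sec. 3.

```python
import numpy as np

def love_number_k2(C, yR):
    """Second Love number k2 of Eq. (14), for compactness C = M/R and yR = y(R)."""
    num = (8.0 * C**5 / 5.0) * (1.0 - 2.0 * C)**2 * (2.0 + 2.0 * C * (yR - 1.0) - yR)
    den = (2.0 * C * (6.0 - 3.0 * yR + 3.0 * C * (5.0 * yR - 8.0))
           + 4.0 * C**3 * (13.0 - 11.0 * yR + C * (3.0 * yR - 2.0)
                           + 2.0 * C**2 * (1.0 + yR))
           + 3.0 * (1.0 - 2.0 * C)**2 * (2.0 - yR + 2.0 * C * (yR - 1.0))
             * np.log(1.0 - 2.0 * C))
    return num / den

def tidal_deformability(C, yR):
    """Dimensionless tidal deformability of Eq. (13): Lambda = (2/3) k2 / C^5."""
    return 2.0 * love_number_k2(C, yR) / (3.0 * C**5)

# Illustrative numbers only; C and yR must come from the TOV + y(r) integration.
print(tidal_deformability(C=0.15, yR=2.0))
```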
In addition, we define the chirp mass parameter \({\cal M}\) as
\[{\cal M}\equiv\left(\frac{m_{1}^{3}\,m_{2}^{3}}{m_{1}+m_{2}}\right)^{\frac{1}{5}}\,, \tag{16}\]
which is a function of the masses of the two NS companions, \(m_{1}\) and \(m_{2}\). This parameter sets the rate at which energy is carried away by the gravitational waves. Indeed, the tidal deformability analysis of the GW170817 observational data from LIGO-Virgo is made for a specific value of the system chirp mass [3].
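As a quick numerical illustration of Eq. (16) (not taken from the paper), note that an equal-mass binary with \(m_{1}=m_{2}\simeq 1.365\,M_{\odot}\) reproduces the GW170817 chirp mass \(\mathcal{M}\simeq 1.188\,M_{\odot}\) used later in the comparison with observations:

```python
def chirp_mass(m1, m2):
    """Chirp mass, Eq. (16): ((m1^3 m2^3) / (m1 + m2))^(1/5)."""
    return (m1**3 * m2**3 / (m1 + m2)) ** 0.2

print(chirp_mass(1.365, 1.365))   # ~ 1.188 (solar masses)
```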
## 5 Results
An important quantity to be analyzed is the speed of sound of the model. With this information, it is possible to check whether the model respects the causality principle (\(\partial p/\partial\varepsilon<1\)). Fig. 1 presents the squared speed of sound, \(v_{s}^{2}\), as a function of the energy density \(\varepsilon\). As can be seen, none of the parametrizations violates the causality principle.
The solutions of the system of differential equations given by Eqs. (8), (10) and (11) have been obtained for constituent quark masses ranging from \(m=300\) MeV to \(m=360\) MeV. The initial conditions used are \(p(0)=p_{c}\) and \(M(0)=0\), where \(p_{c}\) is the central pressure. The radius \(R\) of the star is defined by \(p(R)=0\). The outcome is the \(M(R)\) sequence of compact stars compatible with the adopted model. The rationale behind the choice of the range of values for \(m\) is the following: since \(m\) is interpreted as the constituent quark mass, a typical value can be obtained from the infrared value of the quark mass function [72; 73], whose value of about 345 MeV was obtained with lattice QCD calculations of the quark propagator [74]. The proposal of this work is to explore a range of values around this number in order to see whether the model is able to describe observational data on static and dynamical properties of NS. It will be shown that for \(m=300\) MeV the maximum mass reaches \(2M_{\odot}\), whereas for \(m=360\) MeV the model can describe the deformability parameter of the binary star system for the GW170817 event.
Figs. 2 and 3 present the radial profiles for the maximum star mass of each parametrization. Fig. 2 shows that the maximum central pressure is obtained for \(m=360\) MeV, while the minimum is attained for \(m=300\) MeV. Fig. 3 illustrates that the radius of the maximum star mass decreases monotonically with the constituent quark mass.
Fig. 4 shows the mass-radius sequences of QS using the \(D_{3}-D_{7}\) holographic EOS. Each sequence of stars was obtained with a particular value of the constituent quark mass, ranging from \(m=300\) MeV to \(m=360\) MeV. In this figure it is clear that increasing the constituent quark mass decreases the value of the maximum stellar mass.
Note that within this framework it is even possible to achieve masses higher than 2 Solar masses (for \(m\leq 300\) MeV), which is in agreement with data reported in Refs. [75; 76; 77]. In addition, our computations have been compared with recent observational data analyses from NICER. The millisecond pulsars considered are PSR J0030\(+\)0451 [66; 67] and PSR J0740\(+\)6620 [68; 69]. Independent analyses for PSR J0030\(+\)0451 give inferred masses of \(1.34^{+0.15}_{-0.16}M_{\odot}\)[66] and \(1.44^{+0.15}_{-0.14}M_{\odot}\)[67], while the radius estimates are \(12.71^{+1.14}_{-1.19}\) km [66] and \(13.02^{+1.24}_{-1.06}\) km [67]. For PSR J0740\(+\)6620, NICER reported the value of \(2.072^{+0.067}_{-0.066}M_{\odot}\)[69] for the mass, while the radius estimates are \(13.7^{+2.6}_{-1.5}\) km [68] and \(12.39^{+1.30}_{-0.98}\) km [69]. Those ranges of values are represented by the blue (PSR J0030\(+\)0451) and red (PSR J0740\(+\)6620) regions of Fig. 4. One can see that the model is compatible with the observational data.
The region of stability of the compact star sequence can be obtained from Fig. 5. The maximum mass for each parametrization is shown by a circle. All the stars to the left of this point are stable, since \(\frac{\partial M}{\partial\varepsilon_{c}}>0\)[78]. Here the static stability criterion is employed, since the compact stars under consideration have only one phase.
For each parametrization, one can solve the TOV equations taking into account the holographic EOS. We use those solutions for \(\varepsilon(r)\) and \(p(r)\) to calculate the relativistic tidal deformability. To this end, we use Eqs. (13) and (14), performing the integration from the center (\(r=0\)) to the star's surface (\(r=R\)). The outcomes are represented in Fig. 6. For the constituent quark
Figure 1: Sound velocity for each parametrization. Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV.
Figure 2: QS radial profiles for the maximum stellar mass of each parametrization. Pressure versus radial coordinate. Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV.
mass of 360 MeV, the tidal deformability obtained is consistent with the GW170817 event.
Fig. 7 presents the dimensionless tidal deformability parameters, \(\Lambda_{1}\) - \(\Lambda_{2}\), for the components of the binary compact star mergers, obtained with the chirp mass of the GW170817 event, \(\mathcal{M}=1.188^{+0.004}_{-0.002}M_{\odot}\). The outcomes are compared against the LIGO-Virgo confidence curves of 50% and 90% levels in the low-spin prior scenario [3]. For the constituent quark masses of 360 MeV, the model reproduces the observational data of the GW170817 event regarding tidal deformability.
Figure 4: (Color online) QS mass as a function of its radius for different values of \(m\). Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV. Red and blue regions represent 95% confidence intervals for the masses and radii PSR J0030+0451 and PSR J0740+6620 measured by NICER [66; 67; 68; 69]. The green horizontal line includes all observed masses over \(2M_{\odot}\), including the pulsars PSR J1614-2230, PSR J0348+0432 and PSR J0740+6620 [75; 76; 77].
Figure 5: QS mass \(M\) versus central density \(\epsilon_{c}\) for different values of \(m\). The maximum mass for each parametrization is shown by a circle. Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV.
Figure 3: QS radial profiles for the maximum stellar mass of each parametrization. Mass inside a volume of radius \(r\) versus radial coordinate. Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV.
Figure 6: (Color online) The tidal deformability parameter for the heaviest companion of the NS binary system versus the total stellar mass for different values of \(m\). Dot-dashed line: \(m=300\) MeV. Dotted line: \(m=320\) MeV. Dashed line: \(m=340\) MeV. Solid line: \(m=360\) MeV. Observational data from GW170817 event [3; 4; 5].
## 6 Summary and Concluding Remarks
In this work, we analyzed both static and dynamical QS properties within a holographic description. The mass-radius relation and the tidal deformability parameter were compared against recent observational data. We solved the TOV equations using the EOS of the \(D_{3}-D_{7}\) holographic model for describing the quark matter. In this framework, one has \(AdS_{5}\times S^{5}\) with the \(N_{f}\)\(D_{7}\)-branes wrapping \(AdS_{5}\times S^{3}\)[45] and the constituent quark mass is the only adjusted parameter of the EOS. We study the properties of the system for a range of values from \(m=300\) MeV to \(m=360\) MeV.
We obtained the \(M(R)\) sequence of compact stars, highlighting the regions of stability, see Fig. 5. It is shown that the holographic description is compatible with NICER observations for the pulsars PSR J\(0030+0451\) and PSR J\(0740+6620\). Decreasing the constituent quark mass value gives a higher maximum stellar mass, i.e., a heavier last stable compact star. In particular, for \(m=300\) MeV, the holographic model can achieve the observed value of two Solar masses [75; 76; 77].
In addition, we showed that the tidal deformability parameter for the constituent quark mass of \(m=360\) MeV is compatible with the values associated with the GW170817 event observed by the LIGO-Virgo collaboration (see Figs. 6 and 7). The maximum mass for this parametrization is \(1.4M_{\odot}\) and belongs to a region of NICER data (blue region of Fig. 4). On the other hand, our exploratory study suggests that this holographic model is not able to reproduce simultaneously the tidal deformability of the GW170817 event and a stellar mass of \(2M_{\odot}\). This indicates that further improvements should be implemented, such as, for example, considering a possible contribution of strange quarks to the equation of state [24].
QS can describe realistic astrophysical objects, whose quarkyonic matter in the core may carry effects of quantum gravity in AdS/CFT, as reported in Ref. [79]. The conformal traceless tensor fields, the decay rate of sound waves, the bulk viscosity, the pressure, and the energy density of the QGP were shown to support meaningful quantum corrections due to a functional measure, also encoding the instability of the QGP. Within this framework, the results in Secs. 3 - 5 may be slightly refined when very high-energy processes set in, making the thermodynamic variables acquire these quantum gravity effects. For instance, quantum gravity effects account for Eq. (10) in Sec. 3 and the functions \(F(r)\) and \(G(r)\) in Sec. 4 to be corrected up to \(\sim 0.86\%\), when compared to the standard QS without quantum gravity corrections in AdS/CFT. These effects will not significantly change the results obtained in our work, on the scale of energy here studied. Finally, the stability of QS, in particular displayed in Fig. 5, can be alternatively probed by information entropy methods, including the configurational entropy [80; 81] and the holographic entanglement entropy in QCD [82].
_The authors thank Niko Jokela and Carlos Hoyos for fruitful discussions. M.A. acknowledges the partial support of the National Council for Scientific and Technological Development CNPq (Grant No. 400879/2019-0). C. H. Lenzi is thankful to the Sao Paulo Research Foundation FAPESP (Grant No. 2020/05238-9). W.d.P. acknowledges the partial support of CNPq (Grant No. 313030/2021-9) and the Coordination for the Improvement of Higher Education Personnel CAPES (Grant No. 88881.309870/2018-01). R.d.R. is grateful to FAPESP (Grant No. 2021/01089-1 and No. 202/01734-7), CNPq (Grant No. 303390/2019-0), and CAPES-PrInt (Grant No. 88887.897177/2023-00), for partial financial support; and to Prof. Jorge Noronha and the Illinois Center for Advanced Studies of the Universe, University of Illinois at Urbana-Champaign, for the hospitality._
|
2306.12187 | Hydrophobically gated memristive nanopores for neuromorphic applications | Brain-inspired computing has the potential to revolutionise the current von
Neumann architecture, advancing machine learning applications. Signal
transmission in the brain relies on voltage-gated ion channels, which exhibit
the electrical behaviour of memristors, resistors with memory. State-of-the-art
technologies currently employ semiconductor-based neuromorphic approaches,
which have already demonstrated their efficacy in machine learning systems.
However, these approaches still cannot match performance achieved by biological
neurons in terms of energy efficiency and size. In this study, we utilise
molecular dynamics simulations, continuum models, and electrophysiological
experiments to propose and realise a bioinspired hydrophobically gated
memristive nanopore. Our findings indicate that hydrophobic gating enables
memory through an electrowetting mechanism, and we establish simple design
rules accordingly. Through the engineering of a biological nanopore, we
successfully replicate the characteristic hysteresis cycles of a memristor
and construct a synaptic device capable of learning and forgetting. This
advancement offers a promising pathway for the realization of nanoscale, cost-
and energy-effective, and adaptable bioinspired memristors. | Gonçalo Paulo, Ke Sun, Giovanni di Muccio, Alberto Gubbiotti, Blasco Morozzo della Rocca, Jia Geng, Giovanni Maglia, Mauro Chinappi, Alberto Giacomello | 2023-06-21T11:33:40Z | http://arxiv.org/abs/2306.12187v3 | # Hydrophobically gated memristive nanopores for neuromorphic applications
###### Abstract
Brain-inspired computing has the potential to revolutionise the current von Neumann architecture, advancing machine learning applications. Signal transmission in the brain relies on voltage-gated ion channels, which exhibit the electrical behaviour of memristors, resistors with memory. State-of-the-art technologies currently employ semiconductor-based neuromorphic approaches, which have already demonstrated their efficacy in machine learning systems. However, these approaches still cannot match performance achieved by biological neurons in terms of energy efficiency and size. In this study, we utilise molecular dynamics simulations, continuum models, and electrophysiological experiments to propose and realise a bioinspired hydrophobically gated memristive nanopore. Our findings indicate that hydrophobic gating enables memory through an electrowetting mechanism, and we establish simple design rules accordingly. Through the engineering of a biological nanopore, we successfully replicate the characteristic hysteresis cycles of a memristor and construct a synaptic device capable of learning and forgetting. This advancement offers a promising pathway for the realization of nanoscale, cost- and energy-effective, and adaptable bioinspired memristors.
With the current upsurge in the production and deployment of artificial intelligence technologies, it has become critical [1] to circumvent the bottleneck associated with processing and storing data in separate units, which is specific to the von Neumann computer architecture [2]. Biology, which initially motivated the birth of artificial neural networks, is currently serving as a source of additional inspiration for a different paradigm in computer architectures, _neuromorphic computing_, which could boost the performance and sustainability of artificial intelligence [2, 3, 4].
Neuromorphic computing, as the name suggests, is shaped after the architecture of the brain, in which storage and processing of data happen in the same unit [5]. The most advanced technologies to date [6, 7, 8, 9, 10] implement this paradigm exploiting semiconductors; their applicability for machine learning systems has already been demonstrated [11, 12]. Even though these approaches have significantly lowered the power consumption of typical neuromorphic calculations, they are still far from the performance of biological neurons [13].
The brain indeed requires just a few watts to run and its basic operations are orchestrated by nanofluidic devices - ion channels [14] - transmembrane proteins which transmit signals in the form of ion currents. The non-linear behaviour that is essential for brain functions originates in the history-dependent conductance of the ion channels that are found in neurons, enabling the action potential, as first explained by Hodgkin and Huxley [15]. Specifically, ion channels in neurons can "gate", i.e., switch on or off, depending on the transmembrane potential [16]. Voltage gating typically occurs by complex action-at-a-distance mechanisms in which information is propagated from a voltage sensor domain to the ion-permeable pore, which is actuated by sterical occlusion [17].
From an electrical standpoint, ion channels behave as _memristors_ (memory resistors) [18], circuit elements whose resistance depends on the internal state of the system [19, 20]. Different architectures have been proposed to produce iontronic nanofluidic memristors [21, 22, 23, 24], in which ions act as charge (and information) carriers instead of electrons. Iontronic platforms have the potential of being multichannel, as their natural counterpart [25], with information flowing in parallel through the same circuit encoded by different ions.
In this work we propose a hydrophobically gated memristive nanopore (HyMN), with an architecture inspired by biological ion channels; a drastic simplification is introduced in the gating mechanism, which relies on the formation of nanoscale bubbles to switch the ion currents, thus requiring no moving parts [26, 27]. Voltage can be used to control the conductance of the nanopore, imparting _memory by electrowetting_. We engineered a HyMN prototype mutating a biological nanopore, FraC. The device produces the pinched hysteresis loop in the voltage-current curve, which is the signature of memristors [19, 20], and can behave as a synapse, learning and forgetting. This robust and flexible design combines the advantage of being an iontronic memristor with the simplicity of a 1D system, showing promise as a basic element for innovative nanofluidic computing.
## From wet/dry bistability to memristive behaviour
### Electrowetting of a single nanopore
To show how hydrophobic gating can enable memristive behaviour, we consider a simple nanopore model (Fig. 1a), consisting of a hydrophobic cylinder with a diameter of 1 nm and a length of 2.8 nm, mimicking the sizes of biological nanopores [16]. When immersed in water, the nanopore lumen can be found either in the dry or in the wet states (Fig. 1a), due to its small size and hydrophobicity [27]. The dry state is characterised by the presence of a vapour bubble, which precludes the flow of water and ions, resulting in a non-conductive (gated) pore [26, 29, 30].
The wet and dry states correspond to two different minima of the free energy, separated by a barrier. In the following, we will refer to the global minimum as the stable/most probable state, while the metastable state corresponds to the local minimum. The full (equilibrium) free energy profile, obtained by Restrained Molecular Dynamics (RMD) [28], is reported in Fig. 1b (solid black line), and in Supp. Fig. S1. For our model pore, the global free energy minimum corresponds to the dry (nonconductive) state; the free energy barrier for wetting is about \(18\,k_{B}T\), while the drying one is less than \(5\,k_{B}T\).
By applying an external voltage \(\Delta V\) across the nanopore, it is possible to shift the free energy profile towards the wet state (Fig. 1b) thereby changing its conductance, for details see Supp. Note S1 and Supp. Fig. S2. The origin of this effect is electrowetting - the electric field favours the wetting of the pore by electrostricting the water meniscus [31]. The voltage at which the stable state switches from the dry to the wet is indicated as \(V_{c}\). For \(\Delta V>V_{c}\), the system is preferably in the wet, conductive state. In analogy to electronic memristors [18], the voltage-dependence of the ionic conductance of the nanopore shown in Fig. 1a is the crucial ingredient for developing a hydrophobically gated memristor.
In Fig. 1c we report the wetting and drying transition rates (\(k_{w}\) and \(k_{d}\), respectively) computed at different \(\Delta V\), which are fundamental to assess the memory behaviour of the system; the protocol to accurately estimate these rates is discussed in Supp. Note S2 and Supp. Fig. S3. Indeed, the emergence of memory is due to the finite time that the system takes to transition from the metastable state to the stable state. Consider for example a pore which, at a moment \(\tau_{0}\), is in the dry state: by switching instantaneously the voltage to \(\Delta V>V_{c}\), the system will "remember" the previous dry state for a certain time \(\tau_{w}=1/k_{w}\). In this dry state ions cannot translocate through the pore and the nanopore is non-conductive even if \(\Delta V>V_{c}\). However, if the previous condition of the system was wet, at the same voltage the nanopore would be conductive. In the next section, we will show how the dynamic modulation of the wet/dry bistability of an ensemble of HyMNs generates a pinched IV loop, the hallmark of memristors.
### Collective Behaviour and Pinched Hysteresis Loop
Figure 1a-c shows that a single model pore can only be observed in a conductive (wet) or a non-conductive (dry) state. Instead, an array (ensemble) of pores would have a distribution of wet and dry pores, whose ratio depends, _inter alia_, on the applied voltage. The transition from single pore to the ensemble behaviour is discussed in Supp. Fig. S4, showing that just some tens of pores are needed to observe a continuous response as opposed to a stochastic one.
The average per-pore conductance of an ensemble of pores, \(G=\frac{1}{N_{p}}\frac{I}{\Delta V}\), with \(N_{p}\) the number of pores, \(I\) the total current, and \(\Delta V\) the applied voltage at a given moment, is given by
\[G(\Delta V,t)=g_{0}\,n(\Delta V,t)\;, \tag{1}\]
where \(g_{0}\) is the single wet pore conductance and \(n\) the probability that a single pore is wet. \(n\) is history dependent and, in the limit of an infinite number of pores, its evolution can be described by a master equation
\[\frac{dn}{dt}=(1-n)\,k_{w}-n\,k_{d}\;, \tag{2}\]
with \(k_{w/d}(\Delta V)\) the voltage-dependent wetting/drying rates in Fig. 1c.
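A minimal numerical sketch of Eqs. (1) and (2) is given below. It does not use the rate data of Fig. 1c: the exponential voltage dependence of \(k_{w}\) and \(k_{d}\), the prefactors, and the maximum voltage are illustrative assumptions, chosen only to reproduce the three qualitative regimes discussed next (quasi-equilibrium, hysteretic, and frozen).

```python
import numpy as np

g0 = 1.0                                    # single wet-pore conductance (arbitrary units)
k_w0, k_d0 = 1.0e2, 1.0e6                   # assumed zero-voltage wetting/drying rates (1/s)
a_w, a_d = 12.0, 6.0                        # assumed voltage sensitivities (1/V)

k_w = lambda V: k_w0 * np.exp(a_w * abs(V))     # electrowetting speeds up wetting
k_d = lambda V: k_d0 * np.exp(-a_d * abs(V))    # and slows down drying

def iv_cycle(freq, V_max=1.0, n0=0.0, steps=20000):
    """One triangular voltage cycle; returns the per-pore (V, I) from Eqs. (1)-(2)."""
    T = 1.0 / freq
    t = np.linspace(0.0, T, steps)
    dt = t[1] - t[0]
    V = V_max * np.interp(t / T, [0.0, 0.25, 0.5, 0.75, 1.0], [0.0, 1.0, 0.0, -1.0, 0.0])
    n = np.empty_like(t)
    n[0] = n0                               # start with all pores dry
    for i in range(1, steps):
        kw, kd = k_w(V[i - 1]), k_d(V[i - 1])
        n_eq = kw / (kw + kd)
        # exact update of dn/dt = (1-n) k_w - n k_d for rates frozen over one step
        n[i] = n_eq + (n[i - 1] - n_eq) * np.exp(-(kw + kd) * dt)
    return V, g0 * n * V

for f in (1.0e1, 1.0e4, 1.0e8):             # slow, intermediate, fast cycling
    V, I = iv_cycle(f)
    area = abs(np.sum(0.5 * (I[1:] + I[:-1]) * np.diff(V)))   # enclosed IV-loop area
    print(f"f = {f:.0e} Hz, hysteresis loop area ~ {area:.3e}")
```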
In Fig. 1d we report three current-voltage (IV) curves obtained by the numerical integration of Eqs. 1 and 2 under a saw-tooth potential at different cycling frequencies. The picture shows that an array of HyMNs has three possible regimes: i) at low frequencies (10 Hz, orange line), the array behaves as a non-linear resistor, because the system has enough time to visit both the wet and dry states with the equilibrium probabilities; ii) at high frequencies (100 MHz, dashed pink), the system behaves as an ohmic resistor with finite or infinite resistance, depending on its initial wet or dry state, respectively; in this regime the voltage variation is too fast to allow the system to
move away from the local equilibrium; iii) at intermediate frequencies (10 kHz, blue) the system displays a pinched-loop hysteresis, i.e., memristive behaviour. This happens because the cycling frequency does not allow a complete equilibration of all the pores of the array to their stable state. As a consequence, the number of the wet pores at a given moment strongly depends on the previous state, i.e., the system has memory. For instance, starting with all dry pores, the total current will increase with increasing voltage, but with some delay as compared to the equilibrium wet pore probability: cf. the blue and orange lines of Fig. 1d; for \(\Delta V>\Delta V_{c}\). The inset of Fig. 1d shows that the memristive behaviour is observed over a rather broad range of frequencies; in this example, \(10^{2}<f<10^{7}\) Hz. A number of parameters can influence this range and the location of the maximum, see also Supp. Fig. S5. Memristors can be classified in different types depending on the shape of their IV curves [24, 32], see Supp. Fig. S6. Our model nanopore is unipolar, which is expected based on the system symmetry.
### Design criteria for HyMNs
The previous analysis demonstrated that a pinched hysteresis loop - the fingerprint of memristors - can be produced by an ensemble of hydrophobically gated nanopores.
Based on the physical insights into the gating mechanism, we identify four design criteria that a nanopore must satisfy to behave as an efficient HyMN:
1. The pore must be preferentially dry at \(\Delta V=0\);
2. The pore must undergo electrowetting before the maximum voltage \(\Delta V^{*}\) that the system can sustain; e.g., for biological pores embedded in lipid membranes no more than 300 mV can usually be applied [33], while solid-state membranes can bear voltages up to some volts, depending on thickness and other parameters [34, 35];
3. The pore must dry "quickly" at \(\Delta V=0\) to ensure a fast transition from the wet state to the dry state;
4. The pore must wet "quickly" at the maximum voltage \(\Delta V^{*}\) to ensure a fast transition from the dry state to the wet state.
The four conditions above require the fine-tuning of a non-linear combination of different physical properties of the system, like the contact angle between the solid and the liquid-vapour interface, the radius and length of the nanopore, and the susceptibility of the pore to wetting by applying a voltage. To explore how different geometries
Figure 1: **Simple model of a memristive hydrophobic nanopore.****a)** Atomistic representation of a cylindrical hydrophobic nanopore immersed in water. The nanopore can switch between the wet (conductive) and dry (non-conductive) state with rates \(k_{w}\) and \(k_{d}\), respectively. These rates depend on the applied voltage across the membrane, \(\Delta V\). **b)** Free energy profile computed as a function of the number of water molecules inside the nanopore (water filling \(\xi_{w}\)), normalised by the number of water molecules in the wet state. The equilibrium profile (at \(\Delta V=0\)) is computed by Restrained Molecular Dynamics (RMD) [28], while the voltage dependence is estimated by using the model explained in Supp. Note S1. At \(\Delta V=0\), the global minimum (i.e. the most probable state) corresponds to the dry state. Beyond the transition voltage, \(\Delta V>V_{c}\), the wet state becomes favoured; for this specific case \(V_{c}=1.2\) V. The error bars of the profile computed using RMD are comparable to the width of the line, see Supp. Fig. S1. **c)** Variation of wetting and drying rates with \(\Delta V\). The drying rate, at \(\Delta V=V_{c}\), is 5 orders of magnitude lower than its value at \(\Delta V=0\), while the wetting rate is 3 orders of magnitude higher. **d)** Current-voltage (IV) curve for an array of independent nanopores under saw-tooth voltage cycles (top inset), at different frequencies. Three possible regimes are shown depending on the cycling frequency, going from a non-linear resistance (slow, 10 Hz) to a linear ohmic behaviour (fast, 100 MHz). At intermediate frequencies, the system shows pinched hysteresis, i.e., memory. The area of the hysteresis as a function of the cycling frequency is reported in the bottom inset, showing a maximum around 100 kHz.
and physical parameters affect the wetting and drying dynamics, we constructed a macroscopic model based on classical nucleation theory to estimate the wetting and drying rates, taking into account the effect of the voltage; the full details of the model are described in Supp. Note S3 and Supp. Fig S6. Within this model, we find that the range of parameters satisfying the previously expressed requirements is restricted to narrow (sub)nanometer-sized pores and to aspect ratios close to unity, see Fig. 2a.
The drying time depends mostly on the diameter of the pore and its contact angle, while the wetting time depends also on the length of the pore. These characteristic times restrict the size of the pore to the nanoscale, as pores with larger diameters would not dry once wet and longer pores would require too high voltages to wet. The contact angle is the dominant factor controlling the allowed aspect ratio to have a functioning HyMN, see Fig. 2b. Some biological channels fall in or near the region where hydrophobic gating is possible, and in fact some are known to do so, like CRAC [36] and BK [37] channels. In the next section, we explore the biological FraC channel, whose approximate dimensions are represented by a white ellipse in Fig. 2b. When allowing for higher maximum voltages \(\Delta V^{*}\), the range of aspect ratios can be significantly expanded, see Fig. 2c; the ellipse denotes the approximate position in parameter space of the model pore in Fig. 1, which indeed displays wetting around \(\Delta V=1.2\) V.
### A biological HyMN: the engineered FraC nanopore
To put the above predictions to the test, we engineered a biological nanopore - the Fragaceatoxin C (FraC) - to hydrophobically gate. The wild type FraC is a biological toxin found in the sea anemone Actinia fragacea [38], which has been recently used in single-molecule nanopore sensing [39]; its stability allows the pore to be easily engineered by introducing different point mutations in its constriction [40].
In line with the design criteria of Fig. 2 and supported by atomistic simulations, we designed the double mutant G13F/G6F in which two hydrophobic residues are introduced in the narrowest region of the pore (the constriction), see Fig. 3a and Supp. Fig. S7. A peculiar feature of the system is the presence of a titratable ring of aspartic acids D10 (\(\mathrm{pK_{a}}=4.5\)) at the constriction center, between F6 and F13, that can be used to tune the wettability of the pore by changing the pH. Indeed, the protonation of D10, highlighted in Fig. 3a, creates an uncharged region extending for ca. 1.2 nm (3-4 aminoacid rings) that is mostly hydrophobic, allowing the formation of a stable vapour bubble inside the pore as demonstrated by the RMD simulations in Fig. 3b. Indeed the computed pore filling free-energy profile shows that, at pH 3.8, with D10 completely neutralised, the system exhibits two free-energy minima, with the dry state being the most favourable one. On the other hand, at pH 7 (red line), with charged D10, the system displays a single free-energy minimum - the wet (conductive) state.
The theoretical predictions are confirmed by single-pore electrophysiology measurements, reported in Fig. 3c. An intense flickering between two conductance levels is observed at pH 3.8 (black line), which is seen neither in the wild type nor in the mutated pore at pH 7 (red line), strongly indicating the presence of hydrophobic gating in the mutated pore. The largest conductance roughly corresponds to the stable _wet pore_ conductance at pH 7, indicating that the open pore structure is not altered at low pH. Several intermediate states are present, indicating a more complex interplay between the bubble formation and the flexible protein
Figure 2: **Design criteria for HyMNs.****a)** The intersection of the 4 main design criteria exposed in the main text defines a white region in the diameter vs. aspect ratio plane where hydrophobic gating is expected. Here, contact angle \(104^{\circ}\) and maximum voltage \(\Delta V^{*}=0.2\) V are assumed. The green region is forbidden as devices require voltages higher than \(\Delta V^{*}\) to wet. The red region corresponds to HyMNs for which the wet state is more favourable than the dry state at 0 V. The blue region corresponds to HyMNs that take more than 10 seconds to dry when no voltage is applied, “slow dryers”. The gray region corresponds to HyMNs taking longer than 10 seconds to wet (i.e. “slow wetters”) at \(\Delta V=0.2\) V. **b)** “Allowed” regions for 4 different contact angles, from \(100^{\circ}\) to \(115^{\circ}\), at fixed \(\Delta V^{*}=0.2\) V. Lower contact angles give rise to slightly bigger areas, very similar aspect ratios for small diameters, but very different aspect ratios for large diameters. The ellipse shows the approximate region where a specific biological channel (FraC heptamer, see Fig. 3) belongs. **c)** “Allowed” regions for \(0.2\,\mathrm{V}<\Delta V^{*}<1.5\) V, at fixed contact angle \(104^{\circ}\). Different shades of green represent the movement of the high voltage line (dashed gray) in panel a). Higher \(\Delta V^{*}\) greatly increases the “working” area (between the dashed grey and the black dash-dotted line), allowing for longer pores to have the memristive behaviour. The ellipse corresponds to the model nanopore in Fig. 1a.
structure (e.g., elastocapillary effects [27]) than the one explored by RMD simulations.
Fig. 3d reports the average experimental IV curve, from which the capacitive current, e.g., due to the membrane, was subtracted out - see Supp. Note S4 Supp. Fig. S9-S13. The system clearly shows a pinched hysteresis loop, characteristic of memristors. The asymmetric response of the system, under opposite applied voltages, is likely to originate in the conical shape and non-uniform charge distribution of the FraC nanopore, as previously reported for other asymmetric geometries [35]. Indeed, by using the same protocol as in Fig. 1a to compute the effect of voltage on the simulated free-energy profile, we find that the intrinsic dipole of the FraC nanopore system leads to an asymmetric response under opposite voltages, see Supp. Fig. S8, consistent with experiments. Differently from Fig. 1d, the IV curve self intersects at the origin, which is the signature of bipolar memristors (Supp. Fig. S6). Therefore, controlling the symmetry/asymmetry of the pore, HyMN devices can be designed to behave either as unipolar or bipolar memristors.
In summary, by engineering the wetting properties of a mutated FraC nanopore, we demonstrated the potential of the proposed nanofluidic memristors, which exploit hydrophobic gating to induce memory. HyMNs have the advantages of compactness, simplicity (having no moving parts nor allosteric gating mechanisms), durability, and high reproducibility. The mutagenesis approach can be easily extended to other well studied nanopores having different radii and lengths, such as \(\alpha\)-Hemolysin [41], Aerolysin [42], CsgG [43], or artificial de-novo \(\beta\)-barrel nanopores [44] that can have other dynamical characteristics. Moreover, solid-state nanopores can be easily grafted with hydrophobic groups [35; 45] and, together with engineered biological nanopores, can pave the way to the next generation of highly tunable nanofluidic memristors.
### Neuromorphic applications using HyMNs
Neuromorphic computing has garnered significant attention as it promises to transcend the capabilities of digital computers by emulating the complex behaviour of neurons. Here, we tested the potential of HyMNs in neuromorphic applications by experimentally realising a device that has a synapse-like "learning-and-forgetting" behaviour (Fig. 4a). Figure 4b-e shows the response of a FraC-based HyMN, constituted by few nanopores (less than 5). The bipolar nature of the memristor allows for excitatory and inhibitory responses in the same device. The possibility to reversibly control the conductivity of
Figure 3: **Hydrophobically gated FraC nanopore**. **a)** MD system composed of the FraC nanopore embedded in a lipid membrane and immersed in 1M KCl water solution. Ions are not shown for the sake of clarity. In pink are highlighted the acidic residues that are protonated at pH 3.8, e.g., the aspartic acid D10. The system shows the FraC channel with the G13F and G6F mutations, effectively creating a narrow hydrophobic region in the pore constriction. The water molecules inside the control box are displayed in classical VDW style. **b)** Pore filling free-energy profile, computed by counting the number of water molecules inside the control box (pore constriction), at pH 7 (red) and 3.8 (black). At neutral pH the system presents only one minimum, corresponding to the wet state, while at low pH the system displays two minima, with the dry state being the more probable. **c)** Experimental current time series for a single pore, measured at constant voltage \(\Delta V=-50\) mV at different pH. Lowering the pH makes the channel gate, as the neutralisation of the charged residue is completed and a hydrophobic region is developed. **d)** Experimental IV curve under a cycling applied voltage (period 0.5). The plot is obtained by averaging the current at each voltage over 35 realisations of the same cycle, after the capacitance current was subtracted, see Supp. Note S4 and Supp. Fig. S9-13. The system clearly shows a pinched hysteresis loop, the hallmark of memristors. The direction of the loop is that of a bipolar memristor.
the device using excitatory and inhibitory pulses paves the way to the exploitation of HyMNs in more complex iontronic learning devices, e.g., for analog neural networks.
The energy consumption for one voltage spike produced by biological neurons was estimated to be between 0.01 and 10 pJ [46], which is significantly lower than that of spikes produced by solid-state neurons, which in turn outperform digital software-based ones [13]. The energy consumption of our device during the synaptic events is on the order of some pJ, see Fig. 4f-g, on par with biological neurons in terms of efficiency. For comparison, the 2D nanofluidic memristors of Ref. [24] require on the order of nJ to perform similar tasks. Although inspired by voltage-gated ion channels, the presented HyMN devices have no moving parts, and hence are more easily tunable and robust, as specific mutations of the pore lumen have a predictable effect on hydrophobic gating, as demonstrated in this work; differently, mutations on voltage-gated ion channels have more complicated implications on the protein structure and on the allosteric gating mechanisms [17] and, hence, are harder to engineer. Our findings showcase the potential of HyMNs as flexible building blocks of nanofluidic neuromorphic computing.
## Conclusions
In this work, we propose and demonstrate a hydrophobically gated memristive nanopore (HyMN). Molecular dynamics simulations revealed the microscopic mechanism at the heart of the memristive behaviour, i.e., memory by electrowetting. Guided by the molecular dynamics results, we propose design criteria to narrow the parameter space where HyMNs can be found, pointing towards biological nanopores as promising candidates owing to their size and the possibility to carefully control their hydrophobicity by point mutations. We tested our prediction by engineering a mutant of the biological FraC nanopore to have a hydrophobic constriction. Molecular dynamics simulations demonstrated that it displays hydrophobic gating at low pH. Electrophysiological experiments confirmed this microscopic insight, showing a random telegraph signal only at low pH and displaying the hysteresis loop in the IV curve which is a signature of memristors. A HyMN-based device was successfully built and tested, showing synaptic capabilities, harnessing the power of hydrophobic gating and electrowetting to learn and forget. We show that engineered biological nanopores thus can serve as HyMNs, with important strengths: they are energy efficient, nanometer-sized, have no moving parts, are highly reproducible and economical, and advanced technologies are available to fine tune their properties [41, 44].
The computational capabilities of the brain have initiated the era of artificial intelligence, which in turn calls for suitable neuromorphic computing architectures, that should be durable and sustainable. The most advanced technologies so far have employed semiconductors, but nanofluidic memristors are making way. The proposed HyMN concept brings back to the original archetype from which this journey started, i.e., ion channels which confer to neurons their computational capabilities. Could the considerable simplification of hydrophobic gating
Fig. 4: **Learning and forgetting in a hydrophobically gated memristive nanopore (HyMN) device.****a)** Experimental setup. The HyMN is composed of multiple engineered FraC nanopores (less than 5), immersed in a lipid membrane separating two electrolyte reservoirs. The current that passes through a HyMN depends on the previous voltage applied at its terminals. **b)** The average measured current (black line) through the HyMN subjected to a voltage signal (golden line) composed of 4 positive “excitatory” triangular waves, 2 negative “inhibitory” ones, and 4 positive ones. The current response increases during the excitatory pulses and decreases during the inhibitory ones. **c)** HyMN device response to the opposite protocol, starting with 4 inhibitory pulses, followed by 2 excitatory ones, and ending with 4 inhibitory pulses. Again, during the inhibitory stage the maximum current goes down, while it increases after excitatory pulses. For both panels b) and c) the capacitive current was subtracted; the measured current is the average over 35 realizations. **d-e)** Change in the average conductivity (%), computed with respect to the conductivity during the first spike in b and c, respectively. The conductivity is averaged over each pulse. Note that the pore is more conductive at negative voltages, see Fig. 3d, a behaviour usually observed in biological nanopores and also reported for the FraC WT and other mutants [40]. **f-g)** Cumulative dissipated energy during the cycles reported in b and c, respectively. The dissipated energy is computed by integrating the mean electric power during the cycle, obtained by averaging the current trace over 35 realizations.
together with the current capabilities of molecular biology bring about a revolution in the field?
|
2310.08639 | Mixed-state Quantum Phases: Renormalization and Quantum Error Correction | Open system quantum dynamics can generate a variety of long-range entangled
mixed states, yet it has been unclear in what sense they constitute phases of
matter. To establish that two mixed states are in the same phase, as defined by
their two-way connectivity via local quantum channels, we use the
renormalization group (RG) and decoders of quantum error correcting codes. We
introduce a real-space RG scheme for mixed states based on local channels which
ideally preserve correlations with the complementary system, and we prove this
is equivalent to the reversibility of the channel's action. As an application,
we demonstrate an exact RG flow of finite temperature toric code in two
dimensions to infinite temperature, thus proving it is in the trivial phase. In
contrast, for toric code subject to local dephasing, we establish a mixed state
toric code phase using local channels obtained by truncating an RG-type decoder
and the minimum weight perfect matching decoder. We also discover a precise
relation between mixed state phase and decodability, by proving that local
noise acting on toric code cannot destroy logical information without bringing
the state out of the toric code phase. | Shengqi Sang, Yijian Zou, Timothy H. Hsieh | 2023-10-12T18:02:35Z | http://arxiv.org/abs/2310.08639v1 | # Mixed-state Quantum Phases: Renormalization and Quantum Error Correction
###### Abstract
Open system quantum dynamics can generate a variety of long-range entangled mixed states, yet it has been unclear in what sense they constitute phases of matter. To establish that two mixed states are in the same phase, as defined by their two-way connectivity via local quantum channels, we use the renormalization group (RG) and decoders of quantum error correcting codes. We introduce a real-space RG scheme for mixed states based on local channels which ideally preserve correlations with the complementary system, and we prove this is equivalent to the reversibility of the channel's action. As an application, we demonstrate an exact RG flow of finite temperature toric code in two dimensions to infinite temperature, thus proving it is in the trivial phase. In contrast, for toric code subject to local dephasing, we establish a mixed state toric code phase using local channels obtained by truncating an RG-type decoder and the minimum weight perfect matching decoder. We also discover a precise relation between mixed state phase and decodability, by proving that local noise acting on toric code cannot destroy logical information without bringing the state out of the toric code phase.
###### Contents
* I Introduction
* II Local channel transformations and definition of mixed-state phases
* II.1 Local channel transformations
* II.2 Definition of mixed-state phase equivalence
* III Real-space RG of quantum mixed-states
* III.1 From pure-state RG to mixed-state RG
* III.2 Correlation-preserving map
* III.3 Ideal mixed-state RG
* III.4 From mixed-state RG to mixed-state quantum phases
* IV Overview of examples
* V Noisy GHZ states
* V.1 Bit-flip noise
* V.2 Phase-flip noise
* VI Thermal toric code state
* VI.1 Review of the toric code model
* VI.2 RG of the thermal toric code state
* VII Noisy toric code state
* VII.1 Logical information and long-range entanglement
* VII.2 RG of the dephased toric code state
* VII.3 Truncated minimal weight perfect matching channel
* VIII Discussion and outlook
## I Introduction
Understanding quantum phases of matter is a central task of quantum many-body physics. The traditional focus is on pure states, which are typically ground states of local Hamiltonians. However, in many physical contexts ranging from finite temperature systems to open system dynamics [1; 2], one is required to deal with mixed states. Recently, in the context of non-equilibrium quantum simulators and computers, there has been significant progress in constructing many different examples of nontrivial mixed states from the effect of local decoherence on symmetry-protected topological and long-range entangled pure states [3; 4; 5; 6; 7; 8; 9; 10; 11] or from protocols involving measurement and feedback [12; 13; 14].
Given the increasing wealth of examples, it is thus desirable to have a general framework of mixed-state phases and in particular a notion of renormalization for distilling
universal long-range properties of a phase. Furthermore, in the class of mixed states obtained by decohering an error-correcting code, there are remarkable instances [5] in which mixed state entanglement measures undergo a transition at the same point at which encoded information is lost. This motivates us to understand the precise connection between mixed state phase transitions and error correction thresholds. In this work we will address these questions.
One way of defining _pure-state_ phases is via local unitary (LU) circuits [15]: two states are in the same phase if there is a short-depth LU circuit that connects them. This is based on the physical intuition that phases should be defined by long-range properties and representatives only differ in their local properties. For mixed states, an analogous definition was proposed by Coser and Perez-Garcia [16]: two mixed states \(\rho_{1}\) and \(\rho_{2}\) are in the same phase if there exists a pair of short-time evolution with local Lindbladians from \(\rho_{1}\) to \(\rho_{2}\) and from \(\rho_{2}\) to \(\rho_{1}\). The major difference with the LU definition of pure state phases is that two-way connections are needed since channels are not in general reversible. Another difference is that unlike pure states of interest, there is often no notion of Hamiltonian, gap, or adiabatic path to furnish the local transformations required to connect two mixed states (see however Ref. [17] for recent developments). These make establishing the existence of a mixed state phase much more challenging.
We draw inspiration from the real-space renormalization group (RG), which has played a major role in statistical mechanics and quantum many-body physics. The idea dates back to Wilson [18] and Kadanoff [19] who proposed that under block spin transformations, statistical mechanical systems flow to fixed points whose properties are easier to characterize. In the context of quantum many-body systems, real-space RG has led to the development of powerful numerical algorithms, including density matrix renormalization group (DMRG) [20], multiscale entanglement renormalization ansatz (MERA) [21], as well as theoretical tools, including matrix product states (MPS) [22], projected entangled pair states (PEPS) [23], etc. However, thus far, real-space RG has predominantly been applied to coarse-grain _pure_ quantum states.
In this work, we define a real-space RG scheme for mixed states involving local channel (LC) transformations to establish the existence of mixed-state phases. We define an "ideal" RG to consist of local channels acting on blocks which preserve correlations between different blocks, and we prove that the actions of such correlation-preserving channels can be reversed by another channel, thus establishing the phase equivalence of the fine-grained and coarse-grained states. As an example, we construct an ideal RG for the two-dimensional toric code at finite temperature and show that the temperature monotonically increases under coarse graining and thus the state does not possess topological order.
We also consider mixed states obtained by applying local decoherence to quantum error correction codes. There is a notion that the logical information in topological codes is protected by long-range entanglement. With a definition of mixed-state phases, we can make its relation to error correction precise. We prove that short-range correlated noise (represented by a local quantum channel) cannot destroy logical information without also transitioning out of the mixed state topologically ordered phase. We illustrate these connections in the example of toric code subject to local dephasing noise, for which we demonstrate the existence of a toric code mixed state phase by constructing (1) a real-space RG scheme based on the Harrington decoder [24] and (2) (quasi-)local channels based on truncating the minimum weight perfect matching (MWPM) algorithm. Our local version of MWPM can potentially be used for efficiently detecting the toric code mixed-state phase in experiments [25; 26].
We mention that several prior related works [27; 28] developed a mixed state RG scheme based on purification of the mixed state, which is generally different from our scheme but in some cases can furnish the local channels required in our scheme. Refs. [29; 30] defined an RG fixed point condition for one-dimensional matrix product density operators and related them to boundaries of two-dimensional topological order. Refs. [31; 32] demonstrated how quantum convolutional neural networks [33] can furnish RG schemes for detecting non-trivial pure state phases.
This paper is structured as follows. In Sec. II we define LC transformations and mixed state phase equivalence. In Sec. III we formulate the real-space RG for mixed-states and discuss its implications. In Secs. IV to VII we analyze several examples, including the dephased GHZ state, thermal toric code, and dephased toric code. In Sec. VII.1 we prove a relation between decodability and mixed-state phases.
## II Local channel transformations and definition of mixed-state phases
### Local channel transformations
We define local channel transformations following the proposal in [34].
**Definition 1** (Local channel (LC) transformation).: **On a given lattice of linear dimension \(L\), a range-\(r\) LC transformation is a quantum channel composed of the following steps: (1) Adding qubits to each lattice site, all initialized in the \(|0\rangle\) state; (2) Applying a range-\(r\) unitary circuit \(U\) on the lattice; (3) Tracing out some qubits on each lattice site.**
The range of a circuit is defined as the maximal range of each unitary gate times the depth of the circuit.
Henceforth if \(r\) is not specified for an LC transformation, it is assumed that \(r/L\to 0\) in the thermodynamic limit.
The major difference between local channel transformations and local unitary ones (LU) [15] is step (3). In local unitary transformations, a qubit can be discarded only when it is disentangled from the rest of the system. In that context, (1) and (3) are inverse operations, and hence LU transformations are invertible. In contrast, LC transformations allow discarding a qubit that is still entangled with the rest of the system, _i.e._\(\rho_{i,\vec{i}}\neq\rho_{i}\otimes\rho_{\vec{i}}\) with \(i\) being the qubit to be discarded and \(\vec{i}\) being the rest of the system. As a result, LC transformations are generically non-invertible.
LC transformations constitute a broad class of operations including any circuit composed of local channel gates, _i.e._ channels that only act on local domains of sites. To show this, one needs the Stinespring dilation theorem: any quantum channel \(\mathcal{E}_{X\to Y}\) can be rewritten as:
\[\mathcal{E}(\cdot)=\mathrm{tr}_{A^{\prime}}\left(U((\cdot)\otimes\ket{0} \bra{0}_{A})U^{\dagger}\right) \tag{1}\]
where \(U\) is a unitary map from \(X\cup A\) to \(Y\cup A^{\prime}\). In other words, any quantum channel can be implemented by adding some degrees of freedom, applying a unitary on the joint system, and discarding some degrees of freedom. Applying the theorem to a circuit of channel gates, one can first replace each channel gate with its Stinespring dilation form. Then one can move all the ancilla additions to the beginning of the circuit and postpone all
Figure 1: **(a)** Definition of mixed-state phase equivalence adopted in this work. Two many-body mixed states \(\rho_{1}\) and \(\rho_{2}\) are in the same phase if there is a pair of low-depth spatially local quantum channels \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) such that \(\rho_{2}\approx\mathcal{C}_{1}(\rho_{1})\) and \(\rho_{1}\approx\mathcal{C}_{2}(\rho_{2})\). **(b)** Illustration of the correlation-preserving criterion in Def.3. For a given bipartite mixed state \(\rho_{AB}\), a quantum channel \(\mathcal{E}\) acting on one party is correlation-preserving if it leaves the mutual information between two parties invariant. Thm.2 shows that \(\mathcal{E}\) is correlation-preserving if and only if its action can be reversed by another channel \(\mathcal{D}\). **(c)** Mixed-state RG consists of local channels (\(\mathcal{E}\)s) which coarse-grain degrees of freedom within a block. After iterating, all short-range correlations of the input state are discarded and only long-range ones remain. If all coarse-graining channels satisfy the correlation-preserving criterion, then the whole RG process can be reversed, by running from top to bottom and replacing each \(\mathcal{E}\) with its recovery map \(\mathcal{D}\). **(d)** Phase diagrams and RG flows of 4 exemplary mixed states studied in the Sec.IV. All 4 states come from perturbing a long-range entangled pure state in an incoherent way: In examples (i, ii) the pure state is the GHZ state, and in (iii, iv) it is the toric code state. In examples (i, ii, iv) the incoherent perturbation is a dephasing noise with strength \(p\) acted upon the state, while in (iii) the perturbation is a non-zero temperature. The mixed-state phase corresponding to the GHZ state and the toric code state are denoted by [GHZ] and [T.C.], respectively.
Figure 2: A circuit of local channel gates represented as an LC transformation.
the tracing-out to the end of the circuit. We graphically illustrate this in Fig.2. Furthermore, any finite-time local Lindbladian evolution can also be approximated by LC transformations by trotterizing the continuous dynamics.
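For concreteness, the small sketch below (illustrative only, not from the paper) verifies Eq. (1) numerically for a single-qubit phase-flip channel with one ancilla qubit: the Kraus form and the dilation form, in which the ancilla starts in \(|0\rangle\) and is traced out at the end, give the same output state.

```python
import numpy as np

p = 0.3                                             # dephasing strength (illustrative)
I2, Z = np.eye(2), np.diag([1.0, -1.0])
K0, K1 = np.sqrt(1 - p) * I2, np.sqrt(p) * Z        # Kraus operators of the phase-flip channel

e0, e1 = np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])
V = np.kron(K0, e0) + np.kron(K1, e1)               # Stinespring isometry: system -> system ⊗ ancilla
assert np.allclose(V.conj().T @ V, I2)              # V equals U(. ⊗ |0>_A) for some unitary U

def channel_kraus(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def channel_dilation(rho):
    big = V @ rho @ V.conj().T                                    # joint state on system ⊗ ancilla
    return np.trace(big.reshape(2, 2, 2, 2), axis1=1, axis2=3)    # partial trace over the ancilla

rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]])       # |+><+|
print(np.allclose(channel_kraus(rho_plus), channel_dilation(rho_plus)))   # -> True
```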
### Definition of mixed-state phase equivalence
When studying (pure) ground states of gapped local Hamiltonians, two many-body states are defined to be in the same phase if one can be turned into the other through a LU transformation [15]. The definition reflects the idea that phases of matter should be characterized by long-range properties of the state, and should remain unchanged under reversible local modifications.
LC transformations, albeit local, are generally not reversible and can destroy long-range correlations. As an example, if one starts from an arbitrary state \(\ket{\psi}\) and applies the amplitude damping channel \(\mathcal{E}_{\mathrm{damping}}(\cdot):=\mathrm{tr}(\cdot)\ket{0}\bra{0}\) to each qubit in the system, the resulting state would be a product state \(\ket{0}^{\otimes L}\) without any non-trivial long-range correlation. On the other hand, an LC transformation's ability to create correlations is no stronger than LU ones. This follows from the fact that an LC transformation is some LU followed by discarding some degrees of freedom.
Thus the connectivity under LC transformations induces a partial order relation among mixed-states. States are ordered according to the amount of long-range correlation they possess: if \(\rho_{2}=\mathcal{C}(\rho_{1})\) for some LC transformation \(C\), then \(\rho_{1}\) has at least as much long-range correlation as \(\rho_{2}\). This naturally leads to the following definition of mixed-state phase equivalence 1
Footnote 1: The definition resembles the one taken in [16], where a pair of LC transformations is replaced by a pair of (quasi-)local Lindbladian evolutions. A similar definition also appears in [4] when defining mixed-state symmetry-protected topological orders.
**Definition 2** (Mixed-state phase equivalence).: On a given lattice, two many-body mixed states \(\rho_{1}\) and \(\rho_{2}\) are in the same phase if there exists a pair of LC transformations \(C_{1}\) and \(C_{2}\) such that \(C_{1}(\rho_{1})\approx\rho_{2}\) and \(C_{2}(\rho_{2})\approx\rho_{1}\).
Several clarifications regarding the definition:
* **Mixed-states of interest**: Though the definition above does not assume any restrictions on states \(\rho_{1,2}\), we are interested in physically relevant mixed states such as local Hamiltonian Gibbs states at finite temperature, gapped ground states subject to decoherence, and steady states of local Lindbladians.
* **The precise meaning of '\(\approx\)'**: This requires some distance measure of mixed states. For instance, we could define two mixed states \(\rho\approx\sigma\) if and only if \(F(\sigma,\rho)>1-\epsilon\) for some small \(\epsilon>0\), where \(F(\sigma,\rho):=||\sqrt{\sigma}\sqrt{\rho}||_{1}\) is the (Uhlmann) fidelity.
* **Ranges of LC transformations \(C_{1,2}\)**: In general we only require the range to be much smaller than the linear size of the lattice \(\rho_{1,2}\) is defined on. But as we will see later, it will be sufficient to have a range \(r=O(\mathrm{polylog}(L/\epsilon))\) when \(\rho_{1,2}\) have finite correlation length.
The definition is a natural generalization of pure-state phase equivalence defined through LU transformations. When restricting to pure many-body states, one can show that two states \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\) are of the same mixed-state phase if and only if \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\otimes\ket{\phi}\) are of the same pure-state phase for some invertible state \(\ket{\phi}\). We provide a proof in App.A.1.
Product states, _e.g._\(\ket{0}^{\otimes L}\), are states without any long-range correlations. This is also reflected by the partial order relation under LC circuits: any state can be turned into the product state by a LC transformation which only consists of the amplitude damping channel. Thus we identify the _trivial phase_ as the set of states that can be LC transformed from the product state. In other words, a mixed state is in the trivial phase if it can be written as \(\rho_{\mathrm{trivial}}=\mathcal{C}\left[\ket{0}^{\otimes L}\bra{0}^{\otimes L }\right]\) for some LC transformation \(\mathcal{C}\). This is equivalent to requiring that the state can be locally purified into a short-range entangled pure state.
We comment that the above definition treats quantum and classical correlations on the same footing. As an example, the state \(\rho=\frac{1}{2}(\ket{0^{\otimes L}}\bra{0^{\otimes L}}+\ket{1^{\otimes L}} \bra{1^{\otimes L}})\) is a classical ensemble of \(L\) spins which is non-trivial under the above definition, because it has classical long-range correlation. To single out states that contain long-range classical correlation only, we can define a state to be in a _classical phase_ if it can be written as \(\rho_{\mathrm{classical}}=\mathcal{C}(\rho_{\mathrm{Pr}(\mathbf{s})})\) for some LC transformation \(\mathcal{C}\). Here \(\rho_{\mathrm{Pr}(\mathbf{s})}:=\sum_{\mathbf{s}}\mathrm{Pr}(\mathbf{s})\ket {\mathbf{s}}\bra{\mathbf{s}}\) is a classical distribution \(\mathrm{Pr}(\mathbf{s})\) of product states \(\{\ket{\mathbf{s}}:\mathbf{s}\in\{0,1\}^{L}\}\) represented as a density matrix.
## III Real-space RG of quantum mixed-states
To answer whether two given states \(\rho_{1}\) and \(\rho_{2}\) are in the same phase, we need to either construct a pair of local channel transformations or prove their nonexistence.
Recall that when studying pure-state phases of ground states, adiabatic paths between Hamiltonians provide a convenient way of obtaining phase equivalence. Let \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\) be ground-states of local Hamiltonians \(H_{1}\) and \(H_{2}\). If there is a path from \(H_{1}\) to \(H_{2}\) in the space of local Hamiltonians such that the energy gap remains \(O(1)\) throughout the path, there is a standard way to construct a LU transformation connecting \(\ket{\psi_{1}}\) to \(\ket{\psi_{2}}\) which establishes the phase equivalence [35]. For mixed states, there is generally no counterpart to adiabatic paths.
In this section, we introduce the mixed-state real-space renormalization group (RG) as an alternative way to find LC connections and identify mixed-state phases.
### From pure-state RG to mixed-state RG
Conceptually, RG transformation in classical and quantum statistical mechanics is an iterative coarse-graining process that discards short-range degrees of freedom while preserving long-range ones. The idea of using real-space renormalization to study zero-temperature physics of lattice quantum systems ('numerical RG') was pioneered by Wilson when considering impurity problems Wilson (1975), and was later generalized and developed into a series of powerful RG-based numerical methods including DMRG White (1993), entanglement RG White (1993), etc. We refer to all of them as pure-state RGs, in contrast to the mixed-state RG we introduce in this work. 2
Footnote 2: Another aspect of pure-state RG is that it preserves the low-energy physics of the system. But here we emphasize this less because for mixed-states, there may not be a notion of energy or Hamiltonian.
For the sake of presentation, we restrict our attention to one-dimensional systems and only focus on tree-like RG circuits (see Fig. 3). All the main ideas can be easily generalized to more sophisticated RG circuit structures, _e.g._ entanglement RG circuits White (1993), as well as higher dimensional systems.
Pure-state RG, in its simplest form, involves partitioning the lattice into consecutive blocks each of size \(b\) and applying a coarse-graining map \(w_{B}^{\dagger}\) to each block \(B\) of a pure state \(\ket{\psi}\). More specifically, coarse-graining involves truncating the Hilbert space, and \(w_{B}\) is an isometry satisfying \(w_{B}^{\dagger}w_{B}=\mathbb{I}\). As proposed by White (1993) for the density matrix renormalization group (DMRG) algorithm, the optimal choice of \(w_{B}\) that preserves all correlations between \(B\) and its complement is given by
\[\mathsf{supp}\ w_{B}w_{B}^{\dagger}=\mathsf{supp}\ \rho_{B} \tag{2}\]
where \(\rho_{B}:=\mathrm{tr}_{\bar{B}}(\ket{\psi}\bra{\psi})\) is the reduced density matrix of the block \(B\). \(\mathsf{supp}\ K\) of a positive semi-definite matrix \(K\) means the subspace spanned by \(K\)'s eigenstates with positive eigenvalues. If the original state \(\ket{\psi}\) has area law entanglement \(S_{B}\equiv-\mathrm{tr}[\rho_{B}\log\rho_{B}]=O(1)\), then each block is efficiently coarse-grained into a constant dimensional Hilbert space independent of the original block size 3.
Footnote 3: Rigorously speaking, it is only proven that the ground state of a gapped local Hamiltonian satisfying the area law can be represented as an MPS with a bond dimension that grows sublinearly with the system size Affleck (1993). However, in practice, it is usually true that a finite bond dimension suffices to reproduce an accurate wavefunction for arbitrarily large (even infinite) system sizes.
Now we turn to 1D mixed-states. In contrast to the pure state case, physical mixed states (_e.g._ ones mentioned below Def.2) typically have volume-law scaling of \(S_{B}\), leading to inefficient compression using the \(w_{B}\) selected according to Eq.(2). This is because \(S_{B}\) results from not only correlations between \(B\) and the complementary system \(\bar{B}\) but also between \(B\) and a purifying environment \(E\) of the mixed state. The latter is the non-universal information that should be discarded. We thus need a new criterion for finding the coarse-graining map.
To motivate the criterion we introduce, we observe that Eq.(2) can be interpreted as the solution to the optimization problem:
\[\begin{split}&\mathrm{argmin}_{w_{B}}\ \mathrm{dim}_{\mathrm{out}}(w_{B}^{\dagger})\\ s.t.& I_{B:\bar{B}}(w_{B}^{\dagger}\ket{\psi})=I_{B: \bar{B}}(\ket{\psi}),\end{split} \tag{3}\]
where \(\mathrm{dim}_{\mathrm{out}}(w_{B}^{\dagger})\) is the output dimension of \(w_{B}^{\dagger}\) and \(I_{X:Y}:=S_{X}+S_{Y}-S_{XY}\) is the quantum mutual information, a measure of correlations between two parties \(X\) and \(Y\). The constraint has a clear physical meaning in the context of RG: by preserving \(I_{B:\bar{B}}\), it preserves all the long-distance correlation within \(\ket{\psi}\). We thus use the '\(I_{B:\bar{B}}\)-preserving' condition as a guideline to generalize Eq.(2) to mixed-states.
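As a concrete illustration of the pure-state rule Eq.(2), the following minimal numpy sketch extracts a coarse-graining isometry \(w_{B}\) from the support of the reduced density matrix of a block; the example state, block size, and eigenvalue cutoff are arbitrary choices made only for illustration.

```python
import numpy as np

def coarse_graining_isometry(psi, dim_B, dim_rest, cutoff=1e-12):
    """Return an isometry w_B whose columns span supp(rho_B), cf. Eq.(2).

    psi is a pure-state vector ordered so that psi.reshape(dim_B, dim_rest)
    separates the block B (first factor) from the rest of the system.
    """
    M = psi.reshape(dim_B, dim_rest)
    rho_B = M @ M.conj().T                      # reduced density matrix of B
    evals, evecs = np.linalg.eigh(rho_B)
    keep = evals > cutoff                       # eigenvectors spanning supp(rho_B)
    return evecs[:, keep]                       # shape: dim_B x rank(rho_B)

# Example: for a 4-qubit GHZ state, a 2-qubit block has rank-2 rho_B,
# so the block is compressed from dimension 4 down to 2.
L, b = 4, 2
ghz = np.zeros(2**L); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
w_B = coarse_graining_isometry(ghz, 2**b, 2**(L - b))
print(w_B.shape)                                # (4, 2)

# Coarse-graining with w_B^dagger preserves the state's norm (no correlation lost).
psi_coarse = (w_B.conj().T @ ghz.reshape(2**b, 2**(L - b))).reshape(-1)
print(np.linalg.norm(psi_coarse))               # 1.0
```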
### Correlation-preserving map
We make the argument above more precise:
**Definition 3** (correlation-preserving maps).: For a given bipartite quantum state \(\rho=\rho_{AB}\), a quantum channel \(\mathcal{E}_{A\to A^{\prime}}\) acting on \(A\) is _correlation-preserving_ with respect to \(\rho_{AB}\) if it satisfies
\[I_{A^{\prime}:B}(\mathcal{E}_{A\to A^{\prime}}(\rho))=I_{A:B}(\rho)\]
It is worth noting that a channel being correlation-preserving or not depends on both the input state and the bipartition: the same map \(\mathcal{E}\) that is correlation-preserving with respect to one \((\rho,B)\) pair may not be so with respect to another pair.
Recalling that the motivation for defining an RG scheme is to establish equivalence between two mixed states by finding a local channel transformation and its inverse, ideally we would like \(\mathcal{E}\)'s action on \(\rho\) to be reversible. Conveniently, the two desired properties (correlation-preserving and reversibility) are equivalent, as we prove in the following theorem.
Figure 3: **Real-space RG transformation of pure states**–Circuit representation of two iterations of pure state RG transformation. At the \(\ell\)-th iteration, the coarse-graining isometry \(w^{(\ell)}\) is determined by the level’s input state \(\ket{\psi^{(\ell)}}\) using Eq.(2). By applying the circuit from bottom to top (red arrows), all the short-range features of the initial UV state are gradually discarded, and only long-range ones are kept in the IR state \(\rho^{(\ell\rightarrow\infty)}\). By applying the circuit from top to bottom (blue arrows), the circuit generates the UV state \(\ket{\psi^{(1)}}\).
**Theorem 1**.: For a given bipartite quantum state \(\rho=\rho_{AB}\), the map \(\mathcal{E}_{A\to A^{\prime}}\) is correlation-preserving if and only if there exists another quantum channel \(\mathcal{D}_{A^{\prime}\to A}\), such that:
\[\rho=\mathcal{D}_{A^{\prime}\to A}\circ\mathcal{E}_{A\to A^{\prime}}(\rho)\]
Proof.: (reversibility \(\Rightarrow\) correlation-preserving) According to the quantum data processing inequality, a channel acting only on \(A\) cannot increase correlations between \(A\) and \(B\):
\[I_{A:B}(\rho)\geq I_{A^{\prime}:B}(\mathcal{E}(\rho))\geq I_{A:B}(\mathcal{D} \circ\mathcal{E}(\rho))=I_{A:B}(\rho) \tag{4}\]
Thus \(I_{A:B}(\rho)=I_{A^{\prime}:B}(\mathcal{E}(\rho))\).
(correlation-preserving \(\Rightarrow\) reversibility) Let \(W\) be an isometry from \(A\) to \(A^{\prime}\cup E\) that dilates the channel \(\mathcal{E}_{A\to A^{\prime}}\):
\[\mathcal{E}_{A\to A^{\prime}}(\cdot):=\mathrm{tr}_{E}\left[W(\cdot)W^{ \dagger}\right], \tag{5}\]
where \(E\) is an ancillary system, and let \(\sigma_{A^{\prime}EB}=W\rho W^{\dagger}\). Then we have the following relation:
\[I_{A:B}(\rho)=I_{A^{\prime}E:B}(\sigma_{A^{\prime}EB})=I_{A^{\prime}:B}(\sigma _{A^{\prime}EB}). \tag{6}\]
The second equality, due to the correlation-preserving property, implies that \(I_{B:E|A^{\prime}}(\sigma_{A^{\prime}EB})=0\) and \(B-A^{\prime}-E\) forms a quantum Markov chain. Thus there is a channel \(\mathcal{T}_{A^{\prime}\to A^{\prime}E}\) that reconstructs \(\sigma_{A^{\prime}EB}\) from \(\mathcal{E}_{A\to A^{\prime}}(\rho)=\sigma_{A^{\prime}B}=\mathrm{tr}_{E} \sigma_{A^{\prime}EB}\) alone:
\[\mathcal{T}_{A^{\prime}\to A^{\prime}E}(\sigma_{A^{\prime}B})=\sigma_{A^{ \prime}EB}. \tag{7}\]
The map \(\mathcal{T}_{A^{\prime}\to A^{\prime}E}\) is the Petz recovery map [37]:
\[\mathcal{T}_{A^{\prime}\to A^{\prime}E}(\cdot):=\sigma_{A^{\prime}E}^{1/2} \left(\sigma_{A^{\prime}}^{-1/2}(\cdot)\sigma_{A^{\prime}}^{-1/2}\otimes \mathbb{I}_{E}\right)\sigma_{A^{\prime}E}^{1/2} \tag{8}\]
We can then choose the inverse channel \(\mathcal{D}\) to be
\[\mathcal{D}_{A^{\prime}\to A}(\cdot)=\mathrm{tr}_{R}\left(U_{W}^{\dagger} \mathcal{T}_{A^{\prime}\to A^{\prime}E}(\cdot)U_{W}\right), \tag{9}\]
where \(U_{W}:\ A\cup R\to A^{\prime}\cup E\) is a unitary operator that 'completes' the isometry \(W:\ A\to A^{\prime}\cup E\), namely:
\[W(\cdot)W^{\dagger}=U_{W}\left(\left(\cdot\right)\otimes\left|0\right\rangle_ {R}\left\langle 0\right|\right)U_{W}^{\dagger}. \tag{10}\]
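Because Eq.(8) is an explicit matrix formula, it can be checked directly on small examples. The sketch below is an added illustration with our own helper names and a randomly generated test state (the spectator system \(B\) is omitted since the map acts trivially on it); it applies the Petz map to the marginal \(\sigma_{A^{\prime}}\) and verifies that it returns \(\sigma_{A^{\prime}E}\).

```python
import numpy as np

def herm_power(M, p, tol=1e-12):
    """M**p for a Hermitian positive semi-definite matrix; eigenvalues below
    tol are treated as zero (pseudo-inverse convention for negative powers)."""
    evals, evecs = np.linalg.eigh(M)
    out = np.zeros_like(evals)
    out[evals > tol] = evals[evals > tol] ** p
    return (evecs * out) @ evecs.conj().T

def partial_trace_E(rho_AE, dim_A, dim_E):
    """Trace out the second tensor factor E of an operator on A' x E."""
    return rho_AE.reshape(dim_A, dim_E, dim_A, dim_E).trace(axis1=1, axis2=3)

def petz_map(X_A, sigma_AE, dim_A, dim_E):
    """Petz recovery map of Eq.(8), taking an operator on A' to one on A' x E."""
    sigma_A = partial_trace_E(sigma_AE, dim_A, dim_E)
    inv_sqrt_A = herm_power(sigma_A, -0.5)
    sqrt_AE = herm_power(sigma_AE, 0.5)
    core = np.kron(inv_sqrt_A @ X_A @ inv_sqrt_A, np.eye(dim_E))
    return sqrt_AE @ core @ sqrt_AE

# Sanity check: the Petz map rebuilds sigma_{A'E} from its marginal sigma_{A'}.
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
sigma_AE = G @ G.conj().T
sigma_AE /= np.trace(sigma_AE)                  # random state on (A' = qubit) x (E = qubit)
sigma_A = partial_trace_E(sigma_AE, 2, 2)
print(np.allclose(petz_map(sigma_A, sigma_AE, 2, 2), sigma_AE))   # True
```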
We remark that the relation between correlation-preserving and reversibility is robust in one direction. More precisely, if the channel \(\mathcal{E}_{A\to A^{\prime}}\) almost preserves correlation
\[I_{A:B}(\rho)-I_{A^{\prime}:B}(\mathcal{E}_{A\to A^{\prime}}(\rho))=\epsilon, \tag{11}\]
then there exists an almost perfect recovery channel \(\mathcal{D}_{A^{\prime}\to A}\) such that
\[F(\rho,\mathcal{D}\circ\mathcal{E}(\rho))\geq 2^{-\epsilon/2}. \tag{12}\]
A proof, based on approximate quantum Markov chains [38], can be found in App.A.2. The robustness property is desirable especially when we would like to numerically search for the correlation-preserving channel \(\mathcal{E}\).
When using \(\mathcal{E}\) for the purpose of coarse-graining, the target space of the channel should be as small as possible. This corresponds to solving the following optimization problem:
\[\begin{split}&\mathrm{argmin}_{\mathcal{E}_{A\to A^{\prime}}}\ \mathrm{dim}\,\mathcal{H}_{A^{\prime}}\\ & s.t.\quad I_{A:\bar{A}}(\rho)-I_{A^{\prime}:\bar{A}}(\mathcal{E}(\rho))\leq\epsilon\end{split} \tag{13}\]
with \(\epsilon\) taken to be a small number or zero. The problem is analogous to Eq.(3) for pure states, which has Eq.(2) as an explicit solution. The current problem, in contrast, has no known explicit solution. In fact, the problem is closely related to the mixed-state quantum data compression problem, which is under active exploration in quantum information theory. We refer interested readers to Refs. [39; 40; 41; 42] for recent discussions on the problem.
To search for a good coarse-graining map for a given state, one can either numerically solve the optimization problem Eq.(13) (in this case the robustness property is crucial for the purpose of estimating error), or try to construct the channel analytically by exploiting the special structure of the given state, as we do later when studying examples in Sec.IV.
We point out that two familiar coarse-graining schemes in 1D, one for quantum ground states and one for classical statistical mechanics models, are in fact correlation-preserving maps. The first one is the Hilbert space truncation reviewed in Sec.III.1 using the rule Eq.(2). Since this scheme preserves the entropy of a block, it satisfies the correlation-preserving condition (for a pure state \(\left|\psi_{AB}\right\rangle\), \(S_{A}=\frac{1}{2}I_{A:B}\)). The other example is Kadanoff's block spin decimation of classical spin chains. Consider the Gibbs state of a classical spin chain with nearest-neighbor interaction, but written as a quantum mixed-state:
\[\rho_{\beta}\propto\sum_{\mathbf{s}=s_{1}...s_{L}}\exp\left(-\beta\sum_{i}h_{i}(s_{i},s_{i+1})\right)\left|\mathbf{s}\right\rangle\left\langle\mathbf{s}\right| \tag{14}\]
The state is classical because it is diagonal in the computational basis \(\left|\mathbf{s}\right\rangle=\left|s_{1}...s_{L}\right\rangle\). For each block \(B=\{i_{1},...,i_{b}\}\), the block spin decimation corresponds to a quantum channel that traces out all spins in \(B\) other than \(i_{1}\). This operation is correlation-preserving with respect to \(B^{\prime}:=B\cup\{i_{b+1}\}\), because:
\[I_{B^{\prime}:\overline{B^{\prime}}}(\rho)=I_{\{i_{1},i_{b+1}\}:\overline{B^{ \prime}}}(\rho) \tag{15}\]
which is a consequence of the Markov property of the Gibbs distribution.
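The Kadanoff example can be checked numerically. The sketch below is our own illustration, with an open-boundary Ising chain of \(L=6\) spins, block size \(b=3\), and an arbitrary \(\beta\); it confirms Eq.(15) by showing that tracing out the interior spins of the block leaves the mutual information between \(B^{\prime}\) and its complement unchanged.

```python
import numpy as np
from itertools import product

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_info(joint, axes_X, axes_Y):
    """I(X:Y) for a classical joint distribution stored as an L-dimensional array."""
    all_axes = set(range(joint.ndim))
    pX  = joint.sum(axis=tuple(all_axes - set(axes_X)))
    pY  = joint.sum(axis=tuple(all_axes - set(axes_Y)))
    pXY = joint.sum(axis=tuple(all_axes - set(axes_X) - set(axes_Y)))
    return entropy(pX.ravel()) + entropy(pY.ravel()) - entropy(pXY.ravel())

# Gibbs distribution of an open Ising chain: Pr(s) ~ exp(beta * sum_i sigma_i sigma_{i+1}).
L, beta = 6, 0.7
P = np.zeros((2,) * L)
for s in product((0, 1), repeat=L):
    sigma = 2 * np.array(s) - 1
    P[s] = np.exp(beta * np.sum(sigma[:-1] * sigma[1:]))
P /= P.sum()

# Block B = {0,1,2}; decimation keeps only spin 0.  B' = B u {3}; complement = {4,5}.
I_before = mutual_info(P, axes_X=(0, 1, 2, 3), axes_Y=(4, 5))
I_after  = mutual_info(P, axes_X=(0, 3),       axes_Y=(4, 5))
print(I_before, I_after)   # equal: decimation is correlation-preserving w.r.t. B'
```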
### Ideal mixed-state RG
In this section, we formulate an ideal real-space RG scheme built from local correlation-preserving channels.
Assume \(\rho\) is a many-body mixed-state on a lattice with linear size \(L\). We further assume that we have constructed, either numerically or analytically, a series of coarse-graining transformations \(\{\mathcal{C}^{(0)},\mathcal{C}^{(1)},...,\mathcal{C}^{(\ell)},...\}\) acting on \(\rho\) sequentially. In 1D, each \(\mathcal{C}^{(\ell)}\) may have one of the structures shown in Fig. 3, or any other structure as long as it is composed of at most \(O(1)\)-layer of local channels. This leads to an 'RG flow' of mixed-states:
\[\rho=\rho^{(0)}\xrightarrow{\mathcal{C}^{(0)}}\rho^{(1)}\xrightarrow{ \mathcal{C}^{(1)}}...\xrightarrow{\mathcal{C}^{(\ell-1)}}\rho^{(\ell)} \xrightarrow{\mathcal{C}^{(\ell)}}... \tag{16}\]
along which the level of coarse-graining increases gradually.
Each state \(\rho^{(\ell)}\) is supported on a coarse-grained lattice \(\mathcal{L}^{(\ell)}\) with an \(L^{(\ell)}:=L/b^{\ell}\) linear size. The chain has a length at most \(\sim\log_{b}L\), after which the state is supported on \(O(1)\) number of sites.
We call this RG process _ideal_ if every channel gate \(\mathcal{E}\) within each \(\mathcal{C}^{(\ell)}\) is correlation-preserving with respect to its input and the prescribed bipartition.
As a direct consequence of Thm.1, ideal RG is reversible. More specifically, there exists a series of local 'fine-graining' transformations \(\{\mathcal{F}^{(0)},\mathcal{F}^{(1)},...,\mathcal{F}^{(\ell)},...\}\) that recovers the original mixed-state from its coarse-grained version by gradually adding local details:
\[\rho^{(0)}\xleftarrow{\mathcal{F}^{(0)}}\rho^{(1)}\xleftarrow{\mathcal{F}^{( 1)}}...\xleftarrow{\mathcal{F}^{(\ell-1)}}\rho^{(\ell)}\xleftarrow{\mathcal{ F}^{(\ell)}}... \tag{17}\]
where each \(\mathcal{F}^{(\ell)}\) is the 'reversed' channel of \(\mathcal{C}^{(\ell)}\), obtained by replacing each channel \(\mathcal{E}\) within \(\mathcal{C}^{(\ell)}\) by its corresponding recovery map \(\mathcal{D}\) (see Thm.1). In graphical notation, if \(\mathcal{C}^{(\ell)}\) is drawn as a circuit of coarse-graining gates \(\mathcal{E}_{1},\mathcal{E}_{2},\mathcal{E}_{3},\ldots\), then \(\mathcal{F}^{(\ell)}\) is the same circuit run in reverse, with every \(\mathcal{E}_{i}\) replaced by its recovery gate \(\mathcal{D}_{i}\).
For the truncated RG (stopping after \(\ell^{*}\) iterations) to serve as an LC transformation connecting \(\rho\) to the fixed-point state \(\rho^{(\infty)}\), the flow must converge quickly: we need an \(\ell^{*}\) that is small, so that the composed channel remains short-ranged, yet satisfies \(F(\rho^{(\ell^{*})},\rho^{(\infty)})>1-\epsilon\).
We find that such an \(\ell^{*}\) does exist in many cases when the fixed-point state \(\rho^{(\infty)}\) has a finite correlation length. More specifically, in such cases, the fidelity function satisfies the form:
\[F(\rho^{(\ell)},\rho^{(\infty)})\simeq\exp(-\alpha\;\theta^{(\ell)}L^{(\ell)}) \tag{25}\]
for some \(\alpha=O(1)\) and a positive coefficient \(\theta^{(\ell)}\). Further, \(\theta^{(\ell)}\) displays a power-law iteration relation under each coarse-graining step:
\[\theta^{(\ell+1)}\lesssim(\theta^{(\ell)})^{\gamma}\quad\text{when}\quad\theta ^{(\ell)}\to 0_{+} \tag{26}\]
for some coefficient \(\gamma>1\).
As detailed in the App.B, Eqs.(25),(26) guarantee that choosing
\[\ell^{*}\sim\log\log(L/\epsilon) \tag{27}\]
is sufficient to have \(F(\rho^{(\ell^{*})},\rho^{(\infty)})>1-\epsilon\). We remark that one can let \(\epsilon\) be as small as \((\text{poly}\,L)^{-1}\) but still guarantee that \(\ell^{*}\) steps of RG form a \((\text{polylog}\,L)\)-range LC transformation.
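A toy iteration makes the \(\ell^{*}\sim\log\log(L/\epsilon)\) scaling tangible. In the sketch below (all numerical values are illustrative choices, not taken from the examples of this paper), \(\theta\) follows the power-law recursion of Eq.(26) while the linear size shrinks by a factor of \(b\) per step; the number of steps needed to suppress \(\alpha\,\theta^{(\ell)}L^{(\ell)}\) below \(\epsilon\) is nearly insensitive to \(L\).

```python
def steps_to_converge(L, eps, theta0=0.3, gamma=2.0, b=2, alpha=1.0):
    """Iterate theta -> theta**gamma and L -> L/b until alpha*theta*L < eps,
    mimicking Eqs.(25)-(26); returns the number of RG steps used."""
    theta, size, steps = theta0, float(L), 0
    while alpha * theta * size > eps:
        theta, size, steps = theta ** gamma, size / b, steps + 1
    return steps

for L in (10**3, 10**6, 10**9, 10**12):
    print(L, steps_to_converge(L, eps=1e-6))
# Increasing L by many orders of magnitude barely changes the step count,
# consistent with ell* ~ log(log(L/eps)).
```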
In the App.B we show that conditions Eq.(25) and Eq.(26) hold for:
* 1D pure-state RG of a matrix product state
* Gibbs state of a classical statistical mechanics model flowing toward a non-critical fixed point
* All examples we study in Sec.IV
So far in this section, we have shown that RG can be viewed as an LC transformation connecting \(\rho\) to \(\rho^{(\infty)}\). Recalling that the phase equivalence is defined through two-way LC connections, we have to find another LC channel connecting \(\rho^{(\infty)}\) to \(\rho\) to conclude that the two states are in the same phase.
If the RG is an ideal one, it is composed of correlation-preserving channels and thus reversible. In this case, the other direction comes from the 'reversed' RG process \(\text{RG}^{-1}=\{\mathcal{F}^{(0)},\mathcal{F}^{(1)},\ldots,\mathcal{F}^{(\ell)},\ldots\}\) which we discussed in Sec.III.3. Similar to the forward RG, there is the issue of convergence concerning whether \(\text{RG}^{-1}\) can be treated as an LC transformation. But the discussion is completely parallel to the one for the forward RG. Thus in this case, the LC bi-connection is established as
\[\rho\;\xrightarrow{\text{RG}}\;\rho^{(\infty)}\;\xrightarrow{\text{RG}^{-1}}\;\rho \tag{28}\]
and we can conclude \(\rho\) and \(\rho^{(\infty)}\) are in the same phase.
Next we discuss what we can learn from a non-ideal RG. One class of mixed states of significant interest is a long-range entangled pure-state \(\ket{\psi}\) subject to local decoherence represented by an LC transformation, and an important question is whether or not the decohered state is in the same phase as \(\ket{\psi}\). In this setting, one direction of the connection is already given by the decoherence. Therefore, if the RG (ideal or not) has \(\ket{\psi}\) as the fixed point:
\[\ket{\psi}\xrightarrow{\text{decohere}}\rho\xrightarrow{\text{RG}}\rho^{( \infty)}=\ket{\psi}\bra{\psi}, \tag{29}\]
then an LC bi-connection is established and \(\rho\) and \(\ket{\psi}\) are in the same phase. But on the other hand, if the fixed-point is not in the same phase as \(\ket{\psi}\), then we cannot determine \(\rho\)'s phase because no bi-connection is identified.
## IV Overview of examples
In the remaining sections we use our formalism to understand the quantum phases of several many-body mixed-states of recent interest.
In all the examples, the mixed-state is obtained by 'perturbing' a long-range entangled pure state, either through incoherent noise or finite temperature. The question we address is whether the states before and after the perturbation are in the same phase.
The long-range entangled pure state is chosen to be either the Greenberger-Horne-Zeilinger (GHZ) state or Kitaev's toric code state. In most examples, the LC circuits for identifying phases take the form of RG. The coarse-graining maps therein are either constructed according to the correlation-preserving criterion (Def.3), or inspired by decoders of quantum error correcting codes.
In the App.C we include an example of a mixed symmetry-protected topological (SPT) state and its associated mixed-state RG.
## V Noisy GHZ states
The many-body GHZ state, defined as
\[\ket{\text{GHZ}_{L}}:=\frac{1}{\sqrt{2}}\left(\ket{0^{\otimes L}}+\ket{1^{ \otimes L}}\right), \tag{30}\]
has long-range entanglement, _i.e._ it cannot be generated from a product state using any one-dimensional LU (or LC) transformation. In this section, we study the effect of dephasing noise on this state.
For convenience of analysis, we let \(L=b^{\ell_{\text{max}}}\) for some integer \(\ell_{\text{max}}\) and an odd integer \(b\). The state can be rewritten as:
\[\ket{\text{GHZ}_{L}}=w_{b}^{\otimes b^{\ell_{\text{max}}-1}}\cdot w_{b}^{\otimes b^{\ell_{\text{max}}-2}}\cdots w_{b}^{\otimes 1}\ket{+} \tag{31}\]
where \(w_{b}=\ket{0^{\otimes b}}\bra{0}+\ket{1^{\otimes b}}\bra{1}\) is an isometry and \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\). This provides a tree tensor network representation of the state (see Fig.4), as well as a way of blocking sites when performing RG.
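The tree structure of Eq.(31) is easy to verify explicitly for a small system. The following sketch (an added illustration; \(b=3\) and \(L=9\) are chosen to match Fig.4) builds the state layer by layer from \(\ket{+}\) with the isometry \(w_{b}\) and compares it with the direct definition of \(\ket{\mathrm{GHZ}_{9}}\).

```python
import numpy as np

b = 3
w = np.zeros((2**b, 2))
w[0, 0] = 1.0                         # w_b |0> = |00...0>
w[-1, 1] = 1.0                        # w_b |1> = |11...1>
w_tensor = w.reshape((2,) * b + (2,)) # b output legs followed by one input leg

def apply_layer(state, n_qubits):
    """One layer of Eq.(31): replace each of the n qubits by b qubits via w_b."""
    psi = state.reshape((2,) * n_qubits)
    for q in range(n_qubits):
        pos = q * b                   # axis of original qubit q after earlier steps
        psi = np.tensordot(w_tensor, psi, axes=([b], [pos]))
        psi = np.moveaxis(psi, list(range(b)), list(range(pos, pos + b)))
    return psi.reshape(-1)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
layer1 = apply_layer(plus, 1)         # 1 qubit  -> 3 qubits
layer2 = apply_layer(layer1, 3)       # 3 qubits -> 9 qubits

ghz9 = np.zeros(2**9); ghz9[0] = ghz9[-1] = 1 / np.sqrt(2)
print(np.allclose(layer2, ghz9))      # True
```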
We consider a setting in which each qubit experiences the same noise, as modeled by a single qubit channel \(\mathcal{N}\), resulting in the mixed state
\[\rho_{L}:=\mathcal{N}^{\otimes L}(\ket{\text{GHZ}_{L}}\bra{\text{GHZ}_{L}}) \tag{32}\]
We remark that the GHZ state is closely related to the quantum repetition code, whose codespace is spanned by \(\left|0^{\otimes L}\right\rangle\) and \(\left|1^{\otimes L}\right\rangle\). It is known that quantum information stored in a quantum repetition code is robust against bit-flip noise (\(X\) dephasing noise), but not phase-flip noise (\(Z\) dephasing noise). As we will see in this section, the robustness of the GHZ state's long-range entanglement has a parallel behavior when it is subjected to these two types of noise. We postpone a detailed discussion of the relation between mixed-state phases and quantum coding properties to Sec.VII.
### Bit-flip noise
We first consider dephasing each qubit in the \(X\) direction:
\[\mathcal{N}(\cdot)=\mathcal{N}_{p}^{X}(\cdot):=(1-p)(\cdot)+pX(\cdot)X, \tag{33}\]
in which each qubit is flipped with probability \(p\).
The resulting state is
\[\rho_{p,L}^{X}=\frac{1}{2}\sum_{\mathbf{s}\in\{0,1\}^{L}}p^{|\mathbf{s}|}(1-p)^{L-|\mathbf{s}|}\left(\ket{\mathbf{s}}+\ket{\bar{\mathbf{s}}}\right)\left(\bra{\mathbf{s}}+\bra{\bar{\mathbf{s}}}\right) \tag{34}\]
where \(\left|\mathbf{s}\right|\):= \(\sum_{i}s_{i}\) is the number of \(1\) in the bitstring \(\mathbf{s}\), and \(\bar{\mathbf{s}}\) is the bitwise complement of \(\mathbf{s}\). Since \(\rho_{p}^{X}=\rho_{1-p}^{X}\), we only consider \(p\in(0,0.5]\).
Inspired by decoders for the quantum repetition code, we use the \(b\)-qubit _majority-vote_ channel as the coarse-graining map. To define it, we first introduce the unitary operator that re-parametrizes the bitstring:
\[U\left|\mathbf{s}\right\rangle:=\left|\mathsf{maj}(\mathbf{s})\right\rangle \otimes\left|\mathsf{diff}(\mathbf{s})\right\rangle \tag{35}\]
where \(\mathsf{maj}(\mathbf{s})\) takes the majority vote of the bits within \(\mathbf{s}\):
\[\mathsf{maj}(\mathbf{s}):=\left\{\begin{array}{ll}0&\text{if}\ \ \left|\mathbf{s} \right|<b/2\\ 1&\text{if}\ \ \left|\mathbf{s}\right|\geq b/2\end{array}\right. \tag{36}\]
and \(\mathsf{diff}(\mathbf{s})\) is a length \((b-1)\) bitstring that records pairwise difference of \(\mathbf{s}\):
\[\mathsf{diff}(\mathbf{s})_{i}:=(s_{i+1}-s_{i})\text{ mod }2\ \ \ i=1,2,...,b-1 \tag{37}\]
Then the majority vote channel can be written as:
\[\mathcal{E}_{b}(\cdot):=\mathrm{tr}_{2}(U(\cdot)U^{\dagger}) \tag{38}\]
where \(\mathrm{tr}_{2}\) denotes tracing out the pairwise difference information, regarded as unimportant short-distance degrees of freedom in the current example.
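Since \(U\) in Eq.(35) permutes computational basis states, its unitarity reduces to the statement that \(\mathbf{s}\mapsto(\mathsf{maj}(\mathbf{s}),\mathsf{diff}(\mathbf{s}))\) is a bijection on \(b\)-bit strings: for odd \(b\), \(\mathsf{diff}\) fixes \(\mathbf{s}\) up to a global flip and \(\mathsf{maj}\) resolves the flip. A small sketch, added here purely for illustration, checks this for \(b=3\):

```python
from itertools import product

b = 3                                  # odd block size

def maj(s):                            # Eq.(36): majority vote of the block
    return int(sum(s) >= b / 2)

def diff(s):                           # Eq.(37): pairwise differences mod 2
    return tuple((s[i + 1] - s[i]) % 2 for i in range(b - 1))

# s -> (maj(s), diff(s)) hits all 2**b outputs, so U of Eq.(35) is a genuine
# basis-permutation unitary; the channel of Eq.(38) then traces out diff(s).
images = {(maj(s),) + diff(s) for s in product((0, 1), repeat=b)}
print(len(images) == 2**b)             # True
```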
We inspect the state's RG flow under \(\mathcal{E}_{b}\), the coarse-graining map:
\[\mathcal{E}_{b}^{\otimes L/b}(\rho_{p,L}^{X}) \tag{39}\] \[= \mathcal{E}_{b}^{\otimes L/b}\circ(\mathcal{N}_{p}^{X})^{\otimes L }(\left|\mathrm{GHZ}_{L}\right\rangle\left\langle\mathrm{GHZ}_{L}\right|)\] \[= \left(\mathcal{E}_{b}\circ(\mathcal{N}_{p}^{X})^{\otimes b}\circ \mathcal{U}_{w_{b}}\right)^{\otimes L/b}\left(\left|\mathrm{GHZ}_{L/b}\right\rangle \left\langle\mathrm{GHZ}_{L/b}\right|\right)\]
where \(\mathcal{U}_{w_{b}}(\cdot):=w_{b}(\cdot)w_{b}^{\dagger}\) and we applied the relation \(\left|\mathrm{GHZ}_{L}\right\rangle=w_{b}^{\otimes L/b}\left|\mathrm{GHZ}_{L/b}\right\rangle\). Thus after one iteration of coarse-graining, the resulting state is a GHZ state with \(1/b\) of the original size subject to a 'renormalized' noise channel, which is still \(X\)-dephasing (see App.A.4 for a derivation):
\[\mathcal{E}_{b}\circ(\mathcal{N}_{p}^{X})^{\otimes b}\circ\mathcal{U}_{w_{b}}= \mathcal{N}_{p^{\prime}}^{X}, \tag{40}\]
but with a renormalized noise strength \(p^{\prime}=\sum_{k=(b+1)/2}^{b}{b\choose k}p^{k}(1-p)^{b-k}\).
Thus we obtain an exact description of the state's RG flow:
\[\rho^{(\ell)}=(\mathcal{N}_{p^{(\ell)}}^{X})^{\otimes L^{(\ell)}}(\left| \mathrm{GHZ}_{L^{(\ell)}}\right\rangle\left\langle\mathrm{GHZ}_{L^{(\ell)}}\right|) \tag{41}\]
where \(L^{(\ell)}=Lb^{-\ell}\) is the renormalized system size at the \(\ell\)-th iteration, and \(p^{(\ell+1)}=\sum_{k=(b+1)/2}^{b}{b\choose k}(p^{(\ell)})^{k}(1-p^{(\ell)})^{b -k}\). It is straightforward to check that \(p=0\) and \(p=1/2\) are the two fixed points of the RG transformation.
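The flow of the renormalized noise strength can be iterated numerically; the short sketch below (with an illustrative block size \(b=3\) and a handful of starting points) shows every \(p<1/2\) collapsing quickly onto the noiseless fixed point \(p=0\), while \(p=1/2\) stays put.

```python
from math import comb

b = 3

def renorm_bitflip(p):
    """One majority-vote RG step for the X-dephasing strength, cf. Eqs.(40)-(41)."""
    return sum(comb(b, k) * p**k * (1 - p)**(b - k) for k in range((b + 1)//2, b + 1))

for p0 in (0.05, 0.2, 0.4, 0.49, 0.5):
    p, traj = p0, []
    for _ in range(8):
        traj.append(round(p, 6))
        p = renorm_bitflip(p)
    print(p0, traj)
# Every p0 < 0.5 flows to 0 (the GHZ fixed point); p0 = 0.5 is the unstable fixed point.
```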
Around \(p=1/2\), the iteration relation has the asymptotic behavior:
\[(p^{\prime}-1/2)\simeq g(b)(p-1/2) \tag{42}\]
where \(g(b):=2^{1-b}\sum_{k=(b+1)/2}^{b}(2k-b){b\choose k}>1\). Thus this is an unstable fixed point. Exactly at \(p=1/2\), the fixed-point state is:
\[\rho_{1/2,\ L}^{X}=\frac{1}{2^{L}}\sum_{\mathbf{s}\in\{0,1\}^{L}}\left(\left| \mathbf{s}\right\rangle\left\langle\mathbf{s}\right|+\left|\mathbf{s}\right\rangle \left\langle\bar{\mathbf{s}}\right|\right). \tag{43}\]
The state is better understood in the eigenbasis of Pauli \(X\) operators, _i.e._\(\{\left|0^{X}\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle+\left|1 \right\rangle)\), \(\left|1^{X}\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle-\left|1 \right\rangle)\}\):
\[\rho_{1/2,\ L}^{X}=\frac{1}{2^{L-1}}\sum_{\mathbf{s}\in\{0,1\}^{L}}\left| \mathbf{s}^{X}\right\rangle\left\langle\mathbf{s}^{X}\right|\ \delta(\left|\mathbf{s}\right|=0\text{ mod }2) \tag{44}\]
In this basis, the state is a uniform distribution of bitstrings with even parity. Since it is diagonal in this basis, the state is a classical state (recall the definition in Sec.II) and is not in the same phase as the GHZ state 5.
Figure 4: Tree tensor network of a GHZ state with \(L=b^{\ell}=9\), \(b=3\), \(\ell=2\). Each triangle represents an isometry \(w\) (Eq.(31)). By replacing the state at the top with a generic single qubit state \(\left|\psi\right\rangle\), the same tensor network encodes \(\left|\psi\right\rangle\) into a codeword state of the quantum repetition code.
Footnote 5: In fact, the state \(\rho_{1/2,L}^{X}\) is in the same phase as the product state. One may prepare the state using LC by first preparing a one dimensional cluster state of \(2L\) spins and then tracing out qubits on all odd sites.
On the other hand, \(p=0\) is a stable fixed-point attracting \(p\in[0,0.5)\). Starting from any state in the interval, the RG process gradually removes entropy within the state and brings it back to the noiseless state at \(p=0\), _i.e._\(|\mathrm{GHZ}\rangle\). We thus obtain the following LC transformation bi-connection:
\[|\mathrm{GHZ}\rangle\ \xrightarrow{\mathcal{N}_{X}}\ \rho_{p}^{X}\ \xrightarrow{ \mathrm{RG}}\ |\mathrm{GHZ}\rangle\qquad p\in[0,0.5) \tag{45}\]
Thus we can conclude \(\rho_{p}^{X}\) and \(|\mathrm{GHZ}\rangle\) are in the same phase. The analysis shows that the \(X\)-dephasing noise acts as an irrelevant perturbation with respect to the \(|\mathrm{GHZ}\rangle\) state and its long-range entanglement.
Besides establishing the phase equivalence, the bi-connection in Eq.(45) also yields information on the entanglement structure of the dephased state \(\rho_{p}^{X}\). Consider two sufficiently large subregions of the system, referred to as \(A\) and \(B\), for which one can always choose the RG blocking scheme such that coarse-graining channels never act jointly on \(A\) and \(B\). Let \(E_{A:B}(\cdot)\) be _any_ quantum or classical correlation measure between \(A\) and \(B\) that satisfies the data processing inequality. Then due to the bi-connection we have:
\[\begin{split}& E_{A:B}(|\mathrm{GHZ}\rangle)\geq E_{A:B}(\rho_{p}^{ X})\geq E_{A:B}(|\mathrm{GHZ}\rangle)\\ \Rightarrow& E_{A:B}(\rho_{p}^{X})=E_{A:B}(| \mathrm{GHZ}\rangle).\end{split} \tag{46}\]
Some examples of correlation measures are quantum mutual information, entanglement negativity, and entanglement of formation & distillation. All quantities are easy to compute analytically for the GHZ state but are difficult to obtain for the mixed-state \(\rho_{p}^{X}\) by other means.
We point out that all conclusions in this section hold also for the dephasing noise along other directions in the \(X\)-\(Y\) plane. This can be most easily seen by noticing that the expression Eq.(40) holds for any dephasing direction in the \(X\)-\(Y\) plane. As we will see in the next subsection, the \(Z\)-dephasing acts very differently.
### Phase-flip noise
Next we consider the GHZ state under another type of noise, namely the phase-flip or Z-dephasing:
\[\mathcal{N}_{p}^{Z}(\cdot):=(1-p)(\cdot)+pZ(\cdot)Z \tag{47}\]
which leads to the density matrix
\[\begin{split}\rho_{p,L}^{Z}=&\frac{1}{2}[|0^{ \otimes L}\rangle\,\langle 0^{\otimes L}|+|1^{\otimes L}\rangle\,\langle 1^{ \otimes L}|+\\ &(1-2p)^{L}(|0^{\otimes L}\rangle\,\langle 1^{\otimes L}|+|1^{ \otimes L}\rangle\,\langle 0^{\otimes L}|)]\end{split} \tag{48}\]
In the thermodynamic limit, the off-diagonal term vanishes for any \(p\notin\{0,1\}\) and the state converges to a classical state \(\frac{1}{2}(|0^{\otimes L}\rangle\,\langle 0^{\otimes L}|+|1^{\otimes L} \rangle\,\langle 1^{\otimes L}|)\). This already indicates that the state is in a different phase from the GHZ state.
To construct the RG for this mixed state, we still use the majority-vote channel \(\mathcal{E}_{b}\) (Eq.(38)) as the coarse-graining map. An important difference of this case compared to the bit-flip case is that the majority vote channel is now correlation-preserving with respect to \(\rho_{p,L}^{Z}\). To see this, we first verify the following relations:
\[\begin{split}\mathcal{E}_{b}\circ\mathcal{U}_{w_{b}}& =\mathcal{I}\\ (\mathcal{N}_{p}^{Z})^{\otimes b}\circ\mathcal{U}_{w_{b}}& =\mathcal{U}_{w_{b}}\circ\mathcal{N}_{p^{\prime}}^{Z}\end{split} \tag{49}\]
where \(\mathcal{I}\) is the identity channel and \(p^{\prime}\) is given later in Eq.(52). These equations imply that:
\[\mathcal{U}_{w_{b}}\circ\mathcal{E}_{b}\left(\rho_{p,L}^{Z}\right)=\rho_{p,L} ^{Z} \tag{50}\]
where \(\mathcal{U}_{w_{b}}\circ\mathcal{E}_{b}\) is applied to any block of \(b\) sites. Thus \(\mathcal{E}_{b}\) is reversible and correlation-preserving with respect to \(\rho_{p,L}^{Z}\).
Following a similar calculation as in the bit-flip noise case, we obtain that the state after one step of RG maintains the same form:
\[\mathcal{E}_{b}^{\otimes L/b}(\rho_{p,L}^{Z})=\rho_{p^{\prime},L/b}^{Z} \tag{51}\]
but with a renormalized noise strength
\[p^{\prime}=\frac{1}{2}(1-(1-2p)^{b}), \tag{52}\]
The iteration relation has \(p=0\) as an unstable fixed point, around which \(p^{\prime}\simeq bp\), and also \(p=1/2\) as a stable fixed point, around which \((p^{\prime}-1/2)=2^{b-1}(p-1/2)^{b}\).
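For contrast with the bit-flip case, the same kind of iteration for Eq.(52) (again with an illustrative \(b=3\) and starting point) shows even a very weak \(Z\)-dephasing flowing to the classical fixed point:

```python
def renorm_phaseflip(p, b=3):
    """One RG step for the Z-dephasing strength, Eq.(52): 1 - 2p' = (1 - 2p)**b."""
    return 0.5 * (1 - (1 - 2*p)**b)

p = 0.01
for step in range(6):
    print(step, round(p, 6))
    p = renorm_phaseflip(p)
# p grows rapidly toward 1/2: the opposite of the bit-flip flow above.
```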
Since the RG is ideal, it leads to the following LC bi-connection:
\[\begin{split}\rho_{1/2}^{Z}\ \xrightarrow{\mathrm{RG}^{-1}}\ \rho_{p}^{Z}\ \xrightarrow{\mathrm{RG}}\ \rho_{1/2}^{Z}\qquad p\in[0,0.5)\end{split} \tag{53}\]
and the analysis shows the noisy state is in the same phase as the classical state \(\frac{1}{2}(|0^{\otimes L}\rangle\,\langle 0^{\otimes L}|+|1^{\otimes L}\rangle\,\langle 1^{\otimes L}|)\). Therefore, for the GHZ state the phase-flip noise is relevant and destroys the long-range entanglement therein at arbitrarily small strength.
## VI Thermal toric code state
In this section and the next, we discuss two mixed-states related to \(\mathbb{Z}_{2}\) topological order. In Sec.VI.1 we review key properties of the toric code model and define the notations. In Sec.VI.2 we construct an ideal RG to explicitly show that any finite temperature Gibbs state of the toric code is in the trivial phase.
### Review of the toric code model
We consider a square lattice with periodic boundary conditions and qubits on the links. Kitaev's toric code model has the Hamiltonian
\[H=-\sum_{\square\in P}A_{\square}-\sum_{+\in V}B_{+} \tag{54}\]
where \(A_{\square}=\prod_{i\in\square}X_{i}\) and \(B_{+}=\prod_{i\in+}Z_{i}\). \(P,V\) represent plaquettes and vertices, respectively.
Since all terms in the Hamiltonian commute with each other, their common eigenstates can be used to label the Hilbert space. But in order to construct a complete basis, we need two more operators \(\widetilde{X}_{1,2}=\prod_{i\in S_{1,2}}X_{i}\), where \(S_{1},S_{2}\) are the two homotopically inequivalent noncontractable loops on the torus. Each \(\widetilde{X}_{i}\) commutes with \(A_{\square}\)s and \(B_{+}\)s, thus all of them together define a basis for the Hilbert space:
\[\left|\mathbf{m}=m_{1}...m_{|P|};\mathbf{e}=e_{1}...e_{|V|};\mathbf{l}=l_{1}l_{ 2}\right>,\quad m_{i},e_{i},l_{i}\in\{0,1\} \tag{55}\]
satisfying
\[A_{\square_{i}}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right> =(-1)^{m_{i}}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right> \tag{56}\] \[B_{+_{i}}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right> =(-1)^{e_{i}}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right>\] \[\widetilde{X}_{i}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right> =(-1)^{l_{i}}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right>\]
We call this the anyon number basis in contrast to the computational basis. If \(m_{i}=1\), there is a plaquette anyon (or \(m\) anyon) at the corresponding plaquette; while if \(e_{i}=1\), there is a vertex anyon (or \(e\) anyon) at the corresponding vertex. The operator identities \(\prod_{\square}A_{\square}=1\) and \(\prod_{+}B_{+}=1\) enforce that the total number of either type of anyon must be even:
\[\pi(\mathbf{m})=0\quad\pi(\mathbf{e})=0, \tag{57}\]
where the function \(\pi(\cdot)\) evaluates the total parity of a bit string, _i.e._\(\pi(\mathbf{s}):=(\sum_{i}s_{i}\bmod 2)\).
The Hamiltonian's 4-dimensional ground state subspace \(V\) is spanned by anyon-free states:
\[V:=\texttt{span}\{\ \left|\mathbf{m}=\mathbf{0};\ \mathbf{e}=\mathbf{0};\ \mathbf{l}\right>:\ \ \mathbf{l}\in\{00,01,10,11\}\} \tag{58}\]
States within this subspace are locally indistinguishable, _i.e._\(\rho_{A}=\mathrm{tr}_{\bar{A}}(\left|\psi\right>\left<\psi\right|)\) is independent of \(\left|\psi\right>\in V\) whenever \(A\) is a topologically trivial region.
We define a mixed-state \(\rho\) to be in the toric code phase if it is LC bi-connected to states within \(V\), namely:
\[\rho_{a}\ \xrightarrow{\mathcal{C}_{1}}\ \rho\ \xrightarrow{\mathcal{C}_{2}}\ \rho_{b} \tag{59}\]
for some states \(\rho_{a},\rho_{b}\) within \(V\), and some LC transformations \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\).
### RG of the thermal toric code state
We consider the Gibbs state of the toric code model Eq.(54) \(\rho_{\beta}\propto\exp(-\beta H)\) at inverse temperature \(\beta\). Ref. [34] showed that this state for finite \(\beta\) is not long-range entangled, and here we reproduce the conclusion by constructing an ideal mixed-state RG under which the state flows to a trivial one.
We notice that the density matrix \(\rho_{\beta}\) is diagonal in the anyon number basis (Eq. (55)):
\[\rho_{\beta}\left|\mathbf{m};\mathbf{e};\mathbf{l}\right>\propto\left|\mathbf{ m};\mathbf{e};\mathbf{l}\right>, \tag{60}\]
and is thus a classical mixture of different anyon configurations, with probabilities
\[\begin{split}\mathrm{Pr}(\mathbf{m},\mathbf{e},\mathbf{l}):=& \left<\mathbf{m};\mathbf{e};\mathbf{l}\right|\rho_{\beta}\left|\mathbf{m}; \mathbf{e};\mathbf{l}\right>\\ =&\mathrm{Pr}_{m}(\mathbf{m})\mathrm{Pr}_{e}(\mathbf{e })\mathrm{Pr}_{l}(\mathbf{l})\end{split} \tag{61}\]
in which the three types of degrees of freedom are independent:
\[\mathrm{Pr}_{m}(\mathbf{m}) =C_{\beta}\ \delta\left(\pi(\mathbf{m})=0\right)\prod_{i}p_{\beta}^{m_{i}} (1-p_{\beta})^{1-m_{i}}\] \[\mathrm{Pr}_{e}(\mathbf{e}) =C_{\beta}\ \delta\left(\pi(\mathbf{e})=0\right)\prod_{i}p_{\beta}^{e_{i}} (1-p_{\beta})^{1-e_{i}} \tag{62}\] \[\mathrm{Pr}_{l}(\mathbf{l}) =1/4\]
with \(p_{\beta}=\frac{e^{-\beta}}{e^{\beta}+e^{-\beta}}\) and \(C_{\beta}\) a normalization constant.
The key property is that \(m\) anyons on each plaquette (and \(e\) anyons on vertices) are independently excited with probability \(p_{\beta}\), up to a global constraint that the total number of each anyon type is even. This allows us to find an ideal RG, as we need only preserve the local anyon parity \(\pi(\mathbf{m}_{B}),\pi(\mathbf{e}_{B})\) of a block \(B\) to maintain correlations between the block and its complement.
We now describe how to coarse-grain to preserve this parity information. Consider the following quantum channel acting on 12 qubits in a \(2\times 2\) block of plaquettes:
\[\mathcal{E}^{X}(\cdot):=\sum_{\mathbf{m}\in\{0,1\}^{\otimes 4}}U_{\mathbf{m}}P_{ \mathbf{m}}(\cdot)P_{\mathbf{m}}U_{\mathbf{m}}^{\dagger} \tag{63}\]
where \(P_{\mathbf{m}}\) is the projector to the subspace with anyon configuration \(\mathbf{m}\), and the unitary operator \(U_{\mathbf{m}}\) is a product of Pauli \(Z\) matrices that brings \(\left|\mathbf{m}=m_{1}m_{2}m_{3}m_{4}\right>\) to \(\left|\pi(\mathbf{m})000\right>\). For instance, if we label the four plaquettes of the block \(1,2\) (top row) and \(3,4\) (bottom row) and take \(\mathbf{m}=0110\), then \(U_{\mathbf{m}}\) can be \(Z_{12}Z_{13}\), where \(Z_{12(13)}\) is the Pauli-\(Z\) matrix acting on the qubit separating \(1\) and \(2\) (\(1\) and \(3\)). We remark that \(U_{\mathbf{m}}\) only acts on the inner four qubits.
In other words, \(\mathcal{E}^{X}\) first measures the anyon configuration within the block and then applies a unitary gate depending on the measurement outcome that pushes all anyons to the top-left plaquette. Since \(m\)-anyon is its own anti-particle, the top-left plaquette ends up with \(\pi(\mathbf{m})\) anyons while the other three end up with \(0\). Importantly, neither step disturbs the distribution of \(e\)-anyons.
\(\mathcal{E}^{X}\) is a correlation-preserving map with respect to the state \(\rho_{\beta}\), and one can explicitly check that its action on \(\rho_{\beta}\) can be reversed by the following channel:
\[\mathcal{D}^{X}(\cdot):=\sum_{\mathbf{m}}\mathrm{Pr}(\mathbf{m}|\pi(\mathbf{m} ))U_{\mathbf{m}}^{\dagger}P_{\pi(\mathbf{m})}^{1}(\cdot)P_{\pi(\mathbf{m})}^{1}U _{\mathbf{m}} \tag{64}\]
where \(P_{x}^{1}\) is the projector to the subspace with \({m_{1}=x}\). The action of \(\mathcal{D}^{X}\) can be intuitively understood as follows: it first measures the anyon occupancy of plaquette \(1\), which we recall is the only plaquette that may host an anyon after the action of \(\mathcal{E}^{X}\). Then based on the measurement outcome (referred to as \(x\)), it randomly generates an anyon configuration on the block according to the distribution \(\Pr(\mathbf{m}|\pi(\mathbf{m})=x)\).
Analogously, there is a channel for each \(2\times 2\) block of vertices (_i.e._ a block of plaquettes of the dual lattice) that coarse-grains \(e\)-anyons:
\[\mathcal{E}^{Z}(\cdot):=\sum_{\mathbf{e}\in\{0,1\}^{\otimes 4}}U_{\mathbf{e}}P_{ \mathbf{e}}(\cdot)P_{\mathbf{e}}U_{\mathbf{e}}^{\dagger} \tag{65}\]
where \(P_{\mathbf{e}}\) are projectors onto \(e\)-anyon configurations and \(U_{\mathbf{e}}\) is a product of \(X\) operators that brings \(\ket{\mathbf{e}=e_{1}e_{2}e_{3}e_{4}}\) to \(\ket{\pi(\mathbf{e})000}\). \(\mathcal{E}^{Z}\) only moves \(e\)-anyons and commutes with \(\mathcal{E}^{X}\).
After applying \(\mathcal{E}^{X(Z)}\) to each block of plaquettes (vertices), the resulting state only has anyons on plaquettes and vertices corresponding to a sublattice (see Fig.5, middle panel). To complete one iteration of the RG, we need to discard some degrees of freedom and put the state on a coarse-grained lattice. This step can be achieved by a series of local unitary operators called elementary moves introduced in [43; 44].
This step is most easily described graphically:
\[\text{[sequence of controlled-NOT elementary moves applied within each }2\times 2\text{ block]} \tag{66}\]
At each step, multiple controlled-not gates are applied, represented with arrows from the control qubit to target qubit. These gates decouple qubits into a product state which can then be removed, and the remaining qubits form a toric code state with anyons on a coarser lattice. Operations shown in each panel are applied in parallel to all the \(2\times 2\) blocks on the lattice.
In summary, one iteration of the RG consists of:
\[\mathcal{C}=\mathcal{U}\circ\left(\bigotimes_{B\in\mathcal{B}^{\prime}} \mathcal{E}_{B}^{Z}\right)\circ\left(\bigotimes_{B\in\mathcal{B}}\mathcal{E}_ {B}^{X}\right) \tag{67}\]
where \(\mathcal{B}\) contains \(2\times 2\) blocks of plaquettes and \(\mathcal{B}^{\prime}\) contains \(2\times 2\) blocks of vertices. \(\mathcal{U}\) stands for the disentangling operations in Eq.(66).
After one step of RG, a plaquette (vertex) contains an anyon if and only if the four plaquettes (vertices) it was coarse-grained from contain an odd number of anyons. The renormalized state is still a thermal toric code state, but with a renormalized probability (or renormalized temperature):
\[\begin{array}{ll}&p^{\prime}_{\beta^{\prime}}=4p_{\beta}(1-p_{\beta})^{3}+4p _{\beta}^{3}(1-p_{\beta})\\ \Leftrightarrow&\tanh\beta^{\prime}=\tanh^{4}\beta\end{array} \tag{68}\]
We thus conclude that any finite temperature state \(\rho_{\beta<\infty}\) flows to the infinite temperature one \(\rho_{\beta=0}\) under the RG.
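The anyon-level content of this RG step is purely classical and can be checked by a small simulation. The sketch below is an added illustration (lattice size, temperature, and the neglect of the global even-parity constraint are our own simplifications, the latter being negligible for large lattices): it verifies that XOR-ing independent anyon occupations over \(2\times 2\) blocks reproduces Eq.(68), and then iterates the exact relation to exhibit the flow to infinite temperature.

```python
import numpy as np

def renorm_exact(p):
    """Eq.(68): probability that a 2x2 block of plaquettes holds an odd number of anyons."""
    return 4*p*(1 - p)**3 + 4*p**3*(1 - p)

# Monte Carlo check: excite anyons independently with probability p_beta and
# keep only the parity of each 2x2 block (the classical action of E^X).
rng = np.random.default_rng(1)
p_beta = 0.12                            # illustrative value of e^-beta / (e^beta + e^-beta)
anyons = rng.random((512, 512)) < p_beta
block_parity = anyons.reshape(256, 2, 256, 2).sum(axis=(1, 3)) % 2
print(block_parity.mean(), renorm_exact(p_beta))     # approximately equal

# Iterating the exact relation: every finite beta flows to infinite temperature.
p = p_beta
for _ in range(6):
    beta_eff = np.arctanh(1 - 2*p)                   # invert p_beta = e^-b / (e^b + e^-b)
    print(round(p, 5), round(beta_eff, 5))
    p = renorm_exact(p)
# p -> 1/2 (beta_eff -> 0), consistent with tanh(beta') = tanh(beta)**4.
```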
Furthermore, since all channels in the RG are correlation-preserving with respect to their inputs, the RG is ideal and can be reversed. Thus there is the following bi-connection:
\[\rho_{\beta=0}\ \xrightarrow{\text{RG}^{-1}}\ \rho_{\beta}\ \xrightarrow{\text{RG}}\ \rho_{\beta=0}\qquad\beta<\infty \tag{69}\]
Since the infinite temperature state \(\rho_{\beta=0}\propto\mathbb{I}\) is in the trivial phase, we conclude that \(\rho_{\beta}\) is also in the trivial phase.
## VII Noisy toric code state
Now we consider the mixed-state obtained by applying noise to a pure toric code state. All notations in the section follow those introduced in Sec.VI.1.
The toric code model is naturally a quantum memory that stores quantum information in its ground state subspace \(V\). In Sec.VII.1, we discuss the relation between the preservation of logical information and the preservation of the phase of matter. We prove that if an LC transformation \(\mathcal{C}\) does not bring a pure toric code state out of its phase, then \(\mathcal{C}\) must preserve any quantum information stored in \(V\).
In Sec.VII.2 and VII.3, we describe two ways to show that the dephased toric code state is in the toric code phase when dephasing strength is small. More specifically, we show that there exist LC transformations that bring the dephased state back to a pure toric code state. In Sec.VII.2, the LC transformation is motivated by the Harrington decoder of the toric code and takes the form of an RG transformation. In Sec.VII.3, the LC transformation is obtained by spatially truncating the minimum weight perfect matching (MWPM) decoder.
Figure 5: **RG scheme for the thermal toric code state–** In all panels, a plaquette (vertex) is shaded (dotted) if it has a non-zero probability of holding an \(m\)- (\(e\)-) anyon, and physical qubits are associated with edges of the lattice and drawn as circles. (left\(\rightarrow\)mid) \(\mathcal{E}^{X}\) and \(\mathcal{E}^{Z}\) (see Eq.(63) and Eq.(65)) act on each \(2\times 2\) block of plaquettes and vertices, respectively. The resulting state has anyons on one of its sublattices’ plaquettes and vertices. (mid\(\rightarrow\)right) After disentangling with the unitary \(\mathcal{U}\) depicted in Eq.(66) and discarding the decoupled qubits, the new state is still a toric code Gibbs state, but with renormalized temperature \(p^{\prime}\) (Eq.(68)) supported on a coarse-grained lattice.
### Logical information and long-range entanglement
The toric code model, as its name suggests, is naturally a quantum error correcting code whose codespace is the ground state subspace \(V\) (Eq.(58)). In this context, an important question is whether a noise channel \(\mathcal{N}\) destroys logical information stored in a quantum memory. Mathematically, this is equivalent to asking whether there exists a recovery channel \(\mathcal{R}\) such that:
\[\mathcal{R}\circ\mathcal{N}(\ket{\psi}\bra{\psi})=\ket{\psi}\bra{\psi}\quad \forall\ket{\psi}\in V. \tag{70}\]
In quantum error correction, \(\mathcal{R}\) is often realized by a _decoder_, which maps any input state into an output supported within \(V\)6. If such \(\mathcal{R}\) exists, we say the logical information is preserved by \(\mathcal{N}\). Otherwise, we say the logical information is destroyed.
Footnote 6: Since we are only concerned about in-principle recoverability of logical information, we assume all operations within \(\mathcal{R}\) are noiseless.
To relate the phase of the mixed state, as defined by two-way LC connection, to preservation of logical information, we will need to first prove the following theorem.
**Theorem 2**.: Let \(V\) be the code subspace of a toric code defined on a torus. Suppose \(\mathcal{C}\) is a local channel transformation satisfying \(\mathsf{supp}\ \mathcal{C}(\ket{\psi}\bra{\psi})\subseteq V\ \ \forall\ket{\psi}\in V\), then \(\mathcal{C}\)'s action when restricted to \(V\) is a unitary channel.
To gain some intuition for why locality of the channel is essential in the theorem, consider the following channel:
\[\mathcal{N}(\rho):=\frac{1}{2}\rho+\frac{1}{2}\widetilde{X}_{1}\rho\widetilde{ X}_{1} \tag{71}\]
where \(\widetilde{X}_{1}=\Pi_{i\in S_{1}}X_{i}\) is the logical \(X\) operator of the first encoded qubit (see Sec.VI.1). \(\mathcal{N}\) is not an LC transformation: \(\mathcal{N}(\ket{0}^{\otimes L}\bra{0}^{\otimes L})=\frac{1}{2}(\ket{0}^{ \otimes L}\bra{0}^{\otimes L}+\ket{1}^{\otimes L}\bra{1}^{\otimes L})\), a non-trivial mixed-state with long-range correlations. Furthermore, \(\mathcal{N}\) preserves \(V\), but its action within \(V\) is dephasing the first logical qubit, which is not a unitary action.
We now prove Thm.2.
Proof.: \(\mathcal{C}\) can be dilated into an LU circuit \(U\) that acts jointly on the physical qubits (referred to as \(P\)) and the ancilla qubits (referred to as \(A\)). Consider \(U\)'s action on a codeword state:
\[\ket{\psi;\mathbf{0}}:=\ket{\psi}_{P}\ket{\mathbf{0}}_{A}\ \xrightarrow{U}\ \ket{\phi}_{PA} \tag{72}\]
where \(\ket{\psi}\) is any code word state in \(V\). For later convenience we define the expanded codespace \(V_{0}\), which is the subspace of \(\mathcal{H}_{PA}\) spanned by \(\{\ket{\psi}_{P}\ket{\mathbf{0}}_{A}\}_{\ket{\psi}\in V}\). We use \(V_{0}\) to refer to both the subspace and the code defined by it. \(V_{0}\) is still a stabilizer code, whose stabilizers are those of \(V\) combined with \(\{Z_{i}:\ i\in A\}\).
Let \(L_{1}\) and \(L_{2}\) be two Pauli logical operators of the toric code that act in the same way in the code subspace \(V\) (see Fig.6). Thus \(L_{1}L_{2}\) is a stabilizer of the toric code. Since \(\mathcal{C}\) preserves the code subspace, we have:
\[\bra{\phi}L_{1}L_{2}\ket{\phi}=\operatorname{tr}(C(\ket{\psi}\bra{\psi})L_{1 }L_{2})=1 \tag{73}\]
This leads to:
\[\bra{\psi;\mathbf{0}}L_{1}^{U}L_{2}^{U}\ket{\psi;\mathbf{0}}=1 \tag{74}\]
where \(L_{i}^{U}:=U^{\dagger}L_{i}U\) has support on both \(P\) and \(A\), and is not necessarily a Pauli operator. Recalling that \(L_{1}^{U}L_{2}^{U}\) is a unitary operator and the above expression holds for any \(\ket{\psi;\mathbf{0}}\), we conclude that \(L_{1}^{U}L_{2}^{U}\) acts as logical identity in the extended codespace \(V_{0}\).
To proceed, we assume the spatial separation between \(L_{1}\) and \(L_{2}\) to be much larger than the range of \(U\), so that \(L_{1}^{U}\) and \(L_{2}^{U}\) are also well-separated.
We claim that both \(L_{1}^{U}\) and \(L_{2}^{U}\) are logical operators of \(V_{0}\). Otherwise, there needs to be a codeword state \(\ket{a}\in V_{0}\) such that \(L_{1}^{U}\ket{a}\notin V_{0}\). This implies at least one stabilizer \(S\) of \(V_{0}\) is violated by the state: \(\bra{a}(L_{1}^{U})^{\dagger}SL_{1}^{U}\ket{a}\neq 1\), and \(S\) must have spatial overlap with \(L_{1}^{U}\). Further, since \(L_{2}^{U}\) is far from both \(L_{1}^{U}\) and \(S\),
\[\bra{a}(L_{1}^{U}L_{2}^{U})^{\dagger}SL_{1}^{U}L_{2}^{U}\ket{a}\neq 1 \tag{75}\]
But this cannot be true because \(L_{1}^{U}L_{2}^{U}\ket{a}=\ket{a}\) and \(S\) is a stabilizer.
The same reasoning applies to any Pauli logical operator, referred to as \(K\), whose spatial support is perpendicular to \(L=L_{1}\) (see Fig.6). By varying \(K\) and \(L\), their product \(R=K\cdot L\) can represent all of the 15 inequivalent Pauli logical operators of the toric code. We fix such a set: \(\mathcal{P}=\{R_{1},R_{2},...,R_{15}\}\). The image of \(\mathcal{P}\) under \(R\to R^{U}\) is a set of 15 logical operators of \(V_{0}\), as we just proved. Furthermore, since the map \(R\to R^{U}\) preserves all the multiplication and commutation relations, we know \(\mathcal{P}^{U}\) must act as a set of 15 inequivalent Pauli logical operators on \(V_{0}\), up to a basis rotation.
We consider the part of \(R_{i}^{U}\) (or \(R_{i}\)) when restricted to the codespace \(V_{0}\) (or \(V\)):
\[\Pi_{V_{0}}R_{i}^{U} =\widetilde{R_{i}^{U}}\otimes\ket{\mathbf{0}}\bra{\mathbf{0}} \tag{76}\] \[\Pi_{V}R_{i} =\widetilde{R_{i}}\]
where \(\widetilde{R_{i}^{U}}\) and \(\widetilde{R_{i}}\) are operators acting within \(V\) only. \(\Pi_{V}\) is the projector to the subspace \(V\) and \(\Pi_{V_{0}}=\Pi_{V}\otimes\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\). As explained, both \(\{\widetilde{R_{i}}\}\) and \(\{\widetilde{R_{i}^{U}}\}\) realize the algebra of Pauli operators in the logical space.
We have:
\[\begin{split}\mathcal{C}(\widetilde{R_{i}^{U}})&=\operatorname{tr}_{A}(U(\widetilde{R_{i}^{U}}\otimes\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|)U^{\dagger})\\ &=\operatorname{tr}_{A}(UR_{i}^{U}\Pi_{V_{0}}U^{\dagger})\\ &=\operatorname{tr}_{A}(R_{i}U\Pi_{V_{0}}U^{\dagger})\\ &=R_{i}\mathcal{C}(\Pi_{V})\\ &=\widetilde{R_{i}}\mathcal{C}(\Pi_{V})\\ \end{split} \tag{77}\]
The second to last equality holds because \(R_{i}\) is supported on \(P\) only, while the last one holds because \(\mathsf{supp}\;\mathcal{C}(\Pi_{V})\subseteq V\) by assumption.
On the _r.h.s._ of the second equality above \(R_{i}^{U}\) and \(\Pi_{V_{0}}\) commute. Thus if we change their order then the same derivation gives:
\[\mathcal{C}(\widetilde{R_{i}^{U}})=\mathcal{C}(\Pi_{V})\widetilde{R_{i}} \tag{78}\]
Since the relation holds for any \(i\in\{1,...,15\}\) and \(\mathsf{supp}\;\mathcal{C}(\Pi_{V})\subseteq V\), we know \(\mathcal{C}(\Pi_{V})\propto\Pi_{V}\). Further, since \(\mathcal{C}\) is trace-preserving, we have \(\mathcal{C}(\Pi_{V})=\Pi_{V}\). Thus:
\[\mathcal{C}(\widetilde{R_{i}^{U}})=\widetilde{R_{i}} \tag{79}\]
This implies that when restricted to \(V\), \(\mathcal{C}(\cdot)\) is a \(*\)-isomorphism and must be a unitary channel.
We use the theorem to explore the relation between the phase of the noisy toric code state and the preservation of quantum information stored.
Consider a toric code's codeword state \(\left|\psi\right\rangle\in V\). Suppose \(\mathcal{N}_{\psi}\) is an LC transformation that preserves the toric code phase. By definition, there exists another LC transformation \(\mathcal{D}_{\psi}\) such that \(\mathsf{supp}\;\mathcal{D}_{\psi}\circ\mathcal{N}_{\psi}(\left|\psi\right\rangle )\subseteq V\)7. We first point out that the pair \((\mathcal{N}_{\psi},\mathcal{D}_{\psi})\) satisfies
Footnote 7: We emphasize that the condition should not be \(\mathcal{D}_{\psi}\circ\mathcal{N}_{\psi}(\left|\psi\right\rangle)=\left| \psi\right\rangle\), according to our definition of the toric code phase in Sec.VI.1
\[\mathsf{supp}\;\mathcal{D}_{\psi}\circ\mathcal{N}_{\psi}(\left|\psi^{\prime}\right\rangle\left\langle\psi^{\prime}\right|)\subseteq V\quad\forall\left|\psi^{\prime}\right\rangle\in V, \tag{80}\]
which we prove in App.A.5. Since the choice of \((\mathcal{N}_{\psi},\mathcal{D}_{\psi})\) does not depend on the codeword state \(\left|\psi\right\rangle\), we drop the \(\psi\) subscript henceforth.
The channel \(\mathcal{D}\circ\mathcal{N}\) thus satisfies the condition in Thm.2, according to which we have
\[\mathcal{D}\circ\mathcal{N}(\left|\psi\right\rangle\left\langle\psi\right|)=U \left|\psi\right\rangle\left\langle\psi\right|U^{\dagger}\quad\forall\left| \psi\right\rangle\in V \tag{81}\]
for some logical unitary operator \(U\).
We thus conclude that if the noise \(\mathcal{N}\) preserves the toric code phase, then it also preserves the logical information stored. In particular, the recovery map can be chosen as \(R(\cdot)=U^{\dagger}\mathcal{D}(\cdot)U\).
We consider a more detailed scenario where the channel \(\mathcal{N}=\mathcal{N}_{p}\) has a strength parameter \(p\). When the noise is very strong, both the toric code phase and the logical information stored should be destroyed. Thus one can define two critical noise strengths: \(p_{\text{t.c.}}\), beyond which the noisy state is no longer in the toric code phase; and \(p_{\text{coding}}\), beyond which the stored logical information is no longer recoverable. The previous analysis shows that
\[p_{\text{t.c.}}\leq p_{\text{coding}} \tag{82}\]
Namely, the loss of logical information must occur after transitioning out of the toric code phase.
If there is a gap between \(p_{\text{t.c.}}\) and \(p_{\text{coding}}\), then the noisy state \(\mathcal{N}_{p}(\left|\psi\right\rangle\left\langle\psi\right|)\) for \(p\in(p_{\text{t.c.}},p_{\text{coding}})\) is not in the toric code phase but still contains logical information. In this case, the corresponding recovery map \(\mathcal{R}\) that recovers logical information must be non-LC.
### RG of the dephased toric code state
We illustrate these general results in a specific example, for which we construct explicit RG channels. We consider a toric code ground state \(\left|\text{t.c.}\right\rangle\in V\) subject to bit-flip noise with strength \(p\) (Eq.(33)):
\[\rho_{p}:=(\mathcal{N}_{p}^{X})^{\otimes L}\left(\left|\text{t.c.}\right\rangle\left\langle\text{t.c.}\right|\right) \tag{83}\]
We ask whether the state is in the same phase as \(\left|\text{t.c.}\right\rangle\), when \(p\) is small.
It is convenient to work in the anyon number basis Eq.(55), and since all states in this example are in the \(e\)-anyon-free subspace, we omit the \(\mathbf{e}\) labeling henceforth. An \(X\) operator acting on a qubit on an edge will create a pair of anyons in the two plaquettes adjacent to the edge. But if two anyons meet in the same plaquette, they annihilate. Thus if we fix the set of edges acted on by \(X\), then anyons appear on faces adjacent to an odd number of \(X\)s (see Fig.8).
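The combinatorics of this picture are easy to reproduce numerically. The following is a minimal sketch (our own illustration, not code from this work), assuming a periodic \(L\times L\) lattice with one horizontal and one vertical edge per plaquette; the edge-indexing convention is chosen only for this example.

```python
import numpy as np

def sample_anyons(L, p, rng):
    """Sample iid X (bit-flip) errors on the edges of a periodic L x L lattice
    and return the plaquette anyon occupations (True where an anyon sits).

    Convention (assumed for this sketch): h[i, j] is the horizontal edge on the
    bottom of plaquette (i, j); v[i, j] is the vertical edge on its left.  A
    plaquette hosts an anyon iff an odd number of its four edges is flipped.
    """
    h = rng.random((L, L)) < p          # horizontal-edge errors
    v = rng.random((L, L)) < p          # vertical-edge errors
    # boundary of plaquette (i, j): h[i, j], h[i+1, j], v[i, j], v[i, j+1]
    return h ^ np.roll(h, -1, axis=0) ^ v ^ np.roll(v, -1, axis=1)

rng = np.random.default_rng(0)
anyons = sample_anyons(64, 0.05, rng)
print("anyon density:", anyons.mean())  # roughly 4p for small p (four edges per plaquette)
```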
The noisy state is a classical mixture of anyon configurations which differ significantly from those of the Gibbs state. When \(p\) is sufficiently small, the typical size of an error cluster is much smaller than the typical distance between clusters. The errors create anyons at the boundary of each cluster.
This picture suggests that by locally identifying clusters and pairing up anyons therein, one can remove all errors if \(p\) is sufficiently small. This intuition underlies the design of several decoding algorithms for the toric code [45, 46, 47, 24], which aim to pair up anyons such that the quantum information stored in the code remains intact. As we show now, these decoders can be modified into RG schemes to reveal mixed-state phases of the noisy toric-code states.
We construct a simplified version of the Harrington decoder for the toric code [47, 24] to demonstrate that \(\rho_{p}\) and \(\left|\text{t.c.}\right\rangle\) are in the same phase when \(p\) is small. We first partition the lattice into even blocks \(\mathcal{B}_{\text{even}}\) and odd blocks \(\mathcal{B}_{\text{odd}}\) (see Fig.7). Odd blocks are obtained by translating even blocks by one lattice spacing in both spatial directions. The two types of blocks will play different roles: coarse-graining will occur on even blocks and anyons will be paired up within odd blocks, regarded as boundary regions of even blocks.
Each step of the RG is composed of three layers of local channels:
\[\mathcal{C}=\mathcal{U}\circ\left(\bigotimes_{B\in\mathcal{B}_{\text{even}}} \mathcal{E}_{B}^{\,X}\right)\circ\left(\bigotimes_{B\in\mathcal{B}_{\text{odd }}}\mathcal{G}_{B}\right) \tag{84}\]
where the final step \(\mathcal{U}\) is the disentangling operation depicted in Eq.(66), and \(\mathcal{E}^{X}\) is the coarse-graining channel defined in Eq.(63).
The main difference between this RG scheme and the one for the thermal toric code state (see Sec.VI.2) is the introduction of \(\mathcal{G}\)s. \(\mathcal{G}_{B}\) annihilates all anyons within \(B\) only if there are an even number of them; otherwise it leaves anyons within \(B\) unmodified:
\[\mathcal{G}_{B}:=\sum_{\mathbf{m}\in\{0,1\}^{4}}\widetilde{U}_{\mathbf{m}}P_{\mathbf{m}}(\cdot)P_{\mathbf{m}}\widetilde{U}_{\mathbf{m}}^{\dagger} \tag{85}\]
where \(\widetilde{U}_{\mathbf{m}}\) equals \(U_{\mathbf{m}}\) (Eq.(63)) if \(\pi(\mathbf{m})=0\), and is \(\mathbb{I}\) when \(\pi(\mathbf{m})=1\). The heuristic reason for introducing \(\mathcal{G}_{B}\) is to pair up anyons from clusters that straddle the boundaries of the even blocks before the coarse-graining step. If such anyons were not paired, the coarse-graining on even blocks could extend them into larger clusters, hindering effective anyon removal.
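Because the noisy state is a classical mixture of anyon configurations, the effect of \(\mathcal{G}_{B}\) on the anyon density can be tracked purely classically. The sketch below is our own illustration: it records only plaquette occupations (not the \(X\) strings applied), assumes the lattice size is a multiple of the block size, and uses a block/offset convention chosen for this example.

```python
import numpy as np

def apply_pairing(anyons, block, offset):
    """Classical shadow of the pairing channel G: within every block x block
    patch of the (shifted) plaquette lattice, annihilate all anyons if the
    patch contains an even number of them; otherwise leave the patch alone.
    `offset` shifts the block grid, e.g. to act on the odd blocks.
    Assumes anyons.shape[0] is a multiple of `block`."""
    L = anyons.shape[0]
    out = np.roll(anyons.astype(bool), (-offset, -offset), axis=(0, 1))
    for i in range(0, L, block):
        for j in range(0, L, block):
            patch = out[i:i + block, j:j + block]
            if patch.sum() % 2 == 0:
                patch[...] = False       # anyons pair up and annihilate
    return np.roll(out, (offset, offset), axis=(0, 1))
```

Interleaving such block operations with the coarse-graining of the even blocks is, classically, what the Monte Carlo estimate of \(q_{p}^{(\ell)}\) described next keeps track of.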
After each RG step \(\mathcal{C}\), the new state is still an ensemble of different anyon configurations, albeit one that is not analytically tractable. Thus, we numerically compute how the RG steps affect the anyon density:
\[q_{p}^{(\ell)}=\frac{1}{|P^{(\ell)}|}\sum_{\square\in P^{(\ell)}}\text{tr} \left(\rho_{p}^{(\ell)}\frac{1-A_{\square}}{2}\right) \tag{86}\]
where \(P^{(\ell)}\) is the set of plaquettes on the renormalized lattice and \(\rho_{p}^{(\ell)}\) is the renormalized state after \(\ell\) iterations. \(q=0\) implies the state is in the ground state subspace \(V\) of the toric code Hamiltonian Eq.(54).
We use a Monte Carlo method to study the flow of \(q_{p}^{(\ell)}\) under RG. The simulation (Fig.7 (b)) shows that there is a sharp transition of \(q_{p}^{(\ell)}\) at \(p_{c}\approx 0.041\):
\[\lim_{\ell\rightarrow\infty}q_{p}^{(\ell)}=\left\{\begin{array}{ll}0&p<p_{c}\\ O(1)&p>p_{c}\end{array}\right. \tag{87}\]
When \(p<p_{c}\) the RG successfully annihilates all anyons and the fixed-point state lies in the ground state subspace \(V\); when \(p>p_{c}\), the fixed-point state has a finite anyon density.
Furthermore, when the anyon density \(q^{(\ell)}\) approaches \(0\), it transforms under each RG step as (see Fig.7 (c)):
\[q^{(\ell+1)}\simeq(q^{(\ell)})^{\gamma} \tag{88}\]
for \(\gamma>1\). This behavior guarantees that a small number of iterations is sufficient for convergence to the toric code ground state subspace (see App.B.4 for details).

Figure 8: **A sample anyon configuration for noisy toric code**– A dashed line on an edge denotes the corresponding qubit is flipped by an \(X\) operator. Anyons are created in plaquettes (shaded) where an odd number of dashed lines meet.

Figure 7: **RG of the dephased toric code state**– (a) In each RG iteration, \(\mathcal{G}\) is applied in parallel to all the odd blocks (blue), then \(\mathcal{E}^{X}\) is applied to all the even blocks (green). Finally disentangling unitaries (see Eq.(66), not drawn in the figure) are applied to reduce the lattice size by half. (b) RG flow of the anyon density \(q_{p}^{(\ell)}\). (c) Iteration relation of \(q_{p}^{(\ell)}\) when approaching \(0\), for various choices of \(p\).
We thus obtain the LC bi-connection:
\[|\text{t.c.}\rangle\ \xrightarrow{\text{noise}}\ \rho_{p}\ \xrightarrow{\text{RG}}\ |\text{t.c.}^{\prime}\rangle\qquad p<p_{c} \tag{89}\]
where \(|\text{t.c.}^{\prime}\rangle\) is another toric code state. This shows that \(\rho_{p}\) is in the same phase as the pure toric code state when \(p<p_{c}\); therefore, \(X\)-dephasing noise is an irrelevant perturbation to the topologically ordered phase.
We emphasize that this analysis does not show that \(\rho_{p}\) with \(p>p_{c}\) is in a different phase, because no bi-connection has been identified with this decoder. In fact, in the next section, we will construct another local channel that establishes that the phase boundary of the toric code phase extends to a much higher \(p_{c}\).
### Truncated minimal weight perfect matching channel
The seminal work [43] showed that the dephased toric code state (Eq.(83)) retains its logical information up to a critical point \(p_{\text{coding}}\approx 0.108\), by relating the coding phase transition to the ferromagnetic-paramagnetic transition in the random bond Ising model. A recovery channel called the maximal likelihood decoder [43, 48] decodes the logical information for any \(p<p_{\text{coding}}\), but the channel is not an LC transformation.
The minimal weight perfect matching (MWPM) decoder is another decoder introduced in [43]. It has a decoding threshold \(p_{\text{MWPM}}\approx 0.103\) very close to \(p_{\text{coding}}\) [49]. The MWPM decoder, as a quantum channel, is also not an LC transformation. In the rest of this section, we show that it is possible to approximate the MWPM decoder's action arbitrarily well with an LC transformation whenever \(p<p_{\text{MWPM}}\). Consequently, we show that any dephased toric code state with \(p<p_{\text{MWPM}}\) is in the toric code phase.
The core component of the MWPM decoder (henceforth referred to as \(\mathcal{C}^{\text{MWPM}}\)) is a classical algorithm that solves the MWPM problem, namely searching for an anyon-pairing scheme that minimizes the total length of the strings connecting pairs. Afterward, the decoder annihilates each anyon pair by acting with the string of \(X\) operators connecting the pair.
We now devise a way to truncate the \(\mathcal{C}^{\text{MWPM}}\) into an LC transformation. We first partition plaquettes into disjoint blocks, each with a size \(b\times b\). For each block \(B\), we apply a local channel \(\mathcal{E}_{B,a}\) which acts jointly on \(B\) and a buffer region \(F\) of width \(a\) surrounding \(B\) (see Fig.9). The local channel first solves the MWPM of anyons within the truncated region \(B\cup F\), with the additional requirement that each anyon can either pair with another anyon or with the outer boundary of \(F\) (the dashed line in Fig.9). Then given the pairing scheme suggested by the MWPM solution, the channel only accepts a subset of it, namely pairs with at least one anyon within \(B\). The truncated MWPM (tMWPM) channel applies the above channel to every block:
\[\mathcal{C}^{\text{tMWPM}}_{a}:=\prod_{B\in\mathcal{B}}\mathcal{E}_{B,a} \tag{90}\]
Note that different \(\mathcal{E}_{B,a}\) can have overlapping domains. But since each \(\mathcal{E}_{B,a}\) acts only on a patch of \((b+2a)^{2}\) qubits, we can always rearrange \(\{\mathcal{E}_{B,a}\}\) into an \(O((b+2a)^{2})\)-layer circuit so that each layer is composed of channels with non-overlapping domains. After the rearrangement, it is apparent that \(\mathcal{C}^{\text{tMWPM}}_{a}\) is a range-\(O((b+2a)^{4})\) LC transformation (because both the depth and the range of each gate are \(O((b+2a)^{2})\)).
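To make the matching step inside one local channel \(\mathcal{E}_{B,a}\) concrete, here is a minimal sketch (our own illustration, not code from this work): anyons in \(B\cup F\) are paired by a minimum-weight matching in which each anyon may instead be matched to the outer boundary of \(F\), and only pairs touching \(B\) are kept. For readability it uses a brute-force recursion rather than a polynomial-time matching algorithm, which is adequate only for the handful of anyons in a small \(B\cup F\); the distance and boundary-cost functions are placeholders to be supplied by the caller.

```python
def min_weight_pairing(pts, boundary_cost, dist):
    """Exhaustive minimum-weight pairing of `pts`: each point is paired either
    with another point (cost dist(p, q)) or with the boundary (cost
    boundary_cost(p), encoded as the pair (p, None)).  Returns (cost, pairs).
    Exponential time -- intended only for a few anyons inside B u F."""
    if not pts:
        return 0.0, []
    first, rest = pts[0], pts[1:]
    # option 1: match `first` to the outer boundary of F
    c, pairs = min_weight_pairing(rest, boundary_cost, dist)
    best = (c + boundary_cost(first), pairs + [(first, None)])
    # option 2: match `first` with some other anyon
    for k, other in enumerate(rest):
        c, pairs = min_weight_pairing(rest[:k] + rest[k + 1:], boundary_cost, dist)
        cand = (c + dist(first, other), pairs + [(first, other)])
        if cand[0] < best[0]:
            best = cand
    return best

def local_channel_pairs(anyons_in_BF, in_B, boundary_cost, dist):
    """One application of E_{B,a}: solve the truncated MWPM on B u F, then
    accept only the pairs involving at least one anyon inside B."""
    _, pairs = min_weight_pairing(list(anyons_in_BF), boundary_cost, dist)
    return [(p, q) for (p, q) in pairs
            if in_B(p) or (q is not None and in_B(q))]
```

The accepted pairs would then be annihilated by the \(X\) strings connecting them (or connecting an anyon to the boundary), exactly as in the global decoder.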
The assumption behind the design of the tMWPM channel is the existence of a correlation length \(\xi(p)\), such that when \(a\gg\xi(p)\), solving MWPM on \(B\cup F\) only and solving MWPM on the whole system produce the same pairing for anyons in \(B\). If the assumption holds for every block \(B\), then the \(\mathcal{C}^{\text{tMWPM}}_{a}\) pairs all anyons in the same way as \(\mathcal{C}^{\text{MWPM}}\):
\[\mathcal{C}^{\text{tMWPM}}_{a}(\rho_{p})\approx\mathcal{C}^{\text{MWPM}}(\rho_{p})=|\text{t.c.}\rangle\langle\text{t.c.}|\quad a\gg\xi(p) \tag{91}\]
We provide a rough estimate of how large \(a\) needs to be for the '\(\approx\)' above to hold (agreement between local and global MWPM with probability \(1-\epsilon\)). Given the correlation-length assumption, the probability that the global and the truncated MWPM agree on a single block \(B\) should be \((1-e^{-a/\xi(p)})\). Assuming these agreement events for different blocks are independent (which should hold for far-apart blocks), we need
\[(1-e^{-a/\xi(p)})^{L^{2}/b^{2}}>1-\epsilon \tag{92}\]
which occurs when
\[a=\xi(p)O\left(\log\frac{L^{2}}{\epsilon}\right). \tag{93}\]
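The step from Eq.(92) to Eq.(93) can be filled in with Bernoulli's inequality; the following is a hedged worked version of that algebra, absorbing the fixed block size \(b=O(1)\) into the \(O\)-notation.

```latex
% Since (1-x)^N \ge 1 - Nx for x \in [0,1], with x = e^{-a/\xi(p)} and N = L^2/b^2,
% condition (92) is implied by N x < \epsilon:
\frac{L^{2}}{b^{2}}\, e^{-a/\xi(p)} < \epsilon
\quad\Longleftrightarrow\quad
a > \xi(p)\,\log\frac{L^{2}}{b^{2}\,\epsilon}
  = \xi(p)\, O\!\left(\log\frac{L^{2}}{\epsilon}\right).
```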
Figure 9: **Truncated minimum weight perfect matching channel**: (a) For a given block \(B\), the corresponding channel \(\mathcal{E}_{B,a}\) acts on both \(B\) and a buffer region \(F\) of a width \(a\). (b) \(\mu(a,p)\) is the probability that the truncated and global MWPM algorithms produce the same anyon pairings. It is plotted against \(p\) for various choices of \(a\).
\(a\) diverges whenever \(\xi(p)\) does, and this is expected to happen when \(p\to p_{\rm MWPM}\).
To numerically support the assumption that there exists a correlation length for MWPM, we sample anyon configurations in the \(X\)-dephased toric code state and solve the MWPM first for all the anyons, and then only for anyons within \(B\cup F\) (Fig.9(a)). Then we compute the probability \(\mu(a,p)\) that the two solutions are identical on \(B\). We let both the system size \(L\) and the diameter of \(B\) be proportional to \(a\), the width of the buffer \(F\), so that the system has only one length scale \(a\).
The simulation result is shown in Fig.9(b) and suggests there is a critical point \(p_{\rm tMWPM}\) in the interval \((0.10,0.11)\), presumably consistent with \(p_{\rm MWPM}\) in the thermodynamic limit. Below \(p_{\rm tMWPM}\), we observe that \(\lim_{a\to\infty}\mu(a,p)=1\). This indicates that the MWPM solution within \(B\) is independent of anyons that are more than \(O(\xi)\) away from \(B\), for some correlation length \(\xi\) which diverges at \(p_{\rm tMWPM}\). tMWPM thus serves as a local channel which, along with the noise channel, establishes the two-way connection demonstrating the toric code phase up to \(p_{\rm tMWPM}\approx 0.1\). Above \(p_{\rm tMWPM}\), \(\lim_{a\to\infty}\mu(a,p)=0\), implying non-locality in the MWPM solution.
We point out that the simulation method above provides a way to detect the toric code phase using the anyon distribution data. One can fix a region \(B\), implement MWPM on \(B\cup F\), and gradually increase the buffer width \(a\). If the MWPM solution when restricted to \(B\) becomes stationary after \(a\) is larger than some \(a^{*}=O(1)\) with high probability, then the original mixed state is in the toric code phase because the tMWPM channel with \(a\gtrsim a^{*}\log L\) can transform it into a pure toric code state. The method can potentially be used to detect mixed-state topological order in experiments.
## VIII Discussion and outlook
Our work provides two routes (RG and local versions of decoders) for constructing local channels connecting two mixed states to prove they are in the same phase. We formulated a real-space RG scheme for mixed states and proposed the correlation-preserving property as a guiding criterion for finding coarse-graining maps; this property is necessary and sufficient for the map's action to be reversible (Thm.1). We applied this formalism to identify the phases of several classes of mixed states obtained by perturbing a long-range entangled pure state with noise or finite temperature, and in particular we constructed an exact RG flow of the finite temperature 2D toric code state to infinite temperature.
For toric code subject to decoherence, we also established a relation between the mixed state phase of the toric code and the integrity of logical information. In Thm.2, we proved that if local noise preserves the long-range entanglement of the toric code (and the resulting mixed state remains within the same phase as toric code), it must also preserve logical information encoded in the initial pure state. We conjecture that the converse statement is also true, namely, if local noise destroys the long-range entanglement of toric code, it must also destroy any encoded logical information. Even though the theorem and subsequent discussion focused on the toric code state, the main proof idea generalizes to many other topological codes and their corresponding phases.
* After formalizing the definition of mixed-state phase, one natural question to ask is whether there is a nontrivial phase that contains neither a pure state nor a classical state. A promising candidate is the ZX-dephased toric code state recently studied in [10]. Since the state (when noise is strong) loses logical information [10], it is provably not in the toric code phase according to our Thm.2. Thus if the state is not in the trivial phase, it is an example of intrinsic mixed-state topological order. Another class of potential examples are decohered critical ground states [7; 8], because such states naturally sit between a long-range entangled pure state and a long-range correlated classical state.
* One related question is whether one can find a computable quantity to detect nontrivial mixed-state phases. Topological entanglement negativity has been successfully used as a probe of mixed-state topological order [5; 50], but its robustness under LC transformations needs to be studied further.
* The decoherence-induced toric code transition can also be understood as a separability transition of the mixed-state [11], and it would be valuable to relate this perspective with the mixed state phase and local channel perspective.
* Pure state RG methods like DMRG serve as powerful computational methods for analyzing many-body systems. It is thus important to develop a numerical implementation of our mixed-state RG scheme. To facilitate simulations, one needs to first find an efficient representation of the mixed state (_e.g._ using tensor networks), then update it iteratively using exact or approximately correlation-preserving maps, obtained by solving the optimization problem Eq.(13). We leave this problem for future exploration.
* As presented in Sec.VII.3, tMWPM also serves as a practical probe of the mixed state toric code phase using anyon measurements. However, in experiments imperfect measurements lead to a finite density of 'fake' anyons as well as unprobed anyons. To address this, one needs to consider a specific model of measurement errors and perform more than one round of measurements. Another potential direction is generalizing tMWPM to other topologically ordered mixed-state phases in two or higher dimensions.
###### Acknowledgements.
We thank Arpit Dua, Tyler Ellison, Matthew P.A. Fisher, Tarun Grover, Ethan Lake, Yaodong Li, Zi-Wen Liu, Tsung-Cheng Lu, Ruochen Ma, and Michael Vasmer for helpful discussions and feedback. We also thank Roger G. Melko and Digital Research Alliance of Canada for computational resources. SS acknowledges the KITP graduate fellow program, during which part of this work was completed. This work was supported by the Perimeter Institute for Theoretical Physics (PI) and the Natural Sciences and Engineering Research Council of Canada (NSERC). Research at PI is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This research was also supported in part by the National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135.
|
2304.12926 | Efficient control protocols for an active Ornstein-Uhlenbeck particle | Designing a protocol to efficiently drive a stochastic system is an active
field of research. Here we extend such control theory to an active
Ornstein-Uhlenbeck particle (AOUP) in a bistable potential, driven by a
harmonic trap. We find that protocols designed to minimize the excess work (up
to linear-response) perform better than naive protocols with constant velocity
for a wide range of protocol durations. | Deepak Gupta, Sabine H. L. Klapp, David A. Sivak | 2023-04-25T15:39:04Z | http://arxiv.org/abs/2304.12926v3 | # Efficient control protocols for an active Ornstein-Uhlenbeck particle
###### Abstract
Designing a protocol to efficiently drive a stochastic system is an active field of research. Here we extend such control theory to an active Ornstein-Uhlenbeck particle (AOUP) in a bistable potential, driven by a harmonic trap. We find that protocols designed to minimize the excess work (up to linear-response) perform better than naive protocols with constant velocity for a wide range of protocol durations.
## I Introduction
Active matter is composed of self-propelled units that convert free energy from the environment into mechanical motion [1; 2]. This intrinsic self-propulsion violates detailed balance [3; 4] and drives the system out of equilibrium [5]. Examples of such active-matter systems include flocking birds [6], fish schools [7], light-activated colloids [8], synthetic microswimmers [9], motile cells [10], bacteria [11; 12], and human and animal crowds [13; 14; 15]. Researchers have uncovered fascinating behaviors in active-matter systems including jamming [16], clustering [17], and motility-induced phase separation [18]. A profusion of experimental and theoretical investigations have probed their nonequilibrium nature at the single-particle level [19; 20; 21; 22; 23; 24; 25; 26].
Experiments reveal the promise of active systems for several applications [27], such as delivering drugs to target organs [28; 29], controlling the spread of infectious microorganisms [30], and developing micro-robots capable of advanced group behaviors [31; 32]. Recently, researchers have focused on developing optimal schemes to transport such active particles in complex environments [33; 34; 35].
Since active systems constantly dissipate energy into the environment to sustain nonequilibrium directed operations, it is of paramount importance to develop efficient driving strategies (temporal schedules for varying external control parameters) that reduce the thermodynamic costs of control [36]. Examples of such control parameters include length of a polymer, stiffness and location of a particle-confining trap, and magnetic fields on spin systems [37]. One way to manipulate the dynamics of nonequilibrium systems and to control the thermodynamic cost is _feedback_, as has been demonstrated, _e.g._, for Brownian ratchet systems [38; 39; 40; 41; 42; 43; 44]. Another promising route is to deliberately design a pre-determined protocol that does not depend on contemporary measurements of the system. Indeed, researchers have analytically obtained an optimal driving schedule (henceforth a _protocol_) that minimizes dissipation for a harmonically confined (passive) Brownian particle for arbitrary protocol duration [45; 46]; however, far from equilibrium there is no general strategy to design a minimum-dissipation control protocol for a system diffusing in an arbitrary potential-energy landscape.
Ref. [47] formulated a linear-response framework for such a complicated scenario to design protocols that minimize dissipation near equilibrium. This method has been used to design protocols that reduce dissipation for biomolecular systems, such as driving the F\({}_{1}\)-ATPase molecular motor to synthesize ATP [48], and driving folding and unfolding of single DNA hairpins [49]. Moreover, the effectiveness of this scheme has been demonstrated in numerical simulations of barrier crossing [50], rotary motors [51], Ising models [52; 53; 54], and several other model systems [55; 56; 57; 58].
In contrast to previous works applicable to systems in thermal equilibrium in the absence of driving [45; 46; 47; 48; 49; 58], here we seek efficient driving protocols that minimize the work in driving an active particle. Specifically, we drive an active Ornstein-Uhlenbeck particle (AOUP) in a double-well potential using a harmonic confinement. The AOUP is a popular active-particle model that has already been useful in investigating motility-induced phase separation [18], glassy behavior [59], heat transport [60], and other active nonequilibrium behavior [61; 62; 63; 64]. For this system, we apply the linear-response framework [47] to design a driving protocol. We show that this "designed protocol" performs better than a naive (constant-velocity) protocol. Our analysis extends the linear-response framework (originally derived in passive close-to-equilibrium systems) to AOUPs close to a nonequilibrium stationary state.
The rest of the paper is organized as follows. Section II introduces the model. Section III presents the linear-response framework. Section IV discusses the designed protocol and its effectiveness in driving the particle over the potential-energy barrier. Section V summarizes the main results. Appendix A compares the generalized friction obtained using the full-model defined in Eqs. (3) and (4) (in two extreme limits of the active particle's
persistence time) with that obtained using the effective model (11). Appendix B derives the Kramers time for a passive Brownian particle. Appendix C discusses numerical simulation methods.
## II Setup
We consider an active Ornstein-Uhlenbeck particle (AOUP) coupled to a heat reservoir at temperature \(T\) and confined in a one-dimensional (1D) double-well potential [50] (see Fig. 1):
\[U(x)\equiv-\beta^{-1}\ln\left[e^{-\frac{\beta k}{2}(x+x_{\rm m})^{2}}+e^{-\beta\Delta E-\frac{\beta k}{2}(x-x_{\rm m})^{2}}\right]\,, \tag{1}\]
for particle position \(x\), inverse temperature \(\beta\equiv(k_{\rm B}T)^{-1}\), Boltzmann's constant \(k_{\rm B}\), and spring constant \(k\). The double-well minima are located at \(x=\pm x_{\rm m}\), and \(\Delta E\) is the energy difference between these minima (see Fig. 1). This double-well potential models a bistable system (_e.g._, a DNA hairpin with folded and unfolded conformations) switching between its two metastable states (each modeled as a harmonic potential) on a time scale much faster than all other relevant system time scales [65].
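For reference, a numerically stable evaluation of Eq. (1) is a one-liner; the sketch below is our own illustration (parameter names are placeholders), using a log-sum-exp to avoid underflow in the two Gaussian wells.

```python
import numpy as np

def double_well(x, k, x_m, dE, beta):
    """Double-well landscape of Eq. (1); energies in units of 1/beta = k_B T."""
    a = -0.5 * beta * k * (x + x_m) ** 2                # left well
    b = -beta * dE - 0.5 * beta * k * (x - x_m) ** 2    # right well, offset by dE
    return -np.logaddexp(a, b) / beta
```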
In this paper, we seek a driving protocol that minimizes the work required to transport an AOUP [between the two wells of the double-well \(U(x)\)] using a harmonic trap
\[U_{\rm trap}(x;\lambda)\equiv\frac{1}{2}E^{\ddagger}[x-\lambda(t)]^{2} \tag{2}\]
with fixed stiffness \(E^{\ddagger}\) and time-dependent minimum \(\lambda(t)\). To simplify notation, we henceforth suppress its explicit time dependence.
In the presence of the trap, the particle position \(x\) evolves according to the Langevin equation
\[\dot{x}=-\beta DU_{\rm tot}^{\prime}(x;\lambda)+\sqrt{2D}\ \eta(t)+y(t)\, \tag{3}\]
where the dot and the prime respectively indicate a time- and a space-derivative, and \(D\) the diffusion coefficient. The total potential energy \(U_{\rm tot}(x;\lambda)\equiv U(x)+U_{\rm trap}(x;\lambda)\) experienced by the particle is the sum of the underlying landscape \(U(x)\) and the trapping potential \(U_{\rm trap}(x;\lambda)\) for a given trap minimum \(\lambda\). Figure 1 shows schematics of \(U(x)\), \(U_{\rm trap}(x;\lambda)\), and \(U_{\rm tot}(x;\lambda)\). In Eq. (3), the Ornstein-Uhlenbeck (OU) contribution \(y(t)\) (hereafter the _active velocity_) to the velocity represents the fluctuating active self-propulsion and evolves according to [66; 67]
\[\dot{y}=-\frac{y}{t_{\rm a}}+\frac{1}{t_{\rm a}}\sqrt{2D_{\rm a}}\ \eta_{\rm a}(t). \tag{4}\]
Here \(t_{\rm a}\) is the persistence time. We define the Péclet number \({\rm Pe}=D_{\rm a}/D\) as a dimensionless parameter characterizing the strength of the active noise relative to the thermal noise. In Eqs. (3) and (4), \(\eta(t)\) is thermal noise and \(\eta_{\rm a}(t)\) is "active" noise, each Gaussian with zero mean, i.e., \(\langle\eta(t)\rangle=\langle\eta_{\rm a}(t)\rangle=0\), and delta correlated in time,
\[\langle\eta(t)\eta(t^{\prime})\rangle=\langle\eta_{\rm a}(t) \eta_{\rm a}(t^{\prime})\rangle=\delta(t-t^{\prime}). \tag{5}\]
We further assume that the two noises are independent:
\[\langle\eta(t)\eta_{\rm a}(t^{\prime})\rangle=0. \tag{6}\]
Angle brackets \(\langle\cdots\rangle\) denote an average over both noises.
Integrating Eq. (4) up to the long-time limit and averaging over the active noise \(\eta_{\rm a}(t)\) gives \(y\)'s stationary-state average, \(\langle y(t)\rangle=0\). In this stationary state, the temporal correlations of \(y(t)\) decay exponentially [68]:
\[\langle y(t)y(t^{\prime})\rangle=\frac{D\ {\rm Pe}}{t_{\rm a}}\ e^{-|t-t^{ \prime}|/t_{\rm a}}. \tag{7}\]
Further, since \(y(t)\) depends linearly on the Gaussian active noise \(\eta_{\rm a}(t)\) [see Eq. (4)], \(y(t)\) is also Gaussian distributed with stationary-state distribution
\[p_{\rm ss}(y)=\frac{1}{\sqrt{2\pi D\ {\rm Pe}/t_{\rm a}}}\ e^{-\frac{y^{2}}{2D \ {\rm Pe}/t_{\rm a}}}. \tag{8}\]
For our later analysis, it is useful to consider two limiting cases. First, taking the limit \(t_{\rm a}\ \to\ 0\) in Eq. (7), \(y(t)\) reduces to a zero-mean Gaussian white noise [69], i.e.,
\[\langle y(t)y(t^{\prime})\rangle=2D\ {\rm Pe}\ \delta(t-t^{\prime}). \tag{9}\]
In the opposite limit (\(t_{\rm a}\to\infty\) while holding \(D\) and \({\rm Pe}\) fixed), the distribution of \(y(t)\) becomes a delta-function at \(y=0\), i.e., \(p_{\rm ss}(y)=\delta(y)\) [see Eq. (8)]. To summarize,
\[y(t)\to\left\{\begin{array}{ll}\sqrt{2D\ {\rm Pe}}\ \eta_{\rm a}(t)&t_{\rm a} \to 0\,\\ 0&t_{\rm a}\to\infty\.\end{array}\right. \tag{10}\]
Figure 1: Model schematic. Potential-energy landscape \(U(x)\), trap potential \(U_{\rm trap}(x;\lambda)\), and total potential energy \(U_{\rm tot}(x;\lambda)\equiv U(x)+U_{\rm trap}(x;\lambda)\), as functions of particle position \(x\). Trap minimum is \(\lambda=1.075\sqrt{Dt_{\rm trap}}\), for trap relaxation time \(t_{\rm trap}\equiv k_{\rm B}T/(DE^{\ddagger})\). Energy offset \(\Delta E=2\ k_{\rm B}T\) between potential minima at \(x=\pm x_{\rm m}=\pm 1.414\sqrt{Dt_{\rm trap}}\). Here and in the following figures, the spring constant is \(k=2\ k_{\rm B}T/(Dt_{\rm trap})\), and the trap stiffness is \(E^{\ddagger}=k/2\).
Combining Eq. (10) with Eq. (3), in the stationary state for appropriate limits of \(t_{\rm a}\), \(x\) effectively describes the position of a passive Brownian particle with noise strength \(D\) for \(t_{\rm a}\to\infty\) and \(D(1+{\rm Pe})\) for \(t_{\rm a}\to 0\) [see Eq. (10)].
## III Theory
For a single stochastic trajectory, the _excess work_\(w^{\rm FM}_{\rm ex}\) is the difference between the work
\[w^{\rm FM}\equiv-\int_{0}^{t_{\rm dur}}\;{\rm d}t\ \dot{\lambda}\ f\, \tag{11}\]
performed on the AOUP [70] in a time-dependent protocol and its quasistatic value,
\[W_{\rm qs}\equiv-\int_{0}^{t_{\rm dur}}\;{\rm d}t\ \dot{\lambda}\ \langle f \rangle_{\lambda}. \tag{12}\]
Here \(f\equiv-\partial_{\lambda}U_{\rm trap}(x;\lambda)=E^{\ddagger}(x-\lambda)\) is the force conjugate to the control parameter \(\lambda\). In Eq. (11), the superscript 'FM' denotes the _full model_, i.e., Eqs. (3) and (4). In Eq. (12), angle brackets \(\langle\dots\rangle_{\lambda}\) indicate an average at fixed trap minimum \(\lambda\). Appendices C.2 and C.3 respectively detail the computation of the stationary-state average force \(\langle f\rangle_{\lambda}\) and the quasistatic work \(W_{\rm qs}\).
The main quantity of interest is the ensemble-average (over initial conditions and each noise's history) excess work
\[W^{\rm FM}_{\rm ex}\equiv\langle w^{\rm FM}_{\rm ex}\rangle=-\int_{0}^{t_{ \rm dur}}\;{\rm d}t\ \dot{\lambda}\ \langle\delta f\rangle\, \tag{13}\]
where \(\delta f\equiv f-\langle f\rangle_{\lambda}\) is the deviation of the force from its average for fixed trap minimum \(\lambda\). Even though the excess work is an ensemble-average quantity, henceforth for brevity we drop explicit mention of the average. Notice that in the absence of active velocity (i.e., \(y=0\)), the quasistatic work equals the free-energy difference between the initial and final control-parameter values (see Appendix C.3 and Fig. C2) [47; 48].
In the absence of the moving harmonic trap (2), the active system described by Eqs. (3) and (4) approaches a nonequilibrium stationary state. The presence of the moving harmonic trap (2) pushes the system further from equilibrium. In view of this complicated situation, it seems challenging to find an optimal control protocol that minimizes the excess work (13) for an AOUP. However, a linear-response framework [47]--originally derived for passive systems close to equilibrium--provides a route to designing protocols that systematically reduce dissipation in a variety of systems [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58]. Here, we test the applicability of this linear-response framework [47] for a driven AOUP.
In the following, we briefly summarize the linear-response framework developed in [47]. For a passive system (i.e., no active velocity, \(y=0\)) that remains close to its stationary (in this case equilibrium) state during time-dependent variation of the control parameter \(\lambda\), within the linear-response approximation the instantaneous excess power (exceeding the corresponding quasistatic power) is
\[P^{\rm LR}_{\rm ex}(t)\approx\zeta(\lambda)\!\left(\frac{{\rm d}\lambda}{{\rm d }t}\right)^{2}. \tag{14}\]
(Here, the superscript 'LR' denotes the _linear-response_ approximation.) The time integral of this quantity over the protocol duration \(t_{\rm dur}\) gives the excess work,
\[W^{\rm LR}_{\rm ex}=\int_{0}^{t_{\rm dur}}\;{\rm d}t\ P^{\rm LR}_{\rm ex}(t). \tag{15}\]
In Eq. (14), the generalized friction coefficient \(\zeta(\lambda)\) is the time-integral of the stationary-state force autocovariance:
\[\zeta(\lambda)\equiv\beta\int_{0}^{\infty}\;{\rm d}t\ \langle\delta f(0)\ \delta f(t)\rangle_{\lambda}. \tag{16}\]
Appendix C.1 details computation of the force-autocovariance function at fixed trap minimum \(\lambda\).
Multiplying and dividing the right-hand side of (16) by the stationary-state force variance \(\langle(\delta f)^{2}\rangle_{\lambda}\), we rewrite the generalized friction coefficient,
\[\zeta(\lambda)=\beta\langle(\delta f)^{2}\rangle_{\lambda}\ \tau_{\rm relax}( \lambda)\, \tag{17}\]
as the product of the force variance \(\langle(\delta f)^{2}\rangle_{\lambda}\) and the force relaxation time
\[\tau_{\rm relax}(\lambda)\equiv\int_{0}^{\infty}\;{\rm d}t\ \frac{\langle\delta f(0)\ \delta f(t)\rangle_{\lambda}}{\langle(\delta f)^{2}\rangle_{\lambda}}. \tag{18}\]
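In practice, both quantities are estimated from a sampled force-autocovariance curve (Appendix C.1). A minimal sketch (our own; it assumes the autocovariance is tabulated on a uniform time grid and truncates the infinite upper limit at the end of the sampled window):

```python
import numpy as np

def friction_and_relaxation(acov, dt, beta):
    """Generalized friction (Eq. 16) and force relaxation time (Eq. 18) from a
    sampled autocovariance acov[k] ~ <df(0) df(k*dt)> at fixed trap minimum."""
    zeta = beta * np.trapz(acov, dx=dt)           # Eq. (16), truncated at the window end
    tau_relax = np.trapz(acov / acov[0], dx=dt)   # Eq. (18)
    return zeta, tau_relax
```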
Following Ref. [47], the rate of change (hereafter _velocity_) of the designed protocol \(\lambda^{\rm des}(t)\) that (near equilibrium) minimizes the excess work is inversely proportional to the square root of the friction coefficient,
\[\frac{{\rm d}\lambda^{\rm des}}{{\rm d}t}=\frac{A^{\rm des}}{\sqrt{\zeta( \lambda)}}\, \tag{19}\]
which differs from the constant-velocity (hereafter _naive_) protocol,
\[\frac{{\rm d}\lambda^{\rm naive}}{{\rm d}t}=A^{\rm naive}. \tag{20}\]
In Eqs. (19) and (20), the protocol's boundary conditions \(\lambda(0)=\lambda_{\rm i}\) and \(\lambda(t_{\rm dur})=\lambda_{\rm f}\) fix the constants \(A^{\rm des}\) and \(A^{\rm naive}\). Substituting (19) in (14) yields (within the linear-response framework) a constant excess power, whereas for the naive protocol (20), the excess power (14) is proportional to \(\zeta(\lambda)\).
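Given \(\zeta(\lambda)\) tabulated on a grid, the designed protocol follows by quadrature: Eq. (19) says the time spent per unit \(\mathrm{d}\lambda\) is proportional to \(\sqrt{\zeta(\lambda)}\). A minimal sketch (our own; function and variable names are placeholders):

```python
import numpy as np

def designed_schedule(lams, zeta, t_dur):
    """Times t_k at which the designed protocol of Eq. (19) passes through the
    grid points `lams`, so that d(lambda)/dt is proportional to 1/sqrt(zeta).
    Equivalently dt/d(lambda) ~ sqrt(zeta); integrate and rescale to t_dur."""
    w = np.sqrt(zeta)
    cost = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(lams))))
    return t_dur * cost / cost[-1]

def naive_schedule(lams, t_dur):
    """Constant-velocity protocol of Eq. (20) on the same grid."""
    return t_dur * (lams - lams[0]) / (lams[-1] - lams[0])
```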
## IV Results
We start by considering the energetic landscape determining the particle dynamics and the quantities entering the linear-response framework.
Figure 2a shows the total potential energy \(U_{\rm tot}(x;\lambda)\) as a function of particle position \(x\), for different trap minima \(\lambda\). For each examined \(\Delta E\), there is a range of \(\lambda\) for which the total potential energy \(U_{\rm tot}(x;\lambda)\) has two metastable states (_e.g._, see \(\lambda=0\) for \(\Delta E=0\)).
Figure 2b shows the force autocovariance function [determining the generalized friction coefficient (16)] as a function of observation time \(t/t_{\rm trap}\), for different trap minima \(\lambda\). The force autocovariance decays markedly more slowly when the total potential \(U_{\rm tot}(x;\lambda)\) displays two metastable states (_e.g._, for \(\lambda/\sqrt{Dt_{\rm trap}}=0\) and \(\Delta E=0\ k_{\rm B}T\)). For \(\Delta E=0\), \(U_{\rm tot}(x;\lambda)\) and \(U_{\rm tot}(x;-\lambda)\) are related by a mirror reflection about \(\lambda=0\) (see Fig. 2a), thus producing identical (up to numerical sampling) force autocovariance functions; for \(\Delta E\neq 0\) there is no such symmetry.
Figures 2c,d,e respectively display the force variance, the force relaxation time (18), and their product yielding the generalized friction (16), each as a function of trap minimum \(\lambda\). For \(\Delta E=2\)\(k_{\rm B}T\), all these functions are asymmetric about \(\lambda=0\), reflecting the asymmetry in the total potential energy landscape \(U_{\rm tot}(x;\lambda)\). For longer persistence time (\(t_{\rm a}/t_{\rm trap}\gtrsim 1\)), each of these quantities is maximized at a trap minimum \(\lambda\) for which the total potential \(U_{\rm tot}(x;\lambda)\) has two metastable states (see Fig. 2a). However, for short persistence time (\(t_{\rm a}/t_{\rm trap}\ll 1\)), they are almost independent of the trap minimum \(\lambda\). This is because in this limit the active velocity \(y(t)\) behaves as Gaussian white noise (9), producing a higher effective diffusion coefficient \(D(1+{\rm Pe})\) than the passive Brownian particle [see Eq. (10)]. Thus, at \({\rm Pe}=5\) the effective temperature experienced by the AOUP is 6 \(k_{\rm B}T\) (six times that of the passive Brownian particle), dominating the \(\sim\)1 \(k_{\rm B}T\) height of the total potential's barrier (Fig. 2a). Fig. A1 shows agreement of the generalized friction coefficient obtained at extreme values of persistence time with that obtained using the effective dynamics (10).
Figure 2f shows the designed protocol velocity defined according to (19), as a function of trap minimum \(\lambda\). The system is driven more slowly where the generalized friction is higher, in order to harness thermal fluctuations to overcome the total potential's barrier between the two metastable states, thereby reducing the excess work.
Integrating the protocol's velocity with respect to time gives the designed protocol, that is, the optimal trajectory of the trap minimum \(\lambda\), as a function of time (Fig. 2g). Since for \(t_{\rm a}/t_{\rm trap}\ll 1\) the effect of the total potential's barrier is negligible (Figs. 2c,d,e), the naive and designed protocols are indistinguishable.
In the following, we use the naive and designed protocols to compute the excess work and normalized flux using the full model described by dynamics (3) and (4) as functions of protocol duration (see Appendix C.4 for the numerical simulation method).
Figure 2: (a) Total potential energy as a function of particle position \(x\). Vertical dashed lines: trap minimum \(\lambda\). (b) Force autocovariance as a function of time, for persistence time \(t_{\rm a}=10^{4}\times t_{\rm trap}\). (c) Force variance, (d) force relaxation time, (e) generalized friction coefficient, and (f) protocol velocity, each as a function of trap minimum \(\lambda\). (g) Protocol as a function of time. (f,g) Red lines: naive (constant-velocity) protocol; points/curves: designed protocols. (c-g) Blue color intensity increases with persistence time \(t_{\rm a}\). Here and in the following, the Péclet number \({\rm Pe}=5\), and error bars indicate one standard error of the mean.

We start by assessing the accuracy of the linear-response framework by comparing the true excess work (13) using the full model (3) and (4) and the linear-response approximation (15) in the slow-driving (long-duration) regime. Figure 3a shows the ratio \(\phi\equiv W_{\rm ex}^{\rm FM}/W_{\rm ex}^{\rm LR}\), for both naive and designed protocols, as a function of protocol duration. By definition, \(\phi\) quantifies the accuracy of the linear-response approximation [72]: \(\phi=1\) indicates complete accuracy. For each value of \(t_{\rm a}\), \(\phi\) approaches a constant value (up to numerical sampling) in the limit of long duration. These values appear to be independent of the protocol type and the energy shift \(\Delta E\).
Figure 3b shows that this linear-response accuracy \(\phi\) asymptotes to unity for \(t_{\rm a}\gg t_{\rm K}\), and to \(1/(1+{\rm Pe})\) for \(t_{\rm a}\ll t_{\rm K}\), where \(t_{\rm K}\) is an average Kramers time obtained for the passive Brownian case, see Appendix B for details. In the limit \(t_{\rm a}\to\infty\), the OU contribution \(y(t)\) to the velocity effectively vanishes and the system can be described by a (passive) Brownian dynamics with unchanged temperature \(k_{\rm B}T\) [see the effective dynamics of Appendix A with \({\rm Pe}=0\)]. In the opposite limit of \(t_{\rm a}\to 0\), \(y(t)\) effectively becomes an additional Gaussian white noise which combines with the Gaussian thermal white noise \(\eta(t)\) to give white noise with total effective strength \(D(1+{\rm Pe})\); therefore, the system can be described by a Brownian dynamics with effective temperature \(k_{\rm B}T(1+{\rm Pe})\) [see the effective dynamics of Appendix A]. Figure 3b also displays the crossover of \(\phi\) from \(1/(1+{\rm Pe})\) to \(1\) as a function of persistence time \(t_{\rm a}\).
Figure 4a shows the full-model naive and designed excess works (13), each as a function of protocol duration. At long protocol duration, the system mostly follows the trap and remains close to its stationary state during the entire protocol, so the work performed on the AOUP approaches its quasistatic value, i.e., \(W_{\rm ex}\to 0\) as \(t_{\rm dur}\to\infty\). We observe that this excess work (for both naive and designed protocols) decays as \(\sim t_{\rm dur}^{-1}\).
Figure 4b compares the ratio of full-model naive and designed excess works (13) with its linear-response approximation [72], as a function of protocol duration. The linear-response approximation is more accurate at longer protocol durations. For short persistence time (\(t_{\rm a}/t_{\rm trap}\ll 1\)), the effect of the total potential's barrier on the AOUP is negligible, so the naive and designed protocols are similar (Fig. 2g), and thus this ratio is approximately unity. Away from this limit (i.e., \(t_{\rm a}/t_{\rm trap}\gtrsim 1\)), the designed excess work is lower than the naive for a considerable range of protocol durations. We emphasize that in contrast to the absolute value of excess work (see Fig. 3), the excess-work ratio is independent of the linear-response accuracy \(\phi\), signaling the applicability of the linear-response framework [47] for the AOUP.

Figure 3: The linear-response accuracy \(\phi\equiv W_{\rm ex}^{\rm FM}/W_{\rm ex}^{\rm LR}\), the ratio of the works under the full-model (13) (FM) and the linear-response (LR) approximation (15) [72], a) as a function of protocol duration \(t_{\rm dur}\) for various persistence times \(t_{\rm a}\), and b) as a function of persistence time \(t_{\rm a}\). Panel b) shows boxed data from panel a). Vertical gray lines mark the Kramers time for the passive Brownian particle, \(t_{\rm K}/t_{\rm trap}=4.46\ldots\) and \(7.43\ldots\) for \(\Delta E=0\) and \(2\)\(k_{\rm B}T\), respectively (see Appendix B). Red: naive; blue: designed. Horizontal green dashed and pink dot-dashed lines respectively show \(\phi=1\) and \(\phi=1/(1+{\rm Pe})\). Color intensity increases with persistence time \(t_{\rm a}\) (see Fig. 2).

Figure 4: Excess work \(W_{\rm ex}\) as a function of protocol duration \(t_{\rm dur}\). a) Full-model (FM) excess work for naive (red) and designed (blue) protocols. b) Ratio of naive and designed excess works. Symbols: full-model. Horizontal lines: linear-response (LR) approximation [72]. c) The difference of the naive and designed full-model excess works. Dashed curves are a guide to the eye. Throughout, color intensity increases with persistence time \(t_{\rm a}\) (see Fig. 2).
Figure 4c shows the difference of the full-model naive and designed excess works as a function of protocol duration. For slower protocols, both naive and designed excess works decay to zero (see Fig. 4a); therefore, their difference also approaches zero. For vanishing duration, all protocols produce the same excess work, so this difference again vanishes. For intermediate durations, this difference attains a maximum value, indicating a protocol duration for which the designed protocol has the greatest advantage over the naive protocol.
Finally, we calculate the total flux induced by driving,
\[\bar{J}\equiv\frac{1}{\langle x\rangle\big{|}_{\lambda_{t}}-\langle x\rangle \big{|}_{\lambda_{i}}}\int_{0}^{t_{\rm dur}}\,{\rm d}t\ \langle\dot{x}\rangle\, \tag{21}\]
normalized by a prefactor quantifying the distance between the mean particle positions at the control-parameter endpoints.
Figures 5a,b show that \(\bar{J}\) increases with protocol duration, reaching unity for longer durations (with only minor differences between protocol types), indicating successful transport over the potential-energy barrier. At shorter protocol durations, both designed and naive fluxes increase with decreasing persistence time \(t_{\rm a}\): the higher effective temperature \(k_{\rm B}T(1+{\rm Pe})\) in this limit makes it easier to cross the barrier.
Figure 5c displays the ratio of designed flux to naive flux. For longer protocol durations, this ratio asymptotes to unity. For shorter durations, the designed flux is higher than naive for \(\Delta E=2\ k_{\rm B}T\), and vice versa for \(\Delta E=0\).
## V Discussion
In this paper, we designed a driving protocol to transport an AOUP in a 1D nonlinear potential-energy landscape using a harmonic trap. Our analysis reveals that the designed protocol obtained using the linear-response framework [47] requires less work than the naive protocol for a considerable range of protocol durations. Moreover, at intermediate duration the work savings are maximized. Thus the linear-response result in [47] (previously applied to systems without intrinsic activity) can be usefully extended to an AOUP.
This study opens a new research avenue investigating the applicability of the linear-response framework to construct analogous minimum-dissipation control protocols for other active-particle systems, such as active Brownian particles [71; 73; 74] and run-and-tumble particles [75; 22]. Further, we emphasize that our results can be tested in an experiment driving the extension of single DNA hairpins [49], but now with the beads attached to the hairpin's ends experiencing an additional OU noise generated by electrodes coupled to a resistor and an amplifier [44] (see Refs. [66; 67] for other methods to generate OU noise).
###### Acknowledgements.
D.G. and S.H.L.K. gratefully acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projekt-nummer 163436311 - SFB 910. D.A.S. is supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and a Tier-II Canada Research Chair.
Figure 5: Normalized flux \(\bar{J}\) as a function of protocol duration, for a) designed protocols, b) naive protocols, and c) their ratio. Dashed lines are a guide to the eye. Color intensity increases with persistence time \(t_{\rm a}\) (see Fig. 2).
## Appendix A Generalized friction coefficient: comparison with effective dynamics
Figure A1 shows the agreement of the generalized friction coefficient \(\zeta(\lambda)\) for two extreme values of persistence time, \(t_{\rm a}/t_{\rm trap}=10^{-2}\) and \(10^{4}\), at fixed Péclet number \(\mathrm{Pe}=5\) (see Fig. 2e), with that obtained from the effective dynamics [substituting Eq. (10) in (3)]:
\[\dot{x}=-\beta DU_{\rm tot}^{\prime}(x;\lambda)+\sqrt{2D(1+\mathrm{Pe})}\ \eta_{\rm eff}(t)\, \tag{10}\]
at \(\mathrm{Pe}=5\) and \(0\) [corresponding respectively to the first and second lines of Eq. (10)]. Notice that for the second line of Eq. (10), the effective dynamics has \(\eta_{\rm eff}(t)=\eta(t)\) of Eq. (3) and \(\mathrm{Pe}=0\). \(\eta_{\rm eff}(t)\) is Gaussian noise with zero mean, \(\langle\eta_{\rm eff}(t)\rangle=0\), and delta-correlated in time:
\[\langle\eta_{\rm eff}(t)\eta_{\rm eff}(t^{\prime})\rangle=\delta(t-t^{\prime }). \tag{11}\]
As expected, the effective dynamics reproduce the generalized friction coefficient of the full dynamics in both limits.
## Appendix B Kramers time for passive Brownian particle
Here we calculate the Kramers time for the passive Brownian particle, namely the characteristic time for the passive Brownian particle to transition from one well to another. This gives the vertical lines in Fig. 3b.
The Kramers _rate_ for the diffusion of the passive Brownian particle (dynamically evolving according to (10) with \(\mathrm{Pe}=0\)) to the location \(\langle x\rangle_{\lambda_{\rm f}}\), starting from \(\langle x\rangle_{\lambda_{\rm i}}\) is [76]
\[\kappa(\lambda)\equiv\left[\frac{1}{D}\int_{\langle x\rangle_{\lambda_{\rm i}} }^{\langle x\rangle_{\lambda_{\rm f}}}\ \mathrm{d}y\ e^{\beta U_{\rm tot}(y;\lambda)}\int_{-\infty}^{y}\ \mathrm{d}z\ e^{-\beta U_{\rm tot}(z;\lambda)}\right]^{-1}\, \tag{12}\]
for fixed trap minimum \(\lambda\). This gives the mean number of such transitions \([\langle x\rangle_{\lambda_{\rm i}}\to\langle x\rangle_{\lambda_{\rm f}}]\) per unit time. We define the Kramers time for the passive Brownian particle as the inverse of the average of this Kramers rate over all fixed trap minima from \(\lambda_{\rm i}\) to \(\lambda_{\rm f}\):
\[t_{\rm K}\equiv\left[\frac{1}{N+1}\sum_{\ell=0}^{N}\kappa(\lambda_{\ell}) \right]^{-1}\, \tag{13}\]
for \(\lambda_{0}\equiv\lambda_{\rm i}\), \(\lambda_{N}\equiv\lambda_{\rm f}\), \(N\equiv\frac{\lambda_{\rm f}-\lambda_{\rm i}}{\Delta\lambda}\) and trap-minimum bin width \(\Delta\lambda\).
So when \(t_{\rm a}\gg t_{\rm K}\), the AOUP experiences an OU velocity that is relatively constant on the characteristic timescale for a transition (of the passive Brownian particle); conversely, when \(t_{\rm a}\ll t_{\rm K}\), the effect of the OU velocity on barrier crossing is effectively that of white noise.
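A direct way to evaluate the double integral defining \(\kappa(\lambda)\) above is nested trapezoid quadrature, with the lower limit \(-\infty\) truncated far below the left well. The sketch below is our own illustration (the potential is passed in as a vectorized function; the truncation point and grid size are assumptions).

```python
import numpy as np

def kramers_rate(U_tot, x_start, x_end, beta, D, x_lo=-20.0, n=2000):
    """Kramers rate at fixed trap minimum:
    kappa = D / ( int_{x_start}^{x_end} dy e^{beta U(y)} int_{x_lo}^{y} dz e^{-beta U(z)} )."""
    y = np.linspace(x_start, x_end, n)
    inner = np.empty(n)
    for k, yk in enumerate(y):
        z = np.linspace(x_lo, yk, n)
        inner[k] = np.trapz(np.exp(-beta * U_tot(z)), z)
    return D / np.trapz(np.exp(beta * U_tot(y)) * inner, y)

def kramers_time(rates):
    """Inverse of the trap-minimum-averaged rate, as in the definition of t_K."""
    return 1.0 / np.mean(rates)
```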
## Appendix C Numerical simulation methods
### Force autocovariance
To compute the force autocovariance (Fig. 2b), we discretize the Langevin equations (3) and (4) (for each fixed \(\lambda\)) to first order in the discretization time \(\Delta t\) and evolve the dynamics iteratively for \(1\leq j\leq t/\Delta t\), where \(t\) is the observation time:
\[x_{j} =x_{j-1}-\beta DU_{\rm tot}(x_{j-1}|\lambda)\Delta t\] \[\quad+\sqrt{2D\Delta t}\ \eta_{j-1}+y_{j-1}\Delta t\, \tag{14a}\] \[y_{i} =y_{j-1}-\frac{y_{j-1}}{t_{\rm a}}\Delta t+\frac{1}{t_{\rm a}} \sqrt{2D\ \mathrm{Pe}\ \Delta t}\ \eta_{{\rm a},j-1}. \tag{14b}\]
\(\eta_{j}\) and \(\eta_{{\rm a},j}\) are standard independent Gaussian random variables at the \(j\)th time increment, with zero mean and covariances
\[\langle\eta_{j}\ \eta_{k}\rangle =\langle\eta_{{\rm a},j}\ \eta_{{\rm a},k}\rangle=\delta_{j,k} \tag{15}\] \[\langle\eta_{j}\ \eta_{{\rm a},k}\rangle =0\, \tag{16}\]
for Kronecker delta \(\delta_{j,k}\). For a given initial condition \(x_{0}\) and \(y_{0}\), we generate a time-series of the force \(f_{j}=(x_{j}-\lambda)E^{\ddagger}\) at fixed trap minimum \(\lambda\). To remove any dependence on initial condition, we discard the initial portion of the trajectory (\(\sim\)8\(\times\) the largest force relaxation time \(\tau_{\rm relax}\) (see Fig. 2d), \(\sim\)80\(\times\) the trap relaxation time \(t_{\rm trap}\)), and use the remaining time-series to compute the force autocovariance \(\langle\delta f_{0}\ \delta f_{j}\rangle_{\lambda}\). We generate three independent force trajectories, each of length \(t/t_{\rm trap}=1.6\times 10^{4}\), and average over the three resulting force autocovariances.
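A minimal sketch of this procedure (our own, not code from this work; the derivative of the total potential is supplied as a function, and parameter names are placeholders). The dynamics step matches the first-order updates above, and the autocovariance is estimated from the post-burn-in force series.

```python
import numpy as np

def force_series(Uprime, lam, E_ddag, beta, D, Pe, t_a, dt, n_steps, rng, x0=None):
    """Euler-Maruyama integration of the discretized dynamics above at fixed
    trap minimum `lam`; returns f_j = E_ddag * (x_j - lam) at every step.
    Uprime(x, lam) is d/dx of the total potential U_tot(x; lam)."""
    x, y = (lam if x0 is None else x0), 0.0
    f = np.empty(n_steps)
    for j in range(n_steps):
        eta, eta_a = rng.standard_normal(2)
        x += -beta * D * Uprime(x, lam) * dt + np.sqrt(2 * D * dt) * eta + y * dt
        y += -(y / t_a) * dt + np.sqrt(2 * D * Pe * dt) * eta_a / t_a
        f[j] = E_ddag * (x - lam)
    return f

def autocovariance(f, max_lag):
    """<df(0) df(k dt)> estimated from a stationary force series (burn-in already removed)."""
    df = f - f.mean()
    return np.array([np.mean(df[: len(df) - k] * df[k:]) for k in range(max_lag)])
```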
### Stationary-state average force
We evolve the discretized Langevin equations (11a) and (11b) from a fixed initial condition (\(x_{0}=\lambda,\ y_{0}=0\)) up to time \(t/t_{\rm trap}=8\times 10^{2}\) (ensuring the stationarity of the joint probability density function of \(x\) and \(y\)) and compute the force experienced by the particle,
\[f=(x-\lambda)E^{\ddagger}\, \tag{12}\]
using the particle's position \(x\) at the final time-step. Figure 11 displays the stationary-state average force computed by averaging over \(\mathcal{N}_{\rm R}=10^{5}\) realizations for each fixed trap minimum \(\lambda\). For the smallest \(t_{\rm a}\), the force decreases linearly as \(\lambda\) increases, and appears unaffected by the barrier of the total potential \(U_{\rm tot}(x;\lambda)\).
### Quasistatic work
We compute the discretized version of the quasistatic work [see Eq. (12)]:
\[W_{\rm qs}=-\sum_{i}\langle f\rangle_{\lambda_{i}}\Delta\lambda\, \tag{13}\]
where Fig. 11 shows the average force \(\langle f\rangle_{\lambda}\).
Figure 12 displays the difference of quasistatic work and equilibrium free-energy difference, as a function of persistence time. For longer persistence time, this difference decreases, since the system can be approximated by the effective dynamics (11) for \(\mathrm{Pe}=0\), reproducing the system's passive behavior.
### Excess work
We use the discretized Langevin equations (11a) and (11b), interleaved with substeps that discretely update \(\lambda\) according to either a naive or designed protocol (Fig. 2g).
For each trajectory, we compute the external work as the energy change due to changes of \(\lambda\):
\[w^{\rm FM}=\sum_{j=1}^{t_{\rm dur}/\Delta t}\left[U_{\rm trap}(x_{j-1}|\lambda_{j})-U_{\rm trap}(x_{j-1}|\lambda_{j-1})\right]\,. \tag{14}\]
The initial condition \((x_{0},y_{0})\) is drawn from the (numerically computed) stationary-state distribution \(\rho_{\rm ss}(x_{0},y_{0})\) for \(\lambda_{i}\equiv\lambda_{0}=-2.824\sqrt{Dt_{\rm trap}}\). We simulate a range of protocol durations, with the shortest duration of \(t_{\rm trap}\)\(\sim\)\(5\times\) the smallest force relaxation time \(\tau_{\rm relax}\) (Fig. 2d).
To calculate the excess work \(w^{\rm FM}_{\rm ex}\equiv w^{\rm FM}-W_{\rm qs}\) for each trajectory, we subtract the discretized version of the quasistatic work (13) from the external work \(w^{\rm FM}\) (14). Averaging over \(\mathcal{N}_{\rm R}=10^{6}\) independent realizations gives the average excess work \(W^{\rm FM}_{\rm ex}\) (Figs. 3 and 4). We compute the normalized flux (Fig. 5) similarly.
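A sketch of the per-trajectory work accumulation (our own illustration; the trap-update substep uses the old particle position, matching the discretized work sum above, and parameter names are placeholders).

```python
import numpy as np

def trajectory_work(Uprime, lam_sched, E_ddag, beta, D, Pe, t_a, dt, x0, y0, rng):
    """Work for one trajectory driven through the schedule `lam_sched` (one trap
    minimum per time step): each step first accumulates the trap-energy change at
    fixed x, then advances (x, y) by one Euler-Maruyama step."""
    U_trap = lambda x, lam: 0.5 * E_ddag * (x - lam) ** 2
    x, y, work = x0, y0, 0.0
    for j in range(1, len(lam_sched)):
        work += U_trap(x, lam_sched[j]) - U_trap(x, lam_sched[j - 1])  # control-parameter substep
        lam = lam_sched[j]
        eta, eta_a = rng.standard_normal(2)
        x += -beta * D * Uprime(x, lam) * dt + np.sqrt(2 * D * dt) * eta + y * dt
        y += -(y / t_a) * dt + np.sqrt(2 * D * Pe * dt) * eta_a / t_a
    return work
```

Averaging such trajectory works over many realizations and subtracting the quasistatic work gives the ensemble-average excess work plotted in Figs. 3 and 4.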
### Simulation parameters
For each numerical simulation, we choose discretization time \(\Delta t/t_{\rm trap}=8\times 10^{-4}\) (\(8\times 10^{-2}\times\) the smallest value of \(t_{\rm a}\)) and set inverse temperature \(\beta=1\) and diffusion constant \(D=1\).
|
2301.09266 | FInC Flow: Fast and Invertible $k \times k$ Convolutions for Normalizing
Flows | Invertible convolutions have been an essential element for building
expressive normalizing flow-based generative models since their introduction in
Glow. Several attempts have been made to design invertible $k \times k$
convolutions that are efficient in training and sampling passes. Though these
attempts have improved the expressivity and sampling efficiency, they severely
lagged behind Glow which used only $1 \times 1$ convolutions in terms of
sampling time. Also, many of the approaches mask a large number of parameters
of the underlying convolution, resulting in lower expressivity on a fixed
run-time budget. We propose a $k \times k$ convolutional layer and Deep
Normalizing Flow architecture which i.) has a fast parallel inversion algorithm
with running time O$(n k^2)$ ($n$ is height and width of the input image and k
is kernel size), ii.) masks the minimal amount of learnable parameters in a
layer. iii.) gives better forward pass and sampling times comparable to other
$k \times k$ convolution-based models on real-world benchmarks. We provide an
implementation of the proposed parallel algorithm for sampling using our
invertible convolutions on GPUs. Benchmarks on CIFAR-10, ImageNet, and CelebA
datasets show comparable performance to previous works regarding bits per
dimension while significantly improving the sampling time. | Aditya Kallappa, Sandeep Nagar, Girish Varma | 2023-01-23T04:31:03Z | http://arxiv.org/abs/2301.09266v1 | # FinC Flow: Fast and Invertible \(k\times k\) Convolutions for Normalizing Flows
###### Abstract
Invertible convolutions have been an essential element for building expressive normalizing flow-based generative models since their introduction in Glow. Several attempts have been made to design invertible \(k\times k\) convolutions that are efficient in training and sampling passes. Though these attempts have improved the expressivity and sampling efficiency, they severely lagged behind Glow which used only \(1\times 1\) convolutions in terms of sampling time. Also, many of the approaches mask a large number of parameters of the underlying convolution, resulting in lower expressivity on a fixed run-time budget. We propose a \(k\times k\) convolutional layer and Deep Normalizing Flow architecture which i.) has a fast parallel inversion algorithm with running time O(\(nk^{2}\)) (\(n\) is height and width of the input image and k is kernel size), ii.) masks the minimal amount of learnable parameters in a layer. iii.) gives better forward pass and sampling times comparable to other \(k\times k\) convolution-based models on real-world benchmarks. We provide an implementation of the proposed parallel algorithm for sampling using our invertible convolutions on GPUs. Benchmarks on CIFAR-10, ImageNet, and CelebA datasets show comparable performance to previous works regarding bits per dimension while significantly improving the sampling time.
Normalizing Flows, Deep Learning, Invertible Convolutions
## 1 Introduction
Normalizing flows are an important subclass of deep generative models that offer distinctive benefits (Kobyzev et al., 2020). In comparison to GANs (Goodfellow et al., 2014) and VAEs (Kingma et al., 2019), they are trained using a very intuitive maximum-likelihood loss function. Images and the _latent vector_, which is required to have a Gaussian distribution, correspond one-to-one in flow models. Despite these intriguing characteristics, GANs and VAEs are utilized more frequently. This is due to the need for normalizing-flow transformations to be invertible, which significantly restricts the neural network types that can be employed. For deployment in a real-world scenario, the invertible transformations must be efficiently computable in both the forward and sampling passes.
A significant breakthrough came with Glow (Kingma and Dhariwal, 2018) which used \(1\times 1\) invertible convolutions to design normalizing flows. If it exists, the inverse function for a \(1\times 1\) convolution also happens to be a \(1\times 1\) convolution. Since computing \(1\times 1\) convolution has fast parallel algorithms for which running time does not depend on the spatial dimensions, they are also highly efficient in forward pass (i.e. computing _latent vector_ from an image) as well as the sampling passes (i.e. computing image from a sampled _latent vector_). Extending Glow to use invertible \(k\times k\) convolutions promises to improve the expressivity further, allowing it to model more complex datasets. However, this is a challenging problem since the inverse function for a \(k\times k\) convolution, in general, is given by a \(n^{2}\times n^{2}\) matrix where \(n=H=W\) (i.e. the spatial dimensions). Hence, while the forward pass can be fast, the trivial approach for the sampling pass will cost \(O(n^{4})\) operations per convolutional layer.
CInC Flow (Nagar et al., 2021) introduced a padded \(3\times 3\) convolution layer design and derived the necessary and sufficient conditions for it to be invertible. They showed that the convolution matrix is lower triangular by ensuring padding on only two sides of the input. Furthermore, all the diagonal entries of the convolution matrix are equal to a single weight parameter. By setting this parameter to 1, they ensured that the convolutions are invertible and that the Jacobian determinant is always 1.
We build on their work by proposing a parallel inversion algorithm for their convolution design. The parallel algorithm uses only O(\(nk^{2}\)) sequential operations, unlike the O(\(n^{2}k^{2}\)) operations used by most previous works. We also build a normalizing flow architecture, where channel-wise splitting is further used to parallelize operations.
Our Contributions.
1. We design a \(k\times k\) invertible convolutional layer with a fast and parallel inversion (sampling) algorithm (see Sections 4.1, 4.2).
2. We build a normalizing flow architecture based on the fast invertible convolution, which uses channel wise splitting to improve the parallelism further (see Sections 4.3, 4.4).
3. We provide a fast GPU implementation of our parallel inversion algorithm and benchmark the sampling times of the model (see Section 5). We show greatly improved sampling times due to our parallel inversion algorithm, while giving similar bits per dimensions as compared to other works.
## 2 Related Work
Generative Modeling. The idea of generative modeling stems from training a generative model whose samples come from the same distribution as the training data distribution. Most generative models can be grouped as Generative adversarial networks (GANs) (Goodfellow et al., 2014; Brock et al., 2019), Energy-based models (EBMs) (Zhang et al., 2022; Song et al., 2021), Variational autoencoders (VAEs) (Kingma and Welling, 2013; Kingma et al., 2019; Hazami et al., 2022), Autoregressive models (Oord et al., 2016; Nash et al., 2020), Diffusion models (Ho et al., 2020; Song et al., 2021; Song and Ermon, 2019) and Flow-based models (Dinh et al., 2014, 2017; Hoogeboom et al., 2019; Kingma and Dhariwal, 2018; Ho et al., 2019; Ma et al., 2019; Nagar et al., 2021).
Normalizing Flows. Flow-based models construct complex distributions by transforming a probability density through a series of invertible mappings (Rezende and Mohamed, 2015). Since a valid distribution is obtained at the end of these invertible mappings, this type of model is referred to as a normalizing flow model. Flow models apply the rule for change of variables; the initial density 'flows' through the sequence of invertible mappings (Dinh et al., 2017). Flow-based models map a dataset distribution into a latent space (Kobyzev et al., 2020).
Invertible \(k\times k\) Convolutions. An invertible neural network requires the inverse of the network along with fast and efficient computation of the Jacobian determinant (Song et al., 2019). An invertible neural network can be used for generation and classification with more interpretability. (Kingma and Dhariwal, 2018) proposed an invertible \(1\times 1\) convolution, building on top of NICE (Dinh et al., 2014) and RealNVP (Dinh et al., 2017), consisting of a series of flow steps combined in a multi-scale architecture. Each flow step consists of actnorm, followed by an invertible \(1\times 1\) convolution, followed by a coupling layer (see Sec 4.3). Emerging (Hoogeboom et al., 2019) presented a method to generalize \(1\times 1\) convolutions to invertible \(k\times k\) convolutions. Emerging chains two specific autoregressive convolutions (Kingma and Welling, 2013) to form a single convolutional layer, following the associativity of the convolution operation. Each of these autoregressive convolutions is chosen such that the resulting convolution matrix \(\mathbf{M}\) is triangular, and the inversion time of each of the convolutions is \(\mathrm{O}(n\times n\times k^{2})\). MintNet (Song et al., 2019) presented a method for
Figure 1: Convolution of a \(3\times 3\)_TL_ (Top Left) padded image with a \(2\times 2\) filter viewed as a linear transform of the vectorized input (\(x\)) by the convolution matrix \(\mathbf{M}\). The _TL_ padding on the input makes the matrix \(\mathbf{M}\) lower triangular, and all diagonal values correspond to \(w_{k,k}\) of the filter. Each row of \(\mathbf{M}\) can be used to find a pixel value. The rows or pixels with the same color can be inverted in parallel, since all the other values required for computing them are already available at that step of our inversion algorithm (Algorithm 1).
designing invertible neural networks by combining building blocks with a set of composition rules. The inversion of the proposed blocks necessitates a sequence of dependent computations that increases the network's sampling time. SNF (Keller et al., 2021) proposed a method to reduce the computational complexity of the Jacobian determinant by replacing the gradient term with a learned approximate inverse for each layer. This method avoids computing the Jacobian determinant exactly, making it approximate, and requires an additional backward pass for the inversion of the convolution. MaCow (Ma et al., 2019): while many other papers make use of the invertibility of triangular matrices to reduce inversion time, MaCow outperforms them by performing the inversion in O(\(nk^{2}\)) operations, carefully masking four kernels at the top, left, bottom, and right to achieve a full convolution; however, this flow model uses four autoregressive convolutions to make one effective standard convolution. Woodbury (Lu and Huang, 2020) employs the _Woodbury transformation_ for invertible convolution, which is a generalized permutation layer that models dimension dependencies along the channel and spatial axes using channel and spatial transformations. ButterflyFlow (Meng et al., 2022) introduced a new family of invertible layers that work for special underlying structures and need a sequence of layers for an effective invertible convolution.
Fast Algorithms for Invertible Convolutions. CInC Flow (Nagar et al., 2021) derives necessary and sufficient conditions on a padded CNN for it to be invertible and requires a single CNN layer for every effective invertible CNN layer. The padded CNN can leverage parallel computation for inversion, resulting in faster and more efficient computation of the Jacobian determinant.
The distinguishing feature of our invertible convolutions compared to previous works is a parallel inversion algorithm that performs only \((2n-1)k^{2}\) operations, where \(n\) is the input size and \(k\) is the kernel size. MaCow is the closest approach and takes twice the number of operations. Some approaches, like MintNet and SNF, do achieve fewer operations; however, they are not proper normalizing flows as they compute only an approximate inverse. We use the convolution design from CInC Flow but give a parallel inversion algorithm for it. Furthermore, our FInC Flow _Unit_ is designed to efficiently parallelize the operations by splitting the convolution operations channel-wise. In Table 1, we compare our proposed flow model with existing models in terms of the receptive field, number of learnable parameters, and the complexity of computing the inverse of the convolution layer for sampling.
## 3 Preliminaries
Normalizing Flows. Formally, a normalizing flow is a series of transformations of a known simple probability density into a much more complex probability density using invertible and differentiable functions. These invertible functions allow us to write the probability of the output as a differentiable function of the model parameters. As a result, the models can be trained using backpropagation with the negative log-likelihood loss function.
Let \(\mathbf{X}\in\mathbb{R}^{d}\) be a random variable with tractable density \(p_{\mathbf{X}}\). Let \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) be a differentiable and invertible function. If \(\mathbf{Y}=f(\mathbf{X})\) then the density of \(Y\) can be calculated as
\[p_{X}(x)=p_{Y}(y)\left|\det J_{f}\right|\qquad\text{where}\qquad J_{f}=\frac{ \partial f(x)}{\partial x}.\]
Note that \(J_{f}\) is a \(d\times d\) matrix called the Jacobian. If \(X\) is transformed using a sequence of functions \(f_{i}\)'s.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & \# of ops & \# params / CNN layer & Complexity of Jacobian & Inverse \\ \hline FInC Flow (our) & \((2n-1)k^{2}\) & \(k^{2}-1\) & 1 & exact \\ Woodbury (Lu and Huang, 2020) & \(cn^{2}\) & \(k^{2}\) & \(\mathrm{O}(d^{2}(c+n)+d^{3})\) & exact \\ MaCow (Ma et al., 2019) & \(4nk^{2}\) & \(k(\lceil\frac{k}{2}\rceil-1)\) & \(\mathrm{O}(n^{3})\) & exact \\ Emerging (Hoogeboom et al., 2019) & \(2n^{2}k^{2}\) & \(k(\lceil\frac{k}{2}\rceil-1)\) & \(\mathrm{O}(n)\) & exact \\ CInC Flow (Nagar et al., 2021) & \(n^{2}k^{2}\) & \(k^{2}-1\) & 1 & exact \\ \hline MintNet (Song et al., 2019) & \(3n\) & \(\frac{k^{2}}{3}\) & \(\mathrm{O}(n)\) & approx \\ SNF (Keller et al., 2021) & \(k^{2}\) & \(k^{2}\) & approx & approx \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the learnable parameters and inversion properties, where \(n\times n\) is the input size, \(k\times k\) is the (constant) filter size, \(c\) is the number of input/output channels, and \(d\) is the number of latent dimensions. # of ops: number of operations required for the inversion of the convolutional layers. Complexity of Jacobian: time complexity for calculating the Jacobian of a single convolution layer. For FInC Flow and CInC Flow, the Jacobian determinant is 1, since the convolution matrix is lower triangular with diagonal entries equal to 1.
That is \(f=f_{1}\circ f_{2}\circ f_{3}\circ\cdots\circ f_{r}\). Now probability density, \(p_{Y}(y)\) can be expressed as
\[p_{Y}(y)=p_{X}(f^{-1}(y))\cdot\prod_{i=r}^{1}|\det J_{f_{i}^{-1}}(y_{i})|. \tag{1}\]
where \(y_{i}=f_{i}^{-1}\circ\cdots\circ f_{r}^{-1}(x)\). The log-probability of \(p_{Y}\) which will be used to model the complex image distribution is given by,
\[\log p_{Y}(y)=\log p_{X}(f^{-1}(y_{r}))+\sum_{i=1}^{r}\log|\det J_{f_{i}^{-1}}( y_{i})|. \tag{2}\]
The functions \(f_{i}^{-1}\) are given by neural network layers, and the above function can be computed during the forward pass of the neural network. Its negative, called the negative log-likelihood (NLL), is minimized when the images in the dataset are assigned the highest probabilities. Hence it gives a simple, interpretable loss function for training the model.
Invertible Convolutions. The convolution of an input \(X\) with shape \(H\times W\times C\) with a kernel \(K\) of shape \(k\times k\times C\times C\) is \(Y=X*K\) of shape \((H-k+1)\times(W-k+1)\times C\), which is equal to
\[Y_{i,j,c_{o}}=\sum_{l,h<k}\sum_{c_{i}=1}^{C}X_{i+l,j+h,c_{i}}K_{l,h,c_{i},c_{o}} \tag{3}\]
Notice that the dimensions of \(X\) and \(Y\) are not necessarily the same. To ensure that \(X\) and \(Y\) have the same size, we apply padding to the input \(X\). For an input image \(X\) with shape \(H\times W\times C\), the \((t,b,l,r)\) padding of \(X\) is the image \(\hat{X}\) of shape \((H+t+b)\times(W+l+r)\times C\) defined as
\[\hat{X}_{i,j,c}=\begin{cases}X_{i-t,j-l,c}&\text{if }0\leq i-t<H\text{ and }0\leq j-l<W\\ 0&\text{otherwise}\end{cases} \tag{4}\]
The convolution operation is a linear transformation of the input. For the vectorized (flattened) input \(X\), denoted by \(x\), the output vector \(y\) can be written as \(y=\mathbf{M}x\). The matrix \(\mathbf{M}\) is called the _Convolution Matrix_, and its dimensions are \(HWC\times HWC\). As long as \(\mathbf{M}\) is invertible, the convolutional layer can be included as part of a normalizing flow. The common approach to building invertible convolutions is to make \(\mathbf{M}\) triangular and to ensure invertibility by making the diagonal entries nonzero.
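To make this linear-algebra view concrete, the following minimal NumPy sketch (ours, not from the paper's released code) builds the convolution matrix \(\mathbf{M}\) for a single-channel _TL_-padded convolution and checks that it is lower triangular with unit diagonal, as illustrated in Figure 1; the function name and toy sizes are our own choices.

```
import numpy as np

def tl_conv_matrix(kernel, H, W):
    """Build the HW x HW matrix M with y = M @ x.flatten() for a single-channel
    image x convolved with `kernel` under top-left (TL) padding."""
    k = kernel.shape[0]
    M = np.zeros((H * W, H * W))
    for i in range(H):
        for j in range(W):
            row = i * W + j
            # y[i, j] only sees input pixels (i - p, j - q) with 0 <= p, q < k
            for p in range(k):
                for q in range(k):
                    ii, jj = i - p, j - q
                    if 0 <= ii < H and 0 <= jj < W:
                        M[row, ii * W + jj] = kernel[k - 1 - p, k - 1 - q]
    return M

kernel = np.random.randn(2, 2)
kernel[-1, -1] = 1.0                  # the masked corner weight w_{k,k} = 1
M = tl_conv_matrix(kernel, 3, 3)
assert np.allclose(M, np.tril(M))     # lower triangular, hence invertible
assert np.allclose(np.diag(M), 1.0)   # so log|det M| = 0
```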
Algorithms for Computing the Inverse of Convolutions. For normalizing flows built using invertible convolutions, the sampling pass involves computing the inverse of the convolution matrix, i.e., solving a linear system of equations \(\mathbf{M}x=y\).
For a general square matrix of size \(n\times n\), the time complexity of inversion is \(O(n^{3})\). For a lower triangular matrix of size \(n\times n\), the time complexity is \(O(n^{2})\) using the back-substitution method. Notice that the size of the convolution matrix \(\mathbf{M}\) is \(n^{2}\times n^{2}\) (refer to Figure 1) and that each row of the matrix has at most \(k^{2}\) nonzero entries, which results in an inversion time of \(O(n^{2}k^{2})\); this is used in many previous works like Emerging and CInC Flow. We show that this method can be parallelized for carefully designed convolutions, giving a complexity of only \(O(nk^{2})\).
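The sequential baseline discussed above can be sketched directly on the kernel, without materializing \(\mathbf{M}\). The Python sketch below (our illustration, with hypothetical function names) inverts a single-channel _TL_-padded convolution by back-substitution in raster order, which costs \(O(HWk^{2})\) strictly sequential operations.

```
import numpy as np

def tl_conv(x, kernel):
    """Single-channel TL-padded convolution: y[i,j] = sum_{p,q} x[i-p, j-q] * K[k-1-p, k-1-q]."""
    H, W = x.shape
    k = kernel.shape[0]
    y = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            for p in range(k):
                for q in range(k):
                    if i - p >= 0 and j - q >= 0:
                        y[i, j] += x[i - p, j - q] * kernel[k - 1 - p, k - 1 - q]
    return y

def invert_raster(y, kernel):
    """Back-substitution in raster order: O(H*W*k^2) sequential operations."""
    H, W = y.shape
    k = kernel.shape[0]
    x = np.zeros_like(y)
    for i in range(H):
        for j in range(W):
            acc = y[i, j]
            for p in range(k):
                for q in range(k):
                    if (p, q) != (0, 0) and i - p >= 0 and j - q >= 0:
                        acc -= x[i - p, j - q] * kernel[k - 1 - p, k - 1 - q]
            x[i, j] = acc            # diagonal weight kernel[-1, -1] == 1, no division
    return x

kernel = np.random.randn(3, 3); kernel[-1, -1] = 1.0
x = np.random.randn(8, 8)
assert np.allclose(invert_raster(tl_conv(x, kernel), kernel), x)
```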
## 4 FInC Flow: Our Approach
In this section, we describe our approach, including a convolution layer design that has a fast parallel inversion algorithm with running time \(\mathrm{O}(nk^{2})\). For clarity, we refer to the height of the image as \(H\), the width as \(W\), and the number of channels as \(C\) in this section.
### Convolution Design
As is obvious from the equation \(x=\mathbf{M}^{-1}y\), the inversion time depends on \(\mathbf{M}\). Emerging (Hoogeboom et al., 2019) masks almost half of the convolution kernel values to ensure that \(\mathbf{M}\) is a triangular matrix. However, we follow the method used in CInC Flow, where only a few values of the convolution kernel are masked. For an input image \(X\) with shape \(H\times W\times C\), the top-left (\(TL\)), i.e., \((t,0,l,0)\), padding of \(X\) is the image \(X^{\text{(TL)}}\) of shape \((H+t)\times(W+l)\times C\) defined in Equation 5, and similarly for the top-right (\(TR\)) in Equation 6, bottom-left (\(BL\)) in Equation 7, and bottom-right (\(BR\)) in Equation 8.
\[X_{i,j,c}^{\text{(TL)}} =\begin{cases}X_{i-t,j-l,c}&i-t>0\ \wedge\ j-l>0\\ 0&\text{otherwise}\end{cases} \tag{5}\] \[X_{i,j,c}^{\text{(TR)}} =\begin{cases}X_{i-t,j,c}&i-t>0\ \wedge\ j-r<W\\ 0&\text{otherwise}\end{cases}\] (6) \[X_{i,j,c}^{\text{(BL)}} =\begin{cases}X_{i,j-l,c}&i-b<H\ \wedge\ j-l>0\\ 0&\text{otherwise}\end{cases}\] (7) \[X_{i,j,c}^{\text{(BR)}} =\begin{cases}X_{i,j,c}&i-b<H\ \wedge\ j-r<W\\ 0&\text{otherwise}\end{cases} \tag{8}\]
Figure 1 shows that the convolution of a \(TL\)-padded \(3\times 3\) image with a \(2\times 2\) filter is equivalent to a matrix multiplication between the convolution matrix \(\mathbf{M}\) and the vectorized input \(x\). We leverage this to compute the inverse faster; we discuss this in more detail in the subsequent sections. Also, the padded inputs \(X^{\text{(TR)}}\), \(X^{\text{(BL)}}\) and
\(X^{\text{(BR)}}\) are equivalent to \(X^{\text{(TL)}}\) once they are flipped along corresponding dimension(s).
### Parallel Inversion Algorithm
We have presented our algorithm in Algorithm 1. The algorithm can be understood using Figure 1.
**Definition 1** (Diagonal Elements).: Two pixels \(x_{i,j}\) and \(x_{i^{\prime},j^{\prime}}\) are said to be secondary diagonal elements if \(i+j=i^{\prime}+j^{\prime}\). For brevity, we refer to these elements from here on simply as Diagonal Elements.
```
Input : K: kernel of shape (C, C, k_H, k_W);  Y: output of the convolution, of shape (C, H, W)
Result: X: inverse of the convolution, of shape (C, H, W)
1   X ← Y                                      /* initialization */
2   for d ← 0, H + W − 1 do
3       for c ← 0, C − 1 do
        /* the lines below execute in parallel on different GPU threads,
           one thread for every index (c, h, w) of X on the d-th diagonal */
4           for k_h ← 0, k_H − 1 do
5               for k_w ← 0, k_W − 1 do
6                   for k_c ← 0, C − 1 do
7                       if pixel (k_c, h − k_h, w − k_w) is not out of bounds then
8                           X[c, h, w] ← X[c, h, w] − X[k_c, h − k_h, w − k_w] · K[c, k_c, k_H − k_h − 1, k_W − k_w − 1]
9       /* synchronize all threads before moving to the next diagonal */
```
**Algorithm 1** Fast parallel inversion algorithm for a _TL_-padded convolution block (PCB)
Theorem 1 proves that every element on a diagonal can be computed in parallel, and Line 2 of the algorithm takes care of that. We initialize \(X\) to \(Y\) in Line 1 and compute \(X\) in Line 8, as given in Equation 10. It is important that we wait for the threads to synchronize before we move to the next diagonal, as their results are needed for computing the elements of the next diagonal. The _not out of bounds_ check in Line 7 means we stay inside the \(k\times k\) convolution window and do not include pixel \((i,j)\) itself while computing \(x_{i,j}\), as given in Equation 10.
**Theorem 1**.: _The inverse of the pixels on the diagonals of a TL padded convolution can be computed independently and parallelly._
Proof.: The \((i,j)^{th}\) pixel value of the output \(Y\) with shape \(H\times W\) can be calculated as
\[y_{i,j}=(\mathbf{M}_{iW+j,:})^{T}\cdot x\]
which means \(y_{i,j}\) is the dot product of \(\mathbf{M}_{iW+j,:}\) i.e., the corresponding row of matrix \(\mathbf{M}\) and the vectored input \(x\). Because it is a _TL_ padded convolution, \(y_{i,j}\)
Figure 2: (a) FInC Flow unit: to utilize the independence of the convolution across channels, the input channels are sliced into four equal parts and then padded (1. top-left, 2. top-right, 3. bottom-right, 4. bottom-left) to keep the input and output sizes the same. Next, each sliced part is convolved in parallel with the corresponding masked kernel (masked corner of kernels: 1. bottom-right, 2. bottom-left, 3. top-left, 4. top-right). Finally, the outputs of the four convolutions are concatenated. (b) We propose a FInC Flow architecture (Sec. 4.3) where each FInC Flow _Step_ consists of an actnorm step, followed by an invertible 1 × 1 convolution, followed by a coupling layer. (c) The flow is combined with a multi-scale architecture (Sec. 4.4).
depends only on the values of \(k\times k\) window of \(x_{\leq i,\leq j}\) pixels where \(x_{\leq i,\leq j}\) are the pixels that are on the top and left side of the pixel \(x_{i,j}\) including \(x_{i,j}\). Because all the diagonal values are \(w_{k,k}\), we have,
\[y_{i,j} =w_{k,k}x_{i,j}+f(x_{<i,<j})\] \[x_{i,j} =\frac{y_{i,j}-f(x_{<i,<j})}{w_{k,k}}\]
where \(x_{<i,<j}\) are the pixels which are strictly top and left side of \((i,j)\). Following the masking pattern of CInC Flow, we have \(w_{k,k}=1\) and \(f\) is a linear function which is given by weighted sum of the given pixels weighed by the filter values. So,
\[x_{i,j}=y_{i,j}-f(x_{<i,<j}) \tag{9}\]
\[x_{i,j}=y_{i,j}-\sum_{p=0}^{k}\sum_{q=0}^{k}x_{i-p,j-q}K_{p,q}\quad\text{where }(p,q)\neq(0,0) \tag{10}\]
Let two pixels \(x_{i,j}\) and \(x_{i^{\prime},j^{\prime}}\) be on the same diagonal. This means that exactly one of the following settings is true: a) \(i<i^{\prime}\) and \(j>j^{\prime}\), or b) \(i>i^{\prime}\) and \(j<j^{\prime}\). Either way, following the result in Equation 9, we can conclude that the computation of \(x_{i,j}\) does not depend on \(x_{i^{\prime},j^{\prime}}\) and vice versa. Hence they can be computed independently. Once \(x_{i,j}\) is computed, following Equation 9 and the above result, we can compute \(x_{i+1,j}\) and \(x_{i,j+1}\). Since the sets of pixels \(x_{<i+1,<j}\) and \(x_{<i,<j+1}\) both include the elements of \(x_{<i,<j}\) and also \(x_{i,j}\), we can write
\[x_{i+1,j} =y_{i+1,j}-f(x_{<i+1,<j})\] \[=y_{i+1,j}-\alpha x_{i,j}-f_{1}(x_{<i,<j}) \tag{11}\] \[x_{i,j+1} =y_{i,j+1}-f(x_{<i,<j+1})\] \[=y_{i,j+1}-\beta x_{i,j}-f_{2}(x_{<i,<j}) \tag{12}\]
where \(\alpha\) and \(\beta\) are kernel weights.
From Equations 11 and 12, we can conclude that \(x_{i+1,j}\) and \(x_{i,j+1}\), which are on the same diagonal, can be calculated in parallel in a single step.
**Theorem 2**.: _Algorithm 1 uses only \((H+W-1)k^{2}\) sequential operations._
Proof.: We proved in Theorem 1 that the inverse pixels on a single diagonal can be computed in parallel in one iteration of Algorithm 1. Since there are \(H+W-1\) diagonals in the image and at most \(k^{2}\) entries in a row of the convolution matrix, the number of sequential operations needed is \((H+W-1)k^{2}\).
Thus the running time of our algorithm is \(\mathrm{O}(nk^{2})\), where \(n=H=W\).
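For illustration, the following Python sketch mirrors Algorithm 1 on a single channel: it sweeps the \(H+W-1\) anti-diagonals sequentially, and within a diagonal every pixel is independent, so the inner loop is the part that would be mapped to one GPU thread per pixel. This is our own single-channel rendering, not the paper's CUDA implementation; SciPy is used only to emulate the TL-padded forward pass for verification.

```
import numpy as np
from scipy.signal import convolve2d

def invert_by_diagonals(y, kernel):
    """Invert a single-channel TL-padded convolution by sweeping anti-diagonals d = i + j.
    Pixels on the same anti-diagonal depend only on earlier diagonals, so the inner
    loop is embarrassingly parallel (one GPU thread per pixel in Algorithm 1)."""
    H, W = y.shape
    k = kernel.shape[0]
    x = np.zeros_like(y)
    for d in range(H + W - 1):                       # H + W - 1 sequential steps
        for i, j in [(i, d - i) for i in range(H) if 0 <= d - i < W]:
            acc = y[i, j]
            for p in range(k):
                for q in range(k):
                    if (p, q) != (0, 0) and i - p >= 0 and j - q >= 0:
                        acc -= x[i - p, j - q] * kernel[k - 1 - p, k - 1 - q]
            x[i, j] = acc                            # diagonal weight w_{k,k} = 1
    return x

kernel = np.random.randn(3, 3); kernel[-1, -1] = 1.0
x = np.random.randn(8, 8)
# TL-padded forward pass, emulated with a full 2D convolution of the flipped kernel
y = convolve2d(x, kernel[::-1, ::-1].copy(), mode="full")[:8, :8]
assert np.allclose(invert_by_diagonals(y, kernel), x)
```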
### FInC Flow Unit
Figure 2(a) visualizes our \(k\times k\) convolution block, which we call the FInC Flow _Unit_. We use all four padding techniques mentioned before on different channels of the image. For this purpose, we split the input into four equal parts along the channel axis. We apply _TL_ padding to the first part, _TR_ to the second part, _BL_ to the third part, and _BR_ to the fourth part. Then we use a masked filter on each of these parts to perform the convolution operations in parallel. We call each padded image part, along with its corresponding kernel, a Padded Convolution Block (PCB).
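A minimal PyTorch sketch of the forward pass of the unit is shown below, assuming the number of channels is divisible by four. It only illustrates the channel split, the four corner paddings, and the parallel convolutions; the corner masking of the kernels (with the relevant weight fixed to 1, as in CInC Flow) is assumed to be handled when the kernels are constructed, and all names are ours.

```
import torch
import torch.nn.functional as F

def finc_unit_forward(x, kernels, k=3):
    """x: (B, C, H, W) with C % 4 == 0; kernels: four (C//4, C//4, k, k) weight tensors.
    Each quarter of the channels is padded towards a different corner (TL, TR, BL, BR),
    so the four convolutions are independent and can run in parallel. Plain tensors
    are used here only to illustrate shapes and padding directions."""
    parts = torch.chunk(x, 4, dim=1)
    # F.pad order is (left, right, top, bottom)
    pads = [(k - 1, 0, k - 1, 0),   # TL
            (0, k - 1, k - 1, 0),   # TR
            (k - 1, 0, 0, k - 1),   # BL
            (0, k - 1, 0, k - 1)]   # BR
    outs = [F.conv2d(F.pad(p, pad), w) for p, pad, w in zip(parts, pads, kernels)]
    return torch.cat(outs, dim=1)

B, C, H, W, k = 2, 8, 16, 16, 3
kernels = [torch.randn(C // 4, C // 4, k, k) for _ in range(4)]
print(finc_unit_forward(torch.randn(B, C, H, W), kernels, k).shape)  # (2, 8, 16, 16)
```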
### Architecture
Figure 2(c) shows the complete architecture of our model, which resembles the architecture of Glow. The multi-scale architecture involves a block consisting of a Squeeze layer, the FInC Flow _Step_ repeated \(K\) times, and a Split layer. The whole block is repeated \(L-1\) times. This is followed by a Squeeze layer and finally the FInC Flow _Step_ repeated \(K\) times. At the end of each Split layer, half of the channels are 'split' (taken away) and modeled as samples from a Gaussian distribution. These split-off channels are the _latent vectors_. The same is done for the output channels. These are denoted as \(z_{L}\) in Figure 2(c). Each FInC Flow _Step_ consists of a FInC Flow _Unit_, an actnorm layer, and a \(1\times 1\) convolutional layer, followed by a coupling layer.
**Actnorm Layer:** Acts as an activation normalization layer similar to that of a batch normalization layer. Introduced in Glow, this layer performs the affine transformation using scale and bias parameters per channel.
\(1\times 1\) **Convolutional Layer:** This layer introduced in Glow does a \(1\times 1\) convolution for a given input. Its log determinant and inverse are very easy to compute. It also improves the effectiveness of coupling layers.
**Coupling Layer:** RealNVP introduced a layer in which the input is split into two halves. The first half remains unchanged, and the second half is transformed, parameterized by the first half. The output is the concatenation of the first half and the affine transformation of the second half by functions parameterized by the first. The inverse and log-determinant of the coupling layer are computed in a straightforward manner. The coupling layer consists of a \(3\times 3\) convolution followed by a \(1\times 1\) and a modified \(3\times 3\) convolution, as used in Emerging.
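As an illustration of the coupling idea (not the paper's exact layer, whose internal network is the 3×3–1×1–3×3 stack described above), a minimal PyTorch affine coupling with its exact inverse and log-determinant could look as follows; the hidden width and the tanh stabilization are our own choices.

```
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """x = [x1, x2] -> [x1, x2 * exp(s(x1)) + t(x1)]; s, t come from a small conv net."""
    def __init__(self, channels, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                        # keeps the scales well-behaved
        y2 = x2 * torch.exp(s) + t
        logdet = s.flatten(1).sum(dim=1)         # log|det J| of the coupling
        return torch.cat([x1, y2], dim=1), logdet

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

layer = AffineCoupling(8)
x = torch.randn(2, 8, 16, 16)
y, logdet = layer(x)
assert torch.allclose(layer.inverse(y), x, atol=1e-5)
```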
**Squeeze:** This layer takes features from the spatial to the channel dimension (Behrmann et al., 2019), i.e., it reduces each spatial dimension by a factor of two (two across the height and two across the width), increasing the channel dimension by a factor of four. As used by Dinh et al. (2017), we use the squeeze layer to reshape the feature maps to have smaller resolution but more channels.
**Split:** The input is split into two halves across the channel dimension. We retain the first half, and a function parameterized by the first half transforms the second half. The transformed second half, modeled as Gaussian samples, gives the _latent vectors_. We do not use the checkerboard pattern used in RealNVP (Dinh et al., 2017) and many other works, in order to keep the architecture simple.
## 5 Results
Bits Per Dimension (BPD): BPD is closely related to the NLL loss given in Equation 2. The BPD of an \(H\times W\times C\) image is given by
\[\text{bpd}=\frac{\text{NLLLoss}\times\log_{2}e}{HWC} \tag{13}\]
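A one-line helper makes the conversion explicit; the example NLL value is arbitrary and only illustrates the unit conversion from nats per image to bits per dimension.

```
import math

def bits_per_dim(nll_nats, H, W, C):
    """Convert a per-image negative log-likelihood (in nats) to bits per dimension."""
    return nll_nats * math.log2(math.e) / (H * W * C)

print(bits_per_dim(7500.0, 32, 32, 3))   # ~3.52 bpd for a 32x32x3 image
```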
Table 2 shows comparative BPD results of various models against our model. We present the results of MaCow-var, which uses variational dequantization, introduced in Flow++ (Ho et al., 2019). The reported BPDs are the numbers from the respective model papers.
Sampling Time: Table 2 shows the comparative results of our model against other models. For MaCow, we use the official code released by the authors. We use the code for Emerging that was implemented in PyTorch by the authors of SNF. We have implemented CInC Flow in PyTorch and used it to generate results. The FTs and STs are recorded by averaging ten runs on untrained models (including our model).
In Figure 3, we plot the relationship between the input image size and inverse sampling time. As the input image size increases, our _Parallel Inversion Algorithm_ improves by utilizing the independence in the convolution matrix \(\mathbf{M}\). If we input a single image (batch size = 1), our model performs similarly to CInC Flow and Emerging. MaCow is far slower because it masks four kernels to maintain the receptive field: it needs four convolutions to complete one standard convolution. Emerging requires two consecutive autoregressive convolutions to have the same receptive field as a standard convolution and is slower compared to FInC Flow. For batch sizes of 4 and larger, FInC Flow beats Emerging, MaCow, and CInC Flow by a large margin (see Figure 3) while maintaining the same receptive field.
Scaling sampling time with spatial dimensions: Table 3 shows the comparison among the invertible convolution-based models. To keep the comparison fair, we restrict the total number of parameters across the models to be comparable. We record the average sampling time (ST) to generate 100 images over ten runs while doubling the size of the sampled image from \(16\times 16\) up to \(128\times 128\) and also doubling the batch size from 1 up to 128. Our model outperforms all the other models in most, if not all, settings. All the models were untrained and run on a single NVIDIA GTX 1080Ti GPU.
Image reconstruction and generation: In Figure 4, we present the effectiveness of the FInC Flow model in the reconstruction (sampling) of images. First, we feed the input image through the forward flow and obtain the _latent vector_ (\(z_{L}\)). To reconstruct the images from the _latent vector_ (\(z_{L}\)), we give \(z\) as input to the inverse flow. Figure 4 presents the reconstructed face images for the CelebA dataset after training our model for 100 epochs. To generate sample images, the model takes a random sample from the Gaussian distribution for the _latent vector_. This _latent vector_ is used to generate images by going backward through the flow model. In Figure 5, we present samples generated by
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Model & \multicolumn{3}{c}{MNIST} & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{Imagenet-32x32} & \multicolumn{3}{c}{Imagenet-64x64} \\ \cline{2-13} & BPD & FT & ST & BPD & FT & ST & BPD & FT & ST & BPD & FT & ST \\ \hline Emerging & – & 0.16 & 0.62 & 3.34 & 0.49 & 17.19 & 4.09 & 0.73 & 25.79 & 3.81 & 1.71 & 137.04 \\ MaCow & – & – & – & 3.16 & 1.49 & 3.23 & – & – & – & 3.69 & 2.91 & 8.05 \\ CInC Flow & – & – & – & 3.35 & 0.42 & 7.91 & 4.03 & 0.62 & 11.97 & 3.85 & 1.57 & 55.71 \\ MintNet & 0.98 & 0.16 & 17.29 & 3.32 & 2.09 & 230.17 & 4.06 & 2.08 & 230.44 & – & – & – \\ FlnC Flow (our) & 1.05 & 0.14 & 0.09 & 3.39 & 0.37 & 0.41 & 4.13 & 0.48 & 0.52 & 3.88 & 1.43 & 2.11 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the bits per dimension (BPD), forward pass time (FT) and sampling time (ST) on standard benchmark datasets of various \(k\times k\) convolution based Normalizing Flow models. FT and ST are presented in seconds.
our model on the MNIST, CIFAR-10, and ImageNet-64x64 dataset.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Models & Setting (K and L) & Learnable params (M = million) & FT(n=100) & ST(n=100) \\ \hline MaCow-FG & [4, [12, 12], [12, 12], [12], [12]] & 37.19M & 0.88 & 2.64 \\ MaCow-org & [4, [12, 12], [12, 12], [12], [4, 4]] & 38.4M & 1.48 & 3.23 \\ FinC Flow (our) & [28, 28, 28] & 39.46M & **0.37** & **0.41** \\ \hline \hline \end{tabular}
\end{table}
Table 3: CIFAR-10: comparison of learnable parameters and sampling time. FInC Flow has fewer learnable parameters with the same receptive field and faster layers (all times are averaged over ten loops for n = 100 sampled images, in seconds). ST = Sampling Time, FT = Forward Time. MaCow-FG is the fine-grained MaCow model, and MaCow-org is the MaCow model that uses the original multi-scale architecture, which is the same as in Glow. MaCow and our method are closely similar in terms of convolutional design, so here we show that our proposed method samples quickly while maintaining a fast forward time.
Figure 4: Comparison of (a) original and (b) reconstructed image samples for the \(64\times 64\) CelebA dataset after training the FInC Flow model for 100 epochs. From the images, we can conclude that our model reconstructs the original images.
Figure 3: Sampling times for four models: ours, Emerging, CInC Flow, and MaCow. Each plot gives the 95% confidence interval (CI) over ten runs of the time to sample 100 images. The X-axis represents the sizes of the sampled images, starting from \(16\times 16\times 2\) (\(H\times W\times C\)) up to \(128\times 128\times 2\).
## 6 Conclusion
With a parallel inversion approach, we present a \(k\times k\) invertible convolution for normalizing flow models. We utilize it to develop a normalizing flow architecture with a highly efficient sampling pass. We implement our parallel algorithm on GPU and present benchmarking results, which show a significant enhancement in forward and sampling speeds compared to alternative methods for \(k\times k\) invertible convolutions.
|
2303.07432 | End-to-end Deformable Attention Graph Neural Network for Single-view
Liver Mesh Reconstruction | Intensity modulated radiotherapy (IMRT) is one of the most common modalities
for treating cancer patients. One of the biggest challenges is precise
treatment delivery that accounts for varying motion patterns originating from
free-breathing. Currently, image-guided solutions for IMRT is limited to 2D
guidance due to the complexity of 3D tracking solutions. We propose a novel
end-to-end attention graph neural network model that generates in real-time a
triangular shape of the liver based on a reference segmentation obtained at the
preoperative phase and a 2D MRI coronal slice taken during the treatment. Graph
neural networks work directly with graph data and can capture hidden patterns
in non-Euclidean domains. Furthermore, contrary to existing methods, it
produces the shape entirely in a mesh structure and correctly infers mesh shape
and position based on a surrogate image. We define two on-the-fly approaches to
make the correspondence of liver mesh vertices with 2D images obtained during
treatment. Furthermore, we introduce a novel task-specific identity loss to
constrain the deformation of the liver in the graph neural network to limit
phenomenons such as flying vertices or mesh holes. The proposed method achieves
results with an average error of 3.06 +- 0.7 mm and Chamfer distance with L2
norm of 63.14 +- 27.28. | Matej Gazda, Peter Drotar, Liset Vazquez Romaguera, Samuel Kadoury | 2023-03-13T19:15:49Z | http://arxiv.org/abs/2303.07432v1 | # End-to-End Deformable Attention Graph Neural Network for Single-View Liver Mesh Reconstruction
###### Abstract
Intensity modulated radiotherapy (IMRT) is one of the most common modalities for treating cancer patients. One of the biggest challenges is precise treatment delivery that accounts for varying motion patterns originating from free-breathing. Currently, image-guided solutions for IMRT is limited to 2D guidance due to the complexity of 3D tracking solutions. We propose a novel end-to-end attention graph neural network model that generates in real-time a triangular shape of the liver based on a reference segmentation obtained at the pre-operative phase and a 2D MRI coronal slice taken during the treatment. Graph neural networks work directly with graph data and can capture hidden patterns in non-Euclidean domains. Furthermore, contrary to existing methods, it produces the shape entirely in a mesh structure and correctly infers mesh shape and position based on a surrogate image. We define two on-the-fly approaches to make the correspondence of liver mesh vertices with 2D images obtained during treatment. Furthermore, we introduce a novel task-specific identity loss to constrain the deformation of the liver in the graph neural network to limit phenomenons such as flying vertices or mesh holes. The proposed method achieves results with an average error of \(3.06\pm 0.7\) mm and Chamfer distance with L2 norm of \(63.14\pm 27.28\).
Matej Gazda +
Footnote †: star}\)Matej Gazda performed the work while at Ecole Polytechnique de Montreal, Montreal, QC H3C 3A7, Canada
\({}^{\dagger}\) Polytechnique Montreal, Montreal, QC H3C 3A7, Canada
\({}^{\dagger}\) Intelligent Information Systems Laboratory, Technical University of Kosice, Kosice 040 12, Slovakia
Motion modeling, 3D mesh inference, Attention Graph Neural Network, Liver cancer radiotherapy
## 1 Introduction
One of the most commonly used radiotherapy treatments is intensity modulated radiotherapy (IMRT), which consists of the delivery of tightly targeted radiation beams from outside the body. However, it faces complex challenges in the presence of significant motion displacements, which pose a great risk of dose administration to healthy tissue, such as in the liver [1]. Consequently, respiratory motion compensation is an important part of radiotherapy and other non-invasive interventions [2]. To avoid unnecessary damage due to organ displacement caused by respiratory motion, the treated organ must be located and imaged at all times. Unfortunately, image acquisition during IMRT treatment is limited to 2D cine slices due to time constraints, resulting in a lack of out-of-plane information for tumor targeting. Real-time 3D motion tracking of organs would provide the necessary tools to accurately follow tumor targets.
Several methods based on convolutional neural networks (CNNs) have been proposed to tackle the problem of modeling 3D data based on 2D signals. Mezheritsky et al. [3] proposed a method that warps the reference volume with the output of a convolutional autoencoder, thus recovering 3D deformation fields with only a pre-treatment volume and a single live 2D image. Cerrolaza et al. [4] proposed a 3D ultrasound fetal skull reconstruction method based on standard 2D ultrasound views of the head using a reconstructive conditional variational autoencoder. Girdhar et al. [5] investigated a number of tasks including voxel prediction from 2D images and 3D model retrieval. However, methods based on CNNs achieved only partial success, since they rely on fixed-size inputs. The volumes might have different sizes across scans, body types, and/or machines, and the requirement to reshape volumes might result in information loss.
Recent advances in graph neural networks have sparked progress in many domains, including medical imaging [6]. Lu et al. [7] leveraged a dynamic spatio-temporal graph neural network for cardiac motion analysis. Graph neural networks operate directly on graph objects, in contrast to representations obtained by CNNs, which might lose important surface details. Meshes have more desirable properties for many applications because they are lightweight, capture more shape modeling detail, and are better suited for simulating deformations [6].
In this work, we propose a Deformable Attention Graph Neural Network - Single View Liver Reconstruction (DAGNN-SVLR) method, which is an end-to-end trainable model that infers a 3D liver mesh structure at any time during treatment, using a reference triangulated mesh obtained from the segmentation of baseline 3D MRI volume and a single 2D MRI slice captured in real-time. Moreover, we present a new identity loss tailored for this specific task and empirically show that the combination with the Chamfer distance favors good mesh properties and improves the performance of the
predictive model. A high-level representation of the proposed approach is shown in Fig. 1.
## 2 Dagnn-Svlr
The DAGNN-SVLR model leverages a triangulated liver mesh segmented from the pre-operative T2-w MRI volume and a 2D cine-MR slice taken in real-time during treatment. The model learns a function \(f(.)\) that predicts the deformation of a reference mesh \(M_{r}\) based on the surrogate 2D MRI slice at time \(t\), defined as \(I_{t}\), calculating the mesh at time \(t\) as \(M_{t}=f(M_{r},I_{t})\).
A major component of the DAGNN-SVLR model is to determine correspondences between the reference mesh and a surrogate image. Since GNNs work directly on graphs and we cannot simply merge the image with the graph, we explored two solutions as illustrated in Fig. 1.
1. As a first option, without feature pooling, we utilize a residual convolutional network as a feature extractor. The output is a latent representation of the image, a one-dimensional vector of size \(dim=128\). The ResNet18 [8] network was selected as a compromise between accuracy and speed.
2. As a second option, we propose a feature pooling approach. We use a ResNet18 network with added padding, so that no layer of the network downsamples the spatial shape. Consequently, the feature maps produced by this ResNet have a shape identical to the input image. In parallel to the feature extraction, the index coordinates of the reference mesh vertices are calculated, so that each vertex of the mesh can be associated with its particular position in the image. Afterward, direct \(3\times 3\) neighborhoods are extracted from each feature map, yielding nine features per feature map for each node in the reference mesh (a minimal sketch of this pooling step follows this list).
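A minimal PyTorch sketch of this vertex-wise pooling is given below; it assumes integer vertex coordinates already projected onto the image grid, and the function name and toy sizes are ours rather than the authors'.

```
import torch
import torch.nn.functional as F

def pool_vertex_features(feature_maps, vertex_ij):
    """feature_maps: (C, H, W) from the padding-preserving ResNet; vertex_ij: (N, 2)
    integer image coordinates of the mesh vertices. Returns (N, C * 9): the 3x3
    neighbourhood of every feature map around each vertex."""
    C, H, W = feature_maps.shape
    padded = F.pad(feature_maps, (1, 1, 1, 1))            # handle border vertices
    patches = padded.unfold(1, 3, 1).unfold(2, 3, 1)       # (C, H, W, 3, 3)
    i, j = vertex_ij[:, 0], vertex_ij[:, 1]
    return patches[:, i, j].permute(1, 0, 2, 3).reshape(len(vertex_ij), -1)

feats = torch.randn(64, 176, 176)
verts = torch.randint(0, 176, (1500, 2))
print(pool_vertex_features(feats, verts).shape)            # torch.Size([1500, 576])
```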
Once the feature extraction module is processed, the concatenated features of the reference mesh, with the extracted features from the image, are passed through an attention graph neural network to produce the predicted mesh surface.
### Graph Convolutional Neural Network
A Graph Convolutional Neural Network (GCNN) is a multilayer neural network that operates directly on graphs. It induces vertex embeddings based on the properties of their local neighborhood and the vertices.
Let \(G=(V,E)\) be a triangulated surface of the liver volume, where \(V=\{1,2,\ldots,N\}\) and \(E\subseteq V\times V\) represent the sets of vertices and edges, respectively. Let \(X\in\mathbb{R}^{N\times m}\) be a matrix containing the \(m\) features of all \(N\) vertices. We denote \(\mathcal{N}_{i}=\{j:(i,j)\in E\}\cup\{i\}\) as the neighbor set of vertex \(i\). Then, \(H=\{\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{N}\}\) is the set of input vertex features, where feature vector \(\mathbf{h}_{i}\) is associated with vertex \(i\in V\).
Kipf et al. [9] proposed a Spatial Graph Convolution layer GCN, where the features from neighboring vertices are aggregated with fixed weights, where one vertex embedding is calculated as:
\[h_{u}=\Phi(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}c_{uv}\psi(\mathbf{ x}_{v})) \tag{1}\]
where \(\Phi\) and \(\psi\) are learnable functions, \(\mathbf{x}_{u}\) and \(\mathbf{x}_{v}\) are vertex representations of vertex \(u\) and vertex \(v\), \(c_{uv}\) specifies the importance of vertex \(v\) to vertex \(u\)'s representation, \(\mathcal{N}_{u}\) is a local neighborhood and \(\bigoplus\) is an aggregation function, such as a summation or mean.
Graph Attention Layers (GAT) [10] extend this work by incorporating a self-attention mechanism that computes the importance coefficients \(a_{uv}\):
\[h_{u}=\Phi(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}a(\mathbf{x}_{u},\mathbf{x}_{v})\,\psi(\mathbf{x}_{v})). \tag{2}\]
The attention mechanism \(a\) is a single-layer feed-forward neural network parameterized by a weight vector \(\mathbf{a}\in\mathbb{R}^{2F^{\prime}}\), where:
Figure 1: High-level representation of the DAGNN-SVLR model. The Attention Graph Neural Network deforms the reference mesh from the preoperative phase based on features extracted from supplied 2D surrogate images. We proposed two approaches to feature extraction: without feature pooling and with feature pooling.
\[a_{uv}=\mathrm{softmax}_{v\in\mathcal{N}_{u}}\Big(\mathrm{LeakyReLU}\big(\mathbf{a}^{T}[\psi(\mathbf{x}_{u})\,\|\,\psi(\mathbf{x}_{v})]\big)\Big), \tag{3}\]
with \(\|\) denoting concatenation.
We employ a neural network consisting of seven alternating GAT and normalization layers. Each GAT layer takes 128 input features and uses two attention heads, whose outputs are summed. Following the proposal in [11], we aggregate the features using a combination of two aggregation functions: summation and mean.
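A sketch of such a network in PyTorch Geometric is shown below. It is our approximation of the described architecture: GATConv averages its two heads (the paper sums them), the normalization is a LayerNorm, and the output head predicting a 3D offset per vertex is our assumption about how the deformation is applied.

```
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv   # requires torch_geometric

class MeshDeformer(nn.Module):
    """Seven GAT blocks over the mesh graph, then a linear head predicting a
    3D offset per vertex that is added to the reference vertex positions."""
    def __init__(self, in_dim, hidden=128, n_layers=7):
        super().__init__()
        self.convs = nn.ModuleList()
        self.norms = nn.ModuleList()
        for i in range(n_layers):
            self.convs.append(GATConv(in_dim if i == 0 else hidden, hidden,
                                      heads=2, concat=False))
            self.norms.append(nn.LayerNorm(hidden))
        self.head = nn.Linear(hidden, 3)

    def forward(self, vertex_feats, ref_xyz, edge_index):
        h = vertex_feats
        for conv, norm in zip(self.convs, self.norms):
            h = torch.relu(norm(conv(h, edge_index)))
        return ref_xyz + self.head(h)              # deformed vertex positions

net = MeshDeformer(in_dim=3 + 576)
xyz = torch.randn(1500, 3)
feats = torch.cat([xyz, torch.randn(1500, 576)], dim=1)
edges = torch.randint(0, 1500, (2, 6000))
print(net(feats, xyz, edges).shape)                 # torch.Size([1500, 3])
```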
### Loss functions
We define three different loss functions to constrain the properties of the output liver meshes. Among the most critical properties of the final meshes are smoothness, closedness, and the absence of so-called flying vertices.
#### Chamfer distance
The Chamfer distance measures the distance from each vertex of one set to the other set and serves as a constraint on the location of mesh vertices. The function is continuous and piecewise smooth and is defined as:
\[\mathcal{L}_{CD}(P,Q)=\sum_{p}\min_{q}||p-q||_{2}^{2}+\sum_{q}\min_{p}||p-q||_{ 2}^{2}. \tag{4}\]
where \(P\) and \(Q\) are the predicted and ground truth liver meshes (as point sets) and \(p\) and \(q\) are single points. Contrary to its name, the Chamfer distance is not a true distance function, since it does not satisfy the triangle inequality.
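A minimal PyTorch implementation of Eq. (4) over vertex sets could look as follows; it is fully differentiable with respect to the predicted points.

```
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer distance with squared L2, as in Eq. (4).
    p: (N, 3) predicted vertices; q: (M, 3) ground truth vertices."""
    d = torch.cdist(p, q) ** 2                     # (N, M) pairwise squared distances
    return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()

pred = torch.randn(1500, 3, requires_grad=True)
gt = torch.randn(1400, 3)
chamfer_distance(pred, gt).backward()              # differentiable w.r.t. predictions
```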
#### Sampled Chamfer Distance
The Chamfer distance, despite its wide usage in the mesh processing domain, is unable to capture the fine information contained in the mesh structure. It penalizes point differences directly, resulting in the loss of surface mesh information.
To mitigate this drawback, Smith et al. [12] introduced a training objective that operates on a local surface defined by vertices sampled by a differentiable sampling procedure.
Given a facet defined by 3 vertices \(\{v_{1},v_{2},v_{3}\}\in\mathbb{R}^{3}\), uniform sampling is achieved by:
\[s=(1-\sqrt{r_{1}})v_{1}+(1-r_{2})\sqrt{r_{1}}v_{2}+\sqrt{r_{1}}r_{2}v_{3} \tag{5}\]
where \(s\) is a point inside the surface defined by the facet and \(r_{1},r_{2}\sim U[0,1]\).
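The sampling of Eq. (5) can be sketched in a few lines of PyTorch; for brevity this version picks facets uniformly at random rather than proportionally to their area, which is a simplification of typical surface sampling.

```
import torch

def sample_on_faces(vertices, faces, n_samples):
    """Uniform barycentric sampling of Eq. (5). vertices: (V, 3); faces: (F, 3) indices."""
    idx = torch.randint(0, faces.shape[0], (n_samples,))      # random facets
    v1, v2, v3 = (vertices[faces[idx, i]] for i in range(3))
    r1 = torch.rand(n_samples, 1).sqrt()                       # r1 already holds sqrt(r1)
    r2 = torch.rand(n_samples, 1)
    return (1 - r1) * v1 + (1 - r2) * r1 * v2 + r1 * r2 * v3   # differentiable in vertices

verts = torch.rand(2000, 3)
faces = torch.randint(0, 2000, (3800, 3))
print(sample_on_faces(verts, faces, 1000).shape)               # torch.Size([1000, 3])
```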
The loss function is defined as:
\[\mathcal{L}_{SCD}(P,Q)=\sum_{\hat{p}\in\hat{S}}\min_{f\in Q}dist(\hat{p},f)+\sum_{q\in S}\min_{\hat{f}\in P}dist(q,\hat{f}) \tag{6}\]
where \(P\) is the predicted mesh, \(Q\) is the ground truth mesh, \(\hat{S}\) and \(S\) are the points sampled from the predicted and ground truth meshes, \(\hat{f}\) and \(f\) are faces of the predicted and ground truth meshes, and \(dist\) is a function computing the distance between a point and a triangular facet.
#### Identity loss
The identity loss penalizes substantial changes in the vertex positions if the surrogate image represents the actual state of the mesh. Given a mesh \(M_{t}\), the surrogate signal \(I_{t}\) at time \(t\), and a model that infers the current mesh \(\hat{M}_{t}\) from the surrogate image and a reference mesh \(M_{r}\), i.e., \(\hat{M}_{t}=f(M_{r},I_{t})\), we define the identity loss as:
\[\mathcal{L}_{I}=\mathcal{L}_{CD}(M_{t},\hat{M}_{t}) \tag{7}\]
where \(L_{CD}\) is a Chamfer loss or a sampled Chamfer loss.
Finally, DAGNN-SVLR computes the total loss as \(\mathcal{L}=\mathcal{L}_{SCD}+\alpha\mathcal{L}_{I}\), where \(\alpha\) is a hyperparameter.
## 3 Results
We evaluated the proposed approach using a 4D-MRI liver dataset acquired from \(25\) volunteers [13]. The volume dimensions were \(176\times 176\times 32\) with a pixel spacing of \(1.7\times 1.7\) mm\({}^{2}\) and a slice thickness of \(3.5\) mm. Reference meshes were created as closed surfaces of the liver segmentation from each subject's inhale phase. Ground truth meshes of the other temporal sequences were then obtained using deformation fields calculated by Elastix deformable registration [14]. Volumes were resized to \(64\times 64\times 32\) for the registration due to computational complexity. The number of vertices per mesh ranged from 1300 to 2000. When the sampling loss was used, an empirically determined value of 1000 sampled points was chosen.
As preprocessing steps, we centered the meshes and normalized their scale to the interval \((-1,1)\). For validation purposes, we performed 10-fold cross-validation. The model was trained with a batch size of one, gradient accumulation over five steps, and the Adam optimizer with a learning rate of \(1e^{-5}\). We set the weight \(\alpha\) of the identity loss \(\mathcal{L}_{I}\) to \(0.05\).
Visualizations of sample predictions and their ground truth can be seen in Fig. 2. It is clear that the inferred shapes exhibit the important properties of
Figure 2: Visualization of a signed distance (in mm) from predicted to ground mesh for two subjects.
complete and smooth surfaces. Additionally, in Fig. 3 we show ground truth and predicted delineations obtained from the inferred mesh in the axial, sagittal, and coronal planes.
The average error and average Chamfer distance over the entire test sequence are presented in Table 1. We used the Chamfer distance with the L2 norm and errors calculated as the unsigned distance between the two meshes to measure the performance. The results of our model are divided into two main categories: with and without feature pooling.
To compare our method with a state-of-the-art approach, we chose the Node2Vec [15] method, which supports working with graph data. Node2Vec was trained for \(100\) epochs, and the learned embeddings were concatenated with the extracted features, similarly to our model.
Feature pooling, as used in this study, slightly outperforms the use of a single feature vector with respect to average error in all cases. The results also confirm our hypothesis that leveraging the sampling loss improves performance for both feature extraction methods. The average error for feature pooling decreased from \(3.44\pm 0.91\) mm to \(3.23\pm 0.64\) mm, and for the approach without FP from \(3.46\pm 0.70\) mm to \(3.26\pm 0.78\) mm. Adding the identity loss during training further improves the results from \(3.23\pm 0.64\) mm and \(3.26\pm 0.78\) mm to \(3.05\pm 0.75\) mm and \(3.06\pm 0.7\) mm respectively, thus representing a performance improvement of \(5.6\)% and \(6.2\)%.
Interestingly, the test Chamfer distance is higher when the sampling loss was used without the identity loss for feature pooling. We hypothesize that this is because the model was trained using a loss that samples points on the surface, while the conventional Chamfer loss was used for testing. This discrepancy in Chamfer distance diminishes when the identity loss is added.
Next, we present the average error in millimeters for three subjects over time in Fig. 4. The plot of the subject in the first row has spikes that introduce errors of more than \(5\) mm. The other two subjects have very similar errors over time, oscillating around a value of \(2.5\) mm without any outliers. The model repeatedly achieves errors as low as \(1.8\) mm.
## 4 Conclusion
We presented a novel approach for single-view liver surface reconstruction from a surrogate 2D signal based on the combination of an attention graph neural network with a fully convolutional neural network. Our model was successful in generating full meshes with proper topology and position, with an inference time of 0.002 seconds. We have shown that the model benefits considerably from the proposed identity loss and from the feature pooling process. Several potential extensions will be addressed in future work, such as liver motion prediction in the graph domain or using a sequence of surrogate images for liver reconstruction instead of a single view.
\begin{table}
\begin{tabular}{|c|c c|} \hline
**Method** & **Chamfer distance with L2 norm** & **Avg. error (mm)** \\ \hline Node2Vec [15] & \(79.45\pm 48.67\) & \(5.0\pm 3.7\) \\ FP + \(\mathcal{L}_{CD}\) & \(62.85\pm 25.18\) & \(3.44\pm 0.91\) \\ FP + \(\mathcal{L}_{SCD}\) & \(82.25\pm 24.40\) & \(3.23\pm 0.64\) \\ FP + \(\mathcal{L}_{SCD}\) + \(\mathcal{L}_{I}\) & \(63.14\pm 27.28\) & \(\mathbf{3.05\pm 0.75}\) \\ No FP + \(\mathcal{L}_{CD}\) & \(67.226\pm 5.57\) & \(3.46\pm 0.70\) \\ No FP + \(\mathcal{L}_{SCD}\) & \(66.117\pm 26.48\) & \(3.26\pm 0.78\) \\ No FP + \(\mathcal{L}_{SCD}\) + \(\mathcal{L}_{I}\) & \(\mathbf{61.34\pm 25.63}\) & \(3.06\pm 0.7\) \\ \hline \end{tabular}
\end{table}
Table 1: Prediction results for different loss functions, with and without feature pooling (FP).
Figure 4: Average error (in mm) over free-breathing sequences for three selected subjects.
Figure 3: Comparison of ground truth (green) and predicted (red) segmentations. Each row depicts different volunteer acquisitions.
## 5 Compliance with Ethical Standards
This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the local Institutional Review Board.
## 6 Acknowledgments
This research has been funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors have no relevant financial or non-financial interests to disclose.
|
2310.04003 | The Role of Federated Learning in a Wireless World with Foundation
Models | Foundation models (FMs) are general-purpose artificial intelligence (AI)
models that have recently enabled multiple brand-new generative AI
applications. The rapid advances in FMs serve as an important contextual
backdrop for the vision of next-generation wireless networks, where federated
learning (FL) is a key enabler of distributed network intelligence. Currently,
the exploration of the interplay between FMs and FL is still in its nascent
stage. Naturally, FMs are capable of boosting the performance of FL, and FL
could also leverage decentralized data and computing resources to assist in the
training of FMs. However, the exceptionally high requirements that FMs have for
computing resources, storage, and communication overhead would pose critical
challenges to FL-enabled wireless networks. In this article, we explore the
extent to which FMs are suitable for FL over wireless networks, including a
broad overview of research challenges and opportunities. In particular, we
discuss multiple new paradigms for realizing future intelligent networks that
integrate FMs and FL. We also consolidate several broad research directions
associated with these paradigms. | Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q. S. Quek | 2023-10-06T04:13:10Z | http://arxiv.org/abs/2310.04003v3 | # The Role of Federated Learning in a Wireless World with Foundation Models
###### Abstract
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications. The rapid advances in FMs serve as an important contextual backdrop for the vision of next-generation wireless networks, where federated learning (FL) is a key enabler of distributed network intelligence. Currently, the exploration of the interplay between FMs and FL is still in its nascent stage. Naturally, FMs are capable of boosting the performance of FL, and FL could also leverage decentralized data and computing resources to assist in the training of FMs. However, the exceptionally high requirements that FMs have for computing resources, storage, and communication overhead would pose critical challenges to FL-enabled wireless networks. In this article, we explore the extent to which FMs are suitable for FL over wireless networks, including a broad overview of research challenges and opportunities. In particular, we discuss multiple new paradigms for realizing future intelligent networks that integrate FMs and FL. We also consolidate several broad research directions associated with these paradigms.
Network intelligence, federated learning, foundation model, large language model.
## I Introduction
Foundation models (FMs), which include large language models (e.g. GPT series, PaLM2, Claude2, LLaMA2, etc.) and large vision models (e.g. CLIP, SAM, etc.), are general-purpose artificial intelligence (AI) models that can be easily adapted to multiple downstream tasks [1]. This adaptability is possible because of both size and scale: FMs are sufficiently large models that display emergent abilities not found in smaller models [2], and FMs are trained on massive (Internet-scale) data. Recently, FMs have been the catalyst of multiple new AI applications.
Mirroring the success of AI, there has also been rapid progress in the development of intelligent wireless networks. Indeed, network intelligence is envisioned to be a key component of the fifth-generation (5G)-and-Beyond wireless systems [3]. Naturally, using FMs in wireless networks could also catalyze further research in network intelligence.
Imagine a world with intelligent infrastructure, where a transportation network is connected via an intelligent wireless network, with traffic lights, cameras, and autonomous vehicles connected to an FM-based AI system that can monitor various multi-modal data (e.g. weather, traffic, public events) for decision-making such as re-routing traffic and managing crowds; see Fig. 1. In this world, distributed network intelligence plays a central role, where multiple FMs across wireless networks could collaboratively work towards solving problems in real-time, such as disaster management. Hence, it would be important to achieve collaboration across FMs and multi-modal information fusion across the networks.
Federated learning (FL) is a concrete privacy-preserving paradigm for realizing distributed network intelligence, whereby the edge clients in the wireless network are involved in the deployment of intelligent services. Such FL-enabled networks (which is also termed federated edge learning) offload the capabilities of intelligent prediction and decision-making to edge clients [4, 5]. However, to effectively train a joint machine learning model across clients, challenges such as data heterogeneity and limited resources have to be addressed. Numerous advanced FL approaches have been recently developed to enhance the performance of federated
Fig. 1: An example of an intelligent transportation system in wireless networks, where the autonomous vehicles and edge server collaboratively make the decision assisted by FMs based on the collected multi-modal data.
edge learning systems in terms of communication efficiency, robustness, and personalization.
In a world with FMs, how do we reconcile FMs and FL over wireless networks to advance distributed network intelligence? Such an interplay between FMs and FL is naturally a confluence of opportunities and challenges. Intuitively, FL can be used to train FMs in a distributed manner, where computing resources and private data can be used in the training stage, thereby complementing the usual centralized training scheme. Dually, the excellent adaptability of pre-trained FMs could benefit the different (pre-processing/training/inference/evaluation) stages of federated edge learning systems and strengthen the intelligent decision-making ability of future wireless networks. Such opportunities are accompanied by technical challenges. Compared to conventional machine learning models, the training and inference processes of FMs are cost-intensive across various aspects: memory, storage, computing, and communication overhead. The involvement of human feedback to further fine-tune FMs is also a crucial step for some FMs [6]. When considering the deployment of FMs over a wireless network, either at the edge or in the cloud, for the purpose of either training or inference, the inherent nature of FMs poses critical challenges to storage, power consumption, and communication traffic.
In this article, we shall present a balanced discussion on both **opportunities** and **challenges**, by answering the following question:
**To what extent are FMs suitable for FL-enabled wireless networks?**
In particular, we focus on the following sub-questions: _How to design a sustainable paradigm for deploying systems that integrate FM and FL? What are the key challenges and constraints?_ To explore the possible new paradigms as well as the potential challenges for the interplay between FMs and FL, we provide a broad overview of the deployment of FMs over a federated edge learning system in Sec. II. In Sections III and IV, we address in detail how FMs and FL could benefit each other. We also highlight some future research directions.
## II Paradigms, challenges, and opportunities for the integrated FM and FL
We begin this section with a brief introduction to FMs and FL. Possible paradigms of integrated federated edge learning systems and FMs will be presented subsequently.
### _Preliminaries of FMs and FL_
**Foundation models.** In this article, FMs refer to a class of models trained over huge amounts of data, with parameter counts in the billions, demonstrating emergent capabilities across applications and tasks such as language, vision, robotics, recommendation, and reasoning (see Tab. I for a representative summary of FMs [7]). A typical training pipeline of FMs consists of self-supervised training, supervised fine-tuning, and reinforcement learning with human feedback (RLHF). Overall, the level of access (e.g. via paid API access or downloadable as open-source models) is the primary consideration for how FMs could be integrated into wireless networks. For instance, usual training (including training smaller models via distillation) and customized fine-tuning could be carried out with open-source FMs, which may not be possible for proprietary FMs.
**Federated learning.** FL is a distributed learning paradigm in which multiple clients collaboratively train a machine learning model, under the coordination of a server without having to exchange private local data. (See Fig. 2 for a brief overview of FL-enabled wireless networks.) The typical federated training process is organized in terms of communication rounds, where the model parameters are exchanged between the clients and the server. Broadly speaking, FL could be divided into two general types: _cross-device FL_ (i.e. training over a large number of clients, typically with limited training data) and _cross-silo FL_ (i.e. collaborative training with a limited number of clients, e.g. hospitals and companies, typically with a large amount of training data). In both types, the challenges of limited communication resources and system/data heterogeneity would have to be addressed [8].
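For readers unfamiliar with the round-based exchange described above, the sketch below illustrates the core server-side step in the spirit of FedAvg: clients train locally and the server forms a data-size-weighted average of their parameters. The model representation and the `local_train` routine are placeholders for illustration, not a specification of any particular system.

```python
def server_aggregate(client_params, client_sizes):
    """FedAvg-style aggregation: a data-size-weighted average of client parameters.

    client_params: list (one entry per client) of lists of numpy arrays.
    client_sizes:  number of local training samples held by each client.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_params[0])
    return [
        sum(params[k] * (n / total) for params, n in zip(client_params, client_sizes))
        for k in range(n_layers)
    ]

def communication_round(global_params, clients, local_train):
    """One round: broadcast the global model, train locally, aggregate at the server."""
    updates, sizes = [], []
    for client in clients:
        # local_train is a placeholder for the client-side optimization routine.
        params, n_samples = local_train(global_params, client)
        updates.append(params)
        sizes.append(n_samples)
    return server_aggregate(updates, sizes)
```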
### _Challenges of deploying FMs over FL-enabled wireless networks_
The FL-enabled wireless network consists of multiple local clients and an edge server. Before integrating FMs into an FL-enabled wireless network, we have to take into consideration the conflicts arising from the constraints imposed by different
Fig. 2: A brief overview of vanilla machine learning paradigm in FL-enabled wireless networks.
FL scenarios versus the cost-intensive requirements of using FMs. We summarize the key challenges as follows.
* _High power consumption._ Due to the sheer Internet-scale data required for training and the tremendous number of model parameters, the training process of FMs will have substantial requirements on both computation hardware and energy consumption (See Tab. I[7]). This makes the role of energy efficiency even more critical when deploying energy-hungry FMs in a reasonably sustainable manner. Moreover, to integrate FMs into wireless networks, we typically require specialized hardware for AI computing acceleration, such as GPUs and TPUs.
* _Large storage and memory requirements._ To satisfy the requirements of training FMs, both storage and memory would have to be drastically increased to handle streaming collected/generated data and the updating of model parameters during training. Typical network architectures may not necessarily meet such storage and memory requirements.
* _Huge communication overhead._ Due to the sheer model sizes of FMs, communication overhead would be immense even for transmitting pre-trained/fine-tuned FM weights from the FM vendor to the edge of the network for downstream tasks. For FL-based training, since training FMs from scratch could take up to six months of continuous training over thousands of GPUs, this communication overhead would be magnified in both downlink and uplink over a sustained period of time. In the future deployment of intelligent networks, the transmission of FM weights would likely be a non-negligible component of network traffic, which should be addressed by future standardization for model compression or transmission coding protocols.
* _Additional latency._ The 5G-and-Beyond network has strict latency requirements. It is not clear how the integration of FMs into FL-enabled networks would meet such strict latency requirements.
* _Hallucination of FMs._ Hallucination is a crucial challenge when using FMs [7]. Here, hallucination refers to the generation of inaccurate or nonfactual content by the FM. This could lead to disastrous outcomes, especially if FMs are relied upon for critical automated decision-making, such as specific tasks in autonomous driving. Once FMs are deployed in a wireless network, detecting and mitigating the negative outcomes of hallucination becomes a challenge.
### _Possible network architectures for integrating FM and FL_
Despite the exciting possibilities brought by FMs, the aforementioned cost-intensive challenges that FMs pose would naturally constrain the architectural design for a practical, intelligent wireless network that integrates FMs and FL. For example, the stringent requirements for storage, memory, and computing resources would prevent FMs from being deployed at the edge of the network in cross-device FL settings. Taking into consideration such constraints, as well as the goal of achieving sustainable integration of FL-enabled networks and FMs, we summarize the possible architectures as follows, with respect to how FMs and FL-enabled networks could benefit from each other.
**FMs in FL-enabled wireless networks.** In real-world FL-enabled wireless networks, the inherent system heterogeneity is reflected by the diverse computation and communication capabilities across different clients, as well as the server. Hence, we cannot hope to have a "one-size-fits-all" scheme for the position and function of FMs. Specifically, for a system with sufficient computing and hardware resources at the edge clients, each client could afford a generic/personalized FM. In contrast, for a resource-constrained system, we could have an FM deployed at the edge server, or an FM provided by a cloud vendor via paid API access.
**FL for training FMs over networks.** In resource-constrained wireless networks, the usual FL model updates and aggregations may not be feasible due to the immense computing and communication requirements for training FMs. This makes a hybrid global-local (cloud-edge) model training scheme preferable. In particular, we could offload cost-intensive computations (e.g. FM pre-training) to the server, and reserve lower-cost computations (e.g. fine-tuning/personalization) for the clients, to alleviate both communication overhead and the high demand for computational resources. Furthermore, parameter-efficient tuning strategies could also be adopted to achieve collaborative training across clients at minimal cost.
Detailed descriptions of both scenarios will be provided in Sections III and IV, respectively.
## III FMs in FL-enabled wireless networks
Consider an FL-enabled network with \(N\) clients. The limited communication resources and the inherent data/system heterogeneity of the real-world networks would hinder the performance and scalability when deploying FL. With the inclusion of FMs as a critical component of our intelligent network infrastructure, we could leverage FMs to enhance FL training performance and provide new application scenarios that conventional AI models cannot provide.
However, as indicated previously, there is no "one-size-fits-all" scheme. The way that FMs are integrated into an FL system should be aligned with the system's properties, where conceptually, FMs play the role of a customized service provider. In other words, we can think of the usage of FMs abstractly in terms of "_Foundation Model as a Service_" (FMaaS). Specifically, in existing FL-enabled networks, we could use FMs to provide different types of services in different stages, such as data pre-processing, training, and the calibration of the jointly trained model, described as follows.
### _Foundation models in pre-processing stages_
Imbalanced data is widespread in real-world wireless networks, and the data heterogeneity it induces across clients is regarded as a major challenge constraining the training performance of FL systems. The data generation ability, one of the most widely known properties of some FMs, can be applied to enhance model training [9, 10]. Owing to their high cost, FMs could be deployed either at the edge server or at the cloud data center, as described in the first two paradigms in Fig. 3. Possible integrated systems are summarized as follows.
_1) Data augmentation at the edge:_ Imbalanced data in heterogeneous networks may result in a limited number of data samples in a few classes (e.g., the minority classes in classification and recognition tasks), leading to poor representation capabilities for these classes. In this context, clients with imbalanced local data at the network edge could request the FM to generate supplementary data for the minority classes, so as to balance the local data statistics for local representation learning.
_2) Synthetic data at the server:_ In the case of service unavailability and outage from the FM vendor (due to poor connection or traffic congestion), a synthetic (balanced) dataset could be constructed at the edge server based on the generative services of FMs. Given the synthetic data at the server, multiple scenarios could be explored. Firstly, at the end of each round, the data could be used to evaluate the aggregated model or the local models separately for post-processing (e.g. model re-weighting) to improve the model generalization performance and robustness, as the statistics of the model weights may be diverse in the presence of data heterogeneity or adversarial local clients/updates. Secondly, the new dataset could play a role in model distillation at the server. The post-training at the server could utilize these data for global model calibration and feature alignment.
With FMs acting as data augmenters, public or locally generated synthetic data become involved in global distillation or local training, enabling the trained model to learn more balanced representations from combined datasets rather than from local private data alone. This would also significantly improve privacy protection as well as robustness to adversarial attacks such as gradient inversion. In summary, FM services for auxiliary data augmentation/generation could not only mitigate the negative effects of imbalanced data in FL but also improve the privacy protection of data.
### _Foundation model in the training_
In addition to their generative ability, the fast adaptation ability of FMs can be leveraged directly in the training process. In this subsection, we discuss a few applications that leverage FMs to assist in training robust (and relatively small) models over FL-enabled wireless networks [11].
In conventional deep learning, a well-trained model can act as a teacher model to transfer knowledge to a smaller student model during training. Since pre-trained FMs have acquired huge amounts of knowledge from massive training data, it is natural to design an integrated system that retrieves and transfers this knowledge to boost small-model training over FL-enabled networks. However, the deployment strategies may differ in systems with diverse hardware capabilities.
* In cross-device FL-enabled networks, it is difficult to conduct on-device training of FMs, so server-side deployment becomes a practical alternative. An FM could be deployed at the server to enhance training of the globally aggregated model; for example, it could serve as the teacher model for knowledge distillation (a minimal sketch of such a distillation objective is given after this list). We term this
Fig. 3: An overview of different types of architecture of integrated FMs in the federated edge learning system. FMs are deployed at the cloud server (1), edge server (2), and local clients (3), respectively.
as the global FM knowledge transfer, which is a typical application of cloud-edge collaborative training.
* In cross-silo FL or on edge devices with powerful hardware (e.g. autonomous vehicles), FMs could be deployed locally, where the pre-trained FM participates in training to boost the performance of the local model. Specifically, techniques such as transfer learning and knowledge distillation could help the local model achieve better generalization performance under the guidance of an FM with superior knowledge extraction and representation learning capabilities; see the third paradigm of Fig. 4.
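As referenced in the first item above, a minimal sketch of a distillation objective in which a frozen FM acts as the teacher for a small student model is shown below. The temperature, weighting factor, and PyTorch-style interface are illustrative assumptions rather than a prescribed design; in the server-side variant the teacher logits would be produced by the FM at the edge server, and in the local variant by an FM deployed on the client.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy combined with soft-label distillation from an FM teacher."""
    # Soft targets from the (frozen) FM teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

    # Standard supervised loss on the locally available labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```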
### _Foundation models for model evaluation_
As discussed previously, in cross-device FL-enabled networks, deploying FMs for local training may be unrealistic. Alternatively, an FM at the edge server could take on additional functionality, with access to the updated local models or the aggregated model, rather than being involved only in training. Since pre-trained FMs exhibit excellent performance on multiple downstream tasks, their performance could be regarded as a benchmark for evaluating the smaller models. Current performance evaluation and validation are based only on limited validation and test datasets. Although the trained model may achieve good generalization performance over the test set, there still exist potential over-fitting risks. Hence, the output of the FM could be used as a reference against which the corresponding output of the smaller model is compared. Once pre-trained FMs are available in FL-enabled networks, the edge server could request such services for complementary model evaluation.
### _Potential issues_
Leveraging FMs brings promising opportunities to FL-enabled network design and training frameworks. Nevertheless, several properties of wireless networks and FMs still constrain the deployment and scalability of the integrated systems. To meet the requirements of future intelligent networks, the following issues need to be addressed.
_1) Continuous and stable wireless connection services:_ The time-varying property of wireless channels may lead to occasional unstable connections. It is possible that a service outage occurs among the clients, edge servers, and FM vendors. For example, the mobility property in autonomous driving brings challenges to service continuity. On the other hand, low latency is a key consideration of wireless communication systems. The latency brought by the FM service request may degrade the performance especially when the task outputs are involved in the latency-aware decision-making process during local or global model learning/evaluation.
_2) FM service alignment and security:_ Despite their superior capabilities across a vast range of scenarios, FM performance may fluctuate in real-world applications with respect to fairness, bias, toxicity, and other related metrics [7]. The integrated system design must take these possible uncertainties into consideration, so as to improve the robustness and sustainability of the integrated FM and FL systems. Furthermore, it is known that the total time for an FM to complete a task varies significantly across tasks. As with the first point, additional and variable latency would thus be introduced. A joint design of computation and communication for low latency should therefore be explored to ensure the overall performance.
## IV FL for training FMs over networks
For the training of FMs at increasingly larger scales, after all publicly available data has been exhausted as training data, the next frontier would inevitably be personal data, which is naturally distributed across wireless networks. Due to the inherent privacy issues arising from the use of personal data, it is frequently not feasible to aggregate personal data from multiple sources into a single dataset for centralized training. Hence, FL is well-positioned in this new frontier, for the training of FMs on personal data in a privacy-preserving manner. Moreover, the distributed nature of FL allows for the use of computing resources across the network that could otherwise be idle, although we still have to address the issue that such computing resources may be limited. In this section, we discuss some possible scenarios for training FMs over such FL-enabled networks1. In particular, we will discuss usual training (i.e. training from scratch), hybrid training, and parameter-efficient training; see Fig. 4.
Footnote 1: Note that we only consider cross-silo FL (with adequate infrastructure for sustained energy consumption and sufficient local data for FM training). Cross-device FL naturally cannot be compatible with the local deployment and training of FMs over a wireless network.
### _Usual FM training over FL-enabled networks_
To implement the training over the networks, the most natural solution is to treat the target FM as the usual shared model in FL with fully decentralized data across clients. However, the distributed training for FMs (mostly based on unsupervised learning) with data heterogeneity remains unexplored. Even though we assume each client has sufficient data and energy supply for FM training, it is still challenging to conduct RLHF in a distributed manner. Hence, the rest of this section will primarily focus on hybrid training and parameter-efficient training.
### _Hybrid training for FM over networks_
As discussed previously, the usual FL training of FMs brings uncertainty and challenges. In centralized scenarios, the pre-trained FM has become the bedrock for mainstream applications. Motivated by this, we shall consider a "pre-train globally, fine-tune locally" scheme. More specifically, the overall training process for FMs is split into two parts: in the first stage, the cloud or edge server is in charge of pre-training, producing a pre-trained FM from centralized public data; in the second stage, the distributed clients carry out further local full-model fine-tuning or personalization based on their local private data.
The hybrid training scheme could be further divided into different types according to different objectives in the second
stage. For generic training purposes (i.e., to train a shared FM for all clients), a short FL training stage will be coordinated across the clients after receiving the pre-trained FMs, to enhance the performance of shared FMs. The shared FM would be obtained via the usual model aggregation in FL. For personalized training purposes (i.e., each client or each group of clients trains an FM), each local client will train its own personalized FM via personalized FL training frameworks. The personalization degree could be adjusted according to local preferences and characteristics.
### _Parameter-efficient fine-tuning_
Whether training from scratch or locally fine-tuning a pre-trained FM, all the parameters of the FM are updated. For large-scale training, such schemes may be infeasible due to the excessive training cost. The full model parameter update process, including forward and backward propagation, would incur additional memory and storage costs for gradients and other intermediate parameters. Enormous efforts have been devoted to addressing such constraints and exploring low-cost solutions [12, 13], among which parameter-efficient fine-tuning (PEFT) is proposed to achieve fast adaptation with far fewer trainable parameters. Compared to the other two training schemes, performing PEFT at the local clients can be regarded as a fast adaptation method for deploying local FMs in FL-enabled networks. In the following parts, we discuss how PEFT could benefit the training of FMs over the FL-enabled network.
* _1) Adapter-based PEFT in FL:_ Adapter-based PEFT introduces an additional adapter built upon the pre-trained model, consisting of a few layers with a small number of trainable parameters. The adapter is usually inserted between existing layers or placed after the output layer of the FM. For the combined structure, only the adapter is updated in PEFT. The number of trainable parameters and the cost of computing and storage can thus be vastly reduced. Also, in FL, such integration means that only the adapter is involved in the communication for model aggregation, so the corresponding communication cost is much smaller. Nonetheless, additional inference latency is induced due to the extended data path introduced by the adapter layer.
* _2) Low-rank adaptation-based PEFT in FL:_ Unlike the inserted adapter, the low-rank adaptation methods (e.g.
Fig. 4: An overview of different types of training paradigms of FMs in the federated edge learning system with cloud FM pre-training.
LoRA) merge trainable low-rank matrices in parallel with the frozen pre-trained weight matrices of FMs. Owing to this parallel merging, low-rank adaptation methods introduce no additional inference latency compared to the pre-trained FM [12] (a minimal LoRA-style layer is sketched after this list). Aggregation would also be conducted only among the low-rank adaptation matrices, with low communication costs over the wireless network. To provide personalized services, personalized adaptation matrices could also be considered.
* _3) Prompt tuning-based PEFT in FL:_ The adaptation methods above are effective on downstream tasks, but their trainable parameters still require frequent exchange, with potential privacy concerns. Prompt tuning (or prompt engineering) provides another perspective to improve FM performance2. The interplay between prompt tuning and FL enables federated prompt tuning for communication-efficient solutions [14], where only the prompt vectors are learned and shared across clients. As no parameters of the FMs are trained, prompt tuning provides promising solutions for realizing communication-efficient and sustainable networks. Footnote 2: A prompt consists of tunable tokens per downstream task to be prepended to the input text, which can be regarded as learnable parameters.
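As referenced in the low-rank adaptation item above, the sketch below shows a minimal LoRA-style layer: the pre-trained weight is frozen and only a trainable low-rank update is learned, so in a federated round only the two small factors would need to be exchanged and aggregated. The rank, scaling, and module structure are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():          # freeze the pre-trained weights
            p.requires_grad = False
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Base output plus the low-rank correction x A^T B^T, scaled.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

def federated_payload(model: nn.Module):
    """Only the trainable (LoRA) tensors would be communicated for aggregation."""
    return [p for p in model.parameters() if p.requires_grad]
```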
A comparison of the ratio of trainable parameters is illustrated in Fig. 5, which clearly demonstrates the computing efficiency of PEFT methods [15]. In summary, training FMs over wireless networks could leverage the distributed data and power in a privacy-preserving manner but also present cost-intensive drawbacks with respect to the computation, storage, and communication resources. These properties bring both challenges and opportunities for a joint communication, computation, and storage framework design.
## V Future Trends and Open Issues
We have discussed the challenges of integrating FMs with FL-enabled networks, and also highlighted how FMs and FL could benefit each other over wireless networks. To exploit the potential gains and address the challenges for the interplay between FMs and FL in a robust and sustainable manner, we propose the following broad areas that warrant further investigation.
* _Incentive design for FM service request and client participation_. Training FMs is costly. In FM-integrated FL-enabled networks, the request for FM services (e.g. synthetic data generation and pre-trained FM downloading) may not be granted by the FM vendor without a proper incentive mechanism. How we incentivize the FM vendor to provide FMaaS remains an open question.
* _Joint optimization of FM QoS and computing resources_. As one of the key characteristics of FM training, the huge computational demands for training, inference, storage and communication pose a critical challenge in resource-constrained wireless networks. It is important to efficiently utilize the limited computing resources across the networks while ensuring the QoS of FM services and training performance. A joint optimization scheme to balance the trade-off between the QoS of FM services and the computing resources in FL-enabled networks could be further explored.
* _Privacy and robustness issues for FM services_. In AI-related tasks and services, the preservation of both data privacy and model privacy are increasingly important factors for building trustworthy AI systems. Numerous privacy-preserving mechanisms and robustness to different adversarial attacks have been investigated in FL-enabled networks. However, it is not clear how such methods would perform in a wireless world of FMs.
* _Task scheduling for low-latency services_. Latency is a key consideration in designing intelligent wireless networks. In FM-integrated FL-enabled networks, the non-negligible latency induced by the inference computations of FMs is heavily task-dependent. We foresee that task-adaptive scheduling protocols would be increasingly important as FMs become more prevalent in future wireless networks.
* _Communication protocol design for transmission of FMs_. On the road towards ubiquitous intelligent networks, the transmission of FM weights, as illustrated in Fig. 3 and Fig. 4, would be a non-negligible component of network traffic. Currently, there is no specific coding and communication protocol for the transmission of FMs. It would be necessary to develop efficient protocols and coding techniques for FMs while considering the inherent properties of FM architectures.
## VI Concluding Remarks
The rapid advances in FMs serve as an important contextual backdrop for the vision of FL-enabled intelligent wireless networks, while the exploration of the interplay between FMs and FL is still at a nascent stage. In this article, to explore the extent to which FMs are suitable for FL-enabled wireless networks, we presented a balanced discussion of both opportunities and challenges. A broad overview of the possible paradigms for integrating FMs and FL was provided. We finally outlined potential future trends for achieving robust and sustainable integration of FMs and FL over wireless networks.
Fig. 5: A comparison of ratio of trainable parameters for different training schemes on GPT-3 model. |
2305.09949 | Magnetic Fields and Fragmentation of Filaments in the Hub of
California-X | We present 850 $\mu$m polarization and $\rm C^{18}O (3-2)$ molecular line
observations toward the X-shaped nebula in the California molecular cloud using
the JCMT SCUBA-2/POL-2 and HARP instruments. The 850 $\mu$m emission shows that
the observed region includes two elongated filamentary structures (Fil1 and
Fil2) having chains of regularly spaced cores. We measured the mass per unit
length of the filament and found that Fil1 and Fil2 are thermally super- and
subcritical, respectively, but both are subcritical if nonthermal turbulence is
considered. The mean projected spacings ($\Delta\bar S$) of cores in Fil1 and
Fil2 are 0.13 and 0.16 pc, respectively. $\Delta\bar S$ are smaller than
$4\times$filament width expected in the classical cylinder fragmentation model.
The large-scale magnetic field orientations shown by Planck are perpendicular
to the long axes of Fil1 and Fil2, while those in the filaments obtained from
the high-resolution polarization data of JCMT are disturbed, but those in Fil1
tend to have longitudinal orientations. Using the modified
Davis-Chandrasekhar-Fermi (DCF) method, we estimated the magnetic field
strengths ($B_{\rm pos}$) of filaments which are 110$\pm$80 and 90$\pm$60
$\mu$G. We calculated the gravitational, kinematic, and magnetic energies of
the filaments, and found that the fraction of magnetic energy is larger than 60
% in both filaments. We propose that a dominant magnetic energy may lead the
filament to be fragmented into aligned cores as suggested by Tang et al., and a
shorter core spacing can be due to a projection effect via the inclined
geometry of filaments or due to a non-negligible, longitudinal magnetic fields
in case of Fil1. | Eun Jung Chung, Chang Won Lee, Woojin Kwon, Mario Tafalla, Shinyoung Kim, Archana Soam, Jungyeon Cho | 2023-05-17T05:05:51Z | http://arxiv.org/abs/2305.09949v1 | # Magnetic Fields and Fragmentation of Filaments in the Hub of California-X
###### Abstract
We present 850 \(\mu\)m polarization and C\({}^{18}\)O (\(3-2\)) molecular line observations toward the X-shaped nebula in the California molecular cloud using the JCMT SCUBA-2/POL-2 and HARP instruments. The 850 \(\mu\)m emission shows that the observed region includes two elongated filamentary structures (Fil1 and Fil2) having chains of regularly spaced cores. We measured the mass per unit length of the filament and found that Fil1 and Fil2 are thermally super- and subcritical, respectively, but both are subcritical if nonthermal turbulence is considered. The mean projected spacings (\(\Delta S\)) of cores in Fil1 and Fil2 are 0.13 and 0.16 pc, respectively. \(\Delta\bar{S}\) are smaller than 4\(\times\)filament width expected in the classical cylinder fragmentation model. The large-scale magnetic field orientations shown by Planck are perpendicular to the long axes of Fil1 and Fil2, while those in the filaments obtained from the high-resolution polarization data of JCMT are disturbed, but those in Fil1 tend to have longitudinal orientations. Using the modified Davis-Chandrasekhar-Fermi (DCF) method, we estimated the magnetic field strengths (\(B_{\rm pos}\)) of filaments which are 110\(\pm\)80 and 90\(\pm\)60 \(\mu\)G. We calculated the gravitational, kinematic, and magnetic energies of the filaments, and found that the fraction of magnetic energy is larger than 60% in both filaments. We propose that a dominant magnetic energy may lead the filament to be fragmented into aligned cores as suggested by Tang et al., and a shorter core spacing can be due to a projection effect via the inclined geometry of filaments or due to a non-negligible, longitudinal magnetic fields in case of Fil1.
Interstellar magnetic fields (845); Interstellar medium (847); Polarimetry (1278); Submillimeter astronomy (1647); Star forming regions (1565)
## 1 Introduction
Hub-filament systems (HFSs) are the best laboratories to investigate the initial conditions for star formation. An HFS consists of a hub with high column density (\(>10^{22}\) cm\({}^{-2}\)) and low axis ratio, and several filaments with relatively low column density and high aspect ratio extending from the hub (Myers, 2009). They are mostly associated with active low- to high-mass star clusters (Kumar et al., 2020), and are easily found both in nearby star-forming molecular clouds and in more distant infrared dark clouds. Hence, HFSs have been studied using multi-wavelength observations to understand how they form and how stars are generated in them (e.g., Kumar et al., 2020; Hwang et al., 2022; Bhadari et al., 2022).
The crucial process for forming stars in hubs and filaments is fragmentation. It is believed that early star formation begins with the fragmentation of filaments in a hydrostatic equilibrium state into cores due to linear perturbations (e.g., Ostriker, 1964). Under the assumption of an isothermal, infinitely long cylindrical structure, fragmentation in a filament occurs via gravitational perturbations with a critical wavelength of 2 times the filament's diameter, and filaments may form cores with a regular spacing of 4 times the filament's diameter, which is the fastest growing mode (e.g., Inutsuka et al., 1992).
However, the observed cores' separations are generally not matched to the spacing expected by the classical cylinder model (e.g., Tafalla & Hacar, 2015; Zhang et al., 2020), probably because of the other factors which can affect the fragmentation process of filaments such as turbulence, accreting flows, and/or magnetic fields (e.g., Fiege & Pudritz, 2000; Clarke et al., 2016; Hanawa et al., 2017).
The main drivers of star formation are gravity, turbulence, and magnetic fields, although their precise roles, particularly during the fragmentation process from filamentary molecular clouds into dense cores, are still unclear. Recently, it has been proposed that the relative significance of these three factors can determine the different evolutionary paths from clumps on the scale of 2 pc to cores on the scale of 0.6 pc (Tang et al., 2019). Specifically, Tang et al. (2019) has classified fragmentation types into \(clustered\), \(aligned\), and \(no\) fragmentation based on the distribution of cores within the natal clouds, with each type appearing to be closely related to the dominance of turbulence, magnetic fields, and gravity, respectively. However, more observational data on various filamentary molecular clouds with different fragmentation types are needed to better understand the precise roles of gravity, turbulence, and magnetic fields in star formation.
L1478 in the California molecular cloud is known as a low-mass star-forming cloud at a distance of 470 pc (Zucker et al., 2019). It has a prominent HFS which Imara et al. (2017) refer to as California-X (Cal-X for short) because of its X-shape. The Herschel 250 \(\mu\)m image of Cal-X given in Figure 1 shows that there are two long pc-scale filaments radiating from the bright hub to the south and to the west, respectively. The mass of the hub is \(\sim 130~{}M_{\odot}\) (Chung et al., 2019), and those of the filaments in the south and west are \(\sim 130\) and \(150~{}M_{\odot}\), respectively (Imara et al., 2017). The hub includes two YSOs: one is Class I and the other is Class II (Harvey et al., 2013; Broekhoven-Fiene et al., 2014). The continuous velocity gradients of Cal-X indicate possible gas flow along the filaments into the hub (Imara et al., 2017; Chung et al., 2019). The Planck data show, though their resolution is limited (\(\sim 5^{\prime}\)), that the magnetic field orientations are mostly east-to-west; hence the long filament in the west is roughly parallel to the global B-field, but the hub and the southern filament are perpendicular to it.
In the central \(11^{\prime}\) area of the hub, two elongated filamentary features can be seen. Zhang et al. (2020) investigated these filaments and the dense cores in the hub of Cal-X. They showed that the cores are regularly spaced along the filaments, with core spacings shorter than the spacing expected from the classical cylinder model (Inutsuka et al., 1992). We notice that the chain of cores in the filaments of the Cal-X hub is classified as \(aligned\) fragmentation, and thus the filaments are suitable for studying the roles of gravity, turbulence, and magnetic fields in the fragmentation of the hub/filaments into cores. We have performed high-resolution polarization observations and molecular line observations toward the hub of Cal-X using the SCUBA-2/POL-2 and HARP instruments mounted on the JCMT. The paper is organized as follows. In Section 2, we describe the observations and data reduction. The results of the observations and the measured magnetic field strengths are presented in Section 3. We present the analysis and discussion in Sections 4 and 5, respectively. A summary is given in Section 6.
## 2 Observations
### Polarization Observations
We made submillimeter continuum and polarization observations at 850 \(\mu\)m toward the hub of the California-X molecular cloud. The observation was performed with the SCUBA-2/POL-2 instrument on the James Clerk Maxwell Telescope (JCMT) between 2019 October and 2021 January. The beam size at 850 \(\mu\)m wavelength is \(14^{\prime\prime}.1\) (corresponding to \(\sim\)0.03 pc at a distance of 470 pc). The standard SCUBA-2/POL-2 daisy mapping mode was used with a constant scanning speed of \(8^{\prime\prime}\) s\({}^{-1}\). The observations were carried out 21 times, with an average integration time of 40 minutes, under dry weather conditions with submillimeter opacity at 225 GHz (\(\tau_{225~{}\rm GHz}\)) ranging between 0.05 and 0.08.
We used the pol2map script of the STARLINK/SMURF package for the 850 \(\mu\)m data reduction. The pol2map data reduction process consists of three steps. In the first step, the raw bolometer time streams for each observation are converted into separate Stokes \(I\), \(Q\), and \(U\) time streams using the process _calcqu_. In the second step, it produces improved \(I\) maps using a mask determined with signal-to-noise ratio via the process _makemap_. We set the parameter SKYLOOP=TRUE to reduce the dispersion between maps and lessen the intrinsic instabilities of the map-making algorithm1. The final \(I\) map is created by co-adding the improved individual \(I\) maps. In the final step, \(Q\) and \(U\) maps are produced from the \(Q\) and \(U\) time streams with the same masks used in the previous step. For the instrumental polarization correction, the 'August 2019' IP model2 was used. The final \(I\), \(Q\), and \(U\) maps are binned with a pixel size of 4\({}^{\prime\prime}\).
Footnote 2: [https://www.eaobservatory.org/jcmt/2019/08/new-ip-models-for-pol2-data/](https://www.eaobservatory.org/jcmt/2019/08/new-ip-models-for-pol2-data/)
The polarized intensity (_PI_) is the quadratic sum of \(Q\) and \(U\), \(PI=\sqrt{Q^{2}+U^{2}}\), and thus the noises of \(Q\) and \(U\) always make a positive contribution to the polarization intensity (e.g., Vaillancourt, 2006). The debiased polarization intensity is estimated using the modified asymptotic estimator (Plaszczynski et al., 2014):
\[PI=\sqrt{Q^{2}+U^{2}}-\sigma^{2}\frac{1-e^{-(Q^{2}+U^{2})/\sigma^{2}}}{2\sqrt{ Q^{2}+U^{2}}}, \tag{1}\]
where \(\sigma^{2}\) is the weighted mean of the variances on \(Q\) and _U_:
\[\sigma^{2}=\frac{Q^{2}\sigma_{Q}^{2}+U^{2}\sigma_{U}^{2}}{Q^{2}+U^{2}}, \tag{2}\]
and \(\sigma_{Q}\) and \(\sigma_{U}\) are the standard errors in \(Q\) and \(U\), respectively.
The debiased polarization fraction \(P\) is calculated as
\[P=\frac{PI}{I}, \tag{3}\]
and its corresponding uncertainty is
\[\sigma_{P}=\sqrt{\frac{\sigma^{2}}{I^{2}}+\frac{\sigma_{I}^{2}(Q^{2}+U^{2})}{ I^{4}}}, \tag{4}\]
where \(\sigma_{\rm I}\) is the standard error in \(I\).
Figure 1: Herschel 250 \(\mu\)m image of the X-shaped nebula region in the California molecular cloud. The contour levels are 3, 6, 9, 12, 20, 40, and 70\(\times\sigma\) (1\(\sigma=0.12\) Jy beam\({}^{-1}\)). The yellow segments depict the large scale magnetic field orientations obtained by rotating the submillimeter Planck 353 GHz polarization orientations by 90 degree. The effective angular resolution is \(\sim 5^{\prime}\). The yellow stars denote YSOs found by Harvey et al. (2013). The JCMT SCUBA-2/POL-2 observing area of 11\({}^{\prime}\) diameter is indicated with the large dashed circle. The small dashed circle shows the inner 3\({}^{\prime}\) region with the best sensitivity.
We used the final debiased polarization vector catalog provided with a bin-size of 12'' to increase the signal-to-noise ratios in the polarization data. The 12'' bin size is also close to the beam size of the JCMT/POL-2 at 850 \(\mu\)m, which is 14.1''. The selection criteria of the polarization measurements are set to be (1) the signal-to-noise ratio (S/N) of total intensity larger than 10 (\(I/\sigma_{I}>10\)) and (2) the polarization fraction larger than 2 times of its uncertainty (\(P/\sigma_{P}>2\)).
A Flux Calibration Factor (FCF) of 668 Jy \(\rm{pW^{-1}}\) beam\({}^{-1}\) is used for the 850 \(\mu\)m Stokes \(I\), \(Q\), and \(U\) data. This FCF is larger than the standard 850 \(\mu\)m SCUBA-2 flux conversion factor of 495 Jy \(\rm{pW^{-1}}\) beam\({}^{-1}\) because a correction factor of 1.35 is multiplied due to the additional losses from POL-2 (Dempsey et al., 2013; Friberg et al., 2016; Mairs et al., 2021). The rms noise values in the \(I\), \(Q\), and \(U\) binned to pixel size 12'' are 3.2, 3.0, and 3.0 mJy beam\({}^{-1}\), respectively.
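As a compact numerical illustration of Equations (1)-(4) and the selection criteria above, the following sketch assumes that the Stokes \(I\), \(Q\), \(U\) maps and their standard errors are already available as arrays; it is not part of the pol2map pipeline itself.

```python
import numpy as np

def debias_polarization(I, Q, U, sI, sQ, sU):
    """Modified asymptotic estimator for the polarized intensity (Eqs. 1-4)."""
    p2 = Q**2 + U**2
    # Weighted mean of the Q/U variances (Eq. 2); zero-signal pixels should be masked first.
    sigma2 = (Q**2 * sQ**2 + U**2 * sU**2) / p2
    # Debiased polarized intensity (Eq. 1).
    PI = np.sqrt(p2) - sigma2 * (1.0 - np.exp(-p2 / sigma2)) / (2.0 * np.sqrt(p2))
    # Debiased polarization fraction and its uncertainty (Eqs. 3-4).
    P = PI / I
    sP = np.sqrt(sigma2 / I**2 + sI**2 * p2 / I**4)
    return PI, P, sP

def select_vectors(I, sI, P, sP, snr_i=10.0, snr_p=2.0):
    """Apply the selection criteria I/sigma_I > 10 and P/sigma_P > 2."""
    return (I / sI > snr_i) & (P / sP > snr_p)
```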
### \(C^{18}o\) (3-2) Observations
We performed \(\rm{C^{18}O}\) (\(3-2\)) line observations using the JCMT Heterodyne Array Receiver Programme (HARP; Buckle et al., 2009) to estimate the velocity dispersion of the region. The data were taken as basket-weaved scan maps over 3 nights between January 2020 and July 2022 in weather band 2 (\(\rm{\tau_{225~{}GHz}}\sim 0.065-0.08\)). The spatial resolution is about 14 arcsec, the same as that of the JCMT/POL-2 850 \(\mu\)m data, and the spectral resolution is \(\sim 0.05\) km s\({}^{-1}\). The total observing time is \(\sim\)6 hours. We reduced the data using the ORAC-DR pipeline in the STARLINK software (Buckle et al., 2012) with the recipe 'REDUCE_SCIENCE_NARROWLINE'
Figure 2: The observed 850 \(\mu\)m Stokes \(I\) image and contours. The contour levels are 3, 10, 20, 30, 50, 70, and 90\(\times\sigma\) (1\(\sigma\) is 3.2 mJy beam\({}^{-1}\)). Filaments identified with filfinder are presented with yellow polygons. The filaments’ skeletons are drawn with solid lines. The red ellipses depict the 850 \(\mu\)m cores identified using FellWalker and the red triangles indicate the positions of cores identified with the Herschel data (Zhang et al., 2020). The dashed circles are the observing area of 11′′ diameter and the best sensitivity coverage of 3′ region. The black circle at the bottom left corner shows the POL-2 850 \(\mu\)m beam size of 14.1′′. A reference scale of 0.1 pc is shown on the top left corner. _Right:_ Averaged radial column density profiles of filament1 and 2 centered on their skeletons (yellow squares) and their Gaussian fits to estimate the filaments’ widths.
and obtained the data cube with a 14'' pixel size. We resampled the data cube to a channel width of 0.1 km s\({}^{-1}\) using a 1-d Gaussian kernel. The mean rms level of the final data cube is about 0.06 K[\(T_{A}^{*}\)].
## 3 Results
### Identification of Filaments and cores
The 850 \(\mu\)m Stokes \(I\) map is presented in Figure 2. The 850 \(\mu\)m emission closely matches the Herschel 250 \(\mu\)m emission of the hub presented in Figure 1. There are two elongated filamentary structures: one at the center and the other at the west.
We used the filfinder algorithm, which employs mathematical morphology to identify filaments (Koch & Rosolowsky, 2015). filfinder takes five steps to identify filamentary structures. Briefly, it first flattens the image using an arctan transform, \(I^{\prime}=I_{0}\mathrm{arctan}(I/I_{0})\), with the normalization \(I_{0}\equiv\exp(\mu+2\sigma)\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the log-intensity. Secondly, it smooths the flattened data with a Gaussian beam having a full width at half maximum (FWHM) of 0.05 pc. It then creates a mask using an adaptive threshold on the smoothed data, i.e., it keeps a pixel whose intensity is greater than the median value of the neighboring pixels within a distance of 0.1 pc, while discarding pixels with lower intensities. In the fourth and fifth steps, small and spurious structures are removed, i.e., structures with sizes less than 5\(\pi\)(0.1 pc)\({}^{2}\) are rejected, and small spurious features at the edges are also removed by applying a 0.05 pc size median filter.
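The first three steps of this procedure can be written compactly as below. The parsec-to-pixel conversion and the morphological cleaning are simplified assumptions here, so this is only an illustration of the flattening and adaptive-threshold masking, not a substitute for the published filfinder implementation.

```python
import numpy as np
from scipy import ndimage

def flatten_and_mask(image, pixel_pc, smooth_fwhm_pc=0.05, adapt_pc=0.1):
    """Arctan flattening, Gaussian smoothing, and adaptive-threshold masking."""
    # Step 1: arctan transform with I0 = exp(mu + 2 sigma) of the log-intensity.
    log_i = np.log(image[image > 0])
    i0 = np.exp(log_i.mean() + 2.0 * log_i.std())
    flat = i0 * np.arctan(image / i0)

    # Step 2: smooth with a Gaussian of FWHM 0.05 pc (converted to pixels).
    sigma_pix = (smooth_fwhm_pc / pixel_pc) / 2.355
    smooth = ndimage.gaussian_filter(flat, sigma_pix)

    # Step 3: keep pixels brighter than the median of neighbours within 0.1 pc.
    size_pix = max(3, int(round(2.0 * adapt_pc / pixel_pc)))
    local_median = ndimage.median_filter(smooth, size=size_pix)
    return smooth > local_median
```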
Using filfinder, we obtained four filamentary structures as outlined with yellow in Figure 2. The filaments' skeletons given by the algorithm are depicted with solid lines. The skeletons are found using a Medial Axis Transform in which the chosen skeleton pixels are the centers of the inscribed circles of the mask. Then, the length of a filament is measured along the longest path through the skeleton after pruning the sub-structures. Among the identified four filaments, we make analyses for the two largest filaments, named filament 1 (Fil1) and filament 2 (Fil2), in this study.
The mass of filament is estimated with the following equation (e.g., Hildebrand, 1983):
\[M=\frac{S_{\nu}\ d^{2}}{\kappa_{\nu}\ B_{\nu}(T_{\mathrm{d}})}, \tag{5}\]
where \(S_{\nu}\), \(\kappa_{\nu}\), and \(B_{\nu}\) are the integrated flux density, opacity, and Planck function at the wavelength of 850 \(\mu\)m, respectively. \(T_{\mathrm{d}}\) and \(d\) are the dust temperature and the distance, respectively. The dust opacity is obtained by \(\kappa_{\nu}=0.1(\nu/10^{12}\mathrm{Hz})^{\beta}\mathrm{cm}^{2}\ \mathrm{g}^{-1}\) with the assumption of a dust-to-gas ratio of 1:100 (Beckwith & Sargent, 1991), and the dust opacity index of \(\beta=2\)(Draine & Lee, 1984). The dust temperature was taken
Figure 3: _Left:_ C\({}^{18}\)O (\(3-2\)) moment maps. The contours of C\({}^{18}\)O (\(3-2\)) integrated intensity are overlaid on the moment maps depicted with gray or color tones, and the contour levels are 3, 10, 20, and 30\(\times\sigma\) (1\(\sigma=0.03\) K km s\({}^{-1}\)). The outlines of filaments are drawn with solid polygons. The dashed circles present the POL-2 observation area of 11′ diameter and its best sensitivity coverage of 3′ region. The open squares of moment 1 map depict the position of dense cores in Fil2. The black circle at the bottom left of moment 2 map shows the FWHM beam size at the C\({}^{18}\)O (\(3-2\)) frequency. _Right:_ the averaged spectra of filaments and dense cores. Red profiles overlaid on the spectra are the Gaussian fit results of filaments and dense cores, and the blue profiles are the second Gaussian components. The spectrum shown in the bottom panel is the averaged one of the southern region of Fil2 depicted with blue circle on the moment 1 map, i.e., printed as ‘South of Fil2’.
from Herschel data (Andre et al., 2010; Chung et al., 2019). The applied \(T_{\rm d}\) for Fil1 and Fil2 are 12.0\(\pm\)1.1 K and 10.8\(\pm\)0.2 K, and then their masses were derived to be 15\(\pm\)2 and 8\(\pm\)1 \(M_{\odot}\), respectively. The H\({}_{2}\) column density is calculated by dividing the mass in each pixel estimated from Equation 5 by the pixel area. The central H\({}_{2}\) column densities (\(N_{\rm H_{2}}^{0}\); the median value of \(N_{\rm H_{2}}\) along the filament crest) are 13\(\times\)10\({}^{21}\) and 7\(\times\)10\({}^{21}\) cm\({}^{-2}\) for Fil1 and Fil2, respectively. The filaments' widths are estimated from the Gaussian fit of the averaged radial column density profiles as shown in Figure 2. The mass per unit length (\(M_{\rm line}\)) is estimated by dividing the mass by the length. \(M_{\rm line}\) of Fil1 and Fil2 are 20\(\pm\)3 and 9\(\pm\)2 \(M_{\odot}\) pc\({}^{-1}\), respectively. The physical properties of Fil1 and Fil2 are listed in Table 1.
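A short numerical sketch of Equation (5), using astropy for units and the Planck function, is given below; the flux value in the usage comment is a placeholder rather than a measured quantity from this work.

```python
import numpy as np
import astropy.units as u
import astropy.constants as const

def dust_mass(S_nu, T_d, distance, wavelength=850 * u.um, beta=2.0):
    """Gas mass from Eq. (5), with kappa_nu = 0.1 (nu / 1 THz)^beta cm^2 g^-1
    (the assumed dust-to-gas ratio of 1:100 is already folded into kappa)."""
    nu = (const.c / wavelength).to(u.Hz)
    kappa = 0.1 * (nu / (1e12 * u.Hz)).decompose().value ** beta * u.cm**2 / u.g
    # Planck function B_nu(T_d); the steradian cancels in Eq. (5).
    x = (const.h * nu / (const.k_B * T_d)).decompose().value
    B_nu = 2.0 * const.h * nu**3 / const.c**2 / (np.exp(x) - 1.0)
    return (S_nu * distance**2 / (kappa * B_nu)).to(u.Msun)

# Illustrative call only -- the 850 um flux here is a placeholder, not a measurement:
# dust_mass(1.0 * u.Jy, 12.0 * u.K, 470 * u.pc)
```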
Zhang et al. (2020) investigated Cal-X using the Herschel H\({}_{2}\) column density map. They used the _get-filaments_ and _getsources_ algorithms (Menshchikov et al., 2012) to identify filaments and dense cores. Filaments #10 and #8 of Zhang et al. (2020) correspond to Fil1 and Fil2 of this study, respectively. Fil1 is longer and wider than F#10, but Fil2 is shorter and narrower than F#8. One noticeable difference is that the mass of F#8 is 26 \(M_{\odot}\) at the distance of 470 pc, which is about three times larger than that of Fil2. In addition, the measured line mass of F#8 is 28 \(M_{\odot}\) pc\({}^{-1}\), implying that it is thermally supercritical while Fil2 is subcritical.
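For reference, the thermal criticality quoted here is usually judged against the critical line mass of an isothermal cylinder (Ostriker, 1964),

\[M_{\rm line,crit}=\frac{2c_{s}^{2}}{G}\approx 16.4\left(\frac{T}{10~{\rm K}}\right)~M_{\odot}~{\rm pc}^{-1},\]

where \(c_{s}\) is the isothermal sound speed. With the dust temperatures adopted above (\(\sim\)11\(-\)12 K), this threshold lies near \(\sim 18-20\) \(M_{\odot}\) pc\({}^{-1}\), consistent with Fil1 being marginally supercritical and Fil2 subcritical.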
The differences from Zhang et al. (2020) can be caused by the different methods used for filament identification and quantity measurement, as well as by the different data sets: Herschel far-infrared data from 70 \(\mu\)m to 500 \(\mu\)m versus JCMT sub-mm data at 850 \(\mu\)m. In addition, we cannot rule out the possibility of underestimation due to the filtering out of structures with scales greater than a few arcmin and/or the decreasing sensitivity at radii larger than 3\({}^{\prime}\) in the POL-2 map obtained with the \(daisy\) scan mode (e.g., Holland et al., 2013). However, \(N_{\rm H_{2}}^{0}\) of Fil1 and Fil2 are consistent with those of F#10 (12\(\times\)10\({}^{21}\) cm\({}^{-2}\)) and F#8 (9.5\(\times\)10\({}^{21}\) cm\({}^{-2}\)). Moreover, the average volume densities \(\bar{n}_{\rm H_{2}}\), the key physical quantity used to calculate the magnetic field strength (\(B_{\rm POS}\)) of Fil1 and Fil2, also agree with those of F#10 (39\(\times\)10\({}^{3}\) cm\({}^{-3}\)) and F#8 (22\(\times\)10\({}^{3}\) cm\({}^{-3}\)) within the uncertainties.
We used the FellWalker clump-finding algorithm (Berry, 2015) to extract dense cores in the filaments. Pixels with intensities \(>1\sigma\) are used to find cores, and an object having a peak intensity higher than 10\(\sigma\) and a size larger than 2\(\times\) the beam size of 14\({}^{\prime\prime}\) is identified as a real core. The FellWalker algorithm considers neighboring peaks to be separate if the difference between the peak values and the minimum value (dip value) between the peaks is larger than a given threshold. We used 0.9\(\sigma\) as the threshold and found five dense cores each in the Fil1 and Fil2 regions, as shown in Figure 2. The dense cores identified from the Herschel H\({}_{2}\) column density map (Zhang et al., 2020) are presented with red triangles. The positions of the 850 \(\mu\)m cores are consistent with those of the Herschel dense cores. C4, C5, and C9 show offsets, but within one JCMT beam size.
To estimate the velocity dispersions of filaments, we used C\({}^{18}\)O (3\(-\)2) data. Figure 3 shows the moment maps of C\({}^{18}\)O (3\(-\)2) and the averaged spectra of Fil1 and Fil2. The moment 0 map is integrated over the velocity range between \(-\)2.5 and 0.8 km s\({}^{-1}\). The peak position of integrated C\({}^{18}\)O (3\(-\)2) emission is well matched to that of 250 \(\mu\)m as well as of 850 \(\mu\)m emissions. Fil2 has relatively lower C\({}^{18}\)O intensity than Fil1. The velocity field of the region can be seen in the moment 1 map. The central velocities of Fil1 and Fil2 are about \(-\)1.3 km s\({}^{-1}\) and \(-\)0.2 km s\({}^{-1}\), respectively. Fil2 shows a relatively large velocity range between \(-\)1.0 to 0 km s\({}^{-1}\), while the velocity field of Fil1 gradually changes from \(-\)1.5 to \(-\)1.2 km s\({}^{-1}\) only.
The averaged spectra of Fil1 and Fil2 are given in Figure 3. The averaged spectrum of Fil1 is fairly well fitted with a single Gaussian profile, but that of Fil2 seems to have two velocity components. To investigate the velocity field of Fil2 in detail, we inspected the spectra
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
 & Length & Width & \(N_{\rm H_{2}}^{0}\) & \(\bar{n}_{\rm H_{2}}\) & \(M\) & \(M_{\rm line}\) & \(\sigma_{\rm NT}\) \\
 & (pc) & (pc) & (10\({}^{21}\) cm\({}^{-2}\)) & (10\({}^{3}\) cm\({}^{-3}\)) & (\(M_{\odot}\)) & (\(M_{\odot}\) pc\({}^{-1}\)) & (km s\({}^{-1}\)) \\ \hline
Fil1 & 0.73\(\pm\)0.04 & 0.160\(\pm\)0.026 & 13\(\pm\)8 & 26\(\pm\)16 & 15\(\pm\)2 & 20\(\pm\)3 & 0.41 \\
Fil2 & 0.89\(\pm\)0.05 & 0.070\(\pm\)0.004 & 7\(\pm\)5 & 33\(\pm\)21 & 8\(\pm\)1 & 9\(\pm\)2 & 0.24 \\ \hline
\end{tabular}
Note. – \(M\) is the filament’s mass estimated from Equation 5, \(N_{\rm H_{2}}^{0}\) is the median column density along the crest of the filament, \(\bar{n}_{\rm H_{2}}\) is the average volume density given by \(N_{\rm H_{2}}^{0}/W\) by assuming that each filament is cylindrical, and \(M_{\rm line}\) is the mass per unit length measured by dividing the mass by its length.
\end{table}
Table 1: Derived Physical Parameters of the Filaments
over the regions and present the averaged spectra of the dense cores in Fil2 in the figure. We performed single or double Gaussian fitting for the spectra and overlaid the resulting Gaussian profiles on the spectra. The spectra of C10, C11, and C12, which are located in the northern part of Fil2, appear to have a single velocity component, but those of C8 and C9 in the south appear to have two velocity components. Moreover, the blue components of C8 and C9 are likely connected to the south of Fil2 (see the moment 1 map and the spectrum of the South of Fil2 in the bottom right panel). Hence, we performed a multicomponent Gaussian fit to the averaged spectrum of Fil2 and, of the two Gaussian components with central velocities of \(-0.5\) km s\({}^{-1}\) and \(0\) km s\({}^{-1}\), selected the red component as the kinematic tracer of Fil2. This is reasonable because the red components of the cores' spectra are well connected along the whole filament, while the blue component appears to start from the south and extend to the middle of Fil2. We notice that Fil2, which is identified using the 850 \(\mu\)m
Figure 4: Polarization vectors on the 850 \(\mu\)m emission. A reference scale of polarization fraction (20%) is shown in the lower right corner of the figure. The red and white face colors denote the polarization vectors with \(2<P/\sigma_{P}\leq 3\) and \(P/\sigma_{P}>3\), respectively. The contour levels of 850 \(\mu\)m, the navy dashed circles, and the black circle in the lower left corner are the same as in Figure 2.
continuum data, may include substructures (so-called fibers) having different velocities in the south. However, such an analysis is beyond the scope of this paper, which focuses on magnetic fields. Therefore, we leave the identification and analysis of the fibers to a future study.
The nonthermal velocity dispersion (\(\sigma_{\rm NT}\)) is calculated by extracting the thermal velocity dispersion (\(\sigma_{\rm T}\)) from the observed total velocity dispersion (\(\sigma_{\rm obs}\)):
\[\sigma_{\rm NT}=\sqrt{\sigma_{\rm obs}^{2}-\sigma_{\rm T}^{2}}. \tag{6}\]
The observed total velocity dispersion is taken from the Gaussian fit result as mentioned in the previous paragraph and shown in Figure 3. The thermal velocity dispersion of the observed molecule is
\[\sigma_{\rm T}=\sqrt{\frac{k_{\rm B}T}{\mu_{\rm obs}m_{\rm H}}}, \tag{7}\]
where \(k_{\rm B}\), \(T\), \(\mu_{\rm obs}\), and \(m_{\rm H}\) are the Boltzmann constant, gas temperature, atomic weight of the observed molecule (30 for C\({}^{18}\)O), and the hydrogen mass, respectively. As for the gas temperature, we used the dust temperature obtained from Herschel continuum data. The estimated nonthermal velocity dispersions are given in Table 1.
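The decomposition in Equations (6)-(7) can be applied directly to a fitted spectrum. The sketch below assumes a single Gaussian component on a velocity axis in km s\({}^{-1}\) and uses scipy for the fit, with the molecular weight of 30 appropriate for C\({}^{18}\)O; the initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 1.380649e-23     # Boltzmann constant, J/K
M_H = 1.6735575e-27    # hydrogen mass, kg

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def nonthermal_dispersion(velocity, spectrum, T_gas, mu_obs=30.0):
    """Fit one Gaussian to a spectrum and remove the thermal width (Eqs. 6-7)."""
    p0 = [spectrum.max(), velocity[np.argmax(spectrum)], 0.2]   # illustrative guesses
    (amp, v0, sigma_obs), _ = curve_fit(gaussian, velocity, spectrum, p0=p0)
    sigma_obs = abs(sigma_obs)                                   # km/s
    sigma_t = np.sqrt(K_B * T_gas / (mu_obs * M_H)) / 1.0e3      # thermal width, km/s
    return np.sqrt(sigma_obs**2 - sigma_t**2)
```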
### Polarization Properties
Dust polarization occurs because non-spherical dust grains tend to align their minor axes parallel to the local magnetic field. This alignment results in a measurable polarization angle that can be used to estimate the strength of the interstellar magnetic field. Additionally, the polarization fraction (_P_) of thermal dust emission is important as an indicator of dust alignment efficiency. Although the observed polarization fraction is affected by the mixing of magnetic fields of various strengths and degrees of disorder along the line of sight, as well as by the dust opacity, it is still used to investigate the dust alignment efficiency. The power-law index \(\alpha\) of \(P\propto I^{-\alpha}\) is used as a parameter describing the dependence of \(P\) on \(I\). \(\alpha=0\) means that the dust grains align with the same efficiency at all optical depths, and \(\alpha=0.5\) implies a linear decrease of grain alignment efficiency with increasing optical depth. \(\alpha=1\) corresponds to the case where dust grains at higher densities do not align in any particular direction, and grains align only in a thin layer at the surface of the cloud.
Figure 4 shows the polarization vectors on the 850 \(\mu\)m Stokes \(I\) map. The polarization segments with \(I/\sigma_{I}>10\) are presented, and those with \(2<P/\sigma_{P}\leq 3\) and \(P/\sigma_{P}>3\) are represented with red and white filled lines, respectively. It appears that the polarization fraction is lower in the brighter region. This anti-correlation of polarization fraction with intensity is more clearly presented in Figure 5. In the left panel, the debiased polarization fraction (\(P_{\rm db}\)) as a function of the normalized \(I\) intensity is shown, and a least-squares single power-law fit of \(P_{\rm db}=P_{\sigma_{QU}}(I/\sigma_{QU})^{-\alpha}\) is overlaid with gray lines. The power-law index \(\alpha\) with vectors of \(I/\sigma_{I}>10\) is \(0.75\pm 0.09\), and that with vectors of \(I/\sigma_{I}>10\) and \(P/\sigma_{P}>2\) is \(0.89\pm 0.06\).
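This single power-law fit can be approximated by a straight-line fit in log space, as sketched below; the arrays of debiased fractions and the map noise \(\sigma_{QU}\) are assumed to be in hand, and no error weighting is included.

```python
import numpy as np

def fit_power_law(I, P, sigma_qu):
    """Straight-line fit in log space: log10 P = log10 P0 - alpha * log10(I / sigma_qu)."""
    good = (I > 0) & (P > 0)
    x = np.log10(I[good] / sigma_qu)
    y = np.log10(P[good])
    slope, intercept = np.polyfit(x, y, 1)
    return -slope, 10.0**intercept    # (alpha, P_sigma_QU)
```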
Pattle et al. (2019) reported that the single power-law model, which is only applicable to high signal-to-noise data with \(\alpha<0.3\), may overestimate both \(\alpha\) and \(P_{\sigma_{QU}}\) with increasing \(\alpha\), whereas the Ricean-mean model generally performs well around \(\alpha\sim 0.7\). Hence, we applied the Ricean-mean model to the non-debiased data with \(I/\sigma_{I}>10\) with the following Equation (Pattle et al., 2019):
\[P=\sqrt{\frac{\pi}{2}}\left(\frac{I}{\sigma_{QU}}\right)^{-1}\mathcal{L}_{ \frac{1}{2}}\left[-\frac{P_{\sigma_{QU}}^{2}}{2}\left(\frac{I}{\sigma_{QU}} \right)^{2(1-\alpha)}\right], \tag{8}\]
where \(\mathcal{L}_{\frac{1}{2}}\) is a Laguerre polynomial of order \(\frac{1}{2}\). The relationship between the non-debiased \(P\) and \(I\) is presented in the right panel of Figure 5 with the best fitting model. The obtained best Ricean-mean model parameters are \(\alpha=0.65\pm 0.13\) and \(P_{\sigma_{QU}}=0.20\pm 0.08\).
The molecular clouds are expected to have a value of \(\alpha\) between 0.5 and 1, and those investigated by the BISTRO survey are reported to have \(\alpha\) in a range of 0.8\(-\)1.0 with a single power-law model (Kwon et al., 2018; Soam et al., 2018; Liu et al., 2019; Coude et al., 2019; Ngoc et al., 2021; Kwon et al., 2022). The reported \(\alpha\) of the Ricean-mean model for the BISTRO targets are 0.34 (Ophiuchus A), 0.6\(-\)0.7 (Oph B and C regions), 0.56 (IC 5146), 0.36 (Orion A), 0.30\(-\)0.34 (DR21 filament), and 0.35 (Monoceros R2) (e.g., Pattle et al., 2019; Wang et al., 2019; Lyo et al., 2021; Ching et al., 2022; Hwang et al., 2022). The power-law index \(\alpha\) of the Cal-X hub obtained using the Ricean-mean model is steeper than those of the Gould Belt molecular clouds having \(\alpha\sim 0.3\), but similar to those of the Oph B and C regions. The power-law index \(\alpha\) of \(\sim\)0.65 indicates that the grain alignment still occurs inside the cloud, but its efficiency decreases in the dense regions. This trend can be explained by the recent grain alignment theory, where the decreasing radiation field and the increasing density and grain sizes in the dense regions lower the grain alignment efficiency (e.g., Hoang et al., 2021).
### Magnetic Field Morphology and Strength
As mentioned in the previous section, the dust grains tend to align their shorter axes parallel to the magnetic field direction. Hence, the magnetic field can be inferred by rotating the thermal dust polarization orientations by 90 degrees. Figure 6 shows the magnetic field orientations at the region. They appear to be perpendicular to the contours (i.e., parallel to the density gradient) at some regions or parallel to the filament's skeleton at some other regions.
To investigate the relation of the B-field orientation to the density gradient and the direction of the filament in more detail, we estimated the angle difference of the magnetic field (\(\phi_{B_{\rm POS}}\)) with the density gradient (\(\phi_{\nabla\bar{\rho}}\)) and with the direction of filament's skeleton (\(\phi_{\rm skeleton}\)). The direction of the density gradient was determined from the least-squares circle of the contour line that corresponds to the density level. Figure 7(a) shows examples of \(\phi_{\nabla\bar{\rho}}\) measurement. Firstly, a contour line is drawn for the intensity level of the magnetic field vector (red and green contours for the red and green B-field vectors, respectively). Then, a least-squares circle is obtained applying the SciPy.optimize Least_squares_circle3 code to the contour points within a distance of 0.05 pc (presented with solid circles). Finally, the direction of the center of the obtained circle is determined to be the direction of the density gradient at that point (red and green dashed lines for the red and green B-field vectors, respectively). The direction of filament's skeleton is the tangent vector at each skeleton's position. For the B-field segment which is not on the skeleton, the nearest skeleton's position angle is applied to measure its angle difference.
Footnote 3: [https://scipy-cookbook.readthedocs.io/items/Least_Squares_Circle.html](https://scipy-cookbook.readthedocs.io/items/Least_Squares_Circle.html)
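A minimal version of this circle-fitting step, in the spirit of the SciPy cookbook recipe cited above, is sketched below; the function names are illustrative, and the angle zero-point and sign convention would have to be matched to the map's world coordinates.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(x, y):
    """Least-squares circle through the contour points (x, y); returns centre and mean radius."""
    def residuals(p):
        xc, yc = p
        r = np.hypot(x - xc, y - yc)
        return r - r.mean()
    sol = least_squares(residuals, x0=(x.mean(), y.mean()))
    xc, yc = sol.x
    return xc, yc, np.hypot(x - xc, y - yc).mean()

def gradient_direction(x_contour, y_contour, x0, y0):
    """Angle (degrees) from the pixel (x0, y0) toward the fitted circle centre, taken as the
    local density-gradient direction (illustrative convention: measured from +y toward +x)."""
    xc, yc, _ = fit_circle(np.asarray(x_contour, float), np.asarray(y_contour, float))
    return np.degrees(np.arctan2(xc - x0, yc - y0))
```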
The result is given in Figure 7. Zero and ninety degrees of \(|\phi_{B_{\rm POS}}-\phi_{\nabla\bar{\rho}}|\) mean B-fields parallel and perpendicular to the density gradient, respectively, and the same values of \(|\phi_{B_{\rm POS}}-\phi_{\rm skeleton}|\) mean B-fields parallel and perpendicular to the skeleton's direction. For Fil1, we cannot find any special distribution of \(|\phi_{B_{\rm POS}}-\phi_{\nabla\bar{\rho}}|\), as shown in the left panel of Figure 7. Meanwhile, the B-field vectors in Fil1 are perpendicular to the skeleton at the center and the southeastern edge, while they tend to be parallel to the skeleton in the other regions (see the middle panel of Figure 7). These distributions can be seen from the histograms given in the right panels of the Figure. In the histogram of \(|\phi_{B_{\rm POS}}-\phi_{\nabla\bar{\rho}}|\), the numbers of vectors with \(|\phi_{B_{\rm POS}}-\phi_{\nabla\bar{\rho}}|<45^{\circ}\) and \(>45^{\circ}\) are 27 and 26, respectively. However, in the relation of \(\phi_{B_{\rm POS}}\) and \(\phi_{\rm skeleton}\), the number of vectors with \(|\phi_{B_{\rm POS}}-\phi_{\rm skeleton}|<45^{\circ}\) is 33 and that with \(>45^{\circ}\) is 20. Hence, the longitudinal B-field segments are slightly more prominent in Fil1. The number of magnetic field vectors in Fil2 is small (25) compared to that in Fil1 (53). Near C9, C10, and C11, the magnetic fields look perpendicular to the density gradient but parallel to the direction of the skeleton. The histograms show that the number of vectors with \(|\phi_{B_{\rm POS}}-\phi_{\nabla\bar{\rho}}|<45^{\circ}\) is 9 and
Figure 5: Polarization fraction as a function of Stokes \(I\) intensity. The selection criterion of \(I/\sigma_{I}>10\) is used, and the open and filled symbols indicate those with \(P/\sigma_{P}\leq 2\) and \(P/\sigma_{P}>2\), respectively. _Left:_ Relationship between the debiased \(P\) and the normalized Stokes \(I\) intensity with \(\sigma_{QU}=3.0\) mJy beam\({}^{-1}\) (the rms noise in both Stokes \(Q\) and \(U\) measurements). The solid gray line shows the best fit to a single power-law function between the debiased \(P\) and \(I/\sigma_{QU}\). The obtained power-law slope \(\alpha\) and \(P_{\sigma_{QU}}\) are \(0.75\pm 0.09\) and \(0.29\pm 0.05\), respectively. The dashed line is that for the vectors with \(P/\sigma_{P}>2\), and the obtained \(\alpha=0.89\pm 0.06\) and \(P_{\sigma_{QU}}=0.63\pm 0.06\). _Right:_ Dependence of non-debiased \(P\) on \(I\) is presented. The solid black line is the best-fitting Ricean-mean model. The obtained power-law slope \(\alpha\) and \(P_{\sigma_{QU}}\) are \(0.65\pm 0.13\) and \(0.20\pm 0.08\), respectively. The dotted black line indicates the null hypothesis case (\(\alpha=0\)).
\(>45^{\circ}\) is 16, and the numbers of \(|\phi_{B_{\rm POS}}-\phi_{\rm skeleton}|<45^{\circ}\) and \(>45^{\circ}\) are 13 and 12, respectively. Hence, the B-fields in Fil2 tend to be perpendicular to the density gradient, but do not have any relation with the direction of skeleton.
#### 3.3.2 Magnetic Field Strength
The strength of the magnetic field in molecular clouds can be estimated from the angular dispersion of the magnetic field vectors, the velocity dispersion, and the number density of the gas, under the assumption that the underlying magnetic field is uniform but distorted by turbulence. We used the modified Davis-Chandrasekhar-Fermi method (Davis, 1951; Chandrasekhar & Fermi, 1953) as provided by Crutcher et al. (2004):
Figure 6: The magnetic field orientations. The lengths of B-field segments are equally given to better show the magnetic field orientation. The red and white face colors denote the vectors with \(2<P/\sigma_{P}\leq 3\) and \(P/\sigma_{P}>3\), respectively. The green segments depict the large scale B-field orientations of each filament deduced from Planck 353 GHz polarization vectors. The contour levels of 850 \(\mu\)m, the navy dashed circles, and the black circle in the lower left corner are the same as in Figure 2.
\[B_{\rm pos} =Q_{\rm c}\sqrt{4\pi\bar{\rho}}\frac{\sigma}{\delta\phi}\] \[\approx 9.3\sqrt{\bar{n}_{\rm H_{2}}}\frac{\Delta v}{\delta\phi}, \tag{9}\]
where \(Q_{\rm c}\) is a correction factor accounting for the underestimation of the angular dispersion in the polarization map (and hence the overestimation of the magnetic field strength) due to the beam integration effect, adopted as 0.5 from Ostriker et al. (2001). \(\bar{n}_{\rm H_{2}}\) is the mean volume density of the molecular hydrogen in cm\({}^{-3}\), \(\Delta v=\sigma_{\rm NT}\sqrt{8\rm ln2}\) in km s\({}^{-1}\), and \(\delta\phi\) is the magnetic field angular dispersion.
The mean H\({}_{2}\) volume density of \(\bar{n}_{\rm H_{2}}=N_{\rm H_{2}}^{0}/W\) is used, assuming the filament to be cylindrical with a diameter equal to the measured filament width. \(\Delta v\) is measured from the nonthermal velocity dispersion given in Table 1.
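As a numerical check, Equation (9) can be evaluated directly. The sketch below (with \(\delta\phi\) in degrees, so that the prefactor 9.3 absorbs \(Q_{\rm c}\) and the unit conversions) roughly reproduces the Fil1 value quoted in Table 2; the function name is illustrative.

```python
import numpy as np

def dcf_field_strength(n_H2, delta_v, dphi_deg):
    """Modified DCF estimate (Eq. 9): B_pos ~ 9.3 sqrt(n_H2) dv / dphi, with n_H2 in cm^-3,
    dv in km/s and dphi in degrees; the result is in microgauss."""
    return 9.3 * np.sqrt(n_H2) * delta_v / dphi_deg

# Fil1 inputs from Table 2 (unsharp-masking dispersion):
print(dcf_field_strength(26.4e3, 1.0, 11.8))   # ~130 microgauss, consistent with 120+/-40 in Table 2
```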
The angular dispersion of the magnetic field orientations is measured from two different methods. The first one is the unsharp-masking method (Pattle et al., 2017) and the second one is the structure function (Hildebrand et al., 2009). The large scale magnetic fields presented by the Planck data appear to be uniform and oriented perpendicular to the filament's long axis. However, magnetic fields inside the filaments are generally much more complex due to the interaction with turbulence, gravity, and stellar feedback. Several observational studies report the possible modification of magnetic fields by gravitational contraction, outflow and its shock, stellar feedback of expanding ionization fronts of H ii region, and gas flow driven by gravity (e.g., Hull et al., 2017; Pattle et al., 2017; Pillai et al., 2020; Arzoumanian et al., 2021; Eswaraiah et al., 2021; Kwon et al., 2022). The magnetic fields in the filaments of the Cal-X hub are also possibly modified by the gravity and outflows associated with the two YSOs at the center (Imara et al., 2017). Hence, we applied an unsharp-masking method to measure the angular dispersion of the magnetic field distorted by the turbulence motions by removing the underlying magnetic field geometry (Pattle et al., 2017). We smoothed the magnetic field map using a 3\(\times\)3 pixel boxcar filter and then subtracted the smoothed map from the observed map. Then, we measured the angular dispersion from the residual map. The smoothed and residual values were calculated only when the number of data points in the 3\(\times\)3 boxcar filter was at least three. Figure 8 shows the position angle map of the observed magnetic field vectors (left), smoothed position angle map (middle), and the residual map (right). The
Figure 7: Example for the direction of density gradient measurement (a, see the text), angle differences between the magnetic fields and the density gradient (b) and the skeleton (c), and their number distribution in histograms (d and e). The angle difference of zero (and 90 degree) means the parallel (and perpendicular) B-fields to the density gradient in (b) and (d) and to the skeleton in (c) and (e).
obtained angular dispersions are \(\delta\phi=11.8\pm 2.6\) and \(16.6\pm 1.9\) degrees for the Fil1 and Fil2, respectively.
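A simplified sketch of this unsharp-masking step is given below; it follows the minimum-of-three-pixels rule described above but, for brevity, ignores the 180-degree ambiguity of position angles, which a full implementation would have to handle.

```python
import numpy as np

def unsharp_mask_dispersion(pa_map):
    """Angular dispersion from unsharp masking: smooth the position-angle map (degrees,
    NaN where there is no detection) with a 3x3 boxcar requiring at least 3 valid pixels,
    subtract the smoothed map, and take the standard deviation of the residuals."""
    ny, nx = pa_map.shape
    smooth = np.full_like(pa_map, np.nan, dtype=float)
    for j in range(ny):
        for i in range(nx):
            window = pa_map[max(j - 1, 0):j + 2, max(i - 1, 0):i + 2]
            values = window[np.isfinite(window)]
            if values.size >= 3:
                smooth[j, i] = values.mean()
    return np.nanstd(pa_map - smooth)
```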
The unsharp-masking method is widely used to estimate the angular dispersions in filaments and cores. However, it assumes that the underlying B-field is approximately uniform within the boxcar filter, and thus it requires a well-sampled polarization map with a sufficient number of polarization detections within the boxcar filter for a reliable underlying B-field morphology. If we choose polarization vectors having more than 4 neighboring polarization angles in their 3\(\times\)3 pixel boxcar filter, the available number of polarization vectors decreases from 54 to 34 for Fil1 and from 24 to 4 for Fil2. The resulting \(\delta\phi\) of Fil1 and Fil2 are 11.3 and 2.9 degrees, respectively. In this case, Fil1 has a \(\delta\phi\) similar to that obtained from the residual values having more than 2 neighboring pixels in the filter. However, Fil2 has a much smaller \(\delta\phi\) than that from the residual values having more than 2 neighboring pixels, and it may not be reliable due to the limited number of samples. Hence, we applied another statistical analysis method, the structure function of polarization angles (Hildebrand et al., 2009).
To briefly introduce the structure function method, the structure function of the angle differences in a map can be expressed by the following equation:
\[\langle\Delta\Phi^{2}(l)\rangle\equiv\frac{1}{N(l)}\sum_{i=1}^{N(l)}[\Phi(x)- \Phi(x+l)]^{2}, \tag{10}\]
where \(\Phi(x)\) is the angle at the position \(x\) and \(\Delta\Phi(l)\equiv\Phi(x)-\Phi(x+l)\) is the angle difference between the vectors with separation \(l\), and \(N(l)\) is the number of pairs of the vectors. The magnetic field is assumed to be composed of a large-scale magnetic field and a turbulent component. The contribution of the large scale magnetic field to the dispersion function would be expected to increase almost linearly as \(l\) increases in a range of \(0\leq l\ll d\) with the large-scale structured magnetic field scale \(d\). The effect of turbulence on magnetic fields is expected to be (1) almost 0 as \(l\to 0\), (2) its maximum at \(l\sim\) the turbulent scale (\(\delta\)), and (3) constant at \(l>\delta\). Then, the Equation 10 can be written as:
\[\langle\Delta\Phi^{2}(l)\rangle_{\rm tot}\simeq b^{2}+m^{2}l^{2}+\sigma_{\rm M }^{2}(l), \tag{11}\]
where \(b\) is the constant turbulent contribution to the magnetic angular dispersion at \(\delta<l<d\). \(m\) characterizes the linearly increasing contribution of the large scale magnetic field. \(\sigma_{\rm M}^{2}(l)\) is the correction term for the contribution of the measurement uncertainty in dealing with the real data.
Figure 9 shows the corrected angular dispersion (\(\langle\Delta\Phi^{2}(l)\rangle_{\rm tot}-\sigma_{\rm M}^{2}(l)\)) as a function of distance \(l\). We divided the data into distance bins with separations equal to the pixel size of 12\({}^{\prime\prime}\), and performed best fits using the first three data points to fulfill the condition of \(l\ll d\). \(b^{2}\) is obtained from the least-squares fit of the relation, and the estimated \(b\) of Fil1 and Fil2 are 19.3 and 15.8 degrees, respectively. The corresponding angular dispersions \(\delta\phi=\sqrt{b^{2}/2}\) to be applied to the modified DCF method are \(13.7\pm 9.3\) and \(11.2\pm 7.8\) degrees for Fil1 and Fil2, respectively.
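The binned dispersion function and the extraction of the turbulent term \(b\) can be sketched as follows; the binning is simplified and the measurement-uncertainty correction \(\sigma_{\rm M}^{2}(l)\) is omitted, so this is an illustration rather than the full procedure.

```python
import numpy as np

def structure_function(x, y, phi, bin_width):
    """Binned <dPhi^2(l)> (Eq. 10) for vectors at positions (x, y) with position angles phi
    (degrees, defined modulo 180). Returns bin centres and the mean squared differences."""
    x, y, phi = map(np.asarray, (x, y, phi))
    seps, dphi2 = [], []
    for i in range(len(phi)):
        for j in range(i + 1, len(phi)):
            d = abs(phi[i] - phi[j])
            d = min(d, 180.0 - d)                     # angle difference modulo 180 degrees
            seps.append(np.hypot(x[i] - x[j], y[i] - y[j]))
            dphi2.append(d**2)
    seps, dphi2 = np.array(seps), np.array(dphi2)
    centres, means = [], []
    for k in range(int(seps.max() // bin_width) + 1):
        sel = (seps >= k * bin_width) & (seps < (k + 1) * bin_width)
        if sel.any():
            centres.append((k + 0.5) * bin_width)
            means.append(dphi2[sel].mean())
    return np.array(centres), np.array(means)

def turbulent_dispersion(l, sf):
    """Fit <dPhi^2(l)> = b^2 + m^2 l^2 (Eq. 11 without sigma_M^2) to the first three bins
    and return delta_phi = sqrt(b^2 / 2) in degrees."""
    slope, intercept = np.polyfit(l[:3]**2, sf[:3], 1)   # linear in l^2
    return np.sqrt(max(intercept, 0.0) / 2.0)
```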
The applied \(\bar{n}_{\rm H_{2}}\), \(\Delta v\), \(\delta\phi\), and measured magnetic field strengths are listed in Table 2. The magnetic field strengths of Fil1 and Fil2 are estimated to be 120\(\pm\)40 and 60\(\pm\)20 \(\mu\)G using \(\delta\phi\) from the unsharp-masking method, and 110\(\pm\)80 and 90\(\pm\)60 \(\mu\)G using \(\delta\phi\) from the structure function method, respectively. The \(B_{\rm POS}\) values estimated from the two methods agree with each other within the uncertainties. Hereafter, we will use 'UM' and 'SF' to indicate whether the quantities are derived using \(\delta\phi\) from the unsharp-masking or structure function methods, respectively.
## 4 Analysis
### Magnetic field strength, Gravity, and Turbulence
The main drivers of star formation in the interstellar medium are gravity, turbulence, and magnetic fields. To investigate the significance of magnetic fields, we estimated the mass-to-magnetic flux ratio (\(\lambda\)) and the Alfvenic Mach number (\(M_{\rm A}\)).
The observed mass-to-magnetic flux ratio, \((M/\Phi)_{\rm obs}\), is
\[(M/\Phi)_{\rm obs}=\frac{\mu_{\rm H_{2}}m_{\rm H}\bar{N}_{\rm H_{2}}}{B_{\rm pos }}, \tag{12}\]
where \(\mu_{\rm H_{2}}\) is the mean molecular weight per hydrogen molecule of 2.8, and \(\bar{N}_{\rm H_{2}}\) is the median value of the central H\({}_{2}\) column density. The observed mass-to-magnetic flux ratio is compared with the critical mass-to-magnetic flux ratio of:
\[(M/\Phi)_{\rm crit}=\frac{1}{2\pi\sqrt{G}}, \tag{13}\]
and the mass-to-magnetic flux ratio (\(\lambda_{\rm obs}\)) is:
\[\lambda_{\rm obs}=\frac{(M/\Phi)_{\rm obs}}{(M/\Phi)_{\rm crit}}. \tag{14}\]
Following Crutcher et al. (2004), we can write Equation 14 as:
\[\lambda_{\rm obs}=7.6\times 10^{-21}\bar{N}_{\rm H_{2}}/B_{\rm pos} \tag{15}\]
with \(\bar{N}_{\rm H_{2}}\) in cm\({}^{-2}\) and \(B_{\rm pos}\) in \(\mu\)G. The real \(\lambda\) is assumed to be \(\lambda_{\rm obs}/3\) using the statistical correction factor 3 for the random inclination of the filament (Crutcher et al.
2004). It is expected that the magnetic fields support the clouds if \(\lambda\) is less than 1, while the structure would gravitationally collapse if \(\lambda\) is greater than 1. Fil1 and Fil2 have \(\lambda^{\rm UM}\) of \(0.27\pm 0.11\) and \(0.31\pm 0.10\) and \(\lambda^{\rm SF}\) of \(0.31\pm 0.24\) and \(0.21\pm 0.16\), respectively, and hence they are likely supported by magnetic fields.
The Alfvenic Mach number (\(M_{\rm A}\)) is estimated by:
\[M_{\rm A}=\frac{\sigma_{\rm NT}}{V_{\rm A}}, \tag{16}\]
where \(\sigma_{\rm NT}\) is the non-thermal velocity dispersion and \(V_{\rm A}\) is the Alfven velocity, which is defined as:
\[V_{\rm A}=\frac{B}{\sqrt{4\pi\bar{\rho}}}, \tag{17}\]
where \(B\) is the total magnetic field strength and \(\bar{\rho}\) is the mean density. The statistical average value of \(B_{\rm pos}\), \((4/\pi)B_{\rm pos}\), is used for \(B\)(Crutcher et al., 2004), and the mean density is obtained from \(\mu_{\rm H_{2}}m_{\rm H}\bar{n}_{\rm H_{2}}\). Fil1 and Fil2 have \(V_{\rm A}^{\rm UM}\) of 1.25\(\pm\)0.44 and 0.53\(\pm\)0.16 km s\({}^{-1}\) and \(V_{\rm A}^{\rm SF}\) of 1.08\(\pm\)0.80 and 0.79\(\pm\)0.59 km s\({}^{-1}\), respectively. The Alfvenic Mach numbers of the two filaments are in a range of 0.3\(-\)0.5, and hence Fil1 and Fil2 are sub-Alfvenic indicating the magnetic fields dominate turbulence in the regions.
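For reference, Equations (15)-(17) reduce to a few lines of Python; in this sketch the call for \(M_{\rm A}\) uses the Fil1 entries of Table 2 and only roughly reproduces the tabulated number because of rounding, and the median column density needed for \(\lambda\) would come from Table 1.

```python
import numpy as np

def mass_to_flux_ratio(N_H2, B_pos, inclination_correction=3.0):
    """lambda = 7.6e-21 N_H2 / B_pos (Eq. 15), divided by the statistical factor of 3;
    N_H2 in cm^-2 and B_pos in microgauss."""
    return 7.6e-21 * N_H2 / B_pos / inclination_correction

def alfven_velocity(B_pos, n_H2, mu_H2=2.8):
    """Alfven velocity (Eq. 17) in km/s, with B = (4/pi) B_pos [microgauss] and
    rho = mu_H2 m_H n_H2 in cgs units (n_H2 in cm^-3)."""
    m_H = 1.6735575e-24                        # g
    rho = mu_H2 * m_H * n_H2                   # g cm^-3
    B = (4.0 / np.pi) * B_pos * 1e-6           # Gauss
    return B / np.sqrt(4.0 * np.pi * rho) / 1e5

def alfven_mach(sigma_NT, B_pos, n_H2):
    """Alfvenic Mach number M_A = sigma_NT / V_A (Eq. 16), with sigma_NT in km/s."""
    return sigma_NT / alfven_velocity(B_pos, n_H2)

# Fil1, unsharp-masking case of Table 2; sigma_NT = Delta v / sqrt(8 ln 2):
print(alfven_velocity(120.0, 26.4e3))                              # ~1.2 km/s
print(alfven_mach(1.0 / np.sqrt(8 * np.log(2)), 120.0, 26.4e3))    # ~0.35
```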
### Energy Balance
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & \multicolumn{2}{c}{Fil1} & \multicolumn{2}{c}{Fil2} \\ \hline
\(\bar{n}_{\rm H_{2}}\) (\(10^{3}\) cm\({}^{-3}\)) & \multicolumn{2}{c}{26.4\(\pm\)3.5} & \multicolumn{2}{c}{32.7\(\pm\)4.7} \\
\(\Delta v\) (km s\({}^{-1}\)) & \multicolumn{2}{c}{1.0\(\pm\)0.3} & \multicolumn{2}{c}{0.6\(\pm\)0.2} \\ \hline
\(\delta\phi\) (degree) & unsharp-masking & structure function & unsharp-masking & structure function \\ \cline{2-5}
 & 11.8\(\pm\)2.6 & 13.7\(\pm\)9.3 & 16.6\(\pm\)1.9 & 11.2\(\pm\)7.8 \\ \hline
\(B_{\rm pos}\) (\(\mu\)G) & 120\(\pm\)40 & 110\(\pm\)80 & 60\(\pm\)20 & 90\(\pm\)60 \\
\(\lambda\) & 0.27\(\pm\)0.11 & 0.31\(\pm\)0.24 & 0.31\(\pm\)0.10 & 0.21\(\pm\)0.16 \\
\(V_{\rm A}\) (km s\({}^{-1}\)) & 1.25\(\pm\)0.44 & 1.08\(\pm\)0.80 & 0.53\(\pm\)0.16 & 0.79\(\pm\)0.59 \\
\(M_{\rm A}\) & 0.33\(\pm\)0.15 & 0.38\(\pm\)0.30 & 0.46\(\pm\)0.18 & 0.31\(\pm\)0.24 \\
\(E_{\rm B}\) (M\({}_{\odot}\) km\({}^{2}\) s\({}^{-2}\)) & 11.4\(\pm\)4.3 & 8.5\(\pm\)6.4 & 1.2\(\pm\)0.4 & 2.6\(\pm\)2.0 \\ \hline
\(E_{\rm G}\) (M\({}_{\odot}\) km\({}^{2}\) s\({}^{-2}\)) & \multicolumn{2}{c}{1.3\(\pm\)0.3} & \multicolumn{2}{c}{0.3\(\pm\)0.1} \\
\(E_{\rm K}\) (M\({}_{\odot}\) km\({}^{2}\) s\({}^{-2}\)) & \multicolumn{2}{c}{3.0\(\pm\)1.5} & \multicolumn{2}{c}{0.8\(\pm\)0.4} \\ \hline
\end{tabular}
\end{table}
Table 2: B-Field Strengths of Filaments
Figure 8: Position angle maps of magnetic field vectors observed (left) and smoothed with a 3\(\times\)3 pixel boxcar filter (middle), and the residual map from subtracting the smoothed map from the observations (right). The observed B-field vectors (black line segment in the left and the middle) and the smoothed B-field vectors (white line segment in the middle) are presented on the images. The smoothed and residual values obtained when the number of polarization vectors in the 3\(\times\)3 boxcar filter was less than three were excluded.
We calculated the total gravitational, kinematic, and magnetic field energies in Fil1 and Fil2. The gravitational energy is calculated from the equation of
\[E_{\rm G}^{\rm cylinder}=-\frac{GM^{2}}{L}, \tag{18}\]
where \(M\) and \(L\) are the mass and length of filament, respectively (Fiege & Pudritz, 2000). The total kinematic energy is derived as
\[E_{\rm K}^{\rm cylinder}=M\sigma_{\rm tot}^{2}, \tag{19}\]
where \(\sigma_{\rm tot}\) is the observed total velocity dispersion (e.g. Fiege & Pudritz, 2000) estimated with the mean free particle of molecular weight \(\mu_{\rm p}\)=2.37 (Kauffmann et al., 2008) by the equation:
\[\sigma_{\rm tot}=\sqrt{\sigma_{\rm NT}^{2}+\frac{k_{\rm B}T}{\mu_{\rm p}m_{\rm H }}}. \tag{20}\]
The magnetic energy is calculated with the equation of
\[E_{\rm B}=\frac{1}{2}MV_{\rm A}^{2}. \tag{21}\]
The estimated values of the gravitational, kinematic, and magnetic energies in the filaments are tabulated in Table 2. Fil1 has \(E_{\rm G}\), \(E_{\rm K}\), and \(E_{\rm B}\) larger than those of Fil2 by a factor of \(\sim\)4, which is likely related to the larger mass and nonthermal velocity dispersion of Fil1 compared with Fil2. However, interestingly, the relative portions of the energies are found to be similar in both filaments. The relative portions of the energies are presented by donut diagrams in Figure 10. \(E_{\rm B}^{\rm SF}\) is used for the energy portions. As shown in the Figure, both Fil1 and Fil2 have the largest portion in the magnetic energies (\(>\)60%) and the smallest portion in the gravitational energies (\(\sim\)10%). If \(E_{\rm B}^{\rm UM}\) is used, the energy portion of \(E_{\rm B}^{\rm UM}\) is 73% and 51% for Fil1 and Fil2, respectively, and the magnetic energy is still dominant in both filaments.
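In astronomical units, Equations (18), (19), and (21) become one-liners; the sketch below is illustrative, and inserting the filament masses and lengths from Table 1 (not repeated in this section) together with \(\sigma_{\rm tot}\) and \(V_{\rm A}\) should reproduce the energy entries of Table 2.

```python
G_ASTRO = 4.30091e-3   # gravitational constant in pc (km/s)^2 Msun^-1

def filament_energies(mass_msun, length_pc, sigma_tot_kms, v_A_kms):
    """Gravitational (Eq. 18), kinetic (Eq. 19) and magnetic (Eq. 21) energies of a
    cylindrical filament, in Msun km^2 s^-2; |E_G| is what enters the energy budget."""
    E_G = -G_ASTRO * mass_msun**2 / length_pc
    E_K = mass_msun * sigma_tot_kms**2
    E_B = 0.5 * mass_msun * v_A_kms**2
    return E_G, E_K, E_B
```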
## 5 Discussion
The filaments in the hub of Cal-X have chains of dense cores with quasi-periodic spacings which can be the results of fragmentation due to the gravitational instability. In the filaments' regime, the mass per unit length (\(M_{\rm line}=M/L\)) of a filament is often used as a probe of filament's instability like the Jeans mass for spherical systems. In a hydrostatic isothermal cylinder model, the critical line mass (\(M_{\rm line}^{\rm th.crit}\)) where the thermal pressure is in equilibrium with the gravitational collapse is calculated by the equation of
\[M_{\rm line}^{\rm th.crit}=\frac{2c_{\rm s}^{2}}{G}, \tag{22}\]
where \(c_{\rm s}\) is the sound speed. \(M_{\rm line}^{\rm th.crit}\) is close to 16 \(M_{\odot}\) pc\({}^{-1}\) at the typical gas temperature of 10 K. The line masses of Fil1 and Fil2 are 20\(\pm\)3 and 9\(\pm\)2 \(M_{\odot}\) pc\({}^{-1}\), respectively, and hence Fil1 is thermally supercritical and Fil2 is subcritical. If nonthermal components via the turbulence which can also support the system from the gravitational collapse are considered, the critical line mass including both of nonthermal and thermal components (\(M_{\rm line}^{\rm crit}\)) can be calculated using the total velocity
Figure 10: The relative portions of gravitational energy (\(E_{\rm G}\)), kinematic energy (\(E_{\rm K}\)), and magnetic energy (\(E_{\rm B}^{\rm SF}\)) of Fil1 (left) and Fil2 (right). \(E_{\rm G}\), \(E_{\rm K}\), and \(E_{\rm B}^{\rm SF}\) are presented with gray, red, and blue colors, respectively, and the relative portions are given in %.
Figure 9: The angular dispersion function (\(<(\Delta\Phi)^{2}>^{1/2}\)) for Fil1 (top) and Fil2 (bottom). The best fit model is presented with thick solid curve, and the zero intercept of the fit determines the turbulent contribution to the total angular dispersion. The vertical and horizontal dotted lines indicate the beam size of POL-2 at the 850 \(\mu\)m wavelength (14.1\({}^{{}^{\prime\prime}}\)) and the expected \(<(\Delta\Phi)^{2}>^{1/2}\) for a random field (52\({}^{\circ}\)), respectively.
dispersion (\(\sigma_{\rm tot}\)) instead of \(c_{\rm s}\) with the following equation:
\[M_{\rm line}^{\rm crit}=\frac{2\sigma_{\rm tot}^{2}}{G}. \tag{23}\]
\(M_{\rm line}^{\rm crit}\) of Fil1 and Fil2 are \(\sim\)96 and 45 \(M_{\odot}\) pc\({}^{-1}\), respectively, and both filaments can be subcritical.
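The critical line masses follow directly from Equations (22) and (23); the short sketch below recovers the quoted \(\sim\)16 \(M_{\odot}\) pc\({}^{-1}\) at 10 K, and passing \(\sigma_{\rm tot}\) instead of the sound speed gives the turbulence-inclusive value.

```python
import numpy as np

K_B = 1.380649e-16     # erg/K
M_H = 1.6735575e-24    # g
G = 6.674e-8           # cm^3 g^-1 s^-2
MSUN = 1.989e33        # g
PC = 3.086e18          # cm

def sound_speed(T, mu=2.37):
    """Isothermal sound speed in km/s for temperature T and mean molecular weight mu."""
    return np.sqrt(K_B * T / (mu * M_H)) / 1e5

def critical_line_mass(sigma_kms):
    """M_line^crit = 2 sigma^2 / G (Eqs. 22-23) in Msun/pc, for a velocity dispersion in km/s."""
    sigma = sigma_kms * 1e5
    return 2.0 * sigma**2 / G * PC / MSUN

print(critical_line_mass(sound_speed(10.0)))   # ~16 Msun/pc at 10 K
```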
The two filaments in the hub of Cal-X are in a subcritical state if the turbulent and thermal supports are considered. However, they show apparent fragmentation features in the form of chains of dense cores. This seems to contradict the major paradigm for core formation, in which dense cores form in gravitationally supercritical filaments via fragmentation. However, fragmentation and core formation in thermally subcritical and transcritical filaments have been reported from observations of several molecular clouds (see Pineda et al., 2022, and references therein). This is also supported by a recent simulation study: Chira et al. (2018) show that, when the dynamic compression from the surrounding cloud is considered, filaments start to fragment while they are still subcritical. Alternatively, we note that the line mass estimated from the total mass and length (\(M/L\)) is an average value, and thus the filaments can be partly supercritical, especially in the dense regions. We estimated the local line mass (\(m_{\rm line}\)) from the H\({}_{2}\) column density along the crest multiplied by the filament width (\(N_{\rm H_{2}}\times W\)), as shown in Figure 11. The scale of the line mass is presented on the right y-axis. More than two core regions in each filament appear to be thermally supercritical (\(>16\)\(M_{\odot}\) pc\({}^{-1}\)), and the central core (C3) in Fil1 has a larger line mass than \(M_{\rm line}^{\rm crit}\).
We also note that filaments could be either stabilized or destabilized by the geometry of the magnetic field: the perpendicular magnetic field to the filament's major axis has no contribution in supporting the filaments against radial collapse, while the magnetic field oriented parallel to the filament's major axis stabilizes the filament against radial contraction (e.g., Seifried and Walch, 2015). In Figure 11, the central core (C3) region is locally supercritical (\(m_{\rm line}>M_{\rm line}^{\rm crit}\)), and the magnetic field orientations in the region are perpendicular to the filament's long axis (see Figure 6 and 7(c)). This may indicate that the perpendicular B-field at the center can allow the radial accretion of the surrounding gas materials onto the filament and the filament can be locally supercritical.
Tang et al. (2019) have argued that the energy balance among gravity, turbulence, and magnetic fields affects the fragmentation features of filamentary molecular clouds. According to their arguments, filamentary molecular clouds dominated by \(E_{\rm B}\) would have \(aligned\) fragmentation, while those dominated by \(E_{\rm G}\) and by \(E_{\rm K}\) would have \(no\) and \(clustered\) fragmentations, respectively. Chung et al. (2022) investigated the relative importance of the energies in a filament and hubs of HFSs in IC 5146, and obtained results which partly support the suggestion of Tang et al. (2019). Chung et al. (2022) proposed that \(E_{\rm G}\)-dominant hubs are divided into \(clustered\) and \(no\) fragmentation types according to the portion of the kinematic energies.
The hub region of California-X shows elongated filamentary structures at 850 \(\mu\)m, named Fil1 and Fil2 in this study, and the filaments show the \(aligned\) fragmentation feature. Their energy portions clearly show that the magnetic energy is dominant in both filaments. The fractions of \(E_{\rm B}^{\rm SF}\) in Fil1 and Fil2 are 67% and 69%, respectively, which agrees with the suggestion of Tang et al. (2019). Moreover, the relative fractions of \(E_{\rm G}\), \(E_{\rm K}\), and \(E_{\rm B}\) as well as the portion of \(E_{\rm B}\) in Fil1 and Fil2 are comparable to those of the filament in IC 5146 (Chung et al., 2022).
One more noticeable characteristic of the \(aligned\) fragmentation of Fil1 and Fil2 is the quasi-periodic spacing of the cores. In our 850 \(\mu\)m data, five cores are identified in each filament. Figure 11 depicts the 850 \(\mu\)m intensity along the skeletons of the filaments and the positions of the cores. The mean projected spacings of the cores in Fil1 and Fil2 are 0.13\(\pm\)0.01 and 0.16\(\pm\)0.03 pc, respectively.
In the classical linear fragmentation models (e.g., Inutsuka et al., 1992), the core spacing is expected to be \(\sim 4\times\) the filament's width for infinitely long cylindrical filaments in hydrostatic equilibrium states. The width of Fil1 is \(0.160\pm 0.026\) pc and that of Fil2 is \(0.070\pm 0.004\) pc. The core spacings are therefore much smaller than the value expected in the classical scenario. Since we do not correct for inclination, the mean projected spacings of the cores are lower limits. If the inclination to the line of sight is 12 degrees for Fil1 and 35 degrees for Fil2, the core spacings of Fil1 and Fil2 become 0.64 pc and 0.28 pc, close to four times each filament's width. However, the quasi-periodic spacings of fragments in filaments in observations (e.g., Smith et al., 2023) and simulations (e.g., Clarke et al., 2016) do not always match the expectation of the classical cylinder model.
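The inclination argument is simple trigonometry, with the projected spacing equal to the intrinsic spacing times \(\sin i\); the two calls below use the widths and projected spacings quoted above and recover inclinations of roughly 12 and 35 degrees.

```python
import numpy as np

def inclination_from_spacing(projected_pc, intrinsic_pc):
    """Inclination of the filament to the line of sight (degrees) such that an intrinsic
    core spacing appears as the observed projected spacing (projected = intrinsic * sin i)."""
    return np.degrees(np.arcsin(projected_pc / intrinsic_pc))

print(inclination_from_spacing(0.13, 4 * 0.160))   # Fil1: ~12 degrees
print(inclination_from_spacing(0.16, 4 * 0.070))   # Fil2: ~35 degrees
```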
Zhang et al. (2020) used Herschel far-infrared data and investigated the filaments and cores in the California-X region. Four and five cores are found in Fil1 and Fil2 (filaments # 10 and # 8 referred in Zhang et al., 2020), respectively, and the cores are regularly spaced with \(\Delta\bar{S}\) of 0.12 and 0.16 pc assuming the distance of 470 pc. These are similar to our results from 850 \(\mu\)m data. The widths of filaments estimated using Herschel data are 0.09 pc for Fil1 and 0.13 pc for Fil2 (Zhang et al., 2020),
and thus \(\Delta\bar{S}\) are smaller than the expected core spacing in the classical cylinder fragmentation model. They propose two possibilities for the short core spacings: 1) the geometrical bending structure of the filaments and 2) the continuously accreting gas from the natal cloud in case of F#8 (Fil2 in this study).
The role of magnetic fields in the fragmentation of the filaments in Cal-X was also discussed by Zhang et al. (2020). The fragmentation intervals become shorter as the longitudinal magnetic fields become stronger (Nakamura et al., 1993). On the contrary, magnetic fields perpendicular to the filament axis are proposed to increase the fragmentation intervals (Hanawa et al., 2017). Zhang et al. (2020) suggested longitudinal magnetic fields of \(\sim 100\)\(\mu\)G, which can cause the short core spacings in the filaments of the Cal-X hub. The \(B_{\rm POS}^{\rm SF}\) of Fil1 and Fil2 are \(110\pm 80\) and \(90\pm 60\)\(\mu\)G, respectively, which are comparable to the suggestion of Zhang et al. (2020). Besides, in the case of Fil1, the longitudinal magnetic fields are likely more prominent than the perpendicular magnetic fields (see Section 3.3). There are magnetic field orientations perpendicular to the filament's direction, but those vectors are confined to the central and southern regions with a portion of only 38%, and they are strongly linked to the direction of the density gradient and the large-scale B-field orientation observed by Planck. Hence, we propose that the short core spacing of Fil1 in the Cal-X hub could be due to the longitudinal magnetic field orientation.
## 6 Summary
We have performed polarizations and molecular line observations toward the hub of the California-X molecular cloud using the JCMT SCUBA-2/POL-2 and HARP instruments. The main results are summarized below.
1. We identified filaments and cores from the 850 \(\mu\)m emission, and estimated physical quantities such as length, width, mass, and nonthermal velocity dispersion for two filaments (Fil1 and Fil2) having chains of dense cores. The average line mass (\(M/L\)) indicates that Fil1 and Fil2 are thermally supercritical and subcritical, respectively, but both are highly subcritical if the nonthermal turbulence is considered.
Figure 11: 850 \(\mu\)m intensity (left y-axis) and its corresponding local line mass (\(m_{\rm line}\), right y-axis) along the skeleton from south to north. Pixels having distance \(<0.1\) pc from the skeletons are used. The red and blue curves are the Gaussian models for the dense cores and background filament, respectively, and the thick purple curve is the sum of them. The pink colored regions indicate the positions and mean sizes of dense cores. The dashed gray lines indicate \(M_{\rm line}^{\rm th.crit}\) at the typical gas temperature of 10 K (16 \(M_{\odot}\) pc\({}^{-1}\)), and the dotted line of the top panel is \(M_{\rm line}^{\rm crit}\) of Fil1 (96 \(M_{\odot}\) pc\({}^{-1}\)).
2. The magnetic field vectors are inferred by rotating the polarization vectors by 90 degrees. We measured the magnetic field strengths of two filaments using the modified Davis-Chandrasekhar-Fermi method, which are \(B_{\rm POS}^{\rm SF}=110\pm 80\) and \(90\pm 60\)\(\mu\)G. The mass-to-magnetic flux ratios (\(\lambda\)) and Alfvenic Mach numbers (\(M_{\rm A}\)) are calculated, and the two filaments are both magnetically subcritical and sub-Alfvenic.
3. We estimated the gravitational, kinematic, and magnetic field energies in the two filaments and compared the energy budgets. We found that the magnetic energy has the largest fraction, 67% and 69% in Fil1 and Fil2, respectively. Both filaments in the hub of Cal-X have cores in a line, which may be the result of the filaments' fragmentation. The fragmentation type of the two filaments can be classified as \(aligned\) fragmentation, and the resulting energy balance is consistent with the suggestion of Tang et al. (2019).
4. The mean projected core spacing of Fil1 and Fil2 are 0.13 and 0.16 pc, respectively, and they are smaller than that expected by the classical cylinder fragmentation model (\(\sim 4\times\)filament's width). An inclination of \(11^{\circ}\) and \(35^{\circ}\) to the line of sight can explain the difference between the observed projected core spacing and the model's core separation of Fil1 and Fil2, respectively. Besides, the longitudinal magnetic fields are found to be slightly dominant in Fil1. Hence, we propose that the dominant, longitudinal B-fields may affect the fragmentation of Fil1 into aligned dense cores with a short core spacing.
## Acknowledgments
The authors are grateful to the anonymous referee for the valuable comments, which helped to improve the quality of the paper. This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(grant number) (NRF-2022R11A1A01053862) and the National R & D Program through the National Research Foundation of Korea Grants funded by the Korean Government (NRF-2016R1D1A1B02015014). C.W.L. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2019R1A2C1010851), and by the Korea Astronomy and Space Science Institute grant funded by the Korea government (MSIT; project No. 2022-1-840-05). W.K. was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2021R1F1A1061794). M.T. acknowledges partial support from project PID2019-108765GB-I00 funded by MCIN/AEI/10.13039/501100011033. S.K. is supported by the National Research Council of Science & Technology (NST)-Korea Astronomy and Space Science Institute (KASI) Postdoctoral Research Fellowship for Young Scientists at KASI in South Korea.
|
2306.10948 | More on discrete convexity | In several recent papers some concepts of convex analysis were extended to
discrete sets. This paper is one more step in this direction. It is well known
that a local minimum of a convex function is always its global minimum. We
study some discrete objects that share this property and provide several
examples of convex families related to graphs and to two-person games in normal
form. | Vladimir Gurvich, Mariya Naumova | 2023-06-19T14:08:04Z | http://arxiv.org/abs/2306.10948v3 | # More on discrete convexity
###### Abstract
In several recent papers some concepts of convex analysis were extended to discrete sets. This paper is one more step in this direction. It is well known that a local minimum of a convex function is always its global minimum. We study some discrete objects that share this property and provide several examples of convex families related to graphs and to two-person games in normal form.
**AMS subjects**: 91A05, 94D10, 06E30.
**Keywords:** Convex, connected, graph, perfect graphs, kernel, game, two-person game in normal form, saddle point, Nash equilibrium.
## 1 Hereditary and Convex Discrete Families
The similarity between convex functions and submodular discrete functions has been actively studied since the 1970s; see, for example, [9, 37, 39, 47, 72, 77, 84, 91, 87, 88]. Also, in several recent papers some concepts and ideas of convex analysis were applied to discrete sets and functions [32, 40, 68, 78, 79, 74]. In both cases matroids play an important role.
The present paper is another step in this direction. It is well known that each local minimum of a convex function is always its global minimum. We study some discrete objects that have the same property and provide several examples related to graphs and two-person games in normal form.
Given a finite partially ordered set (poset) \((\mathcal{P},\succ)\) and \(P,P^{\prime}\in\mathcal{P}\), recall that \(P^{\prime}\) is a _successor_ of \(P\) if \(P\succ P^{\prime}\); furthermore, \(P^{\prime}\) is an _immediate successor_ of \(P\) if \(P\succ P^{\prime}\) and \(P\succ P^{\prime\prime}\succ P^{\prime}\) for no \(P^{\prime\prime}\in\mathcal{P}\). Respectively, \(P\) is called an (immediate) predecessor of \(P^{\prime}\) if and only if \(P^{\prime}\) is an (immediate) successor of \(P\). Notation \(P\succeq P^{\prime}\) means that either \(P\succ P^{\prime}\) or \(P=P^{\prime}\).
Consider an arbitrary subset (family) \(\mathcal{F}\subseteq\mathcal{P}\). Recall that \(F\) is a _(local) minimum_ of \(\mathcal{F}\) if and only if \(F\in\mathcal{F}\) but \(F^{\prime}\not\in\mathcal{F}\) whenever \(F^{\prime}\) is an (immediate) successor of \(F\).
We denote by \(\mathcal{M}(\mathcal{F},\mathcal{P},\succ)\) and by \(\mathcal{LM}(\mathcal{F},\mathcal{P},\succ)\), respectively, the set (class) of all minima and local minima of \(\mathcal{F}\) in \((\mathcal{P},\succ)\). Furthermore, we omit some or all arguments of \(\mathcal{M}\) and \(\mathcal{LM}\) when they are uniquely determined by the context. By the above definitions, \(\mathcal{M}\subseteq\mathcal{LM}\).
A family \(\mathcal{F}\subseteq\mathcal{P}\) is called:
* _convex_ if \(\mathcal{M}(\mathcal{F})=\mathcal{LM}(\mathcal{F})\);
* _strongly convex_ if \(\mathcal{F}\) is convex and for any \(F\in\mathcal{F}\) and \(F^{\prime}\in\mathcal{M}(\mathcal{F})\) such that \(F\succ F^{\prime}\) there exists an immediate successor \(P\) of \(F\) such that \(P\in\mathcal{F}\) and \(F\succ P\succeq F^{\prime}\).
* _hereditary_ if \(P\in\mathcal{F}\) whenever \(P\) is a successor of some \(F\in\mathcal{F}\).
* _weakly hereditary_ if \(P\in\mathcal{F}\) whenever \(F\in\mathcal{F},\;F^{\prime}\in\mathcal{M}(\mathcal{F})\), and \(F\succ P\succeq F^{\prime}\).
In accordance with the above definitions, the following implications hold:
hereditary \(\Rightarrow\) weakly hereditary \(\Rightarrow\) strongly convex \(\Rightarrow\) convex,
while all inverse implications fail, as we will show in this paper. Note that the last two concepts become equivalent if family \(\mathcal{F}\) has a unique minimum and the first two are equivalent whenever \(\mathcal{M}(\mathcal{F})\supseteq\mathcal{M}(\mathcal{P})\), or more precisely, \(\mathcal{M}(\mathcal{F},\mathcal{P},\succ)\supseteq\mathcal{M}(\mathcal{P}, \mathcal{P},\succ)\).
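These definitions are easy to check mechanically on small finite posets. The following sketch encodes a poset by its immediate-successor map and compares the global and local minima of a family; the element names and the tiny example poset are, of course, only illustrative.

```python
def successors(imm, p, seen=None):
    """All (not necessarily immediate) successors of p, given the immediate-successor map imm."""
    seen = set() if seen is None else seen
    for q in imm.get(p, ()):
        if q not in seen:
            seen.add(q)
            successors(imm, q, seen)
    return seen

def minima(family, imm):
    """Global minima: members of the family having no successor inside the family."""
    return {f for f in family if not (successors(imm, f) & family)}

def local_minima(family, imm):
    """Local minima: members of the family having no immediate successor inside the family."""
    return {f for f in family if not (set(imm.get(f, ())) & family)}

def is_convex(family, imm):
    """A family is convex exactly when its local and global minima coincide."""
    return minima(family, imm) == local_minima(family, imm)

# Poset with a > b > d and a > c > d, given by immediate successors:
imm = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
print(is_convex({"a", "c", "d"}, imm))   # True
print(is_convex({"a", "d"}, imm))        # False: "a" is a local but not a global minimum
```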
**Remark 1**.: _The last concept could be "slightly" modified as follows:_
* _A family_ \(\mathcal{F}\) _is called_ very weakly-hereditary _if_ \(P\in\mathcal{F}\) _whenever_ \(F\in\mathcal{F}\) _and_ \(F\succ P\succeq F^{\prime}\) _for some_ \(F^{\prime}\in\mathcal{M}(\mathcal{F})\)_._
_Then to the above chain of implications we can add the following one:_
_hereditary \(\Rightarrow\) weakly hereditary \(\Rightarrow\) very weakly hereditary \(\Rightarrow\) convex._
_Yet, we will not consider this modification, since we have no "natural" example of a very weakly, but not weakly, hereditary family, although such "formal" examples are not difficult to construct._
In the next three sections we consider several examples related to directed and non-directed graphs, complete edge-chromatic graphs, and two-person game forms and normal form games, respectively. We survey known results and obtain several new ones.
## 2 Graphs and digraphs
### Definitions and preliminaries
Given a finite (directed) graph \(G\), we denote by \(V(G)\) and \(E(G)\) the sets of its vertices and (directed) edges, respectively. Multiple edges are allowed but loops are forbidden.
A (directed) graph \(G\) is called: _null-graph_ if \(V(G)=\emptyset\) and _edge-free_ if \(E(G)=\emptyset\). The null-graph is unique and edge-free, by definition, but not vice versa.
We will consider two partial orders: related to vertices \(\succ_{V}\) and to (directed) edges \(\succ_{E}\). In the first case, \(G\succ G^{\prime}\) if \(G^{\prime}\) is an induced subgraph of \(G\), that is, \(V(G^{\prime})\subseteq V(G)\) and
\(E(G^{\prime})\) consists of all (directed) edges of \(E(G)\) whose both ends are in \(V(G^{\prime})\). In the second case, \(G\succ G^{\prime}\) if \(G^{\prime}\) is a subgraph of \(G\) defined on the same vertex-set, that is, \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})\subseteq E(G)\).
Given a graph \(G\), which may be directed or not, and a set of its (induced) subgraphs \(G_{1},\ldots,G_{n}\), define a family \((\mathcal{F}(G),\succ_{E})\) (respectively, \((\mathcal{F}(G),\succ_{V})\)) that consists of all subgraphs \(G^{\prime}\) of \(G\) containing as a (induced) subgraph at least one of \(G_{i},i=1,\ldots,n\).
**Lemma 1**.: _Both families are weakly hereditary. Furthermore, \((\mathcal{F}(G),\succ_{V})\) (respectively, \((\mathcal{F}(G),\succ_{E})\)) is hereditary if and only if \(n=1\) and \(G_{1}\) is the null-graph (respectively, the edge-free graph)._
Proof.: Consider a subgraph \(G^{\prime}\in(\mathcal{F}(G),\succ_{V})\) (respectively, \(G^{\prime}\in(\mathcal{F}(G),\succ_{E})\)) that contains (as an induced subgraph) \(G_{i}\) for some \(i\in[n]=\{1,\ldots,n\}.\) Obviously, the above property is kept when we delete a vertex from \(V(G^{\prime})\setminus V(G_{i})\) (respectively, an edge \(e\in E(G^{\prime})\setminus E(G_{i})\)) if any. Obviously, such a vertex (respectively, an edge) exists unless \(G^{\prime}=G_{i}\). Thus, in both cases family \(\mathcal{F}(G)\) is weakly hereditary. Obviously, it is hereditary if and only if \(G_{i}\) cannot be reduced.
### Connected graphs
A graph \(G\) is called _connected_ if for every two distinct vertices \(v,v^{\prime}\in V(G)\) it contains a path connecting \(v\) and \(v^{\prime}\). In particular, the null-graph and the one-vertex graphs are connected, since they do not have two distinct vertices.
#### 2.2.1 Order \(\succ_{V}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all connected induced subgraphs of a given graph \(G\). It is easily seen that class \(\mathcal{M}=\mathcal{LM}\) contains only the null-graph, and, moreover, \(\mathcal{F}\) is strongly convex. Yet, it is not weakly hereditary.
**Example 1**.: _Consider 2-path \((v_{1},v_{2}),(v_{2},v_{3})\). It is connected, but by deleting \(v_{2}\) we obtain a not connected subgraph induced by \(\{v_{1},v_{3}\}\)._
#### 2.2.2 Order \(\succ_{E}\)
Given a connected graph \(G\), family \(\mathcal{F}=\mathcal{F}(G)\) consists of all connected subgraphs \(G^{\prime}\) of \(G\) with \(V(G^{\prime})=V(G)\).
A graph \(G^{\prime}\) is called a _spanning tree_ of \(G\) if \(V(G^{\prime})=V(G),\ E(G^{\prime})\subseteq E(G)\), and \(G^{\prime}\) is a _tree_, that is, connected and has no cycles. By definition, all spanning trees of \(G\) are in \(\mathcal{F}\) and, obviously, they form class \(\mathcal{M}=\mathcal{LM}\). By Lemma 1, family \(\mathcal{F}\) is weakly hereditary.
**Remark 2**.: _Let \(G\) be a connected graph with weighted edges: \(w:E(G)\rightarrow\mathbb{R}\). It is well known [21, 22, 69] that one can obtain a spanning tree of \(G\) of maximal total weight by the greedy algorithm, as follows. Delete an edge \(e\in E(G)\) such that (i) \(e\) belongs to a cycle of \(G\), or in other words, the reduced graph is still connected on \(V(G)\), and (ii) \(e\) has a minimal weight among all edges satisfying (i). Proceed until such edges exist._
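The procedure described in Remark 2 is the familiar greedy deletion (reverse-delete) scheme. A minimal sketch, assuming the networkx package and using an illustrative toy graph, is given below.

```python
import networkx as nx

def max_weight_spanning_tree_by_deletion(G):
    """Remark 2 as an algorithm: repeatedly delete a minimum-weight edge whose removal keeps
    the graph connected; the surviving edges form a maximum-weight spanning tree."""
    T = G.copy()
    for u, v, w in sorted(T.edges(data="weight"), key=lambda e: e[2]):
        T.remove_edge(u, v)
        if not nx.is_connected(T):
            T.add_edge(u, v, weight=w)   # the edge was a bridge, so it must stay
    return T

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 4), (2, 3, 1), (1, 3, 3), (3, 4, 2)])
print(sorted(max_weight_spanning_tree_by_deletion(G).edges(data="weight")))
# [(1, 2, 4), (1, 3, 3), (3, 4, 2)] -- only the weight-1 edge is deleted
```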
### Disconnected graphs
#### 2.3.1 Order \(\succ_{V}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all disconnected induced subgraphs of a given graph \(G\). By convention, the null-graph and one-vertex graphs are connected. Hence, class \(\mathcal{M}(\mathcal{F})\) consists of all subgraphs of \(G\) induced by pairs of non-adjacent vertices. In particular, \(\mathcal{F}=\emptyset\) if and only if there is no such pair, that is, if graph \(G\) is complete.
**Proposition 1**.: _For every graph \(G\), family \(\mathcal{F}(G)\) is strongly convex._
Proof.: Consider a not connected induced subgraph \(G^{\prime}\) of \(G\) and any pair of non-adjacent vertices \(v^{\prime},v^{\prime\prime}\in V(G^{\prime})\). It is easily seen that either \(V(G^{\prime})=\{v^{\prime},v^{\prime\prime}\}\), in which cases \(G^{\prime}\in\mathcal{M}(\mathcal{F}(G))\), or there exists a vertex \(v\in V(G^{\prime})\setminus\{v^{\prime},v^{\prime\prime}\}\) such that subgraph \(G^{\prime\prime}\) induced by \(V(G^{\prime})\setminus\{v\}\) is not still connected. This property means that \(\mathcal{F}\) is strongly convex.
However, family \(\mathcal{F}(G)\) may be not weakly hereditary for some \(G\).
**Example 2**.: _Consider graph \(G\) that consists of a 2-path \((v_{1},v_{2}),(v_{2},v_{3})\) and an isolated vertex \(v_{0}\). This graph is disconnected, that is, \(G\in\mathcal{F}(G)\), but, by deleting \(v_{0}\), we obtain a connected graph \(G^{\prime}\not\in\mathcal{F}(G)\). Yet, \(v_{1},v_{3}\in V(G^{\prime})\) and, hence, graph \(G^{\prime\prime}\) induced by these vertices is in \(\mathcal{F}(G)\), moreover, \(G^{\prime\prime}\in\mathcal{M}(\mathcal{F}(G))\). Thus, \(\mathcal{F}(G)\) is not weakly hereditary._
_Note that strong convexity holds for \(\mathcal{F}(G)\), because one can delete \(v_{2}\) rather than \(v_{0}\)._
#### 2.3.2 Order \(\succ_{E}\)
Given a graph \(G\), family \(\mathcal{F}=\mathcal{F}(G)\) consists of all disconnected graphs \(G^{\prime}\) such that \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})\subseteq E(G)\). Then, obviously, family \(\mathcal{F}\) has a unique minimum: \(\mathcal{M}(\mathcal{F})\) consists of a unique graph, which is the edge-free graph on \(V(G)\). Obviously, deleting edges and keeping the vertex-set respects the non-connectivity. Thus, family \(\mathcal{F}\) is hereditary.
### Strongly connected directed graphs
A directed graph (digraph) \(G\) is called _strongly connected_ (SC) if for every two (distinct) vertices of \(v,v^{\prime}\in V(G)\) there is a directed path in \(G\) from \(v\) to \(v^{\prime}\).
#### 2.4.1 Order \(\succ_{V}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all SC induced subgraphs of a given digraph \(G\). This family is not convex.
**Example 3**.: _Consider a digraph \(G\) that consists of two directed cycles of length at least 3 each sharing a unique vertex. Then, \(G\) is a locally minimal SC digraph, \(G\in\mathcal{LM}(\mathcal{F}(G))\). Indeed, \(G\) is SC but we destroy this property by deleting any vertex of \(G\). Furthermore, \(G\not\in\mathcal{M}\), since each of two cycles of \(G\) is in \(\mathcal{M}\). Thus, \(G\in\mathcal{LM}\setminus\mathcal{M}\)._
#### 2.4.2 Order \(\succ_{E}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all SC subgraphs \(G^{\prime}\) of a given digraph \(G\) such that \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})\subseteq E(G)\). Note that \(\mathcal{F}(G)=\emptyset\) if and only if \(G\) is not SC. Obviously, strong connectivity is monotone non-decreasing on \(2^{E}\). In other words, for any two subgraphs \(G^{\prime}\) and \(G^{\prime\prime}\) of \(G\) such that \(V(G^{\prime})=V(G^{\prime\prime})=V(G)\) and
\(E(G^{\prime})\subseteq E(G^{\prime\prime})\subseteq E(G)\), we have: \(G^{\prime\prime}\) is SC on \(V(G)\) whenever \(G^{\prime}\) is. By Lemma 1, family \(\mathcal{F}(G)\) is weakly hereditary but not hereditary.
### Not strongly connected directed graphs
#### 2.5.1 Order \(\succ_{V}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all not SC induced subgraphs of a given digraph \(G\). It is easily seen that \(\mathcal{M}(\mathcal{F})\) consists of all pairs of vertices \(v,v^{\prime}\in V(G)\) such that at least one of two arcs \((v,v^{\prime})\) or \((v^{\prime},v)\) is missing in \(G\). There exists no such pair in \(G\) if and only if \(\mathcal{F}(G)=\emptyset\).
Assume that digraph \(G^{\prime}\) is not SC, that is, \(G^{\prime}\in\mathcal{F}\). Then, there exist two (distinct) vertices \(v,v^{\prime}\in V(G^{\prime})\) such that there is no directed path from \(v\) to \(v^{\prime}\) in \(G^{\prime}\). Hence, an induced subgraph \(G^{\prime\prime}\) of \(G^{\prime}\) is not SC, \(G^{\prime\prime}\in\mathcal{M}(\mathcal{F})\), whenever \(v,v^{\prime}\in V(G^{\prime\prime})\). Thus, family \(\mathcal{F}\) is strongly convex. Yet, it is not weakly hereditary, since an induced subgraph of a not SC digraph can be SC. Two simplest examples are two isolated vertices or one directed edge.
#### 2.5.2 Order \(\succ_{E}\)
In this case \(\mathcal{F}=\mathcal{F}(G)\) is the family of all not SC subgraphs \(G^{\prime}\) of a given digraph \(G\) such that \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})\subseteq E(G)\). Obviously, all these subgraphs are not SC whenever \(G\) is not SC. Thus, family \(\mathcal{F}\) is hereditary.
### Ternary graphs
A graph is called _ternary_ if it contains no induced cycle whose length is a multiple of 3.
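For small graphs the ternary property can be tested by brute force: a vertex subset induces a cycle exactly when the induced subgraph is connected and 2-regular. The sketch below assumes the networkx package and is meant only as an illustration.

```python
from itertools import combinations
import networkx as nx

def is_ternary(G):
    """Brute-force check (exponential, so only for small graphs): G is ternary if no vertex
    subset of size divisible by 3 induces a cycle."""
    nodes = list(G.nodes)
    for k in range(3, len(nodes) + 1, 3):
        for subset in combinations(nodes, k):
            H = G.subgraph(subset)
            if nx.is_connected(H) and all(d == 2 for _, d in H.degree):
                return False          # found an induced cycle of length divisible by 3
    return True

print(is_ternary(nx.cycle_graph(5)))   # True:  C5 has no induced cycle of length 3 or 6
print(is_ternary(nx.cycle_graph(6)))   # False: C6 itself is an induced 6-cycle
```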
By definition, family \(\mathcal{T}\) of ternary graphs is hereditary in order \(\succ_{V}\). In contrast, in order \(\succ_{E}\) this family is not even convex, as the following example shows.
**Example 4**.: _From [30] we know that "D. Kral asked (unpublished): Is it true that in every ternary graph (with an edge) there is an edge \(e\) such that the graph obtained by deleting \(e\) is also ternary? This would have implied that all ternary graphs are 3-colourable, but has very recently been disproved; a counterexample was found by M. Wrochna. (Take the disjoint union of a 5-cycle and a 10-cycle, and join each vertex of the 5-cycle to two opposite vertices of the 10-cycle, in order.)"_
Fig. 1. Wrochna’s example
**Remark 3**.: _In [30] it was proven that chromatic numbers of all ternary graphs are bounded by a constant. Yet, it is much larger than 3._
**Remark 4**.: _Consider the skeleton graph of the cube. Obviously, an induced 6-cycle appears whenever we delete an edge. However, this graph itself contains two induced 6-cycles._
_Also, an induced 6-cycle appears whenever we delete an edge of the icosidodecahedron - a polyhedron with twenty triangular faces and twelve pentagonal faces, which has 30 identical vertices, with two triangles and two pentagons meeting at each, and 60 identical edges, each separating a triangle from a pentagon (see Fig. 2). Yet, this graph itself contains triangles and induced 9-cycles._
### Non-ternary graphs
By definition, non-ternary graph contains an induced cycle of length multiple to 3 (a ternary cycle, for short). Given a graph \(G\), denote by \(\mathcal{C}_{3}(G)\) (respectively, by \(\mathcal{IC}_{3}(G)\)) the set of its (induced) ternary cycles and by \(\mathcal{F}(G)\) the family of its non-ternary subgraphs. From Lemma 1 we will derive that, with respect to (wrt) both orders \(\succ_{V}\) and \(\succ_{E}\), family \(\mathcal{F}(G)\) is weakly hereditary but not hereditary.
#### 2.7.1 Order \(\succ_{V}\)
In this case, \(\mathcal{M}(\mathcal{F}(G))=\mathcal{IC}_{3}(G)\). Given an induced subgraph \(G^{\prime}\) of \(G\) that contains a ternary cycle \(C\in\mathcal{IC}_{3}(G)\), one can delete a vertex \(v\in V(G^{\prime})\setminus V(C)\) such that the reduced graph \(G^{\prime\prime}\) still contains \(C\) as an induced subgraph unless \(G^{\prime}=C\). This exactly means that family \(\mathcal{F}(G)\) is weakly hereditary. Obviously, it is not hereditary, since deleting a vertex might destroy all ternary cycles of \(G^{\prime}\).
#### 2.7.2 Order \(\succ_{E}\)
In this case, \(\mathcal{M}(\mathcal{F}(G))\) is in a one-to-one correspondence with \(\mathcal{C}_{3}(G)\). Recall that \(V(G^{\prime})=V(G)\) for each subgraph \(G^{\prime}\in\mathcal{F}(G)\). Hence, \(G^{\prime}\) consists of a cycle \(C\in\mathcal{C}_{3}(G)\) and several isolated vertices from \(V(G)\setminus V(C)\).
Given a non-ternary subgraph \(G^{\prime\prime}\) of \(G\) such that \(V(G^{\prime\prime})=V(G)\) and \(G^{\prime\prime}\) contains a (not necessarily induced) ternary cycle \(C\in\mathcal{C}_{3}(G)\), one can delete an edge \(e\in E(G^{\prime\prime})\setminus E(C)\) such that the reduced graph still contains \(C\) unless \(E(G^{\prime\prime})=E(C)\). By Lemma 1, family \(\mathcal{F}(G)\) is weakly hereditary. Obviously, it is not hereditary, since deleting an edge might destroy all ternary cycles of \(G^{\prime\prime}\).
Figure 2: Icosidodecahedron. Illustration for Luca Pacioli’s ”Divina proportione” by Leonardo da Vinci
### Perfect and imperfect graphs
#### 2.8.1 Definitions and preliminaries
Given a graph \(G\), as usual, \(\chi=\chi(G)\) and \(\omega=\omega(G)\) denote its chromatic and clique numbers, respectively. Recall that \(\chi\) is the minimum number of colors in a proper vertex-coloring of \(G\) and \(\omega\) is the number of vertices in a maximum clique of \(G\). Obviously, \(\chi(G)\geq\omega(G)\) for every graph \(G\).
Graph \(G\) is called _perfect_ if \(\chi(G^{\prime})=\omega(G^{\prime})\) for every induced subgraph \(G^{\prime}\) of \(G\).
Thus, by definition, in order \(\succ_{V}\) the family of perfect graphs is hereditary.
This concept was introduced in 1961 by Claude Berge [5] (see also [6] for more details) who made the following two conjectures:
**Perfect Graph Conjecture:**\(G\) is perfect if (and only if) the complementary graph \(\overline{G}\) is perfect. It was proven in 1972 by Laszlo Lovasz [70, 71] and since then is called the _Perfect Graph Theorem (PGT)_.
A _hole_ is a cycle of length at least 4 and an _anti-hole_ is the complement of a hole.
**Strong perfect graph conjecture:** graph \(G\) is perfect if and only if it contains neither induced odd holes nor induced odd anti-holes. In other words, odd holes and odd anti-holes are the minimal imperfect graphs in order \(\succ_{V}\). This conjecture was proven by M. Chudnovsky, N. Robertson, P. Seymour, and R. Thomas in 2002 and published in 2006 [29]. Since then this statement is called the _Strong Perfect Graph Theorem (SPGT)_.
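As a tiny sanity check of the minimal imperfect graphs named above, the brute-force sketch below (assuming the networkx package) confirms that the 5-hole satisfies \(\chi=3>\omega=2\).

```python
from itertools import product
import networkx as nx

def chromatic_number(G):
    """Brute-force chromatic number; feasible only for very small graphs."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for colours in product(range(k), repeat=len(nodes)):
            colouring = dict(zip(nodes, colours))
            if all(colouring[u] != colouring[v] for u, v in G.edges):
                return k
    return len(nodes)

def clique_number(G):
    """Size of a maximum clique, via the maximal cliques enumerated by networkx."""
    return max(len(c) for c in nx.find_cliques(G))

C5 = nx.cycle_graph(5)
print(chromatic_number(C5), clique_number(C5))   # 3 2, so the odd 5-hole is imperfect
```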
A polynomial recognition algorithm for perfect graphs was obtained by M. Chudnovsky, G. Cornuejols, X. Liu, P. Seymour, and K. Vuskovic in 2002 and published in 2005 [28].
#### 2.8.2 Perfect graphs
**Order \(\succ_{V}\)** This family \(\mathcal{F}\) is hereditary and \(\mathcal{M}(\mathcal{F})\) contains only the null-graph.
**Order \(\succ_{E}\)** An edge of a perfect graph \(G\) is called _critical_ if its deletion results in an imperfect graph. For example, the six edges \((v_{1},v_{2}),(v_{2},v_{3}),(v_{3},v_{4}),(v_{4},v_{5}),(v_{5},v_{1})\), and \((v_{1},v_{3})\) form a perfect graph in which \((v_{1},v_{3})\) is the unique critical edge. This concept was introduced by Annegret Wagler [94]. With Stefan Hougardy, she proved that a perfect graph has no critical edges if and only if it is _Meyniel_, that is, every odd cycle of length 5 or more in it (if any) has at least two chords [94, Theorem 3.1].
There are perfect graphs in which all edges are critical. Some examples were given in [14, Figures 2 and 3] and called _Rotterdam_ graphs. Clearly, these graphs are in \(\mathcal{LM}(\mathcal{F})\setminus\mathcal{M}(\mathcal{F})\) and, hence, the considered family, of perfect graphs in order \(\succ_{E}\), is not convex.
Furthermore, [14, Theorem 4] claims that every edge of the complement of a Rotterdam graph is critical too. In other words, a Rotterdam graph becomes imperfect whenever we delete an edge from it or add an edge to it.
Let us note finally that no efficient characterization of the non-critical-edge-free perfect graphs is known, in contrast to the critical-edge-free ones, which are Meyniel. The main result of [28] provides a polynomial recognition algorithm for the former family.
#### 2.8.3 Imperfect graphs
Order \(\succ_{V}\)In this case, by the SPGT, \(\mathcal{M}(\mathcal{F})=\mathcal{LM}(\mathcal{F})\) and this set contains only odd holes and odd anti-holes. Again, by Lemma 1, \(\mathcal{F}\) is weakly hereditary but not hereditary.
Order \(\succ_{E}\)In 1972 Elefterie Olaru characterized minimal graphs of this family. He proved that it is convex and \(G\in\mathcal{M}=\mathcal{LM}\) if and only if \(G\) is an odd hole plus \(k\) isolated vertices for some \(k\geq 0\)[82]; see also [14, 83]. Thus, by Lemma 1, family \(\mathcal{F}\) is weakly hereditary but not hereditary.
Note that the odd anti-holes, except \(C_{5}\), are not in \(\mathcal{LM}\), since each one has an edge whose elimination would result in a graph with an induced odd hole.
#### 2.8.4 Graphs with \(\chi=\omega\)
It is easily seen that \(\mathcal{M}=\mathcal{LM}\) for both orders \(\succ_{V}\) and \(\succ_{E}\); in other words, both families are convex.
Indeed, for \(\succ_{V}\) (respectively, for \(\succ_{E}\)) the sets \(\mathcal{M}\) and \(\mathcal{LM}\) are equal and contain only the null-graph (respectively, the edge-free graphs); see Propositions 2 and 3 in [14].
Yet, obviously, deleting a vertex or an edge may violate the equality \(\chi=\omega\). Thus, neither of the considered families is hereditary.
#### 2.8.5 Graphs with \(\chi>\omega\)
**Order \(\succ_{V}\)** By SPGT, every graph with \(\chi>\omega\) contains an odd hole or odd anti-hole as an induced subgraph; in other words, class \(\mathcal{M}\) contains only the odd holes and odd anti-holes.
Class \(\mathcal{LM}\) is wider; it consists of all so-called _partitionable_ graphs defined as follows: Graph \(G\) is partitionable if \(\chi(G)>\omega(G)\) but \(\chi(G^{\prime})=\omega(G^{\prime})\) for each induced subgraph \(G^{\prime}\) of \(G\) such that \(V(G^{\prime})=V(G)\setminus\{v\}\) for a vertex \(v\in V(G)\). This definition is one of many equivalent characterizations of partitionable graphs; this follows easily from the pioneering results of [10, 85] and is explicit in [15].
Thus, the considered family is not convex.
**Remark 5**.: _The above characterization of \(\mathcal{M}\) is based on SPGT, which is very difficult, while the case of \(\mathcal{LM}\) is simple. In contrast, partitionable graphs are much more sophisticated than the odd holes and anti-holes. Although very many equivalent characterizations of partitionable graphs are known (see, for example, [4, 10, 27, 64, 85]), their structure is complicated and not well understood. For example, the fact that each partitionable graph contains an induced odd hole or anti-hole is equivalent to the SPGT._
_The following two questions about partitionable graphs are still open. In addition to the odd holes and odd anti-holes, there is one more partitionable graph, \(G_{17}\), which has 17 vertices and has neither (i) small transversals nor (ii) uncertain edges. It is open whether (i) or (ii) may hold for other partitionable graphs. The conjecture that (i) cannot, if true, would significantly strengthen SPGT; see [4, 15, 27, 64] for the definitions and more details._
**Order \(\succ_{E}\)** By Lemma 1, the corresponding family \(\mathcal{F}\) is weakly hereditary: class \(\mathcal{M}=\mathcal{LM}\) consists of the odd holes with \(k\) isolated vertices, for some \(k\geq 0\). This follows from Olaru's Theorem [82]; see also [83] and [14, Proposition 1]. Family \(\mathcal{F}\) is not hereditary since, obviously, the inequality \(\chi>\omega\) may turn into an equality after deleting an edge.
### 2.9 Kernels in digraphs
#### 2.9.1 Definitions and preliminaries
Given a finite digraph \(G\), a vertex-set \(K=K(G)\subseteq V(G)\) is called a _kernel_ of \(G\) if it is (i) independent and (ii) dominating, that is,
(i) \(v,v^{\prime}\in K(G)\) for no directed edge \((v,v^{\prime})\in E(G)\) and
(ii) for every \(v\in V(G)\setminus K(G)\) there is a directed edge \((v,v^{\prime})\) from \(v\) to some \(v^{\prime}\in K(G)\).
This definition was introduced in 1901 by Charles Bouton [23] for a special digraph (of the popular game of NIM) and then in 1944 it was extended by John Von Neumann and Oskar Morgenstern for arbitrary digraphs in [80].
It is not difficult to verify that an even directed cycle has two kernels, while an odd one has none. This obvious observation was generalized in 1953 by Richardson [86] as follows: A digraph has a kernel whenever all its directed cycles are even. The original proof was simplified in [67, 66, 81, 42, 8, 36].
**Remark 6**.: _It is not difficult to verify that a digraph has at most one kernel whenever all its directed cycles are odd [13]. This claim combined with Richardson's Theorem implies that an acyclic digraph has a unique kernel. The latter statement is important for game theory, since it allows one to solve finite acyclic graphical zero-sum two-person games. Of course, it has a much simpler direct proof [80]._
Already in 1973 Vasek Chvatal proved that it is NP-complete to recognize whether a digraph has a kernel [26].
#### 2.9.2 Kernel-less digraphs
**Order \(\succ_{E}\)**
In this case, given a digraph \(G\), family \(\mathcal{F}(G)\) contains only the digraphs \(G^{\prime}\) such that \(V(G^{\prime})=V(G),E(G^{\prime})\subseteq E(G)\), and \(G^{\prime}\) has no kernel. By Richardson's Theorem, \(G^{\prime}\in\mathcal{M}(\mathcal{F}(G))\) if and only if \(G^{\prime}\) is a directed odd cycle in \(G\) (plus the set of isolated vertices \(v\in V(G)\setminus V(G^{\prime})\)).
In 1980 Pierre Duchet [35] conjectured that every kernel-less digraph \(G^{\prime}\) has an edge \(e\in E(G^{\prime})\) such that the reduced digraph \(G^{\prime\prime}=G^{\prime}-e\) (that is, \(E(G^{\prime\prime})=E(G^{\prime})\setminus\{e\}\)) is still kernel-less unless \(G^{\prime\prime}\) is an odd cycle plus \(k\) isolated vertices for some \(k\geq 0\); in other words, family \(\mathcal{F}\) of kernel-less digraphs is convex, \(\mathcal{M}(\mathcal{F})=\mathcal{LM}(\mathcal{F})\). This statement, if true, would significantly strengthen Richardson's theorem. Yet, it was shown in [3] that a circulant with 43 vertices is a counter-example, a locally minimal but not minimal kernel-free digraph.
Let us recall that a circulant \(G=G_{n}(\ell_{1},\ldots,\ell_{q})\) is defined as a digraph with \(n\) vertices, \(V(G)=[n]=\{1,\ldots,n\}\), and \(nq\) arcs, \(E(G)=\{(i,i+\ell_{j})\mid i\in[n],\ j\in[q]=\{1,\ldots,q\}\}\), where, as usual, all sums are taken \(\bmod n\).
**Example 5**.: _([3]) It was shown that a circulant \(G_{n}(1,7,8)\) has a kernel if and only if \(n\equiv 0\mod 3\) or \(n\equiv 0\mod 29\). Hence, \(G_{43}(1,7,8)\) is kernel-less. Yet, a kernel appears whenever an arc of this circulant is deleted. Due to circular symmetry, it is sufficient to consider only three cases and delete one of the arcs (43, 1), (43, 7), or (43, 8). It is not difficult to verify that, respectively, the following three subsets become kernels:_
\(K_{1}=\{1,5,10,14,16,19,25,28,30,34,39,43\}\)_,_
\(K_{7}=\{7,9,11,13,22,24,26,28,37,39,41,43\}\)_,_
\(K_{8}=\{3,5,8,14,17,19,23,28,32,34,37,43\}\subseteq\{1,\ldots,43\}=V\)_._
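The first of these claims is easy to check by machine. Below is a small Python sketch (our own helper functions), assuming, as in the definition above, that the arcs of \(G_{n}(\ell_{1},\ldots,\ell_{q})\) are \((i,i+\ell_{j})\bmod n\); it confirms that \(K_{1}\) becomes a kernel once the arc \((43,1)\) is deleted, and the sets \(K_{7}\) and \(K_{8}\) can be checked in the same way.

```python
def circulant(n, shifts):
    # vertices 1..n, arcs (i, i + l) mod n for every shift l
    return {(i, (i + l - 1) % n + 1) for i in range(1, n + 1) for l in shifts}

def is_kernel(K, vertices, arcs):
    independent = all((u, v) not in arcs for u in K for v in K)
    dominating = all(any((v, u) in arcs for u in K) for v in vertices - K)
    return independent and dominating

V = set(range(1, 44))
A = circulant(43, (1, 7, 8))
K1 = {1, 5, 10, 14, 16, 19, 25, 28, 30, 34, 39, 43}
print(is_kernel(K1, V, A))              # False: the arc (43, 1) joins two vertices of K1
print(is_kernel(K1, V, A - {(43, 1)}))  # True: K1 is a kernel of the reduced circulant
```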
Thus, the set of edge-minimal kernel-free digraphs is a proper subset of the locally edge-minimal ones. Although only one digraph from the difference is known, it seems that the latter class, unlike the former one, is difficult to characterize. For example, it is not known whether a circulant \(G_{n}(\ell_{1},\ell_{2})\) can be a locally edge-minimal kernel-less digraph, but it is known that it cannot if \(n\leq 1,000,000\) [3].
**Order \(\succ_{V}\)**
In this case, too, the family \(\mathcal{F}\) of kernel-less digraphs is not convex. Although it seems difficult to characterize (or recognize in polynomial time) both classes \(\mathcal{M}(\mathcal{F})\) and \(\mathcal{LM}(\mathcal{F})\) of the (locally) vertex-minimal kernel-less digraphs, some digraphs from \(\mathcal{LM}\setminus\mathcal{M}\) can be easily constructed; see, for example, [36, 41, 42, 43]. For completeness we provide one more example.
**Example 6**.: _Circulant \(G=G_{16}(1,7,8)\) is kernel-less, since 16 is not a multiple of 3 or 29. Yet, a kernel appears whenever we delete a vertex from \(G\). Due to circular symmetry, without loss of generality (wlog) we can delete "the last" vertex, 16, and verify that vertex-set \(\{1,3,5,7\}\) becomes a kernel. Hence, \(G\in\mathcal{LM}\), but \(G\not\in\mathcal{M}\), since \(G\) contains a directed triangle, \(1+7+8=16\), which is kernel-less._
#### 2.9.3 Digraphs with kernels
We will show that in each order \(\succ_{V}\) or \(\succ_{E}\) the corresponding family \(\mathcal{F}\) is strongly convex but not weakly hereditary.
Given an arbitrary digraph \(G\), obviously, class \(\mathcal{M}(\mathcal{F})\) contains a unique digraph in both cases: the null-graph for \(\succ_{V}\) and the edge-free graph with vertex-set \(V(G)\) for \(\succ_{E}\). Thus, by Lemma 1, both families are not weakly hereditary.
**Proposition 2**.: _Both families \((\mathcal{F},\succ_{V})\) and \((\mathcal{F},\succ_{E})\) are strongly convex._
Proof.: Fix a digraph \(G\) with a kernel \(K\subseteq V(G)\).
In order \(\succ_{V}\) delete all vertices of \(V(G)\setminus K\), if any, one by one. By definition, \(K\) remains a kernel in every reduced digraph. This reduction results in an independent set, which is a kernel itself. Now we can delete all vertices one by one getting the null-graph at the end.
In order \(\succ_{E}\), first delete all arcs within \(V(G)\setminus K\) and all arcs from \(K\) to \(V(G)\setminus K\) (if any, one by one, in both cases); \(K\) remains a kernel after each such deletion, since independence is not affected and domination uses only arcs from \(V(G)\setminus K\) to \(K\). Then delete the remaining arcs, which all go from \(V(G)\setminus K\) to \(K\), one by one. Whenever a vertex \(v\in V(G)\setminus K\) loses its last outgoing arc, it becomes isolated, and \(K\) together with all such isolated vertices is a kernel of the reduced digraph. This reduction results in the edge-free digraph on the initial vertex-set \(V(G)\).
#### 2.9.4 On kernel-solvable graphs
A graph is called _kernel-solvable_ if each of its clique-acyclic orientations has a kernel; see [11, 12, 13] for the precise definitions and more details. In 1983 Claude Berge and Pierre Duchet conjectured that a graph is kernel-solvable if and only if it is perfect. The "only if" part follows easily from the SPGT (which remained a conjecture till 2002). The "if" part was proven in [11]; see also [12, 13]. This proof does not depend on the SPGT.
As we know, the family of kernel-less digraphs is not convex wrt order \(\succ_{E}\), in contrast with the family of not kernel-solvable graphs, which is convex [12, Theorem 1].
## 3 Complete edge-chromatic graphs
### 3.1 Definitions and preliminaries
A \(d\)-graph \({\cal G}=(V;E_{1},\ldots,E_{d})\) is a complete graph whose edges are colored by \(d\) colors \(I=[d]=\{1,\ldots,d\}\), or in other words, are partitioned into \(d\) subsets, some of which might be empty. These subsets are called _chromatic components_. For example, we call \(\mathcal{G}\) a \(2\)- or \(3\)-graph if \(\mathcal{G}\) has only \(2\), respectively \(3\), non-empty chromatic components. According to this definition, the order \(\succ_{E}\) makes no sense for \(d\)-graphs, so we restrict ourselves to \(\succ_{V}\).
The following \(2\)-graph \(\Pi\) and \(3\)-graph \(\Delta\) will play an important role:
\(\Pi=(V;E_{1},E_{2})\), where \(V=\{v_{1},v_{2},v_{3},v_{4}\},\ E_{1}=\{(v_{1},v_{2}),(v_{2},v_{3}),(v_{3},v_{ 4})\}\), and \(E_{2}=\{(v_{2},v_{4}),(v_{4},v_{1}),(v_{1},v_{3})\}\);
\(\Delta=(V;E_{1},E_{2},E_{3})\), where \(V=\{v_{1},v_{2},v_{3}\},\ E_{1}=\{(v_{1},v_{2})\},E_{2}=\{(v_{2},v_{3})\}\), and \(E_{3}=\{(v_{3},v_{1})\}\).
Fig. 3. \(2\)-graph \(\Pi\) and \(3\)-graph \(\Delta\)
Note that both chromatic components of \(\Pi\) are isomorphic to \(P_{4}\) and that \(\Delta\) is a \(3\)-colored triangle.
Let us also remark that, formally, the \(d\)-graphs \(\Pi(d)\) (respectively, \(\Delta(d)\)) are defined for every integer \(d\geq 2\) (respectively, \(d\geq 3\)), while \(d-2\) (respectively, \(d-3\)) of their chromatic components are empty. We will omit the argument \(d\), assuming that it is a fixed parameter.
Both \(d\)-graphs \(\Pi\) and \(\Delta\) were first considered, for different reasons, in 1967 by Tibor Gallai in his fundamental paper [45]. \(\Delta\)-free d-graphs are sometimes called _Gallai's_ graphs.
The \(\Pi\)- and \(\Delta\)-free d-graphs have important applications to \(d\)-person positional games and to read-once Boolean functions in case \(d=2\)[13, 14, 16, 17, 46, 50, 51, 52, 53, 56, 57].
### 3.2 Substitution for graphs and \(d\)-graphs
Given a graph \(G^{\prime}\), a vertex \(v\in V(G^{\prime})\), and a graph \(G^{\prime\prime}\) such that \(V(G^{\prime})\cap V(G^{\prime\prime})=\emptyset\), substitute \(G^{\prime\prime}\) for \(v\) in \(G^{\prime}\), connecting a vertex \(v^{\prime\prime}\in V(G^{\prime\prime})\) with a vertex \(v^{\prime}\in V(G^{\prime})\) if and only if \(v\) and \(v^{\prime}\) were adjacent in \(G^{\prime}\). Denote the obtained graph by \(G=G^{\prime}(v\to G^{\prime\prime})\) and call it the _substitution_ of \(G^{\prime\prime}\) for \(v\) in \(G^{\prime}\), or simply the _substitution_ when the arguments are clear from the context.
Substitution \({\cal G}={\cal G}^{\prime}(v\to{\cal G}^{\prime\prime})\) for \(d\)-graphs is defined in a similar way. We will see that many important classes of graphs and \(d\)-graphs are closed wrt substitution. This will be instrumental in our proofs. See Section 3.9 for more details.
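For concreteness, here is one possible encoding of the substitution operation for undirected graphs represented as adjacency dictionaries; this is an illustrative sketch of ours, not taken from the cited papers.

```python
def substitute(G1, v, G2):
    # G = G1(v -> G2): graphs are dicts vertex -> set of neighbours,
    # their vertex sets are disjoint, and v is a vertex of G1
    G = {u: set(nbrs) - {v} for u, nbrs in G1.items() if u != v}
    for w, nbrs in G2.items():
        G[w] = set(nbrs)
    for u in G1[v]:            # every vertex of G2 inherits the neighbours of v
        for w in G2:
            G[u].add(w)
            G[w].add(u)
    return G

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}     # K3
edge = {"a": {"b"}, "b": {"a"}}                  # K2
print(substitute(triangle, 3, edge))             # K4 on the vertices {1, 2, 'a', 'b'}
```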
### 3.3 Complementary connected \(d\)-graphs
We say that a \(d\)-graph \({\cal G}\) is _complementary connected (CC)_ if the complement of each chromatic component of \({\cal G}\) is connected on \(V\), in other words, if for each two vertices \(u,w\in V\) and color \(i\in[d]=\{1,\ldots,d\}\) there is a path between \(u\) and \(w\) without edges of \(E_{i}\).
By convention, the null-\(d\)-graph and one-vertex \(d\)-graph are not CC. It is easily seen that there is no CC \(d\)-graph with two vertices and that \(\Delta\) (respectively, \(\Pi\)) is a unique CC \(d\)-graph with three (respectively, four) vertices.
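Both claims are easy to verify by hand or by machine; the short Python sketch below (our own encoding, with chromatic components given as sets of edges) checks the CC property by testing that the complement of every chromatic component is connected on \(V\).

```python
from itertools import combinations

def is_CC(vertices, components):
    # components: one set of edges (2-element frozensets) per colour
    vertices = set(vertices)
    if len(vertices) < 2:
        return False            # convention: the null- and one-vertex d-graphs are not CC
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    def connected(edges):
        seen, stack = set(), [next(iter(vertices))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack += [w for w in vertices if w != u and frozenset((u, w)) in edges]
        return seen == vertices
    return all(connected(all_pairs - E_i) for E_i in components)

Pi = [{frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]},
      {frozenset(e) for e in [(2, 4), (4, 1), (1, 3)]}]
Delta = [{frozenset(e) for e in [(1, 2)]}, {frozenset(e) for e in [(2, 3)]},
         {frozenset(e) for e in [(3, 1)]}]
print(is_CC({1, 2, 3, 4}, Pi), is_CC({1, 2, 3}, Delta))   # True True
```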
It is also easily seen that \(\Pi\) and \(\Delta\) are minimal CC \(d\)-graphs, that is, they contain no proper induced CC sub-\(d\)-graphs. Moreover, there are no other minimal ones.
**Theorem 1**.: _([51, 53]). Every CC \(d\)-graph contains \(\Pi\) or \(\Delta\). _
**Remark 7**.: _In case of \(\Pi\), that is, for \(d=2\), the result was obtained earlier [31, 89, 92, 93]. It was one of the problems on the 1970 Moscow Mathematics Olympiad, which was successfully solved by seven high-school students [44]._
This theorem can be strengthened as follows:
**Theorem 2**.: _([14]). Every CC \(d\)-graph \(\mathcal{G}\), except \(\Pi\) and \(\Delta\), contains a vertex \(v\) such that the reduced \(d\)-graph \(\mathcal{G}[V\setminus\{v\}]\) is still CC. _
This statement was announced in [51, 53] and proven in [14], see also [56]. It implies that, by deleting vertices one by one, we can reduce every CC \(d\)-graph to a copy of \(\Pi\) or \(\Delta\).
In other words, the family \(\mathcal{F}=\mathcal{F}^{CC}\) of CC \(d\)-graphs is convex and the class \(\mathcal{M}=\mathcal{LM}\) contains only the 2-graph \(\Pi\) and the 3-graph \(\Delta\). Let us show that \(\mathcal{F}_{2}=\mathcal{F}^{CC}_{2}\) is not strongly convex. Since chromatic components of a \(d\)-graph may be empty, this also shows that \(\mathcal{F}^{CC}_{d}\) is not strongly convex for any \(d\geq 2\).
It is both known and easily seen [56] that the family of CC graphs, as well as CC \(d\)-graphs, is closed wrt substitution.
**Example 7**.: _Consider the \(2\)-graph \(\Pi\) on the vertex-set \(\{v_{1},v_{2},v_{3},v_{4}\}\) and substitute for \(v_{4}\) another 2-graph \(\Pi^{\prime}\) on the vertex-set \(\{v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},v^{\prime}_{4}\}\). The obtained \(2\)-graph \(\mathcal{G}=\Pi(v_{4}\rightarrow\Pi^{\prime})\) is CC, since \(\Pi\) and \(\Pi^{\prime}\) are CC and family \(\mathcal{F}_{d}\) is closed wrt substitution. Furthermore, it is easy to verify that the CC property disappears if we delete \(v_{1}\), \(v_{2}\), or \(v_{3}\) from \(\mathcal{G}\). Thus, we cannot reduce \(\mathcal{G}\) to \(\Pi^{\prime}\) keeping CC, which means that family \(\mathcal{F}^{CC}_{2}\) of the CC \(2\)-graphs is not strongly convex._
_However, we can reduce \(\mathcal{G}\) to \(\Pi\) keeping CC, in agreement with convexity of \(\mathcal{F}^{CC}_{2}\)._
### 3.4 Not CC \(d\)-graphs
Let us denote this family by \(\mathcal{F}_{d}=\mathcal{F}^{not-CC}_{d}\) and show that it is strongly convex but not weakly hereditary if \(d>1\). Of course, \(\mathcal{F}_{1}\) is hereditary.
A two-vertex \(d\)-graph, that is, a single edge, is CC only if \(d=1\).
**Proposition 3**.: _Assume that \(d>1\) and \(|V|\geq 2\). Each not CC \(d\)-graph \(\mathcal{G}=(V;E_{1},\ldots,E_{d})\) contains a vertex \(v\in V\) such that the sub-\(d\)-graph \(\mathcal{G}[V\setminus v]\) is still not CC._
Proof.: Since \(\mathcal{G}\) is not CC, there is a color \(i\in[d]\) such that the graph \(\overline{G_{i}}=(V,\overline{E_{i}})\) is not connected. As we know, one can eliminate vertices of \(V\) one by one keeping this property until \(V\) is reduced to two vertices. However, the obtained not CC \(d\)-graph is still not minimal, since, by convention, the null-\(d\)-graph and a one-vertex \(d\)-graph are not CC either. Thus, the null-\(d\)-graph is the only (locally) minimal not CC \(d\)-graph. So, by the last two steps, we reduce the obtained two-vertex not CC \(d\)-graph to a one-vertex \(d\)-graph and then to the null-\(d\)-graph; both are not CC. Thus, family \(\mathcal{F}_{d}\) is strongly convex for all \(d\).
Yet, it is not weakly hereditary whenever \(d>1\).
**Example 8**.: _Let us add a vertex \(v_{0}\) to \(\Pi\) or \(\Delta\) and connect it to all other vertices by edges of the same color. Clearly, the obtained \(d\)-graph is not CC if \(d>1\). Yet, deleting vertex \(v_{0}\) from it we obtain \(\Pi\) or \(\Delta\), which are both CC. Thus, family \(\mathcal{F}_{d}\) is not hereditary. Moreover, by Lemma 1, it is not weakly hereditary unless \(d=1\)._
### 3.5 CIS property of \(d\)-graphs
Given a \(d\)-graph \(\mathcal{G}=(V;E_{1},\ldots,E_{d})\), choose a maximal independent set \(S_{i}\subseteq V\) in every graph \(G_{i}=(V,E_{i})\) and denote by \(\mathcal{S}=\{S_{i}\mid i\in[d]=\{1,\ldots,d\}\}\) the obtained collection; furthermore set \(S=\bigcap_{i=1}^{d}S_{i}\). Obviously, \(|S|\leq 1\) for every \(\mathcal{S}\). Indeed, if \(v,v^{\prime}\in S\) then \((v,v^{\prime})\not\in E_{i}\) for all \(i\in[d]\), that is, this edge has no color.
We say that \(\mathcal{G}\) has the CIS property and call \(\mathcal{G}\) a _CIS \(d\)-graph_ if \(S\neq\emptyset\) for every selection \(\mathcal{S}\). CIS \(d\)-graphs were introduced in 2006 in [1]; see also [14, 17, 20, 33, 34, 53, 56, 57, 96, 97].
For \(d=2\), a \(2\)-graph consists of two complementary graphs \(G_{1}\) and \(G_{2}\) on the same vertex-set \(V\). In this case CIS property means that in \(G_{i}\) every maximal clique \(C\) intersects every maximal stable set \(S\) for \(i=1,2\). (This explains the name CIS.) Obviously, \(C\) and \(S\) may have at most one vertex in common.
### 3.6 Not CIS \(d\)-graphs
It is easy to verify that the \(d\)-graphs \(\Pi\) and \(\Delta\) are not CIS, while each of their proper sub-\(d\)-graphs is CIS; in other words, \(\Pi\) and \(\Delta\) are minimal not CIS \(d\)-graphs. Moreover, they are also locally minimal, and there are no others.
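This claim can also be verified by brute force. The sketch below (Python, our own helper names, suitable only for very small \(d\)-graphs) enumerates, for every colour \(i\), the maximal independent sets of \(G_{i}=(V,E_{i})\) and tests whether some selection has an empty intersection; it confirms that \(\Pi\) and \(\Delta\) are not CIS while, for instance, the sub-\(2\)-graph of \(\Pi\) on three vertices is CIS.

```python
from itertools import combinations, product

def maximal_independent_sets(vertices, edges):
    # brute force over all subsets; edges are 2-element frozensets
    vs = list(vertices)
    indep = [set(S) for r in range(len(vs) + 1) for S in combinations(vs, r)
             if all(frozenset(p) not in edges for p in combinations(S, 2))]
    return [S for S in indep if not any(S < T for T in indep)]

def is_CIS(vertices, components):
    # CIS: every choice of maximal independent sets, one per colour, intersects
    families = [maximal_independent_sets(vertices, E_i) for E_i in components]
    return all(set.intersection(*choice) for choice in product(*families))

Pi = [{frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]},
      {frozenset(e) for e in [(2, 4), (4, 1), (1, 3)]}]
Delta = [{frozenset(e) for e in [(1, 2)]}, {frozenset(e) for e in [(2, 3)]},
         {frozenset(e) for e in [(3, 1)]}]
print(is_CIS({1, 2, 3, 4}, Pi), is_CIS({1, 2, 3}, Delta))   # False False
sub = [{e for e in E_i if e <= {1, 2, 3}} for E_i in Pi]     # Pi minus the vertex v4
print(is_CIS({1, 2, 3}, sub))                                # True
```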
**Theorem 3**.: _([2]). Every not CIS \(d\)-graph \(\mathcal{G}=(V;E_{1}\ldots,E_{d})\), except \(\Pi\) and \(\Delta\), has a vertex \(v\in V\) such that the reduced \(d\)-graph \(\mathcal{G}[V\setminus\{v\}]\) is not CIS. _
In other words, for any \(d\geq 2\), family \(\mathcal{F}_{d}=\mathcal{F}_{d}^{not-CIS}\) of not CIS \(d\)-graphs is convex and class \(\mathcal{M}(\mathcal{F}_{d})=\mathcal{LM}(\mathcal{F}_{d})\) consists only of \(\Pi\) if \(d=2\) and of \(\Pi\) and \(\Delta\) if \(d>2\).
Interestingly, family \(\mathcal{F}_{d}^{CC}\) of CC \(d\)-graphs with \(d>1\) has the same properties: it is convex and class \(\mathcal{M}(\mathcal{F}_{d}^{CC})=\mathcal{LM}(\mathcal{F}_{d}^{CC})\) contains only \(\Pi\) and \(\Delta\). However, these two families differ, \(F_{d}=\mathcal{F}_{d}^{CC}\neq\mathcal{F}_{d}^{not-CIS}=F_{d}^{\prime}\). Moreover, both differences \(\mathcal{F}_{d}\setminus\mathcal{F}_{d}^{\prime}\) and \(\mathcal{F}_{d}^{\prime}\setminus\mathcal{F}_{d}\) are not empty, already for \(d=2\).
**Example 9**.: _The bull graph (also called the A-graph) is self-complementary; hence, the corresponding bull \(2\)-graph \(\mathcal{B}\) given on Figure 5 is both CC and CIS. Thus, \(\mathcal{B}\in\mathcal{F}_{2}\setminus\mathcal{F}_{2}^{\prime}\)._
_Consider \(2\)-graph \(\Pi\) colored by colors 1 and 2; add to it a new vertex \(v_{5}\) and connect it to four vertices of \(\Pi\) by four edges of the same color, say 1. It is easily seen that the obtained \(2\)-graph \(\mathcal{G}\) is not CC and not CIS. Thus, \(\mathcal{G}\in\mathcal{F}_{2}^{\prime}\setminus\mathcal{F}_{2}\)._
_The above two examples also show that both set-differences are not empty for every \(d\geq 2\), since chromatic components may be empty._
For each \(d\geq 2\) both families \(\mathcal{F}_{d}\) of CC \(d\)-graphs and \(\mathcal{F}_{d}^{\prime}\) of not CIS \(d\)-graphs are convex, and \(\mathcal{F}_{d}\) is not strongly convex, already for \(d=2\). It remains only to prove that \(\mathcal{F}_{2}^{\prime}\) is not strongly convex. It was shown in [2] that CIS \(d\)-graphs are closed wrt substitution.
**Example 10**.: _Consider the bull \(2\)-graph \(\mathcal{B}\) defined by the edges_
\((v_{1},v_{2}),(v_{2},v_{3}),(v_{3},v_{4}),(v_{2},v_{5}),(v_{3},v_{5})\) _of color \(1\),_
\((v_{2},v_{4}),(v_{4},v_{1}),(v_{1},v_{3}),(v_{1},v_{5}),(v_{4},v_{5})\) _of color \(2\)._
_Note that vertices \(\{v_{1},v_{2},v_{3},v_{4}\}\) induce a \(\Pi\). As we already mentioned, \(\mathcal{B}\) is a CIS \(2\)-graph, while \(\Pi\) is not. Let us substitute \(v_{5}\) in \(\mathcal{B}\) by \(2\)-graph \(\Pi^{\prime}\) defined by the edges:_
\((v_{1}^{\prime},v_{2}^{\prime}),(v_{2}^{\prime},v_{3}^{\prime}),(v_{3}^{\prime },v_{4}^{\prime})\) _of color \(1\),_
\((v_{2}^{\prime},v_{4}^{\prime}),(v_{4}^{\prime},v_{1}^{\prime}),(v_{1}^{ \prime},v_{3}^{\prime})\) _of color \(2\)._
_The resulting \(2\)-graph \(\mathcal{B}^{\prime}=\mathcal{B}(v_{5}\to\Pi^{\prime})\) is not CIS. Indeed, it is easily seen that the two disjoint vertex-sets \(C=\{v_{2},v_{3},v_{2}^{\prime},v_{3}^{\prime}\}\) and \(S=\{v_{1},v_{4},v_{1}^{\prime},v_{4}^{\prime}\}\) form in \(\mathcal{B}^{\prime}\) maximal cliques of colors 1 and 2, respectively._
_In contrast, we obtain a CIS \(2\)-graph, by substituting \(v_{5}\) in \(\mathcal{B}\) by a proper sub-\(2\)-graph \(\mathcal{G}\) of \(\Pi^{\prime}\). Indeed \(\Pi^{\prime}\) is minimal not CIS, hence, \(\mathcal{G}\) is CIS, \(\mathcal{B}\) is CIS too, and CIS \(d\)-graphs are closed wrt substitution._
_Summarizing, we conclude that \(2\)-graph \(\mathcal{B}^{\prime}\) is not CIS, but one obtains a CIS sub-\(2\)-graph by deleting any vertex \(v\in\{v_{1}^{\prime},v_{2}^{\prime},v_{3}^{\prime},v_{4}^{\prime}\}\) from \(\mathcal{B}^{\prime}\). Hence, if we want to stay in \(\mathcal{F}^{\prime}(\mathcal{B}^{\prime})\), we can only delete a vertex from \(V(\Pi)=\{v_{1},v_{2},v_{3},v_{4}\}\), keeping \(\Pi^{\prime}\) but destroying \(\Pi\). Thus, \(\mathcal{B}^{\prime}\) cannot be reduced to \(\Pi\) within \(\mathcal{F}^{\prime}(\mathcal{B}^{\prime})\), which means that family \(\mathcal{F}_{2}^{\prime}\) is not strongly convex._
_However, in agreement with convexity of family \(\mathcal{F}^{\prime}_{2}(\mathcal{B}^{\prime})\), one can reduce \(\mathcal{B}^{\prime}\) to \(\Pi^{\prime}\) staying within this family._
_Similarly, we can substitute \(v_{5}\) in \(\mathcal{B}\) by \(\Delta\), on vertices \(v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3}\), edge-colored arbitrarily. In particular, we can use colors 1, or 2, or any other. Again, it is not difficult to verify that the resulting \(d\)-graph \(\mathcal{B}^{\prime\prime}=\mathcal{B}(v_{5}\to\Delta)\) is not CIS. (No CIS \(d\)-graph containing \(\Delta\) is known, see Section 3.10.) Yet, deleting a vertex of \(\Delta\) from \(\mathcal{B}^{\prime\prime}\) we obtain a CIS sub-\(d\)-graph of \(\mathcal{B}^{\prime\prime}\). Indeed, \(\Delta\) is minimal not CIS, hence any of its sub-\(d\)-graphs is CIS, the bull \(2\)-graph \(\mathcal{B}\) is CIS too, and CIS \(d\)-graphs are closed wrt substitution. Hence, if we want to stay in \(\mathcal{F}^{\prime}_{3}(\mathcal{B}^{\prime\prime})\), then \(\mathcal{B}^{\prime\prime}\) can be reduced to the \(\Delta\) induced by \(v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3}\), but not to the \(\Pi\) induced by \(\{v_{1},v_{2},v_{3},v_{4}\}\). This disproves the strong convexity of \(\mathcal{F}^{\prime}_{3}(\mathcal{B}^{\prime\prime})\)._
### 3.7 \(\Pi\)- and \(\Delta\)-free \(d\)-graphs
Clearly, the family of \(\Pi\)- and \(\Delta\)-free \(d\)-graphs is hereditary; the class \(\mathcal{M}=\mathcal{LM}\) contains only the null-\(d\)-graph.
We know that for two different convex families, CC and not CIS \(d\)-graphs, class \(\mathcal{M}=\mathcal{LM}\) contains only \(d\)-graphs \(\Pi\) and \(\Delta\). Hence, the following three properties of a \(d\)-graph \(\mathcal{G}\) are equivalent:
(i) \(\mathcal{G}\) is \(\Pi\)- and \(\Delta\)-free;
(ii) \(\mathcal{G}\) contains no CC sub-\(d\)-graph;
(iii) \(\mathcal{G}\) contains only CIS subgraphs.
This result allows us to construct one-to-one correspondences between
(j) \(\Pi\)- and \(\Delta\)-free \(d\)-graphs;
(jj) vertex \(d\)-colored rooted trees;
(jjj) tight and rectangular game forms of \(d\) players.
For examples, see [56, Figures 1-6] and [1, Figures 11-13]. In turn, this result enables us to characterize read-once Boolean functions, when \(d=2\) [46, 50, 51, 53, 55], and normal forms of graphical \(d\)-person games modelled by trees [1, 51, 52, 53, 56, 57].
### 3.8 More on CIS \(d\)-graphs
Several examples of CIS \(d\)-graphs can be found in [1, Section 1.1, Figures 1,2,6]. For example, each \(\Pi\)- and \(\Delta\)-free \(d\)-graph is CIS. (Furthermore, according to \(\Delta\)-conjecture, no CIS \(d\)-graph contains \(\Delta\); see Section 3.10 below.)
We have no efficient characterization or recognition algorithm for CIS \(d\)-graphs, even for \(d=2\). The problem looks difficult because the family \(\mathcal{F}_{2}^{CIS}\) of CIS \(2\)-graphs is not hereditary. For example, the bull \(2\)-graph is CIS, but deleting its "top vertex" we obtain the sub-\(2\)-graph \(\Pi\), which is not CIS. Moreover, family \(\mathcal{F}_{2}^{CIS}\) is not even convex.
**Example 11**.: _For any integer \(n>2\), we will construct a \(2\)-graph \(\mathcal{G}_{n}\) such that \(\mathcal{G}_{n}\in\mathcal{LM}(\mathcal{F}_{2}^{CIS})\setminus\mathcal{M}(\mathcal{F}_{2}^{CIS})\). To do so, consider complete bipartite \(n\times n\) graph \(K_{n,n}\), its line graph \(G_{n}=L(K_{n,n})\), and its complement \(\overline{G_{n}}\). These two complementary graphs on the same vertex-set form the required \(2\)-graph \(\mathcal{G}_{n}=\mathcal{L}(\mathcal{K}_{n,n})\) for each \(n>2\); see [1, Figure 1.4] as an example for \(n=3\)._
_It is easy to verify that \(\mathcal{G}_{n}\) is a CIS \(2\)-graph, but for each \(v\in V(\mathcal{G}_{n})\) the reduced sub-\(2\)-graph \(\mathcal{G}_{n}[V\setminus v]\) is not CIS. Due to symmetry, it is enough to check this claim for just one arbitrary \(v\in V(\mathcal{G}_{n})\). Thus, \(\mathcal{G}_{n}\) is a locally minimal CIS \(2\)-graph. Yet, it is not minimal, since only the null-\(2\)-graph is. (Moreover, every \(2\)-graph with at most \(3\) vertices is CIS and the only minimal not CIS \(2\)-graph is \(\Pi\).)_
No efficient characterization of locally minimal CIS \(d\)-graphs is known. However, \(\Delta\)-conjecture, if true, would allow us to reduce arbitrary \(d\) to \(d=2\); see subsection 3.10 below.
Let us note finally that a \(2\)-graph \(\mathcal{G}=(V;E_{1},E_{2})\) is CIS whenever each maximal clique of its chromatic component \(G_{i}=(V,E_{i})\) has a simplicial vertex, for \(i=1\) or \(i=2\)[1]. Hence, every \(2\)-graph \(\mathcal{G}\) is a subgraph of a CIS \(2\)-graph \(\mathcal{G}^{\prime}\). This is easy to verify [1, Proposition 1 and Corollary 1]. Note, however, that the size of \(\mathcal{G}^{\prime}\) is exponential in the size of \(\mathcal{G}\).
Thus, CIS \(d\)-graphs cannot be described in terms of forbidden subgraphs, already for \(d=2\). This is not surprising, since this family is not hereditary.
Given a \(d\)-graph \(\mathcal{G}\) and a partition \(\mathcal{P}\) of its colors \([d]=\{1,\ldots,d\}\) into \(\delta\) non-empty subsets such that \(2\leq\delta\leq d\), merging the colors in each of these subsets we obtain a \(\delta\)-graph \(\mathcal{G}^{\prime}=\mathcal{G}^{\prime}(\mathcal{G},\mathcal{P})\), which we will call the _projection_ of \(\mathcal{G}\) wrt the color-merging \(\mathcal{P}\). It is not difficult to verify (see [1] for details) that:
* if \(\mathcal{G}\) is CIS then \(\mathcal{G}^{\prime}\) is, but not vice versa;
* if \(\mathcal{G}^{\prime}\) contains a \(\Delta\) then \(G\) does, but not vice versa;
* if \(\mathcal{G}^{\prime}\) contains a \(\Pi\) then \(\mathcal{G}\) contains a \(\Pi\) or \(\Delta\).
We can reformulate the first two claims as follows: \(\mathcal{G}^{\prime}\) is CIS or, respectively, Gallai's whenever \(\mathcal{G}\) is.
For \(\delta=2\) the first claim implies that merging an arbitrary set of chromatic components of a CIS \(d\)-graph results in a CIS graph.
### 3.9 Modular decomposition of Gallai's (\(\Delta\)-free) \(d\)-graphs
The operation of substitution \(G=G^{\prime}(v\to G^{\prime\prime})\) for graphs and \(\mathcal{G}=\mathcal{G}^{\prime}(v\to\mathcal{G}^{\prime\prime})\) for \(d\)-graphs was already defined above. It can be similarly introduced for multi-variable functions and for many other objects; see [75] for more details. In this paper \(G^{\prime\prime}\) and \(\mathcal{G}^{\prime\prime}\) are referred to as _modules_ and substitution as _modular decomposition_.
We say that a family \(\mathcal{F}\) of graphs or \(d\)-graphs is exactly closed wrt substitution if \(G\in\mathcal{F}\) if and only if \(G^{\prime},G^{\prime\prime}\in\mathcal{F}\) and, respectively, \(\mathcal{G}\in\mathcal{F}\) if and only if \(\mathcal{G}^{\prime},\mathcal{G}^{\prime\prime}\in\mathcal{F}\). It is both known and easy to verify that the following families are exactly closed wrt substitution: perfect, CIS, CC, and \(P_{4}\)-free graphs; CIS, CC, Gallai (\(\Delta\)-free), and \(\Pi\)- and \(\Delta\)-free \(d\)-graphs.
Recall that a graph \(G\) is called _CIS_ if \(C\cap S\neq\emptyset\) for every maximal clique \(C\) and maximal stable set \(S\) of \(G\). Given a CIS (respectively, CC) 2-graph \(\mathcal{G}=(V;E_{1},E_{2})\), each of its two chromatic components \(G_{i}=(V,E_{i}),\ i=1,2\) is a CIS (respectively, CC) graph. See more about CIS and CC graphs in [1, 2, 14, 17, 20, 33, 34, 53, 56, 57, 96, 97].
Perfect graphs are closed wrt complementation, by the Perfect Graph Theorem [70, 71]. Obviously, CIS, CC, and \(P_{4}\)-free graphs also have this property, just by definition.
Let \(\mathcal{F}\) be a family of Gallai \(d\)-graphs \(\mathcal{G}=(V;E_{1},\ldots,E_{d})\) such that the family \(\mathcal{F}^{\prime}\) of their chromatic components \(G_{i}=(V,E_{i}),\ i\in[d]=\{1,\ldots,d\}\) is (i) closed wrt complementation and (ii) exactly closed wrt substitution. For example, family \(\mathcal{F}^{\prime}\) that contains only perfect, or CIS, or CC, or \(P_{4}\)-free graphs has properties (i) and (ii).
Every \(d\)-graph \(\mathcal{G}\in\mathcal{F}\) with \(d\geq 3\) can be decomposed by two non-trivial \(d\)-graphs, that is, \(\mathcal{G}=\mathcal{G}^{\prime}(v\to\mathcal{G}^{\prime\prime})\), where the \(d\)-graphs \(\mathcal{G}^{\prime}\) and \(\mathcal{G}^{\prime\prime}\) are distinct from \(\mathcal{G}\) and from the trivial one-vertex \(d\)-graph.
As a corollary, we conclude that Gallai's \(d\)-graphs whose chromatic components have properties (i) and (ii) can be decomposed by the 2-colored such \(d\)-graphs. In other words, for Gallai's \(d\)-graphs, the case of arbitrary \(d\) can be reduced to \(d=2\). This statement follows from the results of Gyárfás and Simonyi [65], which, in turn, are based on the results
of Cameron, Edmonds, and Lovász [24, 25], Möhring [75], and Gallai [45]; see more details in [1, 2] and [56, Section 4].
For example, the modular decomposition of \(\Pi\)- and \(\Delta\)-free \(d\)-graphs has important applications in the theory of positional (graphical) \(n\)-person games modelled by trees; in particular, it is instrumental in characterizing the normal forms of these games [1], [49, Remark 3], [51, Chapter 5], and [52, 53, 56].
### 3.10 \(\Delta\)-conjecture
All CIS \(d\)-graphs are Gallai's, or in other words, \(d\)-graphs containing \(\Delta\) are not CIS.
This conjecture was suggested in [51, remark on page 71, after the proof of Claim 17]
and it remains open. Some partial results were obtained in [51]. In particular:
(i) Every \(\Pi\)- and \(\Delta\)-free \(d\)-graph is CIS.
(ii) It is sufficient to prove \(\Delta\) conjecture for \(3\)-graphs; then, it follows for \(d\)-graphs with arbitrary \(d\).
The second claim was proven by Andrey Gol'berg (private communication) in 1975 as follows: Consider the projection \(\mathcal{G}^{\prime}=\mathcal{G}^{\prime}(\mathcal{G},\mathcal{P})\) of \(\mathcal{G}\) wrt a color-merging \(\mathcal{P}\). As we know, \(\mathcal{G}^{\prime}\) is CIS whenever \(\mathcal{G}\) is. Suppose the \(\Delta\)-conjecture fails for \(\mathcal{G}\), in other words, \(\mathcal{G}\) is CIS but not Gallai, that is, it contains a \(\Delta\), say \(\Delta_{0}\). Consider a color-merging \(\mathcal{P}\) with \(\delta=3\) such that the three colors of \(\Delta_{0}\) are still pairwise distinct in \(\mathcal{P}\). Then the projection \(\mathcal{G}^{\prime}=\mathcal{G}^{\prime}(\mathcal{G},\mathcal{P})\) still contains a \(\Delta\), but \(\mathcal{G}^{\prime}\) is a CIS \(3\)-graph. Thus, the \(\Delta\)-conjecture fails for \(3\)-graphs too.
According to the previous subsection, each CIS \(d\)-graph is a modular decomposition (that is, a superposition of substitutions) of CIS \(2\)-graphs, modulo \(\Delta\)-conjecture. If it holds, studying CIS \(d\)-graphs is reduced to studying CIS graphs. Yet, the latter is still difficult.
Let us note that in the case of perfect, CC, or \(P_{4}\)-free chromatic components of a \(d\)-graph, we still have to require that it is \(\Delta\)-free; yet, in the case of CIS this requirement may be waived, modulo the \(\Delta\)-conjecture; see more details in [1, 20, 33, 34, 56, 57].
## 4 Finite two-person normal form games and game forms
In this section we consider matrices, that is, mappings \(M:I\times J\to R\), whose rows \(I=\{i_{1},\ldots,i_{n}\}\) and columns \(J=\{j_{1},\ldots,j_{m}\}\) are the strategies of Alice and Bob, respectively, while \(R\) may vary: it is real numbers \(\mathbb{R}\) or their pairs \(\mathbb{R}^{2}\) in case of matrix and bimatrix games, respectively, and \(R=\Omega=\{\omega_{1},\ldots,\omega_{k}\}\) is a finite set of outcomes in case of game forms. In all cases, \(\mathcal{P}=\mathcal{P}(M)=I\cup J\) is the ground set and \(\succ\) is the containment order over \(\mathcal{P}\); in other words, \(\mathcal{P}(M)\) consists of all submatrices of \(M\). By convention, we identify all elements of \(\mathcal{P}\) with \(I=\emptyset\) or \(J=\emptyset\): they correspond to the empty submatrix, which is the unique minimum in \((\mathcal{P},\succ)\).
### 4.1 Saddle points and Nash equilibria
#### 4.1.1 Saddle point free matrices
In this case \(R=\mathbb{R}\) is the set of real numbers. We assume that Alice is the maximizer and she controls the rows, while Bob is the minimizer and he controls the columns.
An entry of \(M\) is a _saddle point_ (SP) if and only if it is minimal in its row and maximal in its column (not necessarily strictly, in both cases). It is well known that a matrix has a SP if and only if its maxmin and minmax are equal. Obviously, a \(2\times 2\) matrix \(M\) has no SP if and only if one of its diagonals is strictly larger than the other, that is, \([r_{i_{1},j_{1}};r_{i_{2},j_{2}}]\cap[r_{i_{1},j_{2}};r_{i_{2},j_{1}}]=\emptyset\).
In 1964 Lloyd Shapley [90] proved that a matrix has a SP if (but not only if) each of its \(2\times 2\) submatrices has a SP. In other words, for the family \(\mathcal{F}\) of all SP free matrices, class \(\mathcal{M}(\mathcal{F})\) of the minimal SP free matrices consists of the \(2\times 2\) SP free matrices. This result was strengthened as follows:
**Theorem 4**.: _([16]). Every SP free matrix of size larger than \(2\times 2\) has a row or column such that it can be deleted and the remaining matrix is still SP free. _
In other words, \(\mathcal{LM}(\mathcal{F})=\mathcal{M}(\mathcal{F})\), that is, family \(\mathcal{F}\) is convex. Yet, it is not strongly convex, as the following example shows.
**Example 12**.: _Consider the following \(4\times 4\) 0,1-matrix \(M\):_
\[\begin{bmatrix}0&1&0&0\\ 1&0&0&0\\ 1&1&0&1\\ 1&1&1&0\end{bmatrix}\]
_The following observations are easy to verify:_
_Matrix \(M\) is SP free and it contains two (locally) minimal SP free \(2\times 2\) submatrices: the first one, \(M_{1}\), upper left, is determined by the first two rows and columns of \(M\), while the second one, \(M_{2}\), lower right, is determined by the last two rows and columns of \(M\)._
_Furthermore, if we eliminate one of the last (respectively, first) two rows or columns of \(M\), then a SP appears (respectively, it does not) in the obtained submatrix. Hence, keeping SP freeness one can reduce \(M\) to \(M_{2}\) but not to \(M_{1}\). The first claim is in agreement with convexity, while the second one disproves strong convexity of the family of SP free matrices._
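These observations can be confirmed with a few lines of code. The sketch below (Python; our own helper name, with Alice maximizing over rows and Bob minimizing over columns as above) tests for a saddle point and reproduces the claims of Example 12.

```python
def has_saddle_point(M):
    # SP: an entry that is minimal in its row and maximal in its column
    return any(M[i][j] == min(M[i]) and M[i][j] == max(row[j] for row in M)
               for i in range(len(M)) for j in range(len(M[0])))

M = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
M1 = [row[:2] for row in M[:2]]          # upper-left  2x2 submatrix
M2 = [row[2:] for row in M[2:]]          # lower-right 2x2 submatrix

print(has_saddle_point(M), has_saddle_point(M1), has_saddle_point(M2))  # False False False
print(has_saddle_point(M[:2] + M[3:]))   # True:  deleting the third row creates a SP
print(has_saddle_point(M[:3]))           # True:  deleting the fourth row creates a SP
print(has_saddle_point(M[1:]))           # False: deleting the first row keeps M SP free
```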
#### 4.1.2 Matrices with saddle points
**Proposition 4**.: _Given a matrix \(M\) with a SP, family \(\mathcal{F}=\mathcal{F}(M)\) of all submatrices of \(M\) with a SP is strongly convex._
Proof.: Obviously, class \(\mathcal{M}=\mathcal{M}(\mathcal{F}(M))\) consists of all \(1\times 1\) submatrices (that is, entries) of \(M\). By assumption \(M\in\mathcal{F}\); let \((i^{*},j^{*})\) be a SP in \(M\). Obviously, it remains a SP when we delete from \(M\) a row distinct from \(i^{*}\) or a column distinct from \(j^{*}\). Hence, \(\mathcal{M}=\mathcal{LM}\), or in other words, \(\mathcal{F}\) is convex.
To prove strong convexity, fix an arbitrary entry \((i_{0},j_{0})\) in \(M\) and reduce \(M\) deleting successively its rows, except \(i_{0},i^{*}\), and columns, except \(j_{0},j^{*}\). As we already mentioned, \((i^{*},j^{*})\) remains a SP. Consider three cases:
If \(i_{0}=i^{*}\) and \(j_{0}=j^{*}\), we arrive at the \(1\times 1\) submatrix \((i_{0},j_{0})=(i^{*},j^{*})\).
If \(i_{0}\neq i^{*}\) and \(j_{0}\neq j^{*}\), we arrive at the \(2\times 2\) submatrix formed by these two rows and columns. Note that \((i^{*},j^{*})\) is still a SP. Then delete row \(i^{*}\), getting a \(1\times 2\) submatrix; clearly, it has a SP, though no longer \((i^{*},j^{*})\). Finally, we delete column \(j^{*}\), getting \((i_{0},j_{0})\).
If \(i_{0}=i^{*}\) or \(j_{0}=j^{*}\) but not both, then we arrive at a \(1\times 2\) or \(2\times 1\) submatrix that consists of \((i_{0},j_{0})\) and \((i^{*},j^{*})\). Note that \((i^{*},j^{*})\) is still a SP. Then we delete it, getting \((i_{0},j_{0})\) in one step.
However, family \(\mathcal{F}(M)\) is not weakly hereditary.
**Example 13**.: _Consider matrix_
\[\begin{bmatrix}0&1&0\\ 0&0&1\end{bmatrix}\]
_It has two saddle points, both in the first column, but, by deleting this column, we obtain a SP free matrix._
#### 4.1.3 Absolutely determined matrices
A matrix is called _absolutely determined_ if each of its submatrices has a SP. By Shapley's theorem [90], this happens if and only if each \(2\times 2\) submatrix has a SP. This condition can be simplified in the case of symmetric matrices [59, 60, 61]. By definition, the considered family is hereditary.
#### 4.1.4 Nash equilibria free bimatrices
A bimatrix game \((A,B)\) is defined as a pair of mappings \(a:I\times J\to\mathbb{R}\) and \(b:I\times J\to\mathbb{R}\) that specify the utility (or payoff) functions of Alice and Bob, respectively. Now both players are maximizers.
A situation \((i,j)\in I\times J\) is called a _Nash equilibrium_ (NE) if no player can improve the result by choosing another strategy provided the opponent keeps the same strategy, that is, if \(a(i,j)\geq a(i^{\prime},j)\forall i^{\prime}\in I\) and \(b(i,j)\geq b(i,j^{\prime})\forall j^{\prime}\in J\).
In other words, \(i\) is a best response to \(j\) for Alice and \(j\) is a best response to \(i\) for Bob.
Clearly, Nash equilibria generalize saddle points, which correspond to the zero-sum case: \(a(i,j)+b(i,j)=0\) for all \(i\in I\) and \(j\in J\).
However, unlike SP free games, the minimal NE-free bimatrix games may be larger than \(2\times 2\). Let us recall an example from [60]. Consider a \(3\times 3\) bimatrix game \((A,B)\) such that
\[\begin{array}{l}b(i_{1},j_{1})>b(i_{1},j_{2})\geq b(i_{1},j_{3}),\\ b(i_{2},j_{3})>b(i_{2},j_{1})\geq b(i_{2},j_{2}),\\ b(i_{3},j_{2})>b(i_{3},j_{3})\geq b(i_{3},j_{1});\\ \\ a(i_{2},j_{1})>a(i_{1},j_{1})\geq a(i_{3},j_{1}),\\ a(i_{1},j_{2})>a(i_{3},j_{2})\geq a(i_{2},j_{2}),\\ a(i_{3},j_{3})>a(i_{2},j_{3})\geq a(i_{1},j_{3}).\end{array}\]
Naturally, for situations in the same row (respectively, column) the values of \(b\) (respectively, \(a\)) are compared, since Alice controls rows and has utility function \(a\), while Bob controls columns and has utility function \(b\).
It is easy to verify that:
\(b(i_{1},j_{1})\) is the unique maximum in row \(i_{1}\) and \(a(i_{1},j_{1})\) is a second largest in column \(j_{1}\);
\(b(i_{2},j_{3})\) is the unique maximum in row \(i_{2}\) and \(a(i_{2},j_{3})\) is a second largest in column \(j_{3}\);
\(b(i_{3},j_{2})\) is the unique maximum in row \(i_{3}\) and \(a(i_{3},j_{2})\) is a second largest in column \(j_{2}\);
\(a(i_{2},j_{1})\) is the unique maximum in column \(j_{1}\) and \(b(i_{2},j_{1})\) is a second largest in row \(i_{2}\);
\(a(i_{1},j_{2})\) is the unique maximum in column \(j_{2}\) and \(b(i_{1},j_{2})\) is a second largest in row \(i_{1}\);
\(a(i_{3},j_{3})\) is the unique maximum in column \(j_{3}\) and \(b(i_{3},j_{3})\) is a second largest in row \(i_{3}\).
Consequently, this game is NE-free, since no situation is simultaneously the best in its row wrt \(b\) and in its column wrt \(a\). Yet, if we delete a row or column then a NE appears. For example, let us delete \(i_{1}\). Then the situation \((i_{3},j_{2})\) becomes a NE. Indeed, \(b(i_{3},j_{2})\) is the largest in the row \(i_{3}\) and \(a(i_{3},j_{2})\) is a second largest in the column \(j_{2}\), yet, the largest, \(a(i_{1},j_{2})\), was deleted. Similarly, situations \((i_{1},j_{1}),(i_{2},j_{3}),(i_{1},j_{2}),(i_{3},j_{3}),(i_{2},j_{1})\) become NE after deleting lines \(i_{2},i_{3},j_{1},j_{2},j_{3}\), respectively.
Thus, \((A,B)\) is a locally minimal NE-free bimatrix game. Moreover, it is also minimal. Indeed, one can easily verify that all \(2\times 2\) subgames of \((A,B)\) have a NE and, of course, \(1\times 2,\ 2\times 1\), and \(1\times 1\) games always have it.
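To make the example fully concrete, here is one possible choice of payoffs satisfying the inequalities above (the particular numbers are ours; any values obeying the inequalities would do), together with a Python check that the \(3\times 3\) game is NE-free while deleting the first row creates the equilibrium \((i_{3},j_{2})\).

```python
def nash_equilibria(A, B):
    # (i, j) is a NE iff a(i, j) is maximal in column j and b(i, j) is maximal in row i
    return [(i, j) for i in range(len(A)) for j in range(len(A[0]))
            if A[i][j] == max(row[j] for row in A) and B[i][j] == max(B[i])]

# rows i1, i2, i3 and columns j1, j2, j3
A = [[1, 2, 0],          # Alice's payoffs a(i, j)
     [2, 0, 1],
     [0, 1, 2]]
B = [[2, 1, 0],          # Bob's payoffs b(i, j)
     [1, 0, 2],
     [0, 2, 1]]

print(nash_equilibria(A, B))          # []: the 3x3 game is NE-free
print(nash_equilibria(A[1:], B[1:]))  # [(1, 1)], i.e. (i3, j2) once row i1 is deleted
```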
In general, the following criterion of local minimality holds:
**Theorem 5**.: _([16, Theorem 3]) A bimatrix game \((A,B)\) is a locally minimal NE-free game if and only if it satisfies the following four conditions:_
_(i) it is square, that is, \(|I|=|J|=k\);_
_(ii) there exist two one-to-one mappings (permutations) \(\sigma:I\to J\) and \(\delta:J\to I\) such that their graphs, \(gr(\sigma)\) and \(gr(\delta)\), are disjoint in \(I\times J\), or in other words, if \((i,\sigma(i))\neq(\delta(j),j)\) for all \(i\in I\) and \(j\in J\);_
_(iiia) \(a(\delta(j),j)\) is the unique maximum in column \(j\) and a second largest (though not necessarily unique) in row \(\delta(j)\);_
_(iiib) \(b(i,\sigma(i))\) is the unique maximum in row \(i\) and a second largest (though not necessarily unique) in column \(\sigma(i)\). _
Thus, we have a simple explicit characterization of the class \(\mathcal{LM}\) of the locally minimal NE-free bimatrix games. However, not every such game is minimal. Indeed, the mappings \(\sigma\) and \(\delta\) define \(2k\) entries of a \(k\times k\) locally minimal NE-free bimatrix game. Yet, it may contain some smaller \(k^{\prime}\times k^{\prime}\) NE-free subgames; see [16, Section 3] for more details.
Thus, the family of NE-free bimatrix games is not convex. Interestingly, in this case class \(\mathcal{M}\) is more complicated than \(\mathcal{LM}\). In contrast to the latter, no good characterization of the former is known.
It is not difficult to verify that in the zero-sum case \(k\) cannot be larger than \(2\).
In contrast, bimatrix games with NE are similar to matrix games with SP: this family is strongly convex but not weakly hereditary. The proof from the previous subsection can be applied in this case.
### 4.2 Tightness
Given a finite set of outcomes \(\Omega\), let \(X\) and \(Y\) be finite sets of strategies of Alice and Bob, respectively. A mapping \(g:X\times Y\rightarrow\Omega\) is called a _game form_. One can view a game form as a game without payoffs, which are not given yet.
A game form is called _tight_ if its rows and columns form dual hypergraphs. Several equivalent definitions of tightness can be found, for example, in [62, 63]. Nine examples of game forms are given in Figure 1; the first six are tight, the last three are not.
Tightness is equivalent to SP-solvability [38, 48] and to NE-solvability [49, 54, 63].
#### 4.2.1 Not tight game forms
Since tightness and SP-solvability are equivalent, the Shapley Theorem implies that all minimal not tight game forms are of size \(2\times 2\): the two sets of outcomes corresponding to the two diagonals are disjoint. There are three such game forms:
\[\begin{bmatrix}\omega_{1}&\omega_{2}\\ \omega_{2}&\omega_{1}\end{bmatrix}\qquad\begin{bmatrix}\omega_{1}&\omega_{2}\\ \omega_{3}&\omega_{1}\end{bmatrix}\qquad\begin{bmatrix}\omega_{1}&\omega_{2}\\ \omega_{3}&\omega_{4}\end{bmatrix}\]
Fig. 9. Three not tight \(2\times 2\) game forms
This result was strengthened in [16], where it was shown that these three game forms are the only locally minimal not tight ones. In other words, family \(\mathcal{F}\) of not tight game forms is convex and class \(\mathcal{M}(\mathcal{F})=\mathcal{LM}(\mathcal{F})\) consists of the above three game forms.
Here we will strengthen this result further as follows.
**Theorem 6**.: _Family \(\mathcal{F}\) of not tight game forms is strongly convex but not weakly hereditary._
Proof.: Obviously, a game form with a single outcome is tight.
Consider game forms with two outcomes \(\omega_{1}=a\) and \(\omega_{2}=b\). Obviously, such a game form \(g\) is tight if and only if one of the following cases holds:
Case (rca): Game form \(g\) contains an \(a\)-row and an \(a\)-column, that is, there exists an \(x_{0}\in X\) and \(y_{0}\in Y\) such that \(g(x,y)=a\) whenever \(x=x_{0}\) or \(y=y_{0}\).
Case (rcb): Game form \(g\) contains a \(b\)-row and a \(b\)-column.
Case (rab): Game form \(g\) contains an \(a\)-row and a \(b\)-row.
Case (cab): Game form \(g\) contains an \(a\)-column and a \(b\)-column.
**Example 14**.: _To see that family \(\mathcal{F}\) is not weakly hereditary, consider the following \(4\times 4\) game form:_
\[\begin{bmatrix}a&b&a&b\\ b&a&a&b\\ a&a&a&b\\ b&b&b&a\end{bmatrix}\]
_Obviously, it is not tight but becomes tight if we delete the last row or column (or the last two rows or columns). In contrast, any other (that is, not last) row or column can be deleted and the obtained reduced game form remains not tight._
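Example 14 can be verified mechanically with the two-outcome criterion listed above (cases (rca), (rcb), (rab), (cab)); the following Python sketch is our own encoding of that check.

```python
def tight_two_outcomes(g):
    # g: list of rows over the outcomes 'a' and 'b'
    rows, cols = g, [list(c) for c in zip(*g)]
    const = lambda lines, x: any(set(line) == {x} for line in lines)
    return ((const(rows, 'a') and const(cols, 'a'))      # case (rca)
            or (const(rows, 'b') and const(cols, 'b'))   # case (rcb)
            or (const(rows, 'a') and const(rows, 'b'))   # case (rab)
            or (const(cols, 'a') and const(cols, 'b')))  # case (cab)

g = [list("abab"), list("baab"), list("aaab"), list("bbba")]

print(tight_two_outcomes(g))                      # False: g is not tight
print(tight_two_outcomes(g[:-1]))                 # True:  tight after deleting the last row
print(tight_two_outcomes([r[:-1] for r in g]))    # True:  tight after deleting the last column
print(tight_two_outcomes(g[1:]))                  # False: deleting another row keeps g not tight
```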
This example proves that family \(\mathcal{F}\) is not weakly hereditary, yet, it does not disprove strong convexity. Actually, family \(\mathcal{F}\) is strongly convex. To show this, consider a not tight game form \(g:X\times Y\to\Omega\) of size larger than \(2\times 2\) and fix a \(2\times 2\) not tight subform \(g^{*}:X^{*}\times Y^{*}\to\Omega\) in it, that is, \(|X^{*}|=|Y^{*}|=2\), \(X^{*}\subseteq X,\ Y^{*}\subseteq Y\), and at least one of these two containments is strict. Wlog, we can assume that \(g^{*}\) is formed by the first two rows and columns of \(g\).
It is enough to show that one can delete a row \(x\in X\setminus X^{*}\) or a column \(y\in Y\setminus Y^{*}\) such that the reduced game form \(g^{\prime}\) is still not tight.
It is both obvious and well-known that merging outcomes respects tightness. Hence, wlog, we can assume that \(\Omega=\{\omega_{1},\omega_{2}\}=\{a,b\}\) consists of two outcomes, and that \(g^{*}\) is
\[\begin{bmatrix}a&b\\ b&a\end{bmatrix}\]
It is also clear that adding an \(a\)-row or \(a\)-column to a tight game form respects its tightness. In other words, deleting an \(a\)-row or an \(a\)-column from a not tight game form \(g\) respects its non-tightness. By symmetry, the same holds for \(b\)-rows and \(b\)-columns as well.
Assume for contradiction that, for every row \(x\in X\setminus X^{*}\) and every column \(y\in Y\setminus Y^{*}\), the subform \(g^{\prime}\) obtained from \(g\) by deleting it is tight. Then, as we know, one of the cases (rca), (rcb), (rab), (cab) holds for each such \(g^{\prime}\). Assume wlog that we delete a column (rather than a row) and consider all four cases.
In case (rca), \(g^{\prime}\) has an \(a\)-column \(y^{\prime}\in Y\setminus Y^{*}\); since the columns of \(g^{\prime}\) are full columns of \(g\), it is an \(a\)-column of \(g\) as well. Deleting this column from \(g\) we obtain a not tight game form, which is a contradiction. By symmetry, case (rcb) is resolved too.
Case (cab) is trivial, since then both \(g^{\prime}\) and \(g\) are tight, which is a contradiction.
Thus, only case (rab) remains, and we can assume that after deleting each column \(y\in Y\setminus Y^{*}\) from \(g\) the obtained reduced submatrix \(g^{\prime}\) has both \(a\)- and \(b\)-rows. This is possible (only) when \(g\) contains \(4\) rows and \(3\) columns.
Yet, by symmetry, we can also assume that after deleting each row \(x\in X\setminus X^{*}\) from \(g\) the obtained reduced submatrix \(g^{\prime\prime}\) has both \(a\)- and \(b\)-columns.
This is already impossible. To see this, consider the \(4\times 4\) game form shown in Fig. 10, in which we can assign \(a\) or \(b\) arbitrarily to every symbol \(*\). It is not difficult to verify that every such assignment results in a contradiction: the obtained subform is either tight, or not
tight and among its last two rows and columns there is at least one whose deletion results in a subform that is still not tight.
#### 4.2.2 Tight game forms
Tightness is not hereditary. For example, game form \(g_{3}\) is tight but after deleting the last column one obtains \(g_{8}\), which is not tight.
Moreover, it was shown in [16] that the family \(\mathcal{F}\) of tight game forms is not convex. Class \(\mathcal{M}(\mathcal{F})\) contains only \(1\times 1\) game forms, while \(\mathcal{LM}(\mathcal{F})\) is a complicated class, which seems difficult to characterize; only some necessary and some sufficient conditions are obtained in [16]. Note that \(g_{3}\not\in\mathcal{LM}(\mathcal{F})\), since deleting its row keeps tightness.
#### 4.2.3 Totally tight game forms
A game form is called _totally tight (TT)_ if each of its subforms is tight; for example, \(g_{3}\) in Figure 8 is TT.
**Proposition 5**.: _([19]). Tightness of all \(2\times 2\) subforms already implies total tightness. _
Sketch of the proof. This result is implied by the following two criteria of solvability [48, 90]. The first states that a game has a SP if (but not only if) each of its \(2\times 2\) subgames has a SP [90]. The second claims that a game form is _zero-sum-solvable_ if and only if it is tight [48]; see more details in [54, 62, 63]. Let us note that the second result is implicit already in [38].
It is easily seen that the next three properties of the \(2\times 2\) game forms are equivalent:
(i) tightness, (ii) total tightness, (iii) presence of a constant row or column.
As we know, there are only three not tight \(2\times 2\) game forms; they are given in Figure 9.
Thus, the family of TT game forms is characterized by these three forbidden subforms. Hence, this family is hereditary.
It is also known that a game form is TT if and only if it is _acyclic_, that is, for arbitrary payoffs of two players the obtained game has no _improvement cycle_; see [19] for the proof and precise definitions.
An explicit recursive characterization of the TT game forms is also given in [19].
#### 4.2.4 Not totally tight game forms
As we already mentioned, a game form is not TT if and only if it contains at least one of the three not tight \(2\times 2\) subforms given in Figure 9. Thus, the family of not TT game forms is weakly hereditary. Obviously, it is not hereditary. Indeed, by deleting a row or column from a \(2\times 2\) game form one obtains a \(1\times 2\) or \(2\times 1\) game form, which is tight.
Figure 10: Case (rab), a contradiction.
## Acknowledgements
This research was performed in the framework of the HSE University Basic Research Program.
|
2302.04212 | Complete Graphical Language for Hermiticity-Preserving Superoperators | Universal and complete graphical languages have been successfully designed
for pure state quantum mechanics, corresponding to linear maps between Hilbert
spaces, and mixed states quantum mechanics, corresponding to completely
positive superoperators. In this paper, we go one step further and present a
universal and complete graphical language for Hermiticity-preserving
superoperators. Such a language opens the possibility of diagrammatic
compositional investigations of antilinear transformations featured in various
physical situations, such as the Choi-Jamio{\l}kowski isomorphism, spin-flip,
or entanglement witnesses. Our construction relies on an extension of the
ZW-calculus exhibiting a normal form for Hermitian matrices. | Titouan Carette, Timothée Hoffreumon, Émile Larroque, Renaud Vilmart | 2023-02-08T17:29:36Z | http://arxiv.org/abs/2302.04212v3 | # Complete Graphical Language for Hermiticity-Preserving Superoperators
###### Abstract
Universal and complete graphical languages have been successfully designed for pure state quantum mechanics, corresponding to linear maps between Hilbert spaces, and mixed states quantum mechanics, corresponding to completely positive superoperators. In this paper, we go one step further and present a universal and complete graphical language for Hermiticity-preserving superoperators. Such a language opens the possibility of diagrammatic compositional investigations of antilinear transformations featured in various physical situations, such as the Choi-Jamiolkowski isomorphism, spin-flip, or entanglement witnesses. Our construction relies on an extension of the ZW-calculus exhibiting a normal form for Hermitian matrices.
## 1 Introduction
Experimentally, all one can infer from a process theory is an outcome distribution. In the case of quantum theory, this distribution is given by the Born rule, stating that the probability of measuring state 1\(|\phi\rangle\) from a prepared state \(|\psi\rangle\) is \(\left|\langle\phi|\psi\rangle\right|^{2}\). A symmetry of the theory is a transformation of the states that leave this rule invariant, _i.e._\(T\) is a symmetry of quantum theory if it obeys
Footnote 1: We use the usual Dirac notation for complex vectors (\(|\Psi\rangle\in\mathbb{C}^{n}\) called _ket_) and their dual (\(\langle\Psi|=|\Psi\rangle^{\dagger}\), called _bra_, and where \(\dagger\) is the _dagger_, that is, the transpose conjugate or Hermitian adjoint). See [37] for more info.
\[\left|\langle\phi|\psi\rangle\right|^{2}=\left|\langle T\left(\phi\right)|T \left(\psi\right)\rangle\right|^{2}\,, \tag{1}\] |
2308.11122 | Families of isogenous elliptic curves ordered by height | Given a family of products of elliptic curves over a rational curve defined
over a number field $K$, and assuming that there exists no isogeny between the
pair of elliptic curves in the generic fiber, we establish an upper bound for
the number of special fibers with height at most $B$ where the two factors are
isogenous. Our proof provides an upper bound that is dependent on $K$, the
family, and the bound of height $B$. Furthermore, by introducing a slight
modification to the definition of the height of the parametrizing family, we
prove a uniform bound depends solely on the degree of the family, the field
$K$, and $B$. Based on the uniformity, and the fact that the idea of using
Heath-Brown type bounds on covers and optimizing the cover to count rational
points on specific algebraic families has not been exploited much yet, we hope
that the paper serves as a good example to illustrate the strengths of the
method and will inspire further exploration and application of these techniques
in related research. | Yu Fu | 2023-08-22T01:57:37Z | http://arxiv.org/abs/2308.11122v2 | # Families of isogenous elliptic curves ordered by height
###### Abstract
Given a family of products of elliptic curves over a rational curve over a number field \(K\), we give a bound for the number of special fibers of height at most \(B\) such that the two factors are isogenous. We prove an upper bound that depends on \(K\), the family, and the height \(B\). Moreover, if we slightly change the definition of the height of the parametrizing family, we prove a uniform bound that depends only on the degree of the family, \(K\) and the height \(B\).
## 1 Introduction
Let \(k\) be a field. Let \(X\to S\) be a family of algebraic varieties over \(k\), where \(S\) is irreducible, and let \(X_{\eta}\) be the generic fiber of this family. A natural question is which properties of \(X_{\eta}\) extend to the other fibers, and how one can measure the size of the set of specializations at which a given property fails to extend. For example, the Hilbert Irreducibility Theorem says that for a Galois covering \(X\to\mathbb{P}^{n}\) with Galois group \(G\) over a number field \(K\), for most rational points \(t\in\mathbb{P}^{n}(K)\) the specialization over \(t\) generates a Galois extension with Galois group \(G\). Moreover, the size of the complement, which can be considered as the locus of _'exceptional'_ points, can be bounded as in [1].
In [1], Ellenberg, Elsholtz, Hall, and Kowalski studied families of Jacobians of hyperelliptic curves defined over number fields by affine equations
\[y^{2}=f(x)(x-t),\ t\in\mathbf{A}^{1},\]
with the assumption that the generic fiber is geometrically simple. They proved that the number of geometrically non-simple fibers in this family, with the height of the parameter \(t\) at most \(B\), is bounded above by a constant \(C(f)\) depending on \(f\). Moreover, they obtained an effective bound, using analytic methods, that depends on the primes dividing the discriminant of \(f(x)\) and the genus of the family.
In this paper, we study families of pairs of elliptic curves defined over a rational curve over a number field \(K\). To be precise, let \(C\) be a rational curve over \(K\) isomorphic to \(\mathbb{P}^{1}\) which parametrizes a one-dimensional family of pairs of elliptic curves, and let \((E_{t},E_{t}^{\prime})\) be the generic fiber of this family over \(K(t)\), with the assumption that there exists no isogeny between \(E_{t}\) and \(E_{t}^{\prime}\). We prove an upper bound for the number of specializations of \((E_{t},E_{t}^{\prime})\) such that the two factors are isogenous, under the assumption that the height of the parameter satisfies \(H(t)\leq B\). The method relies on constructions of explicit covers of \(X(1)\times X(1)\) and on results on the dimension growth conjecture, as proven in [1]. Moreover, for a suitable choice of the height of \(t\), the upper bound depends only on the degree of the family and the height bound; see Theorem 1.3. This independence is an exciting aspect for possible further applications.
To discuss the results, we first fix the general setting and terminology (see §2 for more details). Define \(\iota\) to be the map from \(C\) to \(\mathbb{P}^{3}\) which is a composition of a finite map via the \(j\)-invariant, followed by the Segre embedding:
\[\iota:C\to X(1)\times X(1)\to\mathbb{P}^{1}\times\mathbb{P}^{1}\hookrightarrow \mathbb{P}^{3}. \tag{1.1}\]
To be precise, the map \(\iota\) sends \(t\) to
\[(E_{t},E_{t}^{\prime})\mapsto(j(E_{t}),j(E_{t}^{\prime}))\mapsto(j(E_{t})j(E_{t }^{\prime});j(E_{t});j(E_{t}^{\prime});1).\]
Degrees and heights are computed with respect to this fixed embedding. We prove the following theorem:
**Theorem 1.1**.: _Let \(K\) be a number field of degree \(d_{K}\). Let \(C\) be a rational curve over \(K\) isomorphic to \(\mathbb{P}^{1}\) which parametrizes a one-dimensional family of pairs of elliptic curves \((E,E^{\prime})\). Let \((E_{t},E_{t}^{\prime})\) be the generic fiber of this family over \(K(t)\), and suppose that there exists no \(\overline{K(t)}\)-isogeny between \(E_{t}\) and \(E_{t}^{\prime}\). Let \(d=\deg\iota^{*}\mathcal{O}_{\mathbb{P}^{3}}(1)\) be the degree of the parametrizing curve \(C\) defined with respect to \(\iota\). Let \(H(\iota)\) be the height of \(\iota\) defined by the projective height of the coefficients of the defining polynomials of \(j(E_{t})\) and \(j(E_{t}^{\prime})\). Let \(H:C(K)\to\mathbb{R}\) be the projective height defined over \(K\). Define \(S(B)\) to be the set_
\[S(B)=\{t\in C(K)|H(t)\leq B,\text{ there is an $\overline{\mathbb{Q}}$-isogeny between $E_{t}$ and $E_{t}^{\prime}$}\}.\]
_There is an absolute constant \(M\) such that for any \(B\geq M\), we have_
\[|S(B)|\lesssim_{K}d^{4+\epsilon}(\log H(\iota)+\log B)^{6}.\]
**Definition 1.2**.: For a point \(P_{t}\in C\) parametrized by \(t\in K\), define the height \(H(P_{t})\) to be the projective height of \(\iota(P_{t})\in\mathbb{P}^{3}\).
Note that in Theorem 1.1, \(H(t)\) is the height of \(t\) as an element of \(K\). If we change the definition of the height from \(H(t)\) to \(H(P_{t})\) and assume that \(H(P_{t})\leq B\), then we get a _uniform_ bound on the number of points \(t\) such that \(E_{t}\) and \(E_{t}^{\prime}\) are geometrically isogenous. Moreover, this uniform bound only depends on \(K\), the height \(B\), and the degree of the family.
**Theorem 1.3**.: _Assume the same hypothesis as in Theorem 1.1. Let \(S^{\prime}(B)\) be the set_
\[S^{\prime}(B)=\{t\in C(K)|H(P_{t})\leq B,\text{ there is an $\overline{\mathbb{Q}}$-isogeny between $E_{t}$ and $E_{t}^{\prime}$}\}.\]
_Then we have_
\[|S^{\prime}(B)|\lesssim_{K}d^{4}(\log B)^{6}.\]
### Relations with Unlikely Intersections
Although Theorem 1.1 indicates that isogenous pairs are sparse in such a family, one should emphasize that this is not an unlikely intersection problem on its own. There are infinitely many \(t\in\overline{\mathbb{Q}}\) such that \(E_{t}\) and \(E_{t}^{\prime}\) are isogenous! However, since we are working over a fixed number field \(K\), this number has to be finite. Nevertheless, one may consider questions in more general settings.
Let \(S\) be a GSpin Shimura variety and denote by \(\{Z_{i}\}_{i}\) a sequence of special divisors on \(S\). Let \(C\hookrightarrow S\) be a curve whose generic fiber has _maximal monodromy_.
**Definition 1.4**.: Define the set \(Z(C)\) to be the intersection of \(C\) with the infinite union of the special divisors
\[Z(C)=C\cap\bigcup_{i}Z_{i}.\]
If \(C\) is a curve over \(\mathbb{C}\), then the set \(Z(C)\) is infinite, which is a classical result dating back to the 1980s. In the work [10], Maulik, Shankar, and Tang proved a similar result for curves \(C\) over \(\overline{\mathbb{F}}_{p}\). The result also holds if \(C=\operatorname{Spec}\mathcal{O}_{K}\) where \(K\) is a number field, as proved by Shankar-Shankar-Tang-Tayou in [11].
Since \(X(1)\times X(1)\) is a GSpin Shimura variety, one can take the special divisors to be \(Z_{n}=Y_{0}(n)\), which parametrizes \(n\)-isogenous pairs of elliptic curves. Our Theorem 1.3 can be reformulated as follows:
**Theorem 1.5**.: _Let \(K\) be a number field of degree \(d_{K}\). Let \(C\subset X(1)\times X(1)\) be a rational curve defined over \(K\) parametrizing a family of non-isotrivial and generically non-isogenous elliptic curves. Let \(d\) be the projective degree of \(C\) defined by \(\iota\). For a positive integer \(B\) define the set \(Z(C;B)\) to be the set of \(K\)-valued points on \(C\) such that_
\[Z(C;B)=\{x\in Z(C)\mid H(\iota(x))\leq B\}.\]
_We have_
\[|Z(C;B)|\lesssim_{K}d^{4}(\log B)^{6}.\]
The discussion above suggests the following question (which we do not claim to know the answer to):
**Question:** Suppose \(C\) is a curve defined over \(\overline{\mathbb{Q}}\) and let \(Z(C;B)\) denote the set of \(\overline{\mathbb{Q}}\)-valued points in \(Z(C)\) whose absolute height is bounded above by \(B\). One may ask if \(Z(C;B)\) is finite. If this is the case, can we get an upper bound in terms of the bounded height \(B\)?
### Non-simple abelian varieties in a family
Theorem 1.1 can be considered as a generalization of the results of Ellenberg, Elsholtz, Hall, and Kowalski. In work in progress, we hope to generalize their main results [1, Theorem A, Theorem B] to any family of abelian varieties parametrized by an irreducible rational curve. To be explicit, let \(X\subset\mathcal{A}_{g}/\mathbb{Q}\) be a curve that parametrizes a \(1\)-dimensional family of abelian varieties, where we denote by \(A_{x}\) the fiber of this family over \(x\in X\). Let \(d\) be the projective degree of \(X\). We aim to obtain a uniform bound for the number of \(x\in X(\mathbb{Q})\) such that \(A_{x}\) is geometrically non-simple, using a method similar to the one used in this article. By contrast, all upper bounds for the number of non-simple fibers studied in the previous literature depend on \(X\), and a uniform upper bound that does not depend on \(X\) would follow, albeit non-explicitly, from Lang's conjecture via the result of Caporaso-Harris-Mazur [1], as explained in [1].
**Organization of the paper.** In §2, we recall the notion of heights on projective spaces, Hecke correspondence, the modular diagonal quotient surfaces, and some results on the dimension growth conjectures, which we will use later. In §3, we interpret the counting problem into counting rational points on projective curves with certain level structures and construct 'nice' Galois covers that capture the information of being isogenous. In §4, we construct certain projective embeddings with respect to the covers in §3, such that the dimension growth conjecture applies. In §5, we give an upper bound on the change of heights between covers so that one can bound the height of the lifting points. Finally, we prove Theorem 1.1 and Theorem 1.3 in §6.
**Acknowledgments.** The author wishes to thank Jordan Ellenberg for many helpful suggestions. The author also thanks Asvin G. and Ananth Shankar for useful conversations.
## 2 Preliminaries
This section introduces the notations, definitions, and geometric objects for future use. Also, we recall some results in arithmetic geometry, primarily a result of the dimension growth conjectures, that play an essential role in our proofs.
Let \(C\) be a rational curve over \(K\) isomorphic to \(\mathbb{P}^{1}\) parametrizing a one-dimensional family of pairs of elliptic curves, and let \((E_{t},E^{\prime}_{t})\) be the generic fiber of this family over \(K(t)\), with the assumption that there exists no isogeny between \(E_{t}\) and \(E^{\prime}_{t}\). Without loss of generality, one may write \(E_{t}\) and \(E^{\prime}_{t}\) in the Weierstrass form
\[E_{t}:y^{2}=x^{3}+f(t)x+g(t) \tag{2.1}\]
\[E^{\prime}_{t}:y^{2}=x^{3}+f^{\prime}(t)x+g^{\prime}(t) \tag{2.2}\]
where \(f(t)\), \(g(t)\), \(f^{\prime}(t)\) and \(g^{\prime}(t)\) are rational functions over \(K\). Therefore, the \(j\)-invariants of \(E_{t}\) and \(E^{\prime}_{t}\), denoted by \(j(E_{t})\) and \(j(E^{\prime}_{t})\), are also rational functions.
### Heights of points
For a rational point \(a\in K\), define the height \(H(a)\) of \(a\) to be
\[H(a):=\prod_{v\in M_{K}}\max\{1,|a|_{v}\}^{\frac{n_{v}}{[K:\mathbb{Q}]}}\]
where \(n_{v}=[K_{v}:\mathbb{Q}_{v}]\).
The height of a \(K\)-rational point \(P\in\mathbb{P}^{n}\) with homogeneous coordinates \(P=(x_{0},\cdots,x_{n})\) is defined to be
\[H(P)=\prod_{v\in M_{K}}\max\{|x_{0}|_{v},\cdots,|x_{n}|_{v}\}^{\frac{n_{v}}{[K:\mathbb{Q}]}}\]
where \(n_{v}=[K_{v}:\mathbb{Q}_{v}].\) Recall that we define \(\iota\) to be the map obtained via the \(j\)-invariant and via the Segre embedding in \(\mathbb{P}^{3}\), see (1.1).
Recall the definition of \(H(P_{t})\) in Definition 1.2. The following lemma indicates that
\[H(P_{t})=H(j(E_{t}))H(j(E^{\prime}_{t})).\]
**Lemma 2.1**.: _Let \(\sigma_{n}\) be the Segre embedding of \(n\)-copies of \(\mathbb{P}^{1}\)_
\[\sigma_{n}:\underbrace{\mathbb{P}^{1}\times\mathbb{P}^{1}\times\ldots\times \mathbb{P}^{1}}_{\text{$n$-times}}\hookrightarrow\mathbb{P}^{2^{n}-1}\]
_such that for a point \((x_{1},\cdots,x_{n})\in\mathbb{P}^{1}\times\cdots\times\mathbb{P}^{1}\)_
\[\sigma_{n}(x_{1},\cdots,x_{n})=(\prod_{1\leq i\leq n}x_{i},\prod_{i_{1}<i_{2}< \cdots<i_{n-1}}x_{i_{1}}\cdots x_{i_{n-1}},\ldots,x_{1},\ldots,x_{n},1).\]
_Let \(H(.)\) be the projective height defined above. We have_
\[H(\sigma_{n}(x_{1},\cdots,x_{n}))=H(x_{1})\cdots H(x_{n}).\]
Proof.: By definition of the projective height,
\[H(\sigma_{n}(x_{1},\cdots,x_{n}))=\prod_{v\in M_{K}}\max\{|\prod_{1\leq i\leq n}x_{i}|_{v},\cdots,|x_{1}|_{v},\cdots,|x_{n}|_{v},1\}^{\frac{n_{v}}{[K:\mathbb{Q}]}}\]
and
\[H(x_{1})\cdots H(x_{n})=\prod_{v\in M_{K}}\{\max\{1,|x_{1}|_{v}\}\cdots\max\{1,|x_{n}|_{v}\}\}^{\frac{n_{v}}{[K:\mathbb{Q}]}}.\]
A direct observation shows that for each \(v\in M_{K}\),
\[\max\{|\prod_{1\leq i\leq n}x_{i}|_{v},\cdots,|x_{1}|_{v},\cdots,|x_{n}|_{v},1\}= \max\{1,|x_{1}|_{v}\}\cdots\max\{1,|x_{n}|_{v}\}.\]
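To make the height computations above concrete, here is a minimal Python sketch for the base field \(K=\mathbb{Q}\), where the projective height reduces to clearing denominators and taking the largest coordinate after dividing out the gcd. It checks the multiplicativity of Lemma 2.1 on a sample point; the specific rational numbers are arbitrary choices for illustration.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def height_rational(a):
    # H(a) = max(|p|, q) for a = p/q in lowest terms (the case K = Q).
    a = Fraction(a)
    return max(abs(a.numerator), a.denominator)

def height_projective(coords):
    # H(x_0 : ... : x_N) over Q: clear denominators, divide out the gcd,
    # then take the largest absolute value of the integer coordinates.
    coords = [Fraction(c) for c in coords]
    lcm = reduce(lambda l, c: l * c.denominator // gcd(l, c.denominator), coords, 1)
    ints = [abs(int(c * lcm)) for c in coords]
    g = reduce(gcd, ints)
    return max(i // g for i in ints)

def segre(xs):
    # Coordinates of the Segre embedding of the points (x_i : 1): all products
    # of the x_i over subsets of the index set (the empty product gives 1).
    coords = [Fraction(1)]
    for x in xs:
        coords = [c * x for c in coords] + coords
    return coords

xs = [Fraction(3, 7), Fraction(-10, 9), Fraction(22, 5)]
lhs = height_projective(segre(xs))
rhs = 1
for x in xs:
    rhs *= height_rational(x)
print(lhs, rhs)  # both equal 1540, as predicted by Lemma 2.1
```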
### The modular diagonal quotient surfaces
Later in this article, we will define the modular surface \(X_{\hat{H}_{\Delta}}(m)\) (see (3.1)) that lies in a special family, called the _modular diagonal quotient surfaces_, which arise naturally as the (coarse) moduli space to the moduli problem that classifies isomorphisms between mod \(m\) Galois representations attached to pairs of elliptic curves \(E/K\).
The modular curve \(X(m)\) is a Galois cover of \(X(1)\) with Galois group
\[G=SL_{2}(\mathbb{Z}/m\mathbb{Z})/\{\pm 1\}.\]
Let \(\epsilon\) be an element in \((\mathbb{Z}/m\mathbb{Z})^{\times}\). Let \(\alpha_{\epsilon}\) be the automorphism of \(G\) defined by conjugation with \(Q_{\epsilon}=\left(\begin{smallmatrix}\epsilon&0\\ 0&1\end{smallmatrix}\right)\), i.e. \(\alpha_{\epsilon}(g)=Q_{\epsilon}gQ_{\epsilon}^{-1}\). The product surface \(X(m)\times X(m)\) carries an action of the twisted diagonal subgroup of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) defined by
\[\Delta_{\epsilon}=\{(g,\alpha_{\epsilon}(g)):g\in G\}.\]
**Definition 2.2**.: The twisted diagonal quotient surface defined by \(\alpha_{\epsilon}\) is the quotient surface \(X_{\epsilon}:=\Delta_{\epsilon}\backslash X(m)\times X(m)\) obtained by the action of \(\Delta_{\epsilon}\).
\(X_{\epsilon}\) can be viewed as the moduli space of the triple \((E_{1},E_{2},Q)\), where
\[Q:E_{1}[m]\stackrel{{\sim}}{{\to}}E_{2}[m]\]
multiplies the Weil pairing by \(\epsilon\).
The modular diagonal quotient surfaces and their modular interpretation are widely used in studying Mazur's question [14, 15], Frey's conjecture [16], and so on.
### Uniform bound for points of bounded height on curves
Let \(X\) be an irreducible projective curve defined over a number field \(K\) with a degree \(d\) embedding into \(\mathbb{P}^{n}\). Denote by \(N(X,B)\) the number of \(K\)-rational points on \(X\subseteq\mathbb{P}^{n}\) of projective height at most \(B\). Heath-Brown proved a uniform bound for rational points of bounded height on such a curve \(X\) in [12, Theorem 5], which states that
\[N(X,B)\leq O_{\epsilon}(B^{2/d+\epsilon}).\]
The removal of the factor \(B^{\epsilon}\) (without even a \(\log B\) loss) for irreducible projective curves was proved by Walsh in [17], by combining the determinant method based on \(p\)-adic approximation introduced by Heath-Brown [12] with the method of Ellenberg and Venkatesh [10]. A uniform upper bound on \(N(X,B)\) with an explicit polynomial dependence on \(d\) was proved by Castryck, Cluckers, Dittmann, and Nguyen [1]:
**Theorem 2.3**.: _[_1_, Theorem 2]_ _Given \(n\geq 1\), there exists a constant \(c=c(n)\) such that for all \(d>0\) and all integral projective curves \(X\subseteq\mathbb{P}^{n}_{\mathbb{Q}}\) of degree \(d\) and all \(B\geq 1\) one has_
\[|N(X,B)|\leq cd^{4}B^{2/d}.\]
A year later, Paredes and Sasyk [11] extended the work of Castryck, Cluckers, Dittmann, and Nguyen to give uniform estimates for the number of rational points of bounded height on projective varieties defined over global fields. More precisely, they proved the following extension of [1, Theorem 2] to global fields.
**Theorem 2.4**.: [10, Theorem 1.8] _Let \(K\) be a global field of degree \(d_{K}\). Let \(H\) be the absolute projective multiplicative height. For any integral projective curve \(C\subseteq\mathbb{P}_{K}^{N}\) of degree \(d\) it holds_
\[|\{\boldsymbol{x}\in C(K):H(\boldsymbol{x})\leq B\}|\lesssim_{K,N}\begin{cases} d^{4}B^{\frac{2d_{K}}{d}}&\text{ if $K$ is a number field,}\\ d^{8}B^{\frac{2d_{K}}{d}}&\text{ if $K$ is a function field.}\end{cases}\]
We will discuss this result in §6, where we apply it.
## 3 Construct Galois Covering with Level Structures
In this section, we construct 'nice' Galois coverings of \(X(1)\times X(1)\) with level structures. To be precise, for a large enough rational prime \(m\), these are quotients of \(X(m)\times X(m)\) by certain kinds of subgroups of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\), with the property that one can lift a point \(t\in S(B)\) to one of the quotients. In other words, the Galois coverings capture points that parametrize isogenous pairs \((E_{t},E_{t}^{\prime})\) in the family. The main theorem in this section is Lemma 3.11.
We assume that \(m\) is a rational prime such that \(m\geq M^{\prime}=\max\{17,M_{1}\}\), where \(M_{1}\) is the absolute integer appearing in Theorem 3.4. Once \(B\) is fixed, we also want \(m\) to be a prime between \(2(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\) and \(4(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\); such a prime exists by Bertrand's postulate, and we will make this choice precise in §6. This condition requires \(4(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\geq M^{\prime}\), and thus we can always assume \(B\) is greater than the absolute integer \(M=e^{(\frac{M^{\prime}}{4})^{2}}\) to make everything go through. Let \(X(m)\times X(m)\) be the surface parametrizing \(4\)-tuples \((E,E^{\prime},\phi,\phi^{\prime})\), where \(E\) and \(E^{\prime}\) are elliptic curves and \(\phi\), \(\phi^{\prime}\) are \(m\)-level structures, i.e.
\[\phi:E[m]\to(\mathbb{Z}/m\mathbb{Z})^{2},\phi^{\prime}:E^{\prime}[m]\to( \mathbb{Z}/m\mathbb{Z})^{2}\]
are isomorphisms of group schemes preserving the Weil pairing. Let \(H\) be a subgroup of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) and define \(X_{H}\) to be the quotient
\[X_{H}:=(X(m)\times X(m))/H. \tag{3.1}\]
\(p\)**-torsion monodromy representations and the lifting criterion.** Let \(E\) be an elliptic curve over a number field \(K\) and let \(p\) be a prime. Denote by
\[\rho_{E,p}:G_{K}\to\operatorname{Aut}(E[p])\]
the \(p\)-torsion Galois representation associated to the \(p\)-torsion points \(E[p]\) of the elliptic curve \(E\). It is a standard fact that the elliptic curve \(E/K\) admits an isogeny of degree \(p\) defined over \(K\) if and only if the image \(\rho_{E,p}\left(G_{K}\right)\) is contained in a Borel subgroup of \(\operatorname{Aut}(E[p])\). If \(E/K\) and \(E^{\prime}/K\) are related by an isogeny over \(K\) of degree coprime to \(p\), then this isogeny induces a \(G_{K}\)-module isomorphism \(E[p]\simeq E^{\prime}[p]\), which identifies the images \(\rho_{E,p}\left(G_{K}\right)\) and \(\rho_{E^{\prime},p}(G_{K})\) up to change of basis. See [11] for an explicit description of the images of \(p\)-torsion Galois representations attached to the product of two isogenous elliptic curves with an isogeny of degree \(p\).
**Definition 3.1**.: Define \(H_{\Delta}\) to be the image of the diagonal map
\[\Delta:SL_{2}(\mathbb{Z}/m\mathbb{Z})\to SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL _{2}(\mathbb{Z}/m\mathbb{Z}).\]
We will prove, in Proposition 3.7, that there exists an isogeny \(\phi_{t}:E_{t}\to E_{t}^{\prime}\) defined over \(\overline{\mathbb{Q}}\) if and only if, for all but finitely many \(p\), the monodromy of the \(p\)-torsion Galois representation on \(E_{t}[p]\times E_{t}^{\prime}[p]\) is contained in a group \(\tilde{H}_{\Delta}\) which contains \(H_{\Delta}\) as an index \(2\) subgroup.
_Remark 3.2_.: It is a classical result that for elliptic curves defined over a number field \(K\), there are finitely many \(j\)-invariants with complex multiplication in \(K\). Denote this number by \(C(K)\). We need to bound the number of points \(P_{t}\) on \(C\) whose \(j\)-invariant lies in this set. There are at most \(C(K)^{2}\) points \(P_{t}\in C\) such that \(\iota(P_{t})=(j(E_{t}),j(E_{t}^{\prime}))\) contains CM \(j\)-invariants. Therefore in our setting, we can discard them and focus on pairs of elliptic curves without complex multiplication.
**Lemma 3.3**.: _Let \(E_{1}\) and \(E_{2}\) be two elliptic curves without complex multiplication over a number field \(K\). If there exists an isogeny \(\phi:E_{1}\to E_{2}\) defined over \(K\) then the \(p\)-torsion Galois representation of \(E_{1}\times E_{2}\)_
\[\operatorname{Gal}(\bar{K}/K)\to GL_{2}(\mathbb{F}_{p})\times GL_{2}( \mathbb{F}_{p})\]
_has image conjugate to \(H_{\Delta}\) for primes \(p\) not dividing the degree of \(\phi\)._
Proof.: First, by Serre's open image theorem, the \(p\)-torsion Galois representation of each factor
\[\operatorname{Gal}(\bar{K}/K)\to GL_{2}(\mathbb{F}_{p})\]
is surjective for all large enough primes \(p\).
Suppose \(E_{1}/K\) and \(E_{2}/K\) are related by an isogeny over \(K\) of degree \(d\). Then for all primes \(p\nmid d\) this isogeny induces a \(G_{K}\)-module isomorphism from \(E_{1}[p]\) to \(E_{2}[p]\), which identifies the images \(\rho_{E_{1},p}\left(G_{K}\right)\) and \(\rho_{E_{2},p}(G_{K})\). The lemma follows.
Note that Lemma 3.3 is a statement about pairs of elliptic curves over number fields. Later, in the proof of Proposition 4.9, we need the analogous result over the function field \(K(t)\), which classifies elliptic curves up to isogeny by their \(p\)-torsion Galois representations. We recall a beautiful theorem of Bakker and Tsimerman [1, Theorem 1].
**Theorem 3.4**.: _Let \(k\) be an algebraically closed field of characteristic \(0\). For any \(N>0\), there exists \(M_{N}>0\) such that for any prime \(p>M_{N}\) and any smooth quasi-projective curve \(U\) of gonality \(n<N\), non-isotrivial elliptic curves \(\mathcal{E}\) over \(U\) are classified up to isogeny by their \(p\)-torsion local system \(\mathcal{E}[p]\)._
**Definition 3.5**.: Let \(\tilde{H}_{\Delta}\) be the maximal subgroup of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) that contains \(H_{\Delta}\) as an index \(2\) subgroup.
The following lemma, together with the fact that if \(F\) is a field with more than \(5\) elements then the only proper normal subgroup of \(SL_{2}(F)\) is the group \(\{\pm 1\}\), proves that \(\tilde{H}_{\Delta}\) is the unique proper subgroup of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) that contains \(H_{\Delta}\) as an index \(2\) subgroup.
**Lemma 3.6**.: _Let \(A\) be a group and let \(G=A\times A\). Define \(\Delta=\{(a,a)\mid a\in A\}\) as the diagonal subgroup of \(G\). If \(\Delta\leq H\leq G\) then there exists a normal subgroup \(N\) of \(A\) such that \(H=\{(g,h)\in G\mid gh^{-1}\in N\}\)._
Proof.: Let \(N=\{h\in A\mid(h,1)\in H\}\); note that \(N\) is a subgroup of \(A\). We claim that \(N\) is the desired normal subgroup. Indeed, for any \(a\in A\) and \((h,1)\in H\), we have \((aha^{-1},1)=(a,a)(h,1)(a^{-1},a^{-1})\in H\); therefore \(N\) is a normal subgroup of \(A\).
For any \(a,a^{\prime}\in A\), we have \((aa^{\prime-1},1)(a^{\prime},a^{\prime})=(a,a^{\prime})\). Therefore \((a,a^{\prime})\in H\) if and only if \((aa^{\prime-1},1)\in H\), if and only if \(aa^{\prime-1}\in N\) by the definition of \(N\).
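As a quick sanity check of Lemma 3.6, the following Python sketch brute-forces the smallest non-abelian case \(A=S_{3}\): every subgroup of \(A\times A\) containing the diagonal is a union of cosets of the diagonal, so the enumeration below is exhaustive, and each subgroup found is confirmed to be of the form \(\{(g,h):gh^{-1}\in N\}\) for a normal subgroup \(N\) of \(A\).

```python
from itertools import permutations, combinations

# A = S_3 as permutation tuples; composition (p*q)(i) = p[q[i]].
A = list(permutations(range(3)))
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p): return tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)

def gmul(x, y): return (mul(x[0], y[0]), mul(x[1], y[1]))
G = [(g, h) for g in A for h in A]              # G = A x A, |G| = 36
diag = frozenset((a, a) for a in A)             # the diagonal subgroup

def is_subgroup(S):
    # A finite subset containing the identity and closed under products is a subgroup.
    return (e, e) in S and all(gmul(x, y) in S for x in S for y in S)

# Enumerate all unions of cosets of the diagonal that contain the diagonal itself.
cosets = {frozenset(gmul(x, d) for d in diag) for x in G}
others = [c for c in cosets if c != diag]       # the 5 nontrivial cosets
found = []
for r in range(len(others) + 1):
    for choice in combinations(others, r):
        H = set(diag).union(*choice)
        if is_subgroup(H):
            found.append(frozenset(H))

# Compare with the subgroups predicted by Lemma 3.6, one per normal subgroup of S_3.
def sgn(p): return 1 - 2 * (sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2)
normals = [frozenset([e]), frozenset(p for p in A if sgn(p) == 1), frozenset(A)]
def H_from_N(N): return frozenset((g, h) for g in A for h in A if mul(g, inv(h)) in N)
print(len(found), set(found) == {H_from_N(N) for N in normals})  # 3 True
```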
**Proposition 3.7**.: _Let \(E_{1}\) and \(E_{2}\) be elliptic curves without complex multiplication over \(\mathbb{Q}\). There exists an isogeny \(\phi:E_{1}\to E_{2}\) defined over \(\overline{\mathbb{Q}}\) if and only if the \(p\)-torsion Galois representation_
\[\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to GL_{2}(\mathbb{F}_{p}) \times GL_{2}(\mathbb{F}_{p})\]
_has image contained in \(\tilde{H}_{\Delta}\), for all primes \(p\) not dividing the degree of \(\phi\)._
Proof.: We need the following lemma:
**Lemma 3.8**.: _Let \(E_{1}\), \(E_{2}\) be elliptic curves without complex multiplication defined over a number field \(K\). If \(E_{1}\) and \(E_{2}\) are isogenous over \(\overline{\mathbb{Q}}\) then there exists a quadratic twist of \(E_{2}\) that is isogenous to \(E_{1}\) over \(K\)._
Suppose there exists an isogeny \(\varphi:E_{1}\to E_{2}\) over \(\overline{\mathbb{Q}}\). Then by Lemma 3.8 and Lemma 3.3, there is a quadratic extension \(L/K\) such that \(G_{L}\) has diagonal image in \(GL_{2}(\mathbb{F}_{p})\times GL_{2}(\mathbb{F}_{p})\). Therefore the image of \(G_{K}\) is contained in a subgroup of \(GL_{2}(\mathbb{F}_{p})\times GL_{2}(\mathbb{F}_{p})\) which contains \(H_{\Delta}\) as an index \(2\) subgroup. For the other direction, we take the preimage of \(H_{\Delta}\), which can be written in the form \(G_{F}\) for some number field \(F\) that is quadratic over \(K\). By Proposition 2.19, there is an isogeny \(\varphi:E_{1}\to E_{2}\) defined over \(F\), which completes the proof.
Proof of Lemma 3.8.: Let \(\varphi:E_{1}\to E_{2}\) be an isogeny over \(\overline{\mathbb{Q}}\) and write \(G_{\overline{\mathbb{Q}}/K}=\operatorname{Gal}(\overline{\mathbb{Q}}/K)\). For every \(g\in G_{\overline{\mathbb{Q}}/K}\), the conjugate \(\varphi^{g}\) is another isogeny \(E_{1}\to E_{2}\) of the same degree as \(\varphi\). Since \(E_{1}\) and \(E_{2}\) have no complex multiplication, there exists a cyclic isogeny \(\varphi:E_{1}\to E_{2}\), unique up to sign, and every isogeny \(\psi\) from \(E_{1}\) to \(E_{2}\) can be written as \(\psi=\varphi\circ[m]\) for some integer \(m\); comparing degrees, we get \(\varphi^{g}=[\alpha(g)]\circ\varphi\) with \(\alpha(g)=\pm 1\). Hence \(\alpha:G_{\overline{\mathbb{Q}}/K}\to\{\pm 1\}\) is a quadratic character, and there exists \(d\in K^{*}\) such that \(\alpha(g)=g(\sqrt{d})/\sqrt{d}\) for all \(g\). Thus the quadratic twist of \(E_{2}\) by \(d\) is the desired twist.
**Definition 3.9**.: Let \(H_{\mathrm{p}}:=H_{1}\times H_{2}\), where \(H_{1}\) and \(H_{2}\) (possibly \(H_{1}=H_{2}\)) are maximal parabolic subgroups of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\).
_Remark 3.10_.: All maximal parabolic subgroups of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\) are in the same conjugacy class. Therefore the covers constructed in this way are all isomorphic to each other. If \(t\in S(B)\) lifts to one, it lifts to all.
**Lemma 3.11**.: _Any rational point \(t\in S(B)\) for some \(B\) admits a lifting to one of the congruence covers: \(X_{\tilde{H}_{\Delta}}(K)\) or \(X_{H_{p}}(K)\)._
Proof.: By the argument above, when there is a \(K\)-isogeny whose degree is not divisible by \(m\), it induces a \(G_{K}\)-isomorphism \(E_{t}[m]\simeq E_{t}^{\prime}[m]\), which implies an isomorphism of the mod \(m\) Galois images up to a change of basis, given by conjugating by an element \((1,g)\) with \(g\in SL_{2}(\mathbb{Z}/m\mathbb{Z})\). Applying Proposition 3.7, we conclude that if \(m\) does not divide the degree of the isogeny, then the Galois image lies in \(\tilde{H}_{\Delta}\), so \(t\) lifts to \(X_{\tilde{H}_{\Delta}}(K)\). As for the rest of the lemma, when \(m\) divides the degree of the isogeny, the Galois images of the \(m\)-torsion monodromy representations of both \(E_{t}\) and \(E_{t}^{\prime}\) are contained in Borel subgroups of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\). Since any Borel subgroup is maximal parabolic in \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\), \(t\) lifts to \(X_{H_{\mathrm{p}}}(K)\). This proves Lemma 3.11.
## 4 Geometric Interpretation and Projective Embeddings
In this section, we explore the geometric interpretation of the question and construct projective embeddings, which allow us to transform the question into counting rational points with bounded height in projective spaces.
**Definition 4.1**.: Let \(H\) be a subgroup of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) such that \(H\) is either \(\tilde{H}_{\Delta}\) or \(H_{\mathrm{p}}\). Define \(C_{H}\) to be the lifting of \(C\) to the modular surface \(X_{H}:=(X(m)\times X(m))/H\), that is, the fiber product \(C_{H}=C\times_{X(1)\times X(1)}X_{H}\) obtained by pulling back \(C\to X(1)\times X(1)\) along \(X_{H}\to X(1)\times X(1)\).
### The curve \(C_{H}\) is integral
In order to apply the result of Paredes and Sasyk, we need to prove that the lifting \(C_{H}\) of \(C\) is integral for large enough \(m\). We need Goursat's lemma:
**Lemma 4.2**.: [1, Theorem 5.5.1] _Let \(G\) and \(H\) be groups, and let \(K\) be a subdirect product of \(G\) and \(H\); that is, \(K\leq G\times H\), and \(\pi_{G}(K)=G,\pi_{H}(K)=H\), where \(\pi_{G}\) and \(\pi_{H}\) are the projections onto the first and second factor, respectively, from \(G\times H\). Let \(N_{1}=K\cap\ker\left(\pi_{G}\right)\) and \(N_{2}=K\cap\ker\left(\pi_{H}\right)\). Then \(N_{2}\) can be identified with a normal subgroup \(N_{G}\) of \(G\), \(N_{1}\) can be identified with a normal subgroup \(N_{H}\) of \(H\), and the image of \(K\) in \(G/N_{G}\times H/N_{H}\) is the graph of an isomorphism \(G/N_{G}\cong H/N_{H}\)._
Proof.: See [1, Theorem 5.5.1].
Also, we prove the following proposition:
**Proposition 4.3**.: _For all \(m\geq 17\), the Galois image of the \(m\)-torsion monodromy representation on each of \(E_{t}[m]\) and \(E_{t}^{\prime}[m]\) in the generic fiber is \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\)._
Proof.: Suppose the Galois image of the \(m\)-torsion monodromy representation is some proper subgroup \(G\) of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\). Then we have a dominant map \(f:C\to X(m)/G\). Since the genus of \(C\) is zero, this implies that the genus of the modular curve \(X(m)/G\) is zero.
Let \(\mathcal{N}(m)\) be the quantity such that
\[\mathcal{N}(m):=\min\{\mathrm{genus}(X(m)/G)\mid G\subsetneq SL_{2}(\mathbb{ Z}/m\mathbb{Z}),\ G\ \mathrm{maximal}\}.\]
Cojocaru and Hall proved the genus formula for \(X(m)/G\) for all possible maximal subgroup \(G\) of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\), which is summarized in a table [1, Table 2.1]. Moreover, they proved that
\[\mathcal{N}(m)=\frac{1}{12}\left[m-(6+3\mathrm{e}_{2}+4\mathrm{e}_{3})\right]>0\]
for \(m\geq 17\). The proposition follows.
Now we are ready to prove that \(C_{H}\) is irreducible. This follows as a consequence of Proposition 4.3:
**Lemma 4.4**.: _For each choice of \(H\), the curve \(C_{H}\) is integral._
Proof.: We have the covering map \(q_{\tilde{U}}:\tilde{U}\to U\), where \(U\) is the connected dense open subset of \(C\) parametrizing smooth points. In the proof of Proposition 4.9 below, we show, using Proposition 4.3 and Lemma 4.2, that the Galois image of the \(m\)-torsion monodromy representation of \(\pi_{1}(U)\) is the full group \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\). Therefore the monodromy of \(\pi_{1}(U)\) acts transitively on the right cosets \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})/H\), which implies that the cover \(C_{H}\) is connected for both \(H=\tilde{H}_{\Delta}\) and \(H=H_{\mathrm{p}}\). The lemma follows from the fact that the quotient space of a connected space is connected.
### Construct projective embeddings
In this subsection, we give an explicit construction of projective embeddings of \(C_{H}\), denoted by \(\iota_{H}\). We make the following diagram commute for each case's choice of \(N\in\mathbb{Z}\). Here the rational map \(\mathbb{P}^{N}\dashrightarrow\mathbb{P}^{3}\) is a projection of coordinates of \(\mathbb{P}^{N}\).
\((4.1)\) [Commutative diagram: horizontal maps \(\iota_{H}:C_{H}\to\mathbb{P}^{N}\) and \(\iota:C\to\mathbb{P}^{3}\); vertical maps \(q:C_{H}\to C\) and the coordinate projection \(\mathbb{P}^{N}\dashrightarrow\mathbb{P}^{3}\).]
**Case I: The modular diagonal quotient surfaces.**
Recall the definition of the modular diagonal quotient surfaces in §2.4. In our case where \(\epsilon=1\), \(X_{H_{\Delta}}(m)(K)\) has the moduli interpretation that it is the set of isomorphism classes of triples \((E_{1},E_{2},\psi)\), where \(E_{1}\), \(E_{2}\) are elliptic curves over \(K\) and \(\psi:E_{1}[m]\stackrel{{\sim}}{{\to}}E_{2}[m]\) is an isomorphism of the \(m\)-torsion subgroups of the elliptic curves which preserves the Weil pairing. Let \(t\) be a rational point of \(C(K)\) such that there exists a point \((E_{t},E^{\prime}_{t},\psi:E_{t}[m]\stackrel{{\sim}}{{\to}}E^{\prime}_{t}[m])\) which is a point of \(X_{H_{\Delta}}(m)(K)\). One may notice that it is not obvious that \(C_{\tilde{H}_{\Delta}}\) is connected, and we will address this point in Lemma 4.4.
We now define some functions on \(C_{\tilde{H}_{\Delta}}\) in order to apply the result from [1] and to bound the number of points on \(C_{\tilde{H}_{\Delta}}\).
For a fixed \(t\in K\), there is a list of elliptic curves isogenous to \(E_{t}\) through cyclic isogenies of degree \(m\) given by the list of cyclic subgroups of \(E_{t}[m]\), say
\[E_{t,1},\cdots,E_{t,m+1}.\]
Similarly we have a list of \(m\)-cyclic subgroups of \(E^{\prime}_{t}[m]\) parametrizing the \(m\)-cyclic isogenies of \(E^{\prime}_{t}\), with the corresponding list of elliptic curves isogenous to \(E^{\prime}_{t}\):
\[E^{\prime}_{t,1},\cdots,E^{\prime}_{t,m+1}.\]
_Remark 4.5_.: These isogenies will most likely _not_ be defined over \(K\) unless \(E_{t}[m]\) or \(E^{\prime}_{t}[m]\) has a rational cyclic subgroup. Even then, most of them will not be defined over \(K\).
For each point \((E_{t},E^{\prime}_{t},\psi)\) on \(C_{\tilde{H}_{\Delta}}\), we have \(m+1\) cyclic subgroups of \(E_{t}[m]\) and \(m+1\) cyclic subgroups of \(E^{\prime}_{t}[m]\) which can be placed in natural bijection with each other under \(\psi\). We can re-order the lists for \(E^{\prime}_{t}\) such that \(E_{t,1}\) is in correspondence with \(E^{\prime}_{t,1}\) and so on.
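For a prime \(m\), the count of \(m+1\) cyclic subgroups used here is simply the number of lines in \(E_{t}[m]\cong(\mathbb{Z}/m\mathbb{Z})^{2}\), i.e. the number of points of \(\mathbb{P}^{1}(\mathbb{Z}/m\mathbb{Z})\). A short Python check for the sample value \(m=5\) (the choice of \(m\) is arbitrary):

```python
# Each nonzero vector of (Z/mZ)^2 generates a cyclic subgroup of order m (m prime);
# distinct subgroups correspond to lines through the origin, of which there are m + 1.
m = 5
vectors = [(a, b) for a in range(m) for b in range(m) if (a, b) != (0, 0)]
subgroups = {frozenset(((k * a) % m, (k * b) % m) for k in range(m)) for (a, b) in vectors}
print(len(subgroups), m + 1)  # 6 6
```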
**Definition 4.6**.: Let \(F\) be a function defined on \(X_{\tilde{H}_{\Delta}}\) given by
\[F(E_{t},E^{\prime}_{t},\psi)=j(E_{t,1})j(E^{\prime}_{t,1})+\cdots+j(E_{t,m+1} )j(E^{\prime}_{t,m+1}).\]
**Lemma 4.7**.: \(F\) _is defined over \(\mathbb{Q}\). Moreover, \(F\) is an element of the function field \(\mathbb{Q}(X_{\tilde{H}_{\Delta}})\)._
Proof.: For any \(g\in\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\), \(t\in\overline{\mathbb{Q}}\), denote by \((t,\psi)\) one of the preimages on \(X_{\tilde{H}_{\Delta}}\).
\[F((t,\psi)^{g})=\Sigma_{i=1}^{m+1}j(E_{t^{g},i})j(E^{\prime}_{t^{g},i})= \Sigma_{i=1}^{m+1}(j(E_{t,i})j(E^{\prime}_{t,i}))^{g}.\]
The second equality holds since \(\psi^{g}=\psi\) and for an arbitrary \(1\leq i\leq m+1\), there is a unique \(k\) such that
\[(j(E_{t,i}))^{g}=j(E_{t,k})\]
and once we fix \(i\), it is also true that
\[(j(E^{\prime}_{t,i}))^{g}=j(E^{\prime}_{t,k})\]
for the same \(k\).
**Definition 4.8**.: Define \(\iota^{\prime}_{\tilde{H}_{\Delta}}:C_{\tilde{H}_{\Delta}}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\) to be the function given by
\[\iota^{\prime}_{\tilde{H}_{\Delta}}((E_{t},E^{\prime}_{t},\psi))=(F(E_{t},E^{\prime}_{t},\psi),j(E_{t}),j(E^{\prime}_{t})).\]
**Proposition 4.9**.: _For \(m\geq 17\), \(\iota^{\prime}_{\tilde{H}_{\Delta}}\) is generically an embedding of \(C_{\tilde{H}_{\Delta}}\) into \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\). This is equivalent to saying that the subfield \(M\subset K(C_{\tilde{H}_{\Delta}})\) generated by \(j(E_{t})\), \(j(E^{\prime}_{t})\) and \(F\) is the whole function field._
The proof of Proposition 4.9 splits into two parts. The first part is to show that \(M\) is not contained in \(K(C)\). The second part is to show there is no intermediate extension between \(K(C)\) and \(K(C_{\tilde{H}_{\Delta}})\), so that \(K(C)\subseteq M\subseteq K(C_{\tilde{H}_{\Delta}})\) and \(M\neq K(C)\) implies \(M=K(C_{\tilde{H}_{\Delta}})\).
First, we present some representation-theoretic lemmas which we will use later. Let \(G\) be a finite group and let \(V\) and \(V^{\prime}\) be permutation representations of \(G\) of the same dimension, such that \(G\) acts \(2\)-transitively on the corresponding finite sets. It is a standard fact that such a permutation representation \(V\) (resp. \(V^{\prime}\)) is the direct sum of the trivial representation and an irreducible representation, namely the subspace of vectors whose coordinates sum to \(0\). Denote the irreducible subrepresentation by \(V_{0}\) (resp. \(V^{\prime}_{0}\)). We prove that if \(v\in V\), \(v^{\prime}\in V^{\prime}\) are such that \(\langle v,v^{\prime}\rangle=\langle v,(v^{\prime})^{g}\rangle\) for any \(g\in G\), then either \(v\) or \(v^{\prime}\) is fixed by \(G\). We first prove the following lemma.
**Lemma 4.10**.: _If \(v\in V\) (resp. \(v^{\prime}\in V^{\prime}\)) is not in the trivial representation generated by \([1,\cdots,1]^{T}\), then the set \(\{v-v^{g}\}\) (resp. \(\{v^{\prime}-(v^{\prime})^{g}\}\)), as \(g\) ranges over \(G\), spans \(V_{0}\) (resp. \(V^{\prime}_{0}\))._
Proof.: Let \(W_{v}\) (resp. \(W_{v^{\prime}}\)) be the subspace \(\mathrm{Span}\{v-v^{g}\}\) (resp. \(\mathrm{Span}\{v^{\prime}-(v^{\prime})^{g}\}\)) as \(g\) ranges over \(G\). We claim that \(W_{v}\) is a subrepresentation of \(V_{0}\), the irreducible summand of the permutation representation cut out by the condition that the sum of the coordinates equals \(0\). Indeed, for any \(\sigma,\tau\in G\), one has
\[(v-v^{\sigma})^{\tau}=v^{\tau}-v^{\sigma\tau}=(v-v^{\sigma\tau})-(v-v^{\tau}).\]
Moreover, \(W_{v}\neq 0\): the subspace of \(G\)-fixed vectors of a transitive permutation representation is spanned by \([1,\cdots,1]^{T}\), so \(v\) is not fixed by \(G\) and some \(v-v^{g}\) is nonzero. The lemma then follows from the fact that \(V_{0}\) (resp. \(V^{\prime}_{0}\)) is irreducible.
**Lemma 4.11**.: _If \(v\in V\), \(v^{\prime}\in V^{\prime}\) such that \(\langle v,v^{\prime}\rangle=\langle v,(v^{\prime})^{g}\rangle\) for any \(g\in G\), then either \(v\) or \(v^{\prime}\) is fixed by \(G\)._
Proof.: If \(\langle v,v^{\prime}\rangle=\langle v,(v^{\prime})^{g}\rangle\) for all \(g\in G\), then
\[\langle v,v^{\prime}-(v^{\prime})^{g}\rangle=0\]
for all \(g\in G\).
If \(W_{v^{\prime}}=0\), then \(v^{\prime}=(v^{\prime})^{g}\) for every \(g\in G\), and thus \(v^{\prime}\) is fixed by \(G\). If \(W_{v^{\prime}}\neq 0\), then by Lemma 4.10 the vectors \(v^{\prime}-(v^{\prime})^{g}\) span the space \(V^{\prime}_{0}\), which is the orthogonal complement of the trivial representation. Therefore
\[v\in W_{v^{\prime}}^{\perp}=\mathbb{C}\langle[1,\cdots,1]^{T}\rangle,\]
thus fixed by \(G\).
Now we apply Lemma 4.11 to the case where \(G=SL_{2}(\mathbb{Z}/m\mathbb{Z})\) and where \(V\) and \(V^{\prime}\) are the \((m+1)\)-dimensional permutation representations of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\) spanned by the vectors \(v_{t}\) and \(v_{t}^{\prime}\) respectively, as \(t\) ranges over \(C\). For \(t\in C\) such that \(E_{t}\) and \(E_{t}^{\prime}\) are both non-singular, define \(v_{t}\) and \(v_{t}^{\prime}\) by
\[v_{t}=(j(E_{t,1}),\cdots,j(E_{t,m+1})) \tag{4.2}\]
\[v_{t}^{\prime}=(j(E_{t,1}^{\prime}),\cdots,j(E_{t,m+1}^{\prime})). \tag{4.3}\]
It is easy to see that \(V\) and \(V^{\prime}\) are permutation representations of \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\) via its action on the basis; identifying the basis with the \(m+1\) cyclic subgroups, i.e. with \(\mathbb{P}^{1}(\mathbb{Z}/m\mathbb{Z})\), this action is in fact \(2\)-transitive, as required for Lemma 4.10 and Lemma 4.11.
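The \(2\)-transitivity required in the lemmas can be checked by brute force for a small prime; here is a Python sketch for \(m=5\) (in general it is the standard \(2\)-transitivity of \(PSL_{2}\) acting on the projective line).

```python
from itertools import product

p = 5
# Points of P^1(F_p): [x : 1] for x in F_p, together with the point at infinity [1 : 0].
points = [(x, 1) for x in range(p)] + [(1, 0)]

def normalize(v):
    x, y = v[0] % p, v[1] % p
    return ((x * pow(y, -1, p)) % p, 1) if y else (1, 0)

def act(g, pt):
    a, b, c, d = g
    return normalize((a * pt[0] + b * pt[1], c * pt[0] + d * pt[1]))

SL2 = [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]

# 2-transitivity: every ordered pair of distinct points is in the orbit of ([0:1], [1:0]).
orbit = {(act(g, (0, 1)), act(g, (1, 0))) for g in SL2}
pairs = {(P, Q) for P in points for Q in points if P != Q}
print(orbit == pairs)  # True
```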
Recall the definition of \(F\) in 4.6. We may also write \(F\) as an inner product:
\[F(E_{t},E_{t}^{\prime},\psi)=\langle v_{t},v_{t}^{\prime}\rangle=j(E_{t,1})j(E _{t,1}^{\prime})+\cdots+j(E_{t,m+1})j(E_{t,m+1}^{\prime}).\]
Note that \(F\) is defined on \(C_{\tilde{H}_{\Delta}}\) by restriction, but \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\) does not act on \(C_{\tilde{H}_{\Delta}}\). Since \(C_{\tilde{H}_{\Delta}}\) is connected by Lemma 4.4, we may define \(C^{\prime}\) as the Galois closure of \(C_{\tilde{H}_{\Delta}}\) over \(C\); equivalently, \(C^{\prime}\) is the pullback of \(C\) all the way to \(X(m)\times X(m)\), see diagram (4.4). Therefore the pullback of \(F\) to \(X(m)\times X(m)\) is a function on \(C^{\prime}\) with \(F^{\sigma}=F\) for all \(\sigma\in\tilde{H}_{\Delta}\). We now prove that \(F\) is not defined on \(C\).
\((4.4)\) [Commutative diagram: the tower \(C^{\prime}\to C_{\tilde{H}_{\Delta}}\to C\) lying over \(X(m)\times X(m)\to X_{\tilde{H}_{\Delta}}\to X(1)\times X(1)\).]
**Lemma 4.12**.: \(F\) _is not defined on \(C\)._
Proof.: For each \(\gamma\in SL_{2}(\mathbb{Z}/m\mathbb{Z})\), the action of \(\gamma\) on \(F\) is given by
\[F(E_{t},E_{t}^{\prime},\psi)^{\gamma}=\langle v_{t},(v_{t}^{\prime})^{\gamma}\rangle.\]
If \(F\) is defined on \(C\), then for every \(\gamma\in SL_{2}(\mathbb{Z}/m\mathbb{Z})\) we have \(F^{\gamma}=F\) therefore \(\langle v_{t},v_{t}^{\prime}\rangle=\langle v_{t},(v_{t}^{\prime})^{\gamma}\rangle\). By Lemma 4.11, either \(v_{t}\) or \(v_{t}^{\prime}\) is fixed by \(SL_{2}(\mathbb{Z}/m\mathbb{Z})\). But this implies either
\[j(E_{t,1})=\cdots=j(E_{t,m+1})\]
or
\[j(E_{t,1}^{\prime})=\cdots=j(E_{t,m+1}^{\prime}).\]
Since there exists \(t\in C\) such that \(E_{t}\) and \(E_{t}^{\prime}\) are both non-CM elliptic curves, this cannot happen. Therefore \(F\) cannot be defined on \(C\).
Now we are ready to prove the generic injectivity of \(\iota^{\prime}_{\tilde{H}_{\Delta}}\). We show that no intermediate cover exists between \(C_{\tilde{H}_{\Delta}}\) and \(C\) by a monodromy argument.
Proof of Proposition 4.9.: By Lemma 4.12, \(F\) is not defined on \(C\).
Recall that \(C_{\tilde{H}_{\Delta}}\) is defined to be the cover of \(C\) constructed by pulling back \(C\to X(1)\times X(1)\) in the previous context. Let \(U\subset C\) be the dense open locus parametrizing smooth points, i.e., pairs of genuine elliptic curves. The etale fundamental group \(\pi_{1}(U)\)
is a quotient of the absolute Galois group of \(K(t)\), which acts on the \(m\)-torsion of the generic fiber, say \(E_{t}[m]\) and \(E_{t}^{\prime}[m]\), in the usual Galois way. Proposition 4.3 asserts that for \(m\geq 17\), the map
\[\rho_{m}:\pi_{1}(U)\to SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m \mathbb{Z})\]
is surjective on each of the two factors. Therefore the reduction map
\[\bar{\rho}_{m}:\pi_{1}(U)\to PSL_{2}(\mathbb{Z}/m\mathbb{Z})\times PSL_{2}( \mathbb{Z}/m\mathbb{Z})\]
is surjective on each factor. For \(m\geq 5\), the projective special linear group \(PSL_{2}(\mathbb{Z}/m\mathbb{Z})=SL_{2}(\mathbb{Z}/m\mathbb{Z})/\{\pm 1\}\) is simple. By assumption, the generic fibers \(E_{t}\) and \(E_{t}^{\prime}\) of the family are not isogenous. Therefore, by the work of Bakker and Tsimerman [16, Theorem 1], there exists an absolute constant \(M_{1}\) such that for any prime \(m>M_{1}\) the two factors of \(\rho_{m}\), i.e. the \(m\)-torsion local systems \(E_{t}[m]\) and \(E_{t}^{\prime}[m]\), are non-isomorphic. Hence the two projections cannot be identified by an isomorphism of the factors, and Lemma 4.2 leads to the conclusion that the Galois image of the \(m\)-torsion monodromy representation of \(\pi_{1}(U)\) is full in \(PSL_{2}(\mathbb{Z}/m\mathbb{Z})\times PSL_{2}(\mathbb{Z}/m\mathbb{Z})\).
We prove that there is no intermediate cover between \(C_{\tilde{H}_{\Delta}}\) and \(C\). Suppose there is a curve \(X\) such that
\[C_{\tilde{H}_{\Delta}}\to X\to C\]
with all maps of degree greater than \(1\). Since connected covers of \(C\) are in bijection with subgroups of \(\pi_{1}(C)\), the curve \(X\) corresponds to a proper subgroup \(H^{\prime}\) strictly containing \(\tilde{H}_{\Delta}\). However, \(\tilde{H}_{\Delta}\) is a maximal subgroup of \(PSL_{2}(\mathbb{Z}/m\mathbb{Z})\times PSL_{2}(\mathbb{Z}/m\mathbb{Z})\), which implies that \(X\) is isomorphic to \(C_{\tilde{H}_{\Delta}}\), a contradiction. Therefore \(C_{\tilde{H}_{\Delta}}\) is birational to its image under \(\iota^{\prime}_{\tilde{H}_{\Delta}}\), which proves the proposition.
We have constructed a generic embedding \(\iota^{\prime}_{\tilde{H}_{\Delta}}\) of \(C_{\tilde{H}_{\Delta}}\) into a product of projective lines from the argument above. Composing with the Segre embedding, we get a generic embedding of \(C_{\tilde{H}_{\Delta}}\) into \(\mathbb{P}^{7}\), denoted by \(\iota_{\tilde{H}_{\Delta}}\).
\[\iota_{\tilde{H}_{\Delta}}:C_{\tilde{H}_{\Delta}}\hookrightarrow\mathbb{P}^{1 }\times\mathbb{P}^{1}\times\mathbb{P}^{1}\hookrightarrow\mathbb{P}^{7}.\]
One notices that \(\iota_{\tilde{H}_{\Delta}}\) fits into diagram (4.1) at the beginning of this section, with \(N=7\).
**Case II: the maximal parabolic quotient surfaces.** When \(H=H_{\mathrm{p}}\), which is a product of maximal parabolic subgroups, the quotient \(X(m)\times X(m)/H\) is isomorphic to \(X_{0}(m)\times X_{0}(m)\). The corresponding projective embedding \(\iota_{H_{\mathrm{p}}}\) can be constructed using the Hecke correspondence of level \(m\), followed by the Segre embedding, as follows:
\[\iota_{H_{\mathrm{p}}}:X_{0}(m)\times X_{0}(m)\hookrightarrow\mathbb{P}^{1} \times\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\hookrightarrow \mathbb{P}^{15}. \tag{4.5}\]
To be explicit, for a point \(\tilde{P}_{t}=(E_{t},\tilde{E}_{t},E_{t}^{\prime},\tilde{E}_{t}^{\prime})\) that lifts \(P_{t}=(E_{t},E_{t}^{\prime})\), where \(E_{t}\) and \(\tilde{E}_{t}\) are linked by a cyclic isogeny of degree \(m\), and similarly for \(E_{t}^{\prime}\) and \(\tilde{E}_{t}^{\prime}\), one may write \(\iota_{H_{\mathrm{p}}}\) as
\[\iota_{H_{\mathrm{p}}}:(E_{t},\tilde{E}_{t},E_{t}^{\prime},\tilde{E }_{t}^{\prime}) \mapsto j(E_{t})\times j(\tilde{E}_{t})\times j(E_{t}^{\prime}) \times j(\tilde{E}_{t}^{\prime})\] \[\mapsto (j(E_{t})j(\tilde{E}_{t})j(E_{t}^{\prime})j(\tilde{E}_{t}^{ \prime});\cdots;j(E_{t});j(\tilde{E}_{t});j(E_{t}^{\prime});j(\tilde{E}_{t}^{ \prime});1).\]
In this case, we choose \(N=15\) in diagram 4.1.
## 5 Bound for the Change of Heights
In this section, we give an upper bound on the height of a point in \(C_{H}(K)\) lying over a point \(P_{t}\in C(K)\), in terms of the height \(H(t)\) and the level \(m\). The main theorem of this section is Proposition 5.4.
### Hecke correspondence and modular polynomials
Modular polynomials of elliptic curves, the so-called 'elliptic modular polynomials,' are the most common and simplest examples of modular equations. For a positive integer \(m\), the classical modular polynomial \(\Phi_{m}\) is the minimal polynomial of \(j(mz)\) over \(\mathbb{C}(j)\). In other words we have \(\Phi_{m}(j(mz),j(z))=0\). The bivariate polynomial \(\Phi_{m}(X,Y)\) is symmetric of degree \(\psi(m)=m\prod_{p|m}(1+p^{-1})\) in both variables, and its coefficients grow super-exponentially in \(m\). The modular curve \(Y_{0}(m)\) is birational to its image in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), with \(\Phi_{m}\) an equation for this image. The vanishing locus of \(\Phi_{m}\) describes the Hecke correspondence: \(\Phi_{m}(j(E),j(E^{\prime}))=0\) exactly when \(E\) and \(E^{\prime}\) are linked by a cyclic isogeny of degree \(m\).
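As a concrete illustration of the correspondence for \(m=2\), here is a minimal Python sketch using the standard coefficients of the classical modular polynomial \(\Phi_{2}\) (they can be reproduced with any computer algebra system). The CM curve with \(j=1728\) admits a cyclic \(2\)-isogeny to a curve with the same \(j\)-invariant, so \((1728,1728)\) must lie on the correspondence.

```python
def phi2(X, Y):
    # Classical modular polynomial Phi_2; symmetric of degree psi(2) = 3 in each variable.
    return (X**3 + Y**3 - X**2 * Y**2
            + 1488 * (X**2 * Y + X * Y**2)
            - 162000 * (X**2 + Y**2)
            + 40773375 * X * Y
            + 8748000000 * (X + Y)
            - 157464000000000)

print(phi2(1728, 1728))  # 0: the pair (j, j') = (1728, 1728) is linked by a cyclic 2-isogeny
```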
For elliptic curves \(E_{1}\) and \(E_{2}\) linked with a cyclic isogeny of order \(m\), we aim to find an upper bound for the height \(H(j(E_{1}))\), in terms of \(H(j(E_{2}))\) and the coefficients of \(\Phi_{m}\). This has been worked out by Pazuki in [11].
**Theorem 5.1**.: _[_11_, Theorem 1.1]_ _Let \(\varphi:E_{1}\to E_{2}\) be a \(\overline{\mathbb{Q}}\)-isogeny between two elliptic curves defined over \(\overline{\mathbb{Q}}\). Let \(j_{1}\) and \(j_{2}\) be the respective \(j\)-invariants. Then one has_
\[\left|h\left(j_{1}\right)-h\left(j_{2}\right)\right|\leq 9.204+12\log\deg\varphi\]
_where \(h(.)\) denotes the absolute logarithmic Weil height._
Theorem 5.1 leads to the following corollary:
**Corollary 5.2**.: _Let \(\varphi:E_{1}\to E_{2}\) be a \(\overline{\mathbb{Q}}\)-isogeny between two elliptic curves defined over \(\overline{\mathbb{Q}}\) which is cyclic of degree \(m\). Let \(j_{1}\) and \(j_{2}\) be the respective \(j\)-invariants. Then one has_
\[H(j_{1})<Am^{12}H(j_{2})\]
_for some absolute constant \(A\). Here \(H(.)\) denotes the projective height._
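For completeness, Corollary 5.2 follows from Theorem 5.1 by exponentiating: the projective height of §2.1 is the exponential of the absolute logarithmic Weil height, so a cyclic isogeny of degree \(m\) gives \(h(j_{1})\leq h(j_{2})+9.204+12\log m\), and therefore

\[H(j_{1})\leq e^{9.204}\,m^{12}\,H(j_{2}),\]

so one may take any constant \(A>e^{9.204}\), e.g. \(A=10^{4}\).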
### Bounding change of heights
We prove an upper bound on the product of heights, which we will use later.
**Lemma 5.3**.: _Let \(d\) be the projective degree of \(C\) under the embedding \(\iota\), see (1.1). Let \(H(\iota)\) be the height of \(\iota\) defined by the height of the coefficients of the defining polynomials of \(j(E_{t})\) and \(j(E_{t}^{\prime}).\) Then for every \(t\in K\) with \(H(t)\leq B\), we have_
\[H(P_{t})\leq(d+1)H(\iota)B^{d}.\]
Proof.: By Lemma 2.1 we have
\[H(P_{t})=H(j(E_{t}))H(j(E_{t}^{\prime})).\]
The lemma then follows from [14, VIII, Theorem 5.6], which asserts that when there is a map of degree \(d\) between two projective spaces, say
\[F:\mathbb{P}^{m}\rightarrow\mathbb{P}^{M},\]
then for all points \(P\in\mathbb{P}^{m}(\overline{\mathbb{Q}})\) there are positive constants \(C_{1}\) and \(C_{2}\) depending on \(F\) such that
\[C_{1}H(P)^{d}\leq H(F(P))\leq C_{2}H(P)^{d}.\]
Write \(F=[f_{0},\cdots,f_{M}]\) using homogeneous polynomials \(f_{i}\) having no common zeros. Let \(H(F)\) be the height of \(F\) defined by the height of the coefficients of the \(f_{i}\). The constants \(C_{1}\) and \(C_{2}\) can be calculated explicitly in terms of \(M\), \(m\) and \(H(F)\); in particular, one can take \(C_{2}=\binom{m+d}{m}H(F)\). The lemma follows by applying this with \(F=\iota\), \(m=1\) and \(M=3\).
By Lemma 3.11, for our choice of \(m\in\mathbb{Z}\), a rational point \(t\in C(K)\) with \(t\in S(B)\) for some \(B\) lifts to a rational point on one of the covers \(C_{\tilde{H}_{\Delta}}\subset X_{\tilde{H}_{\Delta}}\) or \(C_{H_{p}}\subset X_{H_{p}}\). We have the following proposition:
**Proposition 5.4**.: _Fix \(m\in\mathbb{N}\). Let \(t\in K\) be a rational point such that \(t\in S(B)\) for some \(B\). Let \(P_{t}\) denote the point on \(C\) parametrized by \(t\), and denote by \(\tilde{P}_{t}\) a lifting of \(P_{t}\) to one of the covers \(C_{H}\) in Lemma 3.11. Let \(H(\iota_{H}(\tilde{P}_{t}))\) denote the projective height of \(\tilde{P}_{t}\) with respect to \(\iota_{H}\)._
_If \(H=\tilde{H}_{\Delta}\), then_
\[H(\iota_{H}(\tilde{P}_{t}))\leq(m+1)m^{24(m+1)}A^{2(m+1)}((d+1)H(\iota))^{m+2} B^{d(m+2)}.\]
_If \(H=H_{p}\), then_
\[H(\iota_{H}(\tilde{P}_{t}))\leq m^{24}(d+1)^{2}H(\iota)^{2}B^{2d}.\]
Proof.: **Case 1: \(P_{t}\) lifts to \(\tilde{C}_{\tilde{H}_{\Delta}}(K)\)**
As noted above, passing to the quotient \(C_{\tilde{H}_{\Delta}}\), the map \(\iota_{\tilde{H}_{\Delta}}\), obtained by composing \(\iota^{\prime}_{\tilde{H}_{\Delta}}\) with the Segre embedding, embeds \(C_{\tilde{H}_{\Delta}}\) into \(\mathbb{P}^{7}\). A point \(\tilde{P}_{t}=(E_{t},E^{\prime}_{t},\psi:E_{t}[m]\stackrel{{\sim}}{{\to}}E^{\prime}_{t}[m])\) that is a lift of \(P_{t}=(E_{t},E^{\prime}_{t})\) is embedded into \(\mathbb{P}^{7}\) as follows:
\[(E_{t},E^{\prime}_{t},\psi)\to F\times j(E_{t})\times j(E^{\prime}_{t}) \hookrightarrow[Fj(E_{t})j(E^{\prime}_{t}),Fj(E_{t}),Fj(E^{\prime}_{t}), \cdots,1].\]
Since our ultimate goal is to count rational points parametrized by \(t\in K\), we need a formula relating the height of \(\iota_{H}(\tilde{P}_{t})\) with heights of \(F\), \(j(E_{t})\) and \(j(E^{\prime}_{t})\). By Lemma 2.1, we have
\[H(\iota_{H}(\tilde{P}_{t}))=H(F)H(j(E_{t}))H(j(E^{\prime}_{t})). \tag{5.1}\]
Let \(i\) be an integer between \(1\) and \(m+1\) such that
\[H(j(E_{t,i})j(E^{\prime}_{t,i}))=\max_{1\leq k\leq m+1}H(j(E_{t,k})j(E^{\prime }_{t,k})).\]
By Definition 4.6, Corollary 5.2 and Lemma 5.3, together with the fact that for any \(\alpha,\beta,\alpha_{1},\ldots,\alpha_{r}\in\overline{\mathbb{Q}}\),
\[H(\alpha\beta)\leq H(\alpha)H(\beta)\]
and
\[H(\alpha_{1}+\cdots\alpha_{r})\leq rH(\alpha_{1})\cdots H(\alpha_{r}),\]
we have
\[H(F) =H(j(E_{t,1})j(E^{\prime}_{t,1})+\cdots+j(E_{t,m+1})j(E^{\prime}_ {t,m+1})) \tag{5.2}\] \[\leq(m+1)H(j(E_{t,i}))^{m+1}H(j(E^{\prime}_{t,i}))^{m+1}\] (5.3) \[\leq(m+1)m^{24(m+1)}A^{2(m+1)}H(j(E_{t}))^{m+1}H(j(E^{\prime}_{t} ))^{m+1}\] (5.4) \[\leq(m+1)m^{24(m+1)}A^{2(m+1)}((d+1)H(\iota))^{m+1}B^{d(m+1)}. \tag{5.5}\]
The constant \(A\) comes from Corollary 5.2, and the first part of the proposition follows from (5.1).
**Case 2: \(P_{t}\) lifts to one of the maximal parabolic quotient surfaces**
Recall the definition of \(H_{\mathfrak{p}}\) (3.9) and \(\iota_{H_{\mathfrak{p}}}\) (4.5). As in the previous case, Lemma 2.1 implies that
\[H(\iota_{H_{\mathfrak{p}}}(\tilde{P}_{t}))=H(j(E_{t}))H(j(\tilde{E}_{t}))H(j(E_ {t}^{\prime}))H(j(\tilde{E}_{t}^{\prime}))\]
By Corollary 5.2 and Lemma 5.3, where \(\tilde{E}_{t}\) is \(m\)-isogenous to \(E_{t}\) and \(\tilde{E}_{t}^{\prime}\) is \(m\)-isogenous to \(E_{t}^{\prime}\), we have
\[H(\iota_{H_{\mathfrak{p}}}(\tilde{P}_{t}))\ll m^{24}(d+1)^{2}H(\iota)^{2}B^{2d}.\]
## 6 Proof of the Main Theorems
### Proof of Theorem 1.1
The previous sections show that for a rational point \(P_{t}\) on \(C\), we have two types of possible liftings to modular surfaces with \(m\)-level structures. Accordingly, we divide the proof of Theorem 1.1 into two parts and analyze each part's contribution to \(|S(B)|\). We then optimize the choice of \(m\) in terms of the height bound \(B\).
**Case 1: Contributions from modular diagonal quotient surfaces.**
Recall the commutative diagram (4.1), applied here with \(H=\tilde{H}_{\Delta}\) and \(N=7\).
Let \(\iota_{\tilde{H}_{\Delta}}\) be the composition of \(\iota^{\prime}_{\tilde{H}_{\Delta}}\) with the Segre embedding. In order to apply Theorem 2.4, we bound the degree of \(\iota_{\tilde{H}_{\Delta}}\), which depends on \(m\) and the projective degree of \(C\).
Let \(\deg_{C_{\tilde{H}_{\Delta}}}(F)\) be the degree of \(F\) as a function on \(C_{\tilde{H}_{\Delta}}\). The degree of the function \(\iota^{\prime}_{\tilde{H}_{\Delta}}\) on \(C_{\tilde{H}_{\Delta}}\) can be viewed as a tridegree, which we denote by \((\deg_{C_{\tilde{H}_{\Delta}}}(F),e,e^{\prime})\). When we pass to \(\mathbb{P}^{7}\) by composing with the Segre embedding, we have
\[\deg(\iota_{\tilde{H}_{\Delta}})=\deg_{C_{\tilde{H}_{\Delta}}}(F)+e+e^{\prime}.\]
Here \(e\) denotes the degree of the function \(j(E_{t})\) on \(C_{\tilde{H}_{\Delta}}\) and \(e^{\prime}\) is the degree of \(j(E_{t}^{\prime})\) on \(C_{\tilde{H}_{\Delta}}\). Let \(\alpha\) be the degree of the cover \(q\), and let \(d_{E}\) (resp. \(d_{E^{\prime}}\)) be the degree of the \(j\)-invariant map \(C\to\mathbb{P}^{1}\), \(t\mapsto j(E_{t})\) (resp. \(t\mapsto j(E_{t}^{\prime})\)). Then \(e=\alpha d_{E}\) and, similarly, \(e^{\prime}=\alpha d_{E^{\prime}}\). Therefore
\[\deg(\iota_{\tilde{H}_{\Delta}})>e+e^{\prime}=\alpha(d_{E}+d_{E^{\prime}})= \alpha d.\]
The degree of \(q\) is equal to the index of \(\tilde{H}_{\Delta}\) inside the Galois group \(G=SL_{2}(\mathbb{Z}/m\mathbb{Z})\times SL_{2}(\mathbb{Z}/m\mathbb{Z})\). We have
\[\alpha=[G:\tilde{H}_{\Delta}]=m^{3}(1-\frac{1}{m^{2}})=m(m+1)(m-1).\]
Therefore
\[\deg(\iota_{\tilde{H}_{\Delta}})\geq m(m+1)(m-1)d.\]
In order to get an upper bound of \(\deg(\iota_{\tilde{H}_{\Delta}})\), we need an upper bound for the degree of \(F\) over \(C_{\tilde{H}_{\Delta}}\). Recall from diagram 4.4, \(C^{\prime}\) is the Galois closure of \(C_{\tilde{H}_{\Delta}}\) over \(C\) and the function \(F\) is defined to be
\[F=j(E_{t,1})j(E^{\prime}_{t,1})+\cdots+j(E_{t,m+1})j(E^{\prime}_{t,m+1}).\]
The individual terms \(j(E_{t,i})\) and \(j(E^{\prime}_{t,i})\) are defined on \(C^{\prime}\), instead of on \(C_{\tilde{H}_{\Delta}}\). A point on \(C^{\prime}\) is a product of triples \((E_{t},P,G)\times(E^{\prime}_{t},P^{\prime},G^{\prime})\), where \(P\) (resp. \(P^{\prime}\)) is a point of \(E_{t}[m]\) (resp. \(E^{\prime}_{t}[m]\)) and \(G\) (resp. \(G^{\prime}\)) is a cyclic subgroup of \(E_{t}[m]\) (resp. \(E^{\prime}_{t}[m]\)). Fix a \(j\)-invariant \(x\); the degree of \(j(E_{t,i})\) (resp. \(j(E^{\prime}_{t,i})\)) is the number of points on \(C^{\prime}\) such that \(j(E_{t}/G)=x\). As long as \(E_{t}\) and \(E^{\prime}_{t}\) are not CM, there are \(m+1\) \(j\)-invariants which are \(m\)-isogenous to \(x\), and there are \(d_{E}\) points on \(C\) mapping to each of those \(m+1\) \(j\)-invariants. Hence the degree \(\deg_{C}(j(E_{t,i}))\) (resp. \(\deg_{C}(j(E^{\prime}_{t,i}))\)) for each \(1\leq i\leq m+1\) is \((m+1)d_{E}\) (resp. \((m+1)d_{E^{\prime}}\)).
The argument above, together with the fact that if \(f\) and \(g\) are functions on a curve \(X\) then
\[\deg(f+g)\leq\deg(f)+\deg(g)\]
and
\[\deg(fg)\leq\deg(f)+\deg(g),\]
yields that
\[\deg_{C_{\tilde{H}_{\Delta}}}(F)\leq\deg_{C^{\prime}}(F)\leq(m+1)^{2}d.\]
Hence we get an upper bound on the degree of \(\iota_{\tilde{H}_{\Delta}}\) which is
\[\deg(\iota_{\tilde{H}_{\Delta}})\leq(m(m+1)(m-1)+(m+1)^{2})d.\]
We summarize the argument above in the following lemma:
**Lemma 6.1**.: \[m(m+1)(m-1)d\leq\deg(\iota_{\tilde{H}_{\Delta}})\leq(m(m+1)(m-1)+(m+1)^{2})d.\]
Let \(S_{B,m,H_{\Delta}}\) be the number of rational points on \(C_{\tilde{H}_{\Delta}}(K)\) which are liftings of \(P_{t}\) for some \(t\in S(B)\). Recall that we proved an upper bound for the heights of these points in Proposition 5.4. Theorem 2.4 ([12, Theorem 1.8]) then applies, along with Lemma 4.4 and Lemma 6.1, yielding
\[S_{B,m,H_{\Delta}} \lesssim_{K}((\alpha+(m+1)^{2})d)^{4}((m+1)m^{24(m+1)}A^{2(m+1)}( (d+1)H(\iota))^{m+2}B^{d(m+2)})^{\frac{2d_{K}}{\alpha d}} \tag{6.1}\] \[\lesssim_{K}((m^{2}+1)(m+1)d)^{4}((m+1)m^{24(m+1)}A^{2(m+1)}((d+1 )H(\iota))^{m+2}B^{d(m+2)})^{\frac{2d_{K}}{m(m-1)(m+1)d}}\] (6.2) \[\lesssim_{K}(m^{3}d)^{4}(m+1)^{\frac{2d_{K}}{m(m-1)(m+1)d}}A^{ \frac{4d_{K}}{m(m-1)d}}m^{\frac{48d_{K}}{m(m-1)d}}((d+1)H(\iota)B)^{\frac{2d_ {K}(m+2)}{m(m-1)(m+1)}} \tag{6.3}\]
The terms in (6.3) other than \((m^{3}d)^{4}\) are bounded above by an absolute constant. The argument requires an optimization of the choice of \(m\), which we carry out in the following lemma.
**Lemma 6.2**.: _Recall that \(m\) is a prime between \(2(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\) and \(4(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\). There is an absolute constant \(A_{0}\) such that_
\[(m+1)^{\frac{2d_{K}}{m(m-1)(m+1)d}}A^{\frac{4d_{K}}{m(m-1)d}}m^{\frac{48d_{K}} {m(m-1)d}}((d+1)H(\iota)B)^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)}}\leq A_{0}.\]
Proof.: Once we write
\[(m+1)^{\frac{2d_{K}}{m(m-1)(m+1)d}}\]
as
\[e^{(\frac{2d_{K}}{m(m-1)(m+1)d})\log(m+1)},\]
it is easy to see that for \(m\geq 2\) we have
\[(m+1)^{\frac{2d_{K}}{m(m-1)(m+1)d}}\ll e^{\frac{2d_{K}}{d}}\leq e^{2d_{K}}.\]
This is because \(\frac{\log(m+1)}{m(m-1)(m+1)}\) is bounded above by \(1\). A similar argument shows that
\[m^{\frac{48d_{K}}{m(m-1)d}}\ll e^{\frac{48d_{K}}{d}}\leq e^{48d_{K}}\]
which also contributes as a constant independent of \(m\) and \(d\).
It is left to consider \(((d+1)H(\iota)B)^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)}}\), which plays an important role in the optimization process. We make the optimization by choosing a suitable \(m\) in terms of \(B\), \(d\), and \(H(\iota)\). The following inequalities, together with Proposition 4.3, allow one to take \(m\) to be any prime between \(2(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\) and \(4(\log(d+1)+\log H(\iota)+\log B)^{\frac{1}{2}}\), so that
\[((d+1)H(\iota)B)^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)}} =e^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)}(\log(d+1)+\log H(\iota)+\log B)}\] \[=e^{\frac{2d_{K}(\log(d+1)+\log H(\iota)+\log B)}{(m-1)(m+1)}}\cdot e^{\frac{4d_{K}(\log(d+1)+\log H(\iota)+\log B)}{m(m-1)(m+1)}}\] \[\leq e^{N_{1}d_{K}}\cdot e^{N_{2}d_{K}}\]
where \(\frac{2(\log(d+1)+\log H(\iota)+\log B)}{(m-1)(m+1)}\) and \(\frac{4(\log(d+1)+\log H(\iota)+\log B)}{m(m-1)(m+1)}\) are bounded above by absolute constants \(N_{1}\) and \(N_{2}\), respectively.
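To see the optimization at work numerically (illustrative values only; \(d_{K}\) is set to 1 and the constant \(A\) is ignored), the sketch below picks a prime \(m\) in the prescribed window and evaluates the factor \(((d+1)H(\iota)B)^{\frac{2(m+2)}{m(m-1)(m+1)}}\); it remains of moderate size even as \(B\) grows, which is the content of Lemma 6.2.

```python
import math

def is_prime(n):
    return n >= 2 and all(n % p for p in range(2, int(n ** 0.5) + 1))

def pick_m(L):
    """Smallest prime in the window [2*sqrt(L), 4*sqrt(L)] used in the text."""
    lo, hi = 2 * math.sqrt(L), 4 * math.sqrt(L)
    for n in range(math.ceil(lo), math.floor(hi) + 1):
        if is_prime(n):
            return n
    raise ValueError("no prime in window (L too small)")

d, H = 10, 1e3                          # illustrative degree and height of iota
for B in (1e6, 1e12, 1e24, 1e48):
    L = math.log(d + 1) + math.log(H) + math.log(B)
    m = pick_m(L)
    factor = math.exp(2 * (m + 2) * L / (m * (m - 1) * (m + 1)))
    print(f"B = {B:.0e}:  m = {m},  bounded factor = {factor:.2f}")
```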
We have the following proposition as a conclusion of the case.
**Proposition 6.3**.: _The number of points in \(S(B)\) that come from the modular diagonal quotient surface is bounded above by_
\[S_{B,m,H_{\Delta}} \lesssim_{K}d^{4}((\log(d+1)+\log H(\iota)+\log B))^{6}\] \[\lesssim_{K}d^{4+\epsilon}(\log H(\iota)+\log B)^{6}.\]
Proof.: The proposition follows from the inequality (6.3) and Lemma 6.2.
**Case 2: Contributions from maximal parabolic quotient surfaces.** Recall that in this case, we have the following commutative diagram.
The degree of the covering space \(q\), denoted by \(\beta\), is equal to the index of \(H_{\mathrm{p}}\) inside the Galois group \(G\), similar to the previous case. The index of \(H_{\mathrm{p}}\) as a product of maximal parabolic subgroups is equal to
\[\beta=[G:H_{\mathrm{p}}]=\frac{1}{4}(m+1)^{2}.\]
One notices that the degree of \(\iota_{H_{\mathrm{p}}}\) satisfies \(\deg(\iota_{H_{\mathrm{p}}})=\beta d\). Let \(S_{B,m,H_{\mathrm{p}}}\) be the number of rational points in \(C_{H_{\mathrm{p}}}(K)\) that lift \(P_{t}\) for some \(t\in S(B)\). Applying Proposition 5.4 and Theorem 2.3, we get the following inequality
\[S_{B,m,H_{\mathrm{p}}}\lesssim_{K}((m+1)^{2}d)^{4}(m^{24}(d+1)^{2}H(\iota)^{2}B^{2d})^{\frac{8d_{K}}{(m+1)^{2}d}}\]
which allows us to make the same optimization as in case 1. By taking
\[m\sim(\log B+\log(d+1)+\log H(\iota))^{\frac{1}{2}},\]
we get an upper bound of the total contribution to \(|S(B)|\) from the maximal parabolic surfaces:
**Proposition 6.4**.: \[S_{B,m,H_{\mathrm{p}}} \lesssim_{K}d^{4}(\log B+\log(d+1)+\log H(\iota))^{4}\] \[\lesssim_{K}d^{4+\epsilon}(\log B+\log H(\iota))^{4}.\]
Proof of Theorem 1.1.: It is easy to see that the contribution from case 1 dominates that of case 2. Theorem 1.1 then follows from Propositions 6.3 and 6.4.
### Proof of Theorem 1.3
Recall that in Theorem 1.1, \(H(t)\) is defined as the height of \(t\) as an element of \(K\). In this section we instead measure \(t\) by the height of the corresponding point on \(X(1)\times X(1)\), which we have been calling \(H(P_{t})\), and we assume that \(H(P_{t})\leq B\). We prove a _uniform_ bound on the number of points \(t\) such that \(E_{t}\) and \(E^{\prime}_{t}\) are geometrically isogenous, which depends only on \(K\), \(B\), and the _degree_ of the parametrizing family.
We need a slightly modified version of Proposition 5.4, which we state as a corollary.
**Corollary 6.5**.: _Fix \(m\in\mathbb{N}\). Let \(t\in K\) be a rational point such that \(t\in S(B)\) for some \(B\). Let \(P_{t}\) denote the point on \(C\) parametrized by \(t\), and denote by \(\tilde{P}_{t}\) a lifting of \(P_{t}\) to one of the covers \(C_{H}\) in Lemma 3.11. Let \(H(\iota_{H}(\tilde{P}_{t}))\) denote the projective height of \(\tilde{P}_{t}\) with respect to \(\iota_{H}\)._
_If \(H=\tilde{H}_{\Delta}\), then_
\[H(\iota_{H}(\tilde{P}_{t}))\leq(m+1)m^{24(m+1)}A^{2(m+1)}B^{(m+2)}.\]
_If \(H=H_{\mathrm{p}}\), then_
\[H(\iota_{H}(\tilde{P}_{t}))\leq m^{24}B^{2}.\]
Proof.: The proof is the same as Proposition 5.4, except that we have
\[H(j(E_{t}))H(j(E^{\prime}_{t}))\leq B\]
instead of Lemma 5.3.
Proof of Theorem 1.3.: Let \(S^{\prime}_{B,m,H}\) be the set of rational points on \(C_{H}(K)\) which are preimages of \(P_{t}\) for some \(t\in S^{\prime}(B).\) A similar argument to the proof of Theorem 1.1
yields
\[S^{\prime}_{B,m,\tilde{H}_{\Delta}} \lesssim_{K}((\alpha+(m+1)^{2})d)^{4}((m+1)m^{24(m+1)}A^{2(m+1)}B^{(m+2)})^{\frac{2d_{K}}{\alpha d}} \tag{6.4}\] \[\lesssim_{K}((m^{2}+1)(m+1)d)^{4}((m+1)m^{24(m+1)}A^{2(m+1)}B^{(m+2)})^{\frac{2d_{K}}{m(m-1)(m+1)d}}\] (6.5) \[\lesssim_{K}(m^{3}d)^{4}(m+1)^{\frac{2d_{K}}{m(m-1)(m+1)d}}A^{\frac{4d_{K}}{m(m-1)d}}m^{\frac{48d_{K}}{m(m-1)d}}B^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)d}} \tag{6.6}\]
when \(H=\tilde{H}_{\Delta}\), and
\[S^{\prime}_{B,m,H_{\mathrm{p}}}\lesssim_{K}((m+1)^{2}d)^{4}(m^{24}B^{2})^{ \frac{8d_{K}}{(m+1)^{2}d}}. \tag{6.7}\]
We choose \(m\) to be a prime between \(2(\log B)^{\frac{1}{2}}\) and \(4(\log B)^{\frac{1}{2}}\) so as to control the growth of the \(B\)-power factors in both (6.6) and (6.7), such that
\[B^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)d}} =e^{\frac{2d_{K}(m+2)}{m(m-1)(m+1)d}\log B}\] \[=e^{\frac{2d_{K}\log B}{(m-1)(m+1)d}}\cdot e^{\frac{4d_{K}\log B}{m(m-1)(m+1)d}}\] \[\leq e^{N^{\prime}_{1}d_{K}}\cdot e^{N^{\prime}_{2}d_{K}}\]
and
\[B^{\frac{16d_{K}}{(m+1)^{2}d}}=e^{\frac{16d_{K}}{(m+1)^{2}d}\log B}\leq e^{N^ {\prime}_{3}d_{K}}\]
for some absolute constants \(N^{\prime}_{1}\), \(N^{\prime}_{2}\), and \(N^{\prime}_{3}\).
|
2308.07998 | Electrostatic Steering of Thermal Emission with Active Metasurface
Control of Delocalized Modes | We theoretically describe and experimentally demonstrate a
graphene-integrated metasurface structure that enables electrically-tunable
directional control of thermal emission. This device consists of a dielectric
slab that acts as a Fabry-Perot (F-P) resonator supporting long-range
delocalized modes bounded on one side by an electrostatically tunable
metal-graphene metasurface. By varying the Fermi level of the graphene, the
accumulated phase of the F-P mode is shifted, which changes the direction of
absorption and emission at a fixed frequency. We directly measure the
frequency- and angle-dependent emissivity of the thermal emission from a
fabricated device heated to 250$^{\circ}$. Our results show that electrostatic
control allows the thermal emission at 6.61 $\mu$m to be continuously steered
over 16$^{\circ}$, with a peak emissivity maintained above 0.9. We analyze the
dynamic behavior of the thermal emission steerer theoretically using a Fano
interference model, and use the model to design optimized thermal steerer
structures. | Joel Siegel, Shinho Kim, Margaret Fortman, Chenghao Wan, Mikhail A. Kats, Phillip W. C. Hon, Luke Sweatlock, Min Seok Jang, Victor Watson Brar | 2023-08-15T19:16:20Z | http://arxiv.org/abs/2308.07998v3 | # Electrostatic Steering of Thermal Emission with Active Metasurface Control of Delocalized Modes
###### Abstract
**Abstract** We theoretically describe and experimentally demonstrate a graphene-integrated metasurface structure that enables electrically-tunable directional control of thermal emission. This device consists of a dielectric slab that acts as a Fabry-Perot (F-P) resonator supporting long-range delocalized modes bounded on one side by an electrostatically tunable metal-graphene metasurface. By varying the Fermi level of the graphene, the accumulated phase of the F-P mode is shifted, which changes the direction of absorption and emission at a fixed frequency. We directly measure the frequency- and angle-dependent emissivity of the thermal emission from a fabricated device heated to \(250^{\circ}\). Our results show that electrostatic control allows the thermal emission at \(6.61\,\mathrm{\SIUnitSymbolMicro m}\) to be continuously steered over \(16^{\circ}\), with a peak emissivity maintained above \(0.9\). We analyze the dynamic behavior of the thermal emission steerer theoretically using a Fano interference model, and use the model to design optimized thermal steerer structures.
Metasurface, Thermal emission, Graphene, Plasmon

The mid infrared (MIR) is an important band for applications ranging from free-space laser communications[1] to chemical sensing [2; 3]. An optimal MIR source for these applications would be narrowband, and also offer high speed directional control, such that the beam can be rastered over a range of angles, or have a controllable focal point. Typically, such beam-steering is achieved by reflecting a beam using mechanical devices such as gimbal-mounted mirrors[4], optical phased arrays of antennas[5; 6], or liquid crystal-based devices[7]. While each of these techniques has its own set of advantages and disadvantages, one limitation common to them all is that they require an external source of light, such as a quantum cascade laser.
An alternative source of MIR light is one that can be found everywhere: thermal radiation. Any material at a non-zero temperature will emit radiation over a broad range of frequencies which, at moderate temperatures (\(0\)-\(700\,^{\circ}\mathrm{C}\)), is peaked in the MIR. Though thermal emission is typically viewed as incoherent, isotropic, and broadband, recent advances in nanoengineering have demonstrated that it is possible to engineer the emissivity of a structured material to create narrowband[8], directional[9] emission that exhibits coherence. These include metasurfaces composed of non-interacting, localized resonator elements tuned to specific wavelengths, such as metallic nanoantennas[10] or semiconducting nanostructures that exhibit sharp quasi-bound-state-in-the-continuum resonances[11; 12]. To achieve coherent directional emission, meanwhile, structures that support long-range delocalized modes can be utilized. These include surface waves that are out-coupled via gratings[13; 14; 9], F-P cavities[15], photonic crystals[16; 17; 18], epsilon-near-zero modes[19] and delocalized modes formed by coupled resonators[20; 21; 22]. In all of these demonstrated devices, heating is all that is required to produce the desired light as the relevant optical modes are excited thermally, thus providing an elegant source of MIR radiation.
Imparting tunability into such devices - which could allow for dynamic beam control and frequency shifting - requires the integration of materials with variable optical properties. Materials with temperature-dependent phases and/or indices, such as GST[23; 24; 25], VO\({}_{2}\)[26; 27; 28; 29], or Si[30] have been utilized to create metasurfaces that control the magnitude and phase of scattered light in reflection or transmission geometries, but such materials are unsuitable for thermal emission devices that operate at high, constant temperatures. Alternatively, materials with indices that depend on carrier density, including graphene, III-V quantum wells and indium tin oxide (ITO), can be utilized to bestow electrostatic tunability on metasurfaces, and devices that control phase, frequency, and intensity of reflected light have been demonstrated[31; 32; 33; 34; 35; 36]. These materials are also chemically and phase stable at high temperatures, which has enabled them to be integrated within thermal-control metasurfaces to electrostatically tune the intensity and
frequency of incandescent light in the mid-IR[37, 38, 39]. Unfortunately, such materials also introduce ohmic loss which can, in some geometries, suppress formation of the long-range delocalized modes that are necessary for coherent, directional thermal emission. As such, dynamic angular tuning of thermal emission is an outstanding problem in the field of thermal metasurfaces.
In this work, we theoretically describe and experimentally demonstrate a thermal emission device that can be tuned electrostatically to control the directionality of thermal emission within a narrow bandwidth. We show experimentally that by using a tunable graphene-integrated metasurface as a boundary for a delocalized F-P cavity mode, the thermal emission from a surface at 6.61 um (1508 cm-1) can be continuously steered by \(\pm\) 16\({}^{\circ}\) by changing the carrier density of the graphene sheet. Theoretical calculations, meanwhile, show that an optimized geometry using real materials could achieve \(\pm\) 60\({}^{\circ}\) of continuous tuning.
For dynamic thermal emission steering, we utilize an electrically tunable F-P resonance of a SiN\({}_{\rm x}\) dielectric layer sandwiched by a gold back reflector and a graphene-based active metasurface as illustrated in Fig. 1(a). The graphene metasurface consists of 30 nm thick, 1 um wide gold strips spaced 40 nm apart on top of a HfO\({}_{2}\) (5 nm)/graphene/Al\({}_{2}\)O\({}_{3}\) (30 nm) trilayer, sitting on the 2 um thick SiN\({}_{\rm x}\) membrane with the 100 nm gold back reflector that also serves as a back gate electrode. The gaps between the gold strips are filled with a bilayer of 30 nm gold and 100 nm SiO\({}_{\rm x}\). The sub-wavelength period of the structure suppresses far-field diffraction except for the zeroth order.
The working principle of our device is illustrated in Fig. 1(b). The graphene-based metasurface covering the top surface of the SiN\({}_{\rm x}\) membrane acts as a partially reflecting mirror to form a vertical F-P cavity. By applying an electrostatic potential (\(V_{G}\)) across the dielectric layers, the Fermi level of graphene (\(E_{F}\)) is modulated and so are the complex reflection and transmission coefficients of the top graphene metasurface. Consequently, the condition for the resonance shifts, causing a shift in the peak emission angle (\(\theta\)) for a given frequency. These changes can be qualitatively understood by treating the top metasurface as a two-dimensional sheet with an effective surface admittance, which is justified since the metasurface thickness is about two orders of magnitude shorter than the wavelength of the free space light[6, 31, 40]. In this model, the subwavelength metallic strips with narrow gaps make the overall optical response of the graphene metasurface highly capacitive (i.e. large imaginary impedance) at a low carrier concentration. As the conductivity of graphene rises with increasing \(E_{F}\), the metasurface exhibits a reduced, but still high, capacitance and also acquires a larger conductance, changing the reflection/transmission characteristics. The quantitative surface admittance model for the graphene metasurface is discussed in detail in Supplementary Notes 1, 2, and 3.
Recognizing the emissivity \(\epsilon(\omega,\theta)\) of a reciprocal object is equal to its absorptivity \(\alpha(\omega,\theta)\)[41], one can understand the mechanism of the directional shift in thermal emission more intuitively by analyzing the absorption process. Since the transmission channel is blocked by the back reflector,
\[\epsilon=\alpha=1-|r_{\rm tot}|^{2}=1-|r_{\rm direct}+r_{\rm FP}|^{2}, \tag{1}\]
where \(r_{\rm tot}\) is the total reflection, which can be decomposed into the direct reflection from the top surface (\(r_{\rm direct}\)) and the resonant reflection due to the F-P interference formed by multiple reflections inside the dielectric layer (\(r_{\rm FP}\)). The interplay between \(r_{\rm FP}\) and \(r_{\rm direct}\), both of which are dependent on \(E_{F}\), determines the overall absorption (and thus the emission) of the device. The absorption peak occurs when \(r_{\rm FP}\) and \(r_{\rm direct}\) destructively interfere with each other by having similar amplitudes and a \(\pi\) phase difference.
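As a rough numerical illustration of Eq. (1) and of the Fano picture described above (a toy model with invented parameters, not a simulation of the fabricated device), the sketch below combines a nearly constant direct reflection with a Lorentzian F-P channel; the emissivity peaks where the two channels have comparable amplitude and a phase difference of \(\pi\).

```python
import numpy as np

theta = np.linspace(0, 60, 601)                      # emission angle (degrees)

# Non-resonant direct channel: nearly constant amplitude and phase (toy values)
r_direct = 0.65 * np.exp(1j * 0.9 * np.pi) * np.ones_like(theta)

# Resonant F-P channel: Lorentzian line centred at theta_res (toy values)
theta_res, width = 30.0, 8.0
r_fp = 0.65 * np.exp(-1j * 0.1 * np.pi) / (1.0 + 1j * (theta - theta_res) / width)

emissivity = 1.0 - np.abs(r_direct + r_fp) ** 2      # Eq. (1)
phase_diff = np.angle(r_fp) - np.angle(r_direct)

i = np.argmax(emissivity)
print(f"peak emissivity {emissivity[i]:.2f} at {theta[i]:.1f} deg, "
      f"phase difference {phase_diff[i] / np.pi:+.2f} pi")
```

With these invented numbers the two channels cancel exactly at \(\theta_{\rm res}\), producing a unit-emissivity Fano feature.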
We first theoretically investigate the behavior of the proposed device using full-field electromagnetic simulations based on the finite element method, as summarized in Fig. 2. The dependence of \(r_{\rm direct}\) on \(\theta\) and \(E_{F}\) for TM polarized light is shown in Fig.2(c). \(r_{\rm direct}\) can be obtained by simulating the reflection by the graphene metasurface sitting on a semi-infinite SiN\({}_{\rm x}\) layer without a back reflector. Since the top graphene metasurface does not support any distinctive resonance around the target frequency of \(\omega=1498\) cm\({}^{-1}\), the direct reflectance, \(R_{\rm direct}=|r_{\rm direct}|^{2}\), exhibits a generic weak dependence on \(\theta\) within the range of 0\({}^{\circ}\) to 50\({}^{\circ}\). As the carrier density of graphene increases, the metasurface becomes less capacitive, leading to better impedance matching, as elaborated in Supplementary Notes 1 and 3. Consequently, \(R_{\rm direct}\) monotonically decreases with increasing \(E_{F}\). The phase of the direct reflection, \(\phi_{\rm direct}=\arg\{r_{\rm direct}\}\), remains nearly constant around 0.9\(\pi\) within \(\theta\in(0^{\circ},50^{\circ})\) and \(E_{F}\in(0.2,0.65)\) eV.
Unlike \(r_{\rm direct}\), \(r_{\rm FP}\) shows a strong dependence on both \(\theta\) and \(E_{F}\) due to its resonant nature. The F-P resonance occurs when the out-of-plane wavevector inside the dielectric, \(k_{\rm out}=nk_{0}\sqrt{1-\sin^{2}\theta}\), satisfies the constructive interference condition:
\[2k_{\rm out}h+\phi_{\rm top}+\phi_{\rm bottom}=2\pi m, \tag{2}\]
where \(k_{\rm out}h\) is the phase accumulation associated with vertical wave propagation across the dielectric layer, \(\phi_{\rm top}\) and \(\phi_{\rm bottom}\) are the reflection phases from the top and bottom surfaces, respectively, and \(m\) is an integer. \(\phi_{\rm bottom}\sim\pi\) does not depend on \(E_{F}\) since the bottom surface is a mere gold back reflector, which behaves like a perfect electric conductor at mid-infrared frequencies. \(\phi_{\rm top}\), in principle, could depend on \(E_{F}\) for metasurfaces with an admittance comparable to the surrounding medium, but in our device the admittance is large and, thus, the dependence of \(\phi_{\rm top}\) is weak for \(E_{F}\in(0.2,0.65)\) eV (see Supplementary Note 1 for a detailed analysis). As a result, at a fixed frequency, the resonance angle \(\theta_{\rm FP}\) slightly
decreases from 34\({}^{\circ}\) to 28\({}^{\circ}\) when \(E_{F}\) increases from 0.2 eV to 0.65 eV, as indicated by the blue dashed curve in Fig.2(a); and, at a fixed \(\theta\), the resonance frequency \(\omega_{\rm FP}\) slightly blueshifts with increasing \(E_{F}\). The F-P resonance becomes weaker with increasing \(E_{F}\) as the top graphene metasurface becomes less reflective and more absorptive, raising both the radiative and dissipative decay rates of the resonant mode. However, while \(\phi_{\rm top}\) shows only a small dependence on \(E_{F}\), the overall phase shift due to the F-P resonance (\(\phi_{\rm FP}\)) includes phase accumulated while passing into and out of the F-P cavity, through the complex transmission coefficients \(t_{\rm in}\) and \(t_{\rm out}\), which show considerably more dependence on \(E_{F}\) (see Supplementary Note 3).
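The angular shift can also be read off directly from Eq. (2). The short sketch below is illustrative only: \(n\approx 2\) is assumed for SiN\({}_{\rm x}\), the two values of \(\phi_{\rm top}\) are invented, and the expression for \(k_{\rm out}\) quoted above is used as written. It scans the emission angle and reports where the round-trip phase is closest to a multiple of \(2\pi\); reducing the capacitive reflection phase of the top mirror pushes the resonance to smaller angles, mirroring the trend described in the text.

```python
import numpy as np

# Illustrative parameters only (not fitted to the device)
n, h = 2.0, 2.0e-6                       # assumed SiN_x index and thickness (m)
k0 = 2 * np.pi * 1498e2                  # free-space wavevector for 1498 cm^-1
phi_bottom = np.pi                       # gold back reflector

theta = np.radians(np.linspace(0.0, 60.0, 6001))
k_out = n * k0 * np.sqrt(1.0 - np.sin(theta) ** 2)   # expression used in the text

for phi_top in (0.9 * np.pi, 0.7 * np.pi):
    round_trip = 2 * k_out * h + phi_top + phi_bottom    # left-hand side of Eq. (2)
    cycles = round_trip / (2 * np.pi)
    mismatch = np.abs(cycles - np.round(cycles))          # vanishes at resonance
    best = np.argmin(mismatch)
    print(f"phi_top = {phi_top / np.pi:.1f} pi  ->  "
          f"theta_FP ~ {np.degrees(theta[best]):.1f} deg")
```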
Since the amplitude of \(r_{\rm FP}\) is similar to that of \(r_{\rm direct}\) near the broad F-P resonance, what mainly determines the overall absorption is their phase difference, \(\phi_{\rm FP}-\phi_{\rm direct}\). We note that the Fano interference between a non-resonant and a resonant scattering channel has been widely adopted to create a sharp resonant response[42; 43]. The dependence of \(\phi_{\rm FP}-\phi_{\rm direct}\) on \(E_{F}\) and \(\theta\), which is dominated by \(\phi_{\rm FP}\) due to the nearly constant \(\phi_{\rm direct}\approx 0.9\pi\), is plotted in Fig. 2(b). \(\phi_{\rm FP}\) monotonically decreases with \(\theta\) because the propagation phase across the dielectric layer, \(k_{\rm out}h\), decreases as \(k_{\rm out}\) shortens. \(\phi_{\rm FP}\) also decreases with \(E_{F}\) as the capacitive phase shift of the top graphene metasurface reduces. As a result, the condition for the Fano resonance, \(\phi_{\rm FP}-\phi_{\rm direct}=\pi\), shifts from \(\theta_{\rm res}=31^{\circ}\) to \(0^{\circ}\) as \(E_{F}\) varies from 0.2 eV to 0.6 eV. This change in the phase matching condition drives an overall change in the angular-dependent absorptivity/emissivity, shown in Fig.2(d), and thus allows the device to thermally emit at an angle that can be tuned by varying \(E_{F}\).
In order to experimentally verify the possibility of active thermal emission steering, we fabricated the proposed device using e-beam lithography over a \(4\times 4\) mm\({}^{2}\) area (see Methods), heated it to 250 \({}^{\circ}\)C, and measured its angle-dependent thermal emission spectra while varying the \(E_{F}\) by applying different gate voltages \(V_{G}\). A polarizer was used to accept only TM polarized emission, and the acceptance angle of the emitted light was 3\({}^{\circ}\). The emissivity of the structure is calculated by normalizing the emitted radiation of the device to the emitted radiation of a reference carbon nanotube blackbody[44].
The measured surface normal emissivity spectra for \(\theta=0^{\circ}\) at \(V_{G}=560\), 0 and \(-560\) V, shown in Fig.3(a), exhibit a well-defined resonance peak at around 1500 cm\({}^{-1}\) that blueshifts as the Fermi level of graphene increases, indicating that the thermal emission peaks are electrostatically tunable with minor variation in the intensity. The measured emissivity spectra also show a strong angular dependence, as shown in Fig.3(b). At a constant doping level (\(V_{G}=-560\) V), the emission peak shifts from 1508 cm\({}^{-1}\) to 1543 cm\({}^{-1}\) as \(\theta\) changes from \(0^{\circ}\) to \(30^{\circ}\). There are also higher order features present around 2400 cm\({}^{-1}\) (see Supplementary Note 4) that show similar but more limited shifting. Finally, Fig.3(c) demonstrates the dynamic thermal emission steering by showing how the emission angle is modulated by altering the doping
Figure 1: (a) Schematic of the fabricated thermal steering device. The zoomed view shows the geometry of a graphene-Au slit metasurface unit cell. (b) Schematic view of electrically tunable directional thermal emission by the graphene metasurface on the delocalized F-P photonic resonator. The emission angle of the structure is controlled by the Fermi level and incident angle-dependent resonant absorption condition. The inset shows the total reflection decomposed into two reflection channels: direct reflection \(r_{\rm direct}\) and F-P reflection \(r_{\rm FP}\). (c) Scanning electron microscopy image of the graphene metasurface on top of the SiN\({}_{x}\) membrane. The width (w) and gap (g) of the Au slit array are 1 μm and 40 nm, respectively.
level of graphene at a fixed target frequency \(\omega=1508\) cm\({}^{-1}\). At \(V_{G}=-560\) V, we observe that the emission peak is most intense at normal incidence and decreases in intensity as the angle is increased. As the applied gate voltage increases to \(560\,\mathrm{V}\), the lobe shifts from normal incidence to increasing angles, up to \(16^{\circ}\), allowing for continuous tuning in that range.
These experimental results can be compared to simulated emissivity spectra shown in Figures 3(d-f). In these simulations, the value of \(E_{F}\) at \(V_{G}=-560\) V was chosen as a fitting parameter and found to be \(-0.62\) eV, indicating that the sample is heavily hole-doped, which is consistent with previous studies of graphene grown and transferred using similar procedures[45]. Using this initial value of \(E_{F}\), the Fermi energies at other gate voltages were derived with a simple capacitance model. The overall qualitative behavior of the simulations is consistent with our experimental finding, but the emission lobes are broader and the change of emission angle of the emitter is smaller in our experiment than was theoretically predicted. The likely sources of this inconsistency are the metastructure geometric and material parameter variations across the full 4 x 4 mm\({}^{2}\) device (see Subsections A and B in Supplementary Note 5), and carrier density variation during the heating process due to the temperature dependence of the SiN\({}_{\mathrm{x}}\), Al\({}_{2}\)O\({}_{3}\), and HfO\({}_{2}\) dielectric properties[46; 47; 48; 49; 50; 51]. The estimated carrier density is also affected by substrate and interface charge traps, which can act to decrease the overall doping range (see Methods). We also note that the modulation depth at \(\theta\approx 0^{\circ}\) is predicted to be larger than what is observed experimentally, and we attribute this mostly to decreases in the magnitude of the graphene carrier density due to the filling of charge traps, as well as small potential misalignment of the heating stage. (see Methods) The intensity of emission at large angles can also be reduced due to ellipsoidal elongation of the measurement area which, for small device areas, can extend the active zone to include some low emissivity, unpatterned gold areas.
To further explore the potential of the proposed thermal steerer device, we investigate the maximum realizable emission angle under the limitation of realistic geometric and material parameters. The Fermi level of graphene is assumed to be electrostatically tunable between \(0\,\mathrm{eV}\) and \(0.6\,\mathrm{eV}\), considering the typical dielectric strength of SiN\({}_{\mathrm{x}}\), and numerical optimizations of the geometric parameters of the device were performed to maximize the angle tunability. To prevent performance degradation due to non-local effects (see Supplementary Note 5 for more discussion), we set the minimum gap width to \(30\,\mathrm{nm}\) and carried out simulations in the frame of classical electrodynamics. Figure 4(a) shows the structure of the optimized device. The gap and width of the Au slit array are \(30\,\mathrm{nm}\) and \(740\,\mathrm{nm}\), respectively. The HfO\({}_{2}\) is thinned to \(1\,\mathrm{nm}\), which is the smallest achievable value that avoids quantum tunneling effects. The bilayer Au/SiO\({}_{\mathrm{x}}\) area is eliminated to enhance the interaction between the graphene and the Au slit array. The optimization results show that it is possible to achieve \(\sim 60^{\circ}\) thermal emission angle steering with unity peak emissivity (Fig.4(b)). The achievable
Figure 2: Two distinct angular reflection spectra (a) \(|r_{\mathrm{FP}}|^{2}\) and (c) \(|r_{\mathrm{direct}}|^{2}\). The blue dotted line indicates the F-P resonance condition. (b) The phase difference between the two reflection coefficients (\(\phi_{\mathrm{FP}}-\phi_{\mathrm{direct}}\)). (d) Calculated emissivity angular spectra. The black dotted line indicates the resonant absorption condition. The angular spectra are calculated for frequency \(\omega=1498\) cm\({}^{-1}\).
performance is greater than that of most metasurface-based electrically tunable beam steering devices[52] and is comparable to state-of-the-art MEMS-based beam steering devices in terms of field of view[53]. The improvements in the optimized structure in comparison to the experimentally measured sample are due to three main effects. First, the optimized structure utilized a smaller, \(30\,\mathrm{nm}\) spacing between the gold strips. This acts to increase the electric field concentration within the graphene and minimize stray fields connecting the gold strips, allowing more interaction with the graphene and a stronger effect of the graphene on the metasurface properties. Second, a thinner HfO\({}_{2}\) layer is used in the optimized structure, which brings the graphene closer to the gold and also increases the electric field intensity within the graphene sheet (see Supplementary Notes 2 and 5). And, third, in the optimized structure we assume a greater range of \(E_{F}\) tunability, which is consistent with the potential properties of the dielectrics, but could not be achieved in our experiments due to our methods of contacting the sample (i.e. wirebonding) which weakened the dielectric strength and restricted the range of \(V_{G}\).
## Conclusion
In conclusion, we have demonstrated a thermal emitter that can continuously change the angle of emission in the mid-IR for a designated frequency. We show that by including a graphene-metal metasurface as a boundary, a delocalized F-P optical mode can be tuned to exhibit resonances with angular and frequency dependencies that depend on the carrier density of graphene, which can be tuned electrostatically. The net result is a surface that has an emissivity that is strongly angular dependent and tunable. \(16^{\circ}\) of thermal emission steering at
Figure 4: (a) Schematic of the structure used to obtain the potentially achievable maximum emission angle steering performance. The bilayer \(\mathrm{Au/SiO_{x}}\) is eliminated and geometry parameters are modified. (b) The angular emission spectra of the optimized device for Fermi levels of \(0\) and \(0.6\,\mathrm{eV}\) at \(\omega=1614\,\mathrm{cm^{-1}}\).
Figure 3: Comparison between simulated and measured emission spectra for various conditions. (a,d) Frequency emission spectra as a function of applied gate voltage at normal incidence. (b,e) Frequency emission spectra as a function of incident angle at a constant applied voltage of \(-560\,\mathrm{V}\). (c,f) The angular emission spectra at the measured (calculated) frequency of \(1508\) (\(1498\)) \(\mathrm{cm^{-1}}\). Experimental measurements were obtained for \(0^{\circ}<\theta<30^{\circ}\) and are mirrored for visual clarity.
6.61 um was demonstrated experimentally, and we outline design strategies that could increase the tunability to almost 60\({}^{\circ}\). This work lays the foundation for next generation beam-steering devices that do not require an external light source, and could be broadly applicable for remote sensing and thermal camouflage applications.
## Methods
### Fabrication of Device
2 um thick, 5 mm x 5 mm SiN\({}_{\mathrm{x}}\) membranes on a 200 um thick Si frame were purchased from Norcada. Metal deposition of the back-reflector consisted of a 2.5 nm chromium adhesion layer and 100 nm of gold. Atomic Layer Deposition (a Fiji G2 ALD) was used to grow a 30 nm film of Al\({}_{2}\)O\({}_{3}\) on the top of the SiN\({}_{\mathrm{x}}\) membrane. Once the Al\({}_{2}\)O\({}_{3}\) was grown, a prepared graphene sheet was transferred on top of the Al\({}_{2}\)O\({}_{3}\) film. Graphene was purchased from Grolltex and was grown on a Cu foil. To remove the foil, first a protective layer of PMMA (950k, A4, MicroChem Corp.) was added on top of the graphene. The Cu foil was etched away with FeCl\({}_{3}\) (CE-100, Transene), then the graphene/PMMA stack was rinsed in a series of deionized water baths until transfer to the prepared membranes. Once transferred, the PMMA was removed by soaking in 60 \({}^{\circ}\)C acetone for 1 h. After the graphene transfer, a 5 nm film of HfO\({}_{2}\) was grown via atomic layer deposition. To prepare the SiN\({}_{\mathrm{x}}\) membranes for the next steps, the Si frame of the sample was glued to a carrier Si chip with PMMA (950k, A8, MicroChem Corp.). The prepared substrate was then coated with a negative tone hydrogen silsesquioxane resist (HSQ, 6%, DisChem Inc.) at 100 nm. The sample was then exposed and patterned using the Elionix ELS G-100, an electron beam lithography tool. After exposure, the samples were developed in MF-321 for 90 s, with a 30 s rinse in DI water and then a 30 s rinse in IPA. The development process converts the exposed HSQ to SiO\({}_{\mathrm{x}}\). For metal deposition of the top, a metal mask was placed above the substrate to create electrically disconnected regions. The deposition consisted of a 2.5 nm chromium adhesion layer and 30 nm of gold. Following these processing steps, the graphene was found to be heavily hole-doped, similar to what has been observed in previous works[45, 37]. Gate-dependent resistivity measurements showed an increase in resistance for positive gate bias, but no maximum resistance was observed that would indicate charge neutrality. These measurements also exhibited hysteresis, consistent with what has been observed elsewhere, and indicative of surface, interface, and substrate charge traps that can be populated with charge as \(V_{G}\) is changed. At high biases, these traps can screen the applied gating field without doping the graphene, leading to deviations from the simple capacitance model that we use to estimate the graphene carrier density for a given \(V_{G}\)[54, 55, 56].
### Thermal Emission Measurements
The emission measurements were performed using a Bruker Vertex 70 FTIR, where thermal emission from a heated sample was used as the light source of the interferometer. The device was mounted on a rotation stage, and thermal emission from the device is collected by the aperture in the FTIR[44]. A carbon-nanotube source was used as our blackbody reference measurement. The finite size of the aperture creates a 3\({}^{\circ}\) acceptance angle, and there is also some uncertainty in the overall angle due to mechanical play in the stage holder and sample tilting within the sample holder. We estimate this uncertainty to be \(\leq\) 3\({}^{\circ}\) based on measurements with an alignment laser reflected off of an unpatterned area of the sample surface.
### Optical Simulations
The frequency-dependent dielectric functions of Al\({}_{2}\)O\({}_{3}\), Cr, Au and SiO\({}_{\mathrm{x}}\) were taken from the Palik data[57]. The dielectric functions of HfO\({}_{2}\) and SiN\({}_{\mathrm{x}}\) were obtained from infrared ellipsometry[31]. The heat-induced dielectric function change of SiN\({}_{\mathrm{x}}\) is corrected through the higher-order F-P resonance peak, which is insensitive to Fermi level modulation (see Supplementary Note 4). The graphene was modeled as a layer with zero thickness, and its optical conductivity was calculated by the Kubo formula [58]. The carrier mobility of graphene is assumed to be 300 cm\({}^{2}\)/Vs, which is comparable to a previously reported value [31]. The reflection/transmission coefficients and absorption spectrum of the proposed structure were calculated by full-wave simulation with the finite element method.
## Acknowledgements
J.S. and V.W.B were supported by the Gordon and Betty Moore Foundation through a Moore Inventors Fellowship. M.F. was supported by Office of Naval Research award N00014-20-1-2356. This work was also supported by the National Research Foundation of Korea (NRF) grants funded by the Ministry of Science, ICT and Future Planning (NRF-2022R1A2C2092095, S.K. and M.S.J.) and by the Ministry of Education (NRF-2022R1I1A1A01065727, S.K.).
## Author Contributions
These authors contributed equally: Joel Siegel, Shinho Kim.
## Conflict of Interests
The authors declare no conflicts of interests. |
2309.01018 | Waste Factor: A New Metric for Evaluating Power Efficiency in any
Cascade | In this paper, we expand upon a new metric called the Waste Factor ($W$), a
mathematical framework used to evaluate power efficiency in cascaded
communication systems, by accounting for power wasted in individual components
along a cascade. We show that the derivation of the Waste Factor, a unifying
metric for defining wasted power along the signal path of any cascade, is
similar to the mathematical approach used by H. Friis in 1944 to develop the
Noise Factor ($F$), which has since served as a unifying metric for quantifying
additive noise power in a cascade. Furthermore, the mathematical formulation of
$W$ can be utilized in artificial intelligence (AI) and machine learning (ML)
design and control for enhanced power efficiency. We consider the power usage
effectiveness (PUE), which is a widely used energy efficiency metric for data
centers, to evaluate $W$ for the data center as a whole. The use of $W$ allows
easy comparison of power efficiency between data centers and their components.
Our study further explores how insertion loss of components in a cascaded
communication system influences $W$ at 28 GHz and 142 GHz along with the data
rate performance, evaluated using the consumption efficiency factor (CEF). We
observe CEF's marked sensitivity, particularly to phase shifter insertion loss
changes. Notably, CEF variations are more prominent in uplink transmissions,
whereas downlink transmissions offer relative CEF stability. Our exploration
also covers the effects of varying User Equipment (UE) and Base Station (BS)
deployment density on CEF in cellular networks. This work underscores the
enhanced energy efficiency at 142 GHz, compared to 28 GHz, as UE and BS numbers
escalate. | Mingjun Ying, Dipankar Shakya, Hitesh Poddar, Theodore S. Rappaport | 2023-09-02T20:58:25Z | http://arxiv.org/abs/2309.01018v3 | # Waste Factor: A New Metric for Evaluating Power Efficiency in any Cascade+
###### Abstract
In this paper, we expand upon a new metric called the Waste Factor (\(W\)), a mathematical framework used to evaluate power efficiency in cascaded communication systems, by accounting for power wasted in individual components along a cascade. We show that the derivation of the Waste Factor, a unifying metric for defining wasted power along the signal path of any cascade, is similar to the mathematical approach used by H. Friis in 1944 to develop the Noise Factor (\(F\)), which has since served as a unifying metric for quantifying additive noise power in a cascade. Furthermore, the mathematical formulation of \(W\) can be utilized in artificial intelligence (AI) and machine learning (ML) design and control for enhanced power efficiency. We consider the power usage effectiveness (PUE), which is a widely used energy efficiency metric for data centers, to evaluate \(W\) for the data center as a whole. The use of \(W\) allows easy comparison of power efficiency between data centers and their components. Our study further explores how insertion loss of components in a cascaded communication system influences \(W\) at 28 GHz and 142 GHz along with the data rate performance, evaluated using the consumption efficiency factor (CEF). We observe CEF's marked sensitivity, particularly to phase shifter insertion loss changes. Notably, CEF variations are more prominent in uplink transmissions, whereas downlink transmissions offer relative CEF stability. Our exploration also covers the effects of varying User Equipment (UE) and Base Station (BS) deployment density on CEF in cellular networks. This work underscores the enhanced energy efficiency at 142 GHz, compared to 28 GHz, as UE and BS numbers escalate.
Waste Factor, Consumption Efficiency Factor, Energy Efficiency, Cascaded System, Data Center, AI/ML.
## I Introduction
As the evolution of telecommunications progresses from 5G towards 6G, there is an escalating demand for energy efficiency in both wired and wireless communication systems. Currently, these systems account for approximately 2-3% of the global energy demand, a figure projected to exceed 20% by 2030 [1, 2]. This rise is primarily due to the forthcoming 5G and 6G networks and edge computing, which promise ultra-wide bandwidths and increased data rates. These advancements, however, intensify the challenge of managing energy efficiency, especially within the context of resource-limited Internet of Things (IoT) [4, 5]. The critical need to reduce energy consumption in wireless networks also stems from the urgency to limit greenhouse gas emissions and mitigate climate change as machine learning (ML) and artificial intelligence (AI) threaten to expand power consumption from their computational burden.
Wasted power is becoming a significant enemy of the planet, echoing a parallel from the past. Over 80 years ago, noise was the primary adversary to wireless communication. It was H. Friis who developed the Noise Factor and Noise Figure in dB, a unifying metric to assess additive noise power in a cascade [6]. Today, Waste Factor (\(W\)) emerges as an analogous tool to assess wasted power in a cascade system, the current adversary to our environment and planet's energy resources.
The advancements in massive MIMO, network slicing, renewable energy-powered BS, and energy harvesting technologies have significantly contributed to reducing energy consumption in 5G networks [7, 8]. Furthermore, artificial intelligence (AI) and machine learning (ML) techniques, including reinforcement learning, may be instrumental in maintaining a balance between quality of service (QoS) and energy consumption [9]. Despite these strides, a glaring gap persists in existing research and design - the lack of a comprehensive theoretical framework to measure and compare power efficiency across diverse wireless system architectures [10, 11, 12].
Waste Factor \(W\) and Consumption Efficiency Factor (CEF) [2, 3, 11, 12] aim to fill this gap. By providing a standardized metric for comparing power consumption and energy efficiency across various system designs, W and CEF can guide engineers and product designers towards more sustainable, energy-efficient solutions and minimal power consumption in future wireless network designs, particularly those operating at sub-THz frequencies [2]. Simulations in [2] also demonstrate that reducing cell size and increasing carrier frequency and bandwidth lead to lower energy expended per bit, confirming the relevance and utility of \(W\) in achieving energy-efficient designs. In this paper, we further extend and apply the theoretical constructs of W and CEF, making the following contributions:
* While the concept of Waste Factor was first introduced in [2] and [3], we provide here for the first time a detailed mathematical derivation of \(W\) as well as intuitive analogies to Noise Factor based on H. Friis original mathematical derivations in [6].
* We extend Waste Factor to generalized communication systems, using the case of energy consumption in data centers as a primary example. We also illustrate the effectiveness of \(W\) through an example comparing the power waste of two data centers.
* We explore the impact of varying component efficiency for a TX and RX cascade, particularly the phase shifter's insertion loss on CEF at 28 GHz and 142 GHz.
* We analyze the influence of user equipment (UE) and base station (BS) geographic density on network CEF.
* While not covered here, we note that \(W\) could be used in AI/ML applications to optimize power efficiency.
The structure of this paper is as follows. Section II derives the Waste Factor, drawing an analogy between its mathematical form and that of the Noise Factor. Section III uses \(W\) to analyze data centers. In Section IV, we apply \(W\) to analyze the energy efficiency of future communication systems. Section V discusses potential future research directions using Waste Factor. Finally, Section VI concludes the paper.
## II Introduction to F and W
In this section we show the duality of two parameters, \(F\) and \(W\), that can be used to evaluate noise and power waste of communication systems, respectively.
### _Noise Factor_
Noise factor (\(F\)) quantifies the degradation of the signal-to-noise ratio (SNR) in a cascade. Specifically, the noise factor \(F\) is defined as the ratio of the input SNR to the output SNR, which is expressed as \(F=SNR_{i}/SNR_{o}\). \(F\) in dB is the noise figure (NF), and a value of 0 dB indicates no added noise and no degradation in SNR along a device or cascade. Friis's formula is widely used to calculate the overall \(F\) of cascaded devices, where each device has its own individual \(F\) and power gain, \(G\). Once the total \(F\) is calculated, it can be used to determine the overall NF of the entire cascade. Based on [6], \(F\) for the cascaded system is
\[F=F_{1}+\frac{(F_{2}-1)}{G_{1}}+\frac{(F_{3}-1)}{G_{1}G_{2}}+\ldots+\frac{(F_ {N}-1)}{\prod_{i=1}^{N-1}G_{i}}, \tag{1}\]
where \(F_{i}\) represents the noise factor of the i-th device, and \(G_{i}\) represents its power gain (linear, not in dB).
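A minimal sketch of Eq. (1) (our illustration, not code from [6]): each stage's excess noise \((F_{i}-1)\) is divided by the total gain that precedes it. The example values at the bottom are assumed purely for demonstration.

```python
import math

def cascaded_noise_factor(stages):
    """Friis's formula, Eq. (1).  stages = [(F_1, G_1), ..., (F_N, G_N)],
    linear (not dB) values, ordered from source to sink."""
    total, preceding_gain = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        total += F if i == 0 else (F - 1.0) / preceding_gain
        preceding_gain *= G
    return total

lin = lambda x_db: 10 ** (x_db / 10)
# Assumed example: LNA (NF 2 dB, gain 20 dB) followed by a mixer (NF 10 dB, gain 0 dB)
F_tot = cascaded_noise_factor([(lin(2), lin(20)), (lin(10), lin(0))])
print(f"cascade NF = {10 * math.log10(F_tot):.2f} dB")   # dominated by the first stage
```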
### _Waste Factor (\(W\))_
The Waste Factor \(W\) characterizes power efficiency of a cascaded system by considering the power wasted by components along a cascade. Akin to \(F\), the power wasted by a device/cascade can also be examined by observing the progressive power waste, based on the output power at each stage, as the signal propagates down the cascade. Such formulation provides an intuitive way to understand power waste at each stage of the cascade and allows \(W\) to compare the power efficiency of two devices/systems through wasted power. For analysis, the power consumed (\(P_{consumed}\)) is split into three principal components [2, 3, 12]:
* Signal path power (\(P_{signal}\)): Power delivered to the device/cascade output (e.g., power amplifier output to matched load).
* Non-signal path power (\(P_{non-signal}\)): Power consumed by devices on the path to facilitate signal transfer in the cascade (e.g., standby power drawn by an amplifier).
* Non-path power (\(P_{non-path}\)): Power consumption of components that do not contribute to the signal and are not along the cascade (e.g., oscillators, displays, etc.).
Thus, we have
\[P_{consumed}=(P_{signal}+P_{non-signal})+P_{non-path}. \tag{2}\]
Fundamentally, \(W\) is defined as the ratio of power consumed by the signal path components (\(P_{consumed,\,path}=P_{signal}+P_{non-signal}\)) to the useful signal power (\(P_{signal,out}\)) delivered along the cascaded communication system (\(W=P_{consumed,\,path}/P_{signal,out}\)) [2]. Since \(W\) is based on the useful signal power output, it is referred to the output.
The formulation of \(W\) for a cascaded system is illustrated through a simple cascade of two devices in Fig. 1, neglecting \(P_{non-path}\) from auxiliary components. Here, we define
\[P_{consumed}=W\times P_{signal,out}. \tag{3}\]
Now, we can define the power consumed at device 1's terminal as:
\[P_{consumed,D1}=W_{1}P_{signal\,1}, \tag{4}\]
Here, \(P_{consumed,D1}\) denotes the total power consumption at the output terminal of device 1. This comprises both the useful signal power transmitted to the subsequent device and the power wasted by device 1 itself. When \(P_{source,out}\) is subtracted, we arrive at the standalone power consumption of device 1.
\[P_{consumed\,1}=W_{1}P_{signal\,1}-P_{source,out}, \tag{5}\]
Similarly, the power consumption of device 2 alone is
\[P_{consumed\,2}=W_{2}P_{signal\,2}-P_{signal\,1}. \tag{6}\]
Intuitively, the total power consumed can be expressed as in (7). This total power consumption is the sum of power consumed by each device and the power input to the system.
\[P_{consumed}=P_{consumed\,1}+P_{consumed\,2}+P_{source,out}. \tag{7}\]
Also, we know that the output of the system is
\[P_{signal,out}=P_{signal\,2}=G_{2}P_{signal\,1}, \tag{8}\]
based on (4)-(8), we have
\[\begin{split} P_{consumed}&=W_{2}P_{signal\,2}+(W _{1}-1)P_{signal\,1}\\ &=(W_{2}+\frac{(W_{1}-1)}{G_{2}})P_{signal,out}.\end{split} \tag{9}\]
Since (9) is equal to (3), we have the Waste Factor for the cascaded system
\[W=(W_{2}+\frac{(W_{1}-1)}{G_{2}}). \tag{10}\]
Based on (10), \(W\) for a cascaded system with \(N\) devices can be generalized to (11). It is noteworthy that the mathematics below bear a striking similarity to the Noise Factor in (1) [6].
Fig. 1: A general cascade communication system composed of two devices.
\[W=\{W_{N}+\frac{(W_{N-1}-1)}{G_{N}}+\frac{(W_{N-2}-1)}{G_{N}G_{N-1}}+\ldots+\frac{ (W_{1}-1)}{\prod_{i=2}^{N}G_{i}}\}. \tag{11}\]
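A sketch of Eq. (11) in the same style (again our illustration, with assumed component values): the only structural change relative to Friis's rule is that each upstream device's excess waste \((W_{i}-1)\) is divided by the gain of the devices that follow it, because \(W\) is referred to the cascade output. The final lines check the three-stage source-channel-sink case against Eq. (12).

```python
def cascaded_waste_factor(stages):
    """Eq. (11).  stages = [(W_1, G_1), ..., (W_N, G_N)], linear values,
    ordered from source to sink; W is referred to the cascade output."""
    total, downstream_gain = 0.0, 1.0
    for i, (W, G) in enumerate(reversed(stages)):
        total += W if i == 0 else (W - 1.0) / downstream_gain
        downstream_gain *= G
    return total

# Assumed example: transmit chain (W = 2), passive channel (W = 1/G), receiver (W = 3)
W_src, G_src = 2.0, 100.0
G_ch = 0.01                       # 20 dB channel loss, so W_channel = 1/G_ch
W_rx, G_rx = 3.0, 1000.0
W = cascaded_waste_factor([(W_src, G_src), (1.0 / G_ch, G_ch), (W_rx, G_rx)])
W_eq12 = W_rx + (1.0 / G_rx) * (1.0 / G_ch - 1.0) + (W_src - 1.0) / (G_rx * G_ch)
print(W, W_eq12)                  # the two agree (3.199 for these values)
```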
### _Analogies between \(F\) and \(W\)_
The analogous mathematical formulation of \(F\) and \(W\) is immediately visible from (1) and (11). There are, however, important characteristics of each metric to keep in mind.
As the noise figure is a measure of the degradation of the SNR caused by the components in a cascaded system, it quantifies the amount of noise added to the signal at the input of the cascade. Therefore, \(F\) is referred to the cascade input and (1) increases from device 1 to N (source to sink). On the other hand, \(W\) is a measure of the power efficiency of a cascaded system. It quantifies the amount of power consumed by the cascade to transmit or receive a signal. Since \(W\) is related to the power consumed by the cascade, it is referred to the cascade output and (11) increases from device N to 1 (sink to source).
A higher \(W\) intuitively signifies more power wasted. The value of \(W\) is always equal to or greater than 1, with \(W=1\) signifying that all power supplied to a cascaded component or network is fully utilized in the signal output (optimal, no power wasted). Conversely, \(W\rightarrow\infty\) indicates that no power is contributed to the signal output, and all power is squandered (e.g., a perfect dummy load or an entirely lossy channel). The aspects of the \(F\) and \(W\) in communication systems are comprehensively summarized in Table I.
In conclusion, both \(F\) and \(W\) are useful in the analysis of communication systems. \(F\) is a well-established metric that provides a measure of the degradation of the signal-to-noise ratio, and \(W\) is a new metric that provides a measure of the power efficiency of a system. Both metrics are important for communication systems. With the increasing importance of energy efficiency in the industry, \(W\) can become a vital metric for enabling green communications.
## III Adoption of Waste Factor
In communication systems, effective use of \(W\) is vital for more energy-efficient solutions. Specifically, the potential of \(W\) to apply across communication systems and shed light on energy consumption needs exploration. We extend the theoretical framework to all data systems, including data centers, analyzing superposition of power (2) to understand energy consumption, akin to the consumption factor analysis in [11, 12]. This method helps differentiate power for information conveyance from other processes off the cascaded path.
\[\begin{split} W&=W_{sink}+\frac{1}{G_{RX}}\left( \frac{1}{G_{channel}}-1\right)\\ &+\frac{1}{G_{RX}G_{channel}}\left(W_{source}-1\right).\end{split} \tag{12}\]
Using \(W\), we aim to measure power wastage in devices on a cascaded signal path, assisting engineers in identifying power wastage hotspots for improved efficiency, potentially with AI/ML. Section III examines the use of waste factor concepts for "ancillary functions" in data centers, like cooling, lighting, and non-path components. Based on (11), we generalize \(W\) in (12) for wireless systems with a source, sink, and communication channel, as seen in [12].
### _Generalized Waste Factor for data center_
A generalized formulation of \(W\) can be implemented by taking a data center as an example. The power usage effectiveness (PUE) is a measure of energy efficiency in data centers, which is the ratio between the amount of energy consumed by computing equipment for data operations to the total overhead energy used for supporting equipment, including cooling [14].
\[PUE=\frac{\text{Computing equipment energy}}{\text{Auxiliary equipment energy}}. \tag{13}\]
Using (12) we elucidate our approach to extend the existing mathematical framework for \(W\) to all energy-consuming systems within the communication process, ultimately establishing a generalized waste factor that serves as a metric for system energy efficiency. By doing so, we strive to enhance the applicability of \(W\) to a wide range of power consumers, including data centers and other complex systems, in order to quantify energy efficiency and reduce overall power consumption. Here we study a data center as an example to explain how \(W\) can be applied to a generalized system.
In the process of data transmission within a data center, the major power consumption is attributed to servers, network switches and computing equipment, while other power consumption is associated with cooling systems, power distribution units (PDUs), and other auxiliary equipment. According to Barroso and Holzle's findings [13], server and networking equipment account for about 60-70% of the overall power consumption in a data center. Cooling systems contribute about 30-40% of the total power consumption, while the remainder is consumed by PDUs and other auxiliary equipment.
Total data center power consumption can be modeled as:
\[P_{consumed}=P_{info}+P_{non-info}+P_{aux}, \tag{14}\]
where \(P_{info}\) is the sum of the powers of all components (e.g. routers, switches, processors, and other network equipment that carry or store information) used for carrying information in the system, \(P_{non-info}\) is the power used by other components not directly involved in data transmission (e.g. servers, storage devices, firewalls), and \(P_{aux}\) is the power used by the cooling systems, PDUs, and other auxiliary equipment apart from the data transmission and access/storage. Based on [14], the PUE is a common measure of energy efficiency in data centers. PUE assesses the ratio between the amount of energy consumed by computing equipment and the total overhead energy used for supporting equipment, including cooling. Using (14), we notice that
\[PUE=\frac{P_{info}+P_{non-info}}{P_{aux}}. \tag{15}\]
Let us consider the power efficiency for all data transmissions in the data center, which we denote by \(\eta\):
\[\eta=\frac{P_{info}}{P_{info}+P_{non-info}}. \tag{16}\]
The power consumption for data processing can be derived considering the data center as a single component
\[P_{consumed}=P_{info}\overline{W}+P_{aux}, \tag{17}\]
where \(\overline{W}\) represents the Waste Factor for the data center, and
using (15), (16), and (17), we find
\[\overline{W}=\eta^{-1}=\frac{P_{aux}PUE}{P_{info}}. \tag{18}\]
Based on (10), here we consider a communication scenario between two data centers. Also, \(W\) for the communication channel (e.g. cable or connector loss) is obtained by treating it as a passive attenuator with less than unity gain [2, 3], and we then determine \(W\) as
\[\begin{split} W&=\overline{W}_{sink}+\frac{1}{G_{ RX}}\left(\frac{1}{G_{channel}}-1\right)\\ &+\frac{1}{G_{RX}G_{channel}}\left(\overline{W}_{source}-1 \right).\end{split} \tag{19}\]
### _Comparison of data centers power waste using \(W\)_
Consider two data centers, Data Center A and Data Center B, each with different power consumption levels. Despite its wide adoption, the PUE cannot serve as a universal standard for energy efficiency comparison. PUE measures a data center's energy efficiency by comparing computing equipment power to auxiliary equipment power. However, it does not distinguish between power used for actual data transmission and other uses. When two data centers have the same PUE, we cannot tell which is more efficient at data transmission. However, \(W\) provides a more detailed insight, differentiating between power that carries information and power that does not. So, for a nuanced energy efficiency comparison, especially between data centers with identical PUEs, incorporating an additional metric like \(W\) is essential. To make the point that \(W\) gives more insight than PUE, we consider two data centers with equal PUE but different architectures. For Data Center A, the power allocation is as follows: \(P_{infoA}=140\) kWh for information transmission, \(P_{non-infoA}=40\) kWh for non-data transmission components, and \(P_{auxA}=150\) kWh for auxiliary equipment. In comparison, Data Center B allocates \(P_{infoB}=60\) kWh of power for information transmission, \(P_{non-infoB}=30\) kWh for non-data transmission components, and \(P_{auxB}=75\) kWh for auxiliary equipment.
Here, we assume that Data Center A is a larger facility with more equipment, hence a higher total energy consumption. Conversely, Data Center B is smaller and uses less total energy. If we were to simply compare their total energy use, it might seem that Data Center B is more efficient. But the Waste Factor paints a different picture. First, we calculate the power efficiency of data transmission, \(\eta\), defined in (16), for both data centers:
\[\begin{split}\eta_{A}&=\frac{P_{infoA}}{P_{infoA}+P _{non-infoA}}=\frac{140}{140+40}\approx 0.778,\\ \eta_{B}&=\frac{P_{infoB}}{P_{infoB}+P_{non-infoB}}= \frac{60}{60+30}\approx 0.667,\end{split}\]
where \(\eta_{A}\) and \(\eta_{B}\) denote the power efficiency of overall data transmission in Data Center A and Data Center B, respectively. Next, we calculate the PUE for both data centers using (15):
\[\begin{split} PUE_{A}&=\frac{P_{infoA}+P_{non-infoA} }{P_{auxA}}=\frac{140+40}{150}=1.2,\\ PUE_{B}&=\frac{P_{infoB}+P_{non-infoB}}{P_{auxB}} =\frac{60+30}{75}=1.2,\end{split}\]
where \(PUE_{A}\) and \(PUE_{B}\) denote the PUE of Data Center A and Data Center B, respectively. Finally, the Waste Factor (\(\overline{W}\)) for both data centers is found using (18):
\[\overline{W}_{A}=\frac{P_{auxA}PUE_{A}}{P_{infoA}}=\frac{150\times 1.2}{140} \approx 1.286,\]
\[\overline{W}_{B}=\frac{P_{auxB}PUE_{B}}{P_{infoB}}=\frac{75\times 1.2}{60}=1.5,\]
The Waste Factors of the two data centers (\(\overline{W}_{A}\) and \(\overline{W}_{B}\)) allow us to identify the more energy-efficient one: Data Center A is more energy efficient due to its lower Waste Factor (\(\overline{W}_{A}<\overline{W}_{B}\)). The PUE metric, despite being a common measure, only accounts for the energy consumed by computing and supporting equipment as a whole, neglecting variations in operational conditions, equipment type, and workload characteristics. This necessitates more comprehensive metrics, such as the Waste Factor. By introducing the generalized \(W\) as an energy efficiency metric, we can calculate the \(W\) for complex systems like data centers using the proposed formula (19). This approach extends the applicability of the Waste Factor to a variety of power consumers, including data centers and other systems with significant energy consumption.
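The comparison above can be reproduced with a few lines of code. The following sketch simply evaluates (15), (16), and (18) for the two hypothetical data centers; the function name and the kWh figures are those of the worked example, not part of any standard tool.

```python
def waste_factor_metrics(p_info, p_non_info, p_aux):
    """Power efficiency, PUE, and Waste Factor of a data center,
    following Eqs. (16), (15), and (18)."""
    eta = p_info / (p_info + p_non_info)   # Eq. (16)
    pue = (p_info + p_non_info) / p_aux    # Eq. (15), as defined in this work
    w_bar = p_aux * pue / p_info           # Eq. (18), equal to 1/eta
    return eta, pue, w_bar

# Data Center A and Data Center B from the example above (powers in kWh)
print(waste_factor_metrics(140, 40, 150))  # approx. (0.778, 1.2, 1.286)
print(waste_factor_metrics(60, 30, 75))    # approx. (0.667, 1.2, 1.5)
```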
## IV Evaluating performance with CEF
The CEF is defined as the ratio of the maximum data rate delivered by the communication system to the total power consumed. Using \(W\), the CEF can be derived as [2, 12]:
\[\text{CEF}=\frac{R_{max}}{W\times P_{signal,out}}, \tag{20}\]
where \(R\) is the data rate in bps, and \(R_{max}\) is the maximum data rate supported by the communication system. To analyze the
impact of component efficacy on \(W\), and ultimately on the CEF, here we assume the transmitter and receiver structure illustrated in Fig. 2, with analog beamforming at both TX and RX. The simulation is conducted for mmWave (28 GHz) and sub-THz (142 GHz) wireless communication systems; the detailed simulation parameters for the comparison of the two communication systems are summarized in Table II, and the selection of these parameters is mainly based on [2]. In the following simulations, we employ the CEF owing to its comprehensive encapsulation of system performance, encompassing aspects of both power consumption and data rate. This metric not only assists in the optimization of system efficiency, but also affords a convenient benchmark for comparison across disparate systems or configurations.
We emphasize here that the low-noise amplifier (LNA) is not considered an on-path component for the power consumption analysis. The reason is that the DC power consumed by the LNA is independent of the input signal power. In other words, the gain and efficiency of the LNA remain relatively constant within its linear operating range and do not vary significantly with input power. Since the input power to the LNA is typically weak, coming from the RX antenna, the LNA's gain of over 30 dB amplifies the signal to a level that can be processed by the subsequent circuitry. In our analysis, we classify the power consumed by the LNA differently. Since its consumption is consistent, we categorize it as non-path power when calculating the system's total power consumption. Separately, for an accurate representation of its operational efficiency, we factor in the LNA's Figure of Merit (FoM). This approach allows us to estimate the system's overall power consumption accurately, keeping our model reasonably straightforward.
### _The Impact of Different Component Efficiency on CEF_
Fig. 3(a) considers variation in the phase shifter (PS) insertion loss as an example, and the results indicate the sensitivity of the CEF to the PS performance. Moreover, we observed that the changes in CEF with varying PS insertion loss were more pronounced in both the 28 GHz and 142 GHz uplink transmissions, while the changes in the downlink transmissions were relatively stable in subsequent regions. Furthermore, the CEF for both uplink and downlink transmissions at 142 GHz was higher than that at 28 GHz, implying that sub-THz systems require less RF power to achieve the same SNR as mmWave systems due to greater double directional antenna gains.
### _The Impact of Different Number of UE and BS on CEF_
Expanding on the work in [2], we explore the impact of varying UE and BS densification on CEF through simulation. We observed from Fig. 3(b) that the 142 GHz scenario yields significantly larger uplink CEF, as compared to other scenarios, under varying UE numbers. Furthermore, with the increase in UE numbers, the rate of reduction in uplink CEF for 142 GHz is evidently lower than in the other scenarios, which indicates that sub-THz communication exhibits good energy efficiency.
From Figure 3(c), the CEF trend for 28 GHz uplink and downlink transmissions is notably similar and significantly higher than that at 142 GHz, in line with findings in [2]. Additionally, an increase in BS density may enhance CEF during downlink transmissions at 142 GHz. These insights not only highlight strategies for optimizing transmission capacity across different frequency bands but also suggest AI/ML driven approaches can be employed to fine-tune parameters like BS densification. This can further enhance signal reliability, reduce interference, and inform spectrum allocation policies for optimal bandwidth utilization, ensuring system integrity.
## V Conclusions and Future Directions
In this study, we introduced \(W\) and showed its use in evaluating power efficiency across different wireless architectures. \(W\), when referred to the output of a cascaded system, is similar to \(F\), and offers a simple mathematical formulation for assessing wasted power. Such a formulation lends itself well to theory and to AI/ML applications for real-time control to improve power efficiencies in communication systems. Using \(W\), we assessed the power efficiency of data centers through the PUE and compared them. Our analysis of a cascaded BS-UE communication system showed how system parameters impact \(W\) and data rate performance at 28 GHz and 142 GHz. We found that changes in phase shifter insertion loss affected the CEF, especially in uplink transmissions. By examining different UE and BS densifications, we found that energy efficiency improved at 142 GHz, which can help in making better decisions for spectrum allocation and bandwidth use.
Looking ahead, as IoT devices become more common in wireless systems, there will be a greater need for energy-efficient solutions. RF energy harvesting at mmWave and sub-THz frequencies is an emerging area. With the growth of small cells and the introduction of Reconfigurable Intelligent Surfaces in 6G, designing energy-efficient systems will become even more important. \(W\) can be a useful tool in these designs. This paper provides a framework for understanding energy efficiency in communication systems. It offers valuable insights for improving next-generation communications and points to areas for future research to achieve greener communication methods, particularly through AI/ML and through more power-efficient designs of computing systems.
Fig. 2: Architecture of the TX and RX considered for power analysis. |
2302.14001 | Charmonium production in a thermalizing heat bath | Using the Remler formalism for the creation of composed particles, we study
charmonium production both in thermalized and thermalizing boxes, which contain
charm and anticharm quarks. The thermalizing box studies include the lowering
of the box temperature, the spatial diffusion of charm and anticharm quarks,
which are initially confined in the central region, as well as the combination
of both, what imitates heavy-ion collisions. Comparing numerical and analytical
results we demonstrate that the rate of the original Remler formalism has to be
supplemented by two rates to obtain, for $t\to \infty$, results, which are
consistent with the statistical model predictions: i) a rate, which takes into
account the temperature dependence of the Wigner density of the quarkonium
during the expansion and, in the case that a heavy quark potential is not
implemented in the Monte Carlo approach, ii) a rate which comes from the change
of the relative distance between the heavy quark and antiquark. These results
provide the basis for future applications of the Remler formalism to heavy-ion
collisions. | Taesoo Song, Joerg Aichelin, Elena Bratkovskaya | 2023-02-27T17:44:39Z | http://arxiv.org/abs/2302.14001v2 | # Charmonium production in a thermalizing heat bath
###### Abstract
Using the Remler formalism for the creation of composed particles, we study charmonium production both in thermalized and thermalizing boxes, which contain charm and anticharm quarks. The thermalizing box studies include the lowering of the box temperature, the spatial diffusion of charm and anticharm quarks, which are initially confined in the central region, as well as the combination of both, which imitates heavy-ion collisions. Comparing numerical and analytical results we demonstrate that the rate of the original Remler formalism has to be supplemented by two rates to obtain, for \(t\to\infty\), results, which are consistent with the statistical model predictions: i) a rate, which takes into account the temperature dependence of the Wigner density of the quarkonium during the expansion and, in the case that a heavy quark potential is not implemented in the Monte Carlo approach, ii) a rate which comes from the change of the relative distance between the heavy quark and antiquark. These results provide the basis for future applications of the Remler formalism to heavy-ion collisions.
## I Introduction
Quarkonium is a bound state of a heavy quark and its antiquark. Since the object is flavorless, it is often called hidden heavy flavor meson. Several decades ago Matsui and Satz suggested the suppression of quarkonium production in heavy-ion collisions as a signature for the formation of a quark-gluon plasma (QGP), because the color screening, which exists only in a deconfined phase, prevents the heavy quark pair from forming a bound state [1].
Quarkonium suppression was indeed later observed at the Super Proton Synchrotron (SPS) and the Relativistic heavy-ion Collider (RHIC) [2; 3]. As the collision energy increases, however, more and more heavy quarks are produced and the possibility of regeneration of quarkonia emerges also because the \(c\) and \(\bar{c}\) from different primary vertices may form a quarkonium. In fact, the nuclear modification factor of \(J/\psi\), which measures the normalized ratio of \(J/\psi\) produced in heavy-ion reactions as compared to pp collisions, is larger at the Large Hadron Collider (LHC) than at RHIC and larger in mid-rapidity than in forward and backward rapidities, though the temperature is higher at LHC and in the mid-rapidity region [4]. Therefore the study of quarkonium production in heavy-ion collisions is not simply focused on the dissociation in a hot dense nuclear matter but also on regeneration in a QGP [5; 6; 7; 8] or when the QGP hadronizes.
In the seventies Remler devised a formalism to study the production of composite particles in heavy-ion collisions by using the Wigner representation of density operators [9; 10; 11], which was successfully applied to deuteron production in heavy-ion collisions [9; 12].
Recently an attempt has been made to use the Remler formalism to study quarkonium production in heavy-ion collisions [13; 14] as well as in p+p collisions [15]. A distinguishing feature of this approach is that the temperature dependence (and therefore, in a QGP, the time dependence) of the Wigner density of the quarkonium is taken into account. Such a dependence is predicted by lattice gauge calculations [16], which show that below the dissociation temperature (above which a \(J/\psi\) is not stable anymore) the root-mean-square (rms) radius of a \(J/\psi\) depends on the temperature.
In this study we apply the Remler formalism for quarkonium production in a thermal box by using Monte-Carlo methods. Contrary to simulations of heavy-ion collisions, box simulations have the advantages that everything is controllable and analytic solutions are available. The equilibration of the quarkonia in the box is achieved by scattering with virtual particles at a given temperature. The asymptotic distribution can then be compared with statistical model predictions. The statistical model, which has successfully described the particle production in heavy-ion collisions, provides reliable solutions for the quarkonium production [17; 18].
We first briefly review the Remler formalism in Sec. II and present the solutions for quarkonium production in a thermalized box in Sec. III. In the next section we carry out box simulations for four different initial conditions and present our results. In Sec. V the necessity of an additional term, which is responsible for spatial diffusion,
is explained. Finally, a summary is given in Sec. VI.
## II Remler formalism
In the Remler formalism the multiplicity of a quarkonium state \(\Phi\), \(P_{\Phi}\), in a \(N\)-body system, composed of heavy (anti)quarks, is asymptotically (for \(t\to\infty\)) obtained by
\[P_{\Phi}(t\to\infty)={\rm Tr}[\rho_{\Phi}\rho_{N}(t\to\infty)], \tag{1}\]
where \(\rho_{\Phi}\) (which is assumed to be time independent) and \(\rho_{N}\) are, respectively, the density operator of the quarkonium and of the \(N\) heavy (anti)quarks.
In practice we cannot solve the time evolution of the quantal N-body density matrix without approximations. In the past it turned out that very satisfying results are obtained if one does not study the time evolution of the density matrix itself but that of the Wigner density distribution, the Fourier transform of the density matrix, and approximates, guided by the mathematical work on the solution of the quantal Vlasov equation, the quantal Wigner density distribution by an ensemble of classical phase space distributions of point-like particles. Averaging over this ensemble, one can calculate observables, which agree remarkably well with the experimental results. This procedure is the background of the Boltzmann-Uehling-Uhlenbeck (BUU) (cf. [19; 20]) and the Vlasov-Uehling-Uhlenbeck (VUU) [21] approaches, which are widely used to describe the results of heavy-ion collisions with center-of-mass energies between a few GeV and several TeV (cf. [7; 22; 23]).
In these approaches the particles scatter and move on curved trajectories due to a mean potential, created by the fellow particles. If this potential is absent or neglected one talks about a cascade approach. If heavy (anti)quarks are described in the cascade mode, where only scattering and free streaming but no potential interaction is present, they will have diverging trajectories. Therefore, even if the heavy (anti)quarks are initially confined to a space region, asymptotically we will find
\[\lim_{t\to\infty}P_{\Phi}=0, \tag{2}\]
because two- and more-body correlations are lost in this approach. To overcome this problem one introduces a rate
\[\Gamma=\frac{dP_{\Phi}}{dt}={\rm Tr}\!\left(\frac{d\rho_{\Phi}}{ dt}\rho_{N}\right)+{\rm Tr}\!\left(\rho_{\Phi}\frac{d\rho_{N}}{dt}\right) \tag{3}\] \[={\rm Tr}\!\left(\frac{d\rho_{\Phi}}{dt}\rho_{N}\right)-i{\rm Tr} \!\left(\rho_{\Phi}[H,\rho_{N}]\right)\equiv\Gamma_{\rm local}+\Gamma_{\rm coll}.\]
The Hamiltonian is decomposed into
\[H = \sum_{i}K_{i}+\sum_{i<j}V_{ij}\] \[= H_{1,2}+H_{N-2}+\sum_{i\geq 3}(V_{1i}+V_{2i}),\]
where \(K_{i}\) and \(V_{ij}\) are, respectively, the kinetic and interaction terms and
\[H_{1,2} = K_{1}+K_{2}+V_{12},\] \[H_{N-2} = \sum_{i\geq 3}K_{i}+\sum_{i>j\geq 3}V_{ij}. \tag{5}\]
Using the cyclic property of traces,
\[\Gamma_{\rm coll}=-i{\rm Tr}\!\left(\rho_{\Phi}[H,\rho_{N}]\right)=i{\rm Tr} \!\left(\rho_{N}[H,\rho_{\Phi}]\right)\!, \tag{6}\]
and supposing for simplicity that \(\Phi\) contains the particles 1 and 2 we obtain
\[[H_{N-2},\rho_{\Phi}]=0,\ \ \ \ \ [H_{1,2},\rho_{\Phi}]=0, \tag{7}\]
because \(H_{N-2}\) does not affect \(\rho_{\Phi}\) and \(\rho_{\Phi}\) is an eigenfunction of \(H_{1,2}\). The collision term in Eq. (3) is then simplified to [12; 13]
\[\Gamma_{\rm coll}=-i\sum_{i\geq 3}{\rm Tr}\!\left(\rho_{\Phi}[V_{1i}+V_{2i},\rho_ {N}]\right)\!. \tag{8}\]
For an S-state the density operator of the quarkonium, expressed in Wigner representation, is approximated by
\[\rho_{\Phi}\to W_{S}(r,p)=8\exp\bigg{[}-\frac{r^{2}}{\sigma^{2}}-\sigma^{2}p^ {2}\bigg{]}, \tag{9}\]
where \(r\) and \(p\) are, respectively, relative distance and relative momentum in the center-of-mass frame. The width \(\sigma\) is given by the rms radius of the quarkonium.
The classical phase space distribution of point-like particles can be expressed as
\[\rho_{N}\approx h^{3N}\prod_{i=1}^{N}\delta(r_{i}-r_{i}^{*}(t))\delta(p_{i}-p_{i}^{*}(t)) \tag{10}\]
where \(r_{i}^{*}(t)\) and \(p_{i}^{*}(t)\) is the phase space trajectory of particle \(i\). The time derivative of the density matrix is then given by
\[\begin{split}\frac{d\rho_{N}}{dt}&\approx\sum_{i}v_{i}\cdot\nabla_{r}\rho_{N}\\ &+\sum_{i>j}\sum_{\nu}\delta(t-t_{ij}(\nu))\{\rho_{N}(t+\varepsilon)-\rho_{N}(t-\varepsilon)\},\end{split} \tag{11}\]
where \(t_{ij}(\nu)\) is the time of the \(\nu\)-th collision between the particles \(i\) and \(j\). The first term implies free streaming between instant scatterings, which are described by the second term. The second term in Eq. (11) is equivalent to Eq. (8). Therefore,
\[\begin{split}\Gamma_{\rm coll}(t)&\approx\sum_{i=1,2}\sum_{j\geq 3}\sum_{\nu}\delta(t-t_{ij}(\nu))\\ &\times\int d^{3}r_{1}d^{3}p_{1}...d^{3}r_{N}d^{3}p_{N}(2\pi)^{3N}\\ &\times\rho_{\Phi}(r_{1},p_{1};r_{2},p_{2})\{\rho_{N}(t+\varepsilon)-\rho_{N}(t-\varepsilon)\},\end{split} \tag{12}\]
and we obtain for the multiplicity at time t'
\[P_{\phi}(t^{\prime})-P_{\phi}(0)\approx\int_{0}^{t^{\prime}}dt\{\Gamma_{\rm local }(t)+\Gamma_{\rm coll}(t)\}. \tag{13}\]
We note that \(\Gamma_{\rm local}(t)\) contributes only if \(\sigma\) in Eq. (9) changes with time [13].
## III Quarkonium in thermal box
The multiplicity of a quarkonium \(\Phi\) can be obtained in the coalescence approach by projecting the phase space distribution of the charmed quarks onto the Wigner density of the \(\Phi\) state [24; 25]
\[N_{\Phi}=\frac{d}{d_{1}d_{2}}\int\frac{d^{3}p_{1}d^{3}p_{2}d^{3} r_{1}d^{3}r_{2}}{(2\pi)^{6}} \tag{14}\] \[\times f_{Q}(r_{1},p_{1})f_{\bar{Q}}(r_{2},p_{2})W_{S}(r,p)\] \[=\frac{d}{d_{1}d_{2}}\int\frac{d^{3}Pd^{3}pd^{3}Rd^{3}r}{(2\pi)^{6}}\] \[\times f_{Q}(r_{1},p_{1})f_{\bar{Q}}(r_{2},p_{2})W_{S}(r,p),\]
where \(f_{Q}(r_{1},p_{1})\) and \(f_{\bar{Q}}(r_{2},p_{2})\) are, respectively, the heavy quark (Q) and heavy antiquark (\(\bar{Q}\)) distribution functions, \(r=r_{1}-r_{2}\), \(R=(r_{1}+r_{2})/2\), \(p=(p_{1}-p_{2})/2\) and \(P=p_{1}+p_{2}\), and \(d_{1}\), \(d_{2}\) and \(d\) are, respectively, the spin-color degeneracies of \(Q\), \(\bar{Q}\) and the quarkonium. \(r\) and \(p\) in the Wigner functions are the distance and the relative momentum of the heavy quark pair in their center-of-mass frame. Since we are studying heavy quarks with a mass much larger than the temperature, we can safely use the Galilean transformation instead of the Lorentz transformation.
We assume a uniform distribution of \(Q\) and \(\bar{Q}\) in space, so that Eq. (14) reduces to
\[(2\pi)^{3}\frac{dN_{S}}{Vd^{3}P}=\frac{d}{\pi^{3}d_{1}d_{2}}\int d ^{3}pd^{3}rf_{Q}(p_{1})f_{\bar{Q}}(p_{2})e^{-\frac{r^{2}}{\sigma^{2}}-\sigma^{ 2}p^{2}}\] \[=\bigg{(}\frac{\sigma^{2}}{\pi}\bigg{)}^{3/2}\frac{d}{d_{1}d_{2}} \int d^{3}pf_{Q}(p_{1})f_{\bar{Q}}(p_{2})e^{-\sigma^{2}p^{2}}, \tag{15}\]
because
\[\int d^{3}re^{-r^{2}/\sigma^{2}}=2\pi\sigma^{3}\Gamma(3/2)=(\pi\sigma^{2})^{3 /2}. \tag{16}\]
In the nonrelativistic (or heavy quark) limit one can take a Boltzmann distribution for \(f_{Q}(p_{1})\) and \(f_{\bar{Q}}(p_{2})\)
\[\frac{1}{d_{1}d_{2}}f_{Q}(p_{1})f_{\bar{Q}}(p_{2})e^{-\sigma^{2} p^{2}}\approx e^{-E_{1}/T-E_{2}/T-\sigma^{2}p^{2}}\] \[\approx\exp\bigg{[}-\bigg{(}2m_{Q}+\frac{p_{1}^{2}}{2m_{Q}}+ \frac{p_{2}^{2}}{2m_{Q}}\bigg{)}\bigg{/}T-\sigma^{2}p^{2}\bigg{]}\] \[=\exp\bigg{[}-\bigg{(}2m_{Q}+\frac{P^{2}/2+2p^{2}}{2m_{Q}}\bigg{)} \bigg{/}T-\sigma^{2}p^{2}\bigg{]}. \tag{17}\]
Substituting Eq. (17) into Eq. (15) we obtain
\[(2\pi)^{3}\frac{dN_{S}}{Vd^{3}P}=d\exp\bigg{[}-\bigg{(}M+\frac{P ^{2}}{2M}\bigg{)}\bigg{/}T\bigg{]}\] \[\times\bigg{(}\frac{\sigma^{2}}{\pi}\bigg{)}^{3/2}\int d^{3}pe^{- \{\sigma^{2}+1/(m_{Q}T)\}p^{2}}\] \[=d\ e^{-E/T}\bigg{(}\frac{\sigma^{2}}{\sigma^{2}+1/(m_{Q}T)} \bigg{)}^{3/2} \tag{18}\]
where \(M=2m_{Q}\). Assuming \(m_{Q}\gg 1/(\sigma^{2}T)\) we find,
\[(2\pi)^{3}\frac{dN_{S}}{Vd^{3}P}\approx d\ e^{-E/T}. \tag{19}\]
Since \(\sigma^{2}=8/3\langle r^{2}\rangle\) with \(r\) being the quarkonium radius [24; 25], the assumption is justified if
\[m_{Q}\gg\frac{3}{8\langle r^{2}\rangle T}. \tag{20}\]
It is hence valid even close to the critical temperature \(T_{c}\), supposing \(\sqrt{\langle r^{2}\rangle}\sim 0.5\) fm. We note from Eq. (18) that the charmonium abundance at \(T=200\) MeV is about 77 % of the statistical model abundance for \(m_{Q}=1.5\) GeV.
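This 77 % figure can be checked directly from Eq. (18). The short sketch below is only an illustrative numerical check; the variable names and the unit conversion via \(\hbar c\) are our own bookkeeping, not part of the simulation code.

```python
import numpy as np

HBARC = 0.1973          # GeV fm, used to convert fm to GeV^-1 (hbar = c = 1)

m_Q   = 1.5             # charm quark mass [GeV]
T     = 0.200           # temperature [GeV]
r_rms = 0.5             # charmonium rms radius [fm]

sigma2_fm2  = (8.0 / 3.0) * r_rms**2      # sigma^2 = 8/3 <r^2>, in fm^2
sigma2_GeV2 = sigma2_fm2 / HBARC**2       # same width in GeV^-2

# Suppression factor of Eq. (18) relative to the statistical model, Eq. (19)
ratio = (sigma2_GeV2 / (sigma2_GeV2 + 1.0 / (m_Q * T))) ** 1.5
print(ratio)            # approx. 0.77
```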
Therefore we expect that in a heat bath the statistical model and the coalescence approach yield a similar multiplicity. Strictly speaking, there must be an attractive force between the heavy quark and the heavy antiquark to form a quarkonium and the mass of the quarkonium is therefore less than twice the heavy quark mass. In this study, however, we assume that the binding energy is small and therefore the \(J/\psi\) is weakly bound in a QGP [26; 27; 28]. The inclusion of heavy quark potential in the Remler formalism [13] will be discussed in a future publication.
The coalescence model of Eq. (14) is closely related to Eq. (1) [24], since
\[P_{\Phi}={\rm Tr}[\rho_{\Phi}\rho_{N}]={\rm Tr}\bigg{(}|\Phi\rangle \langle\Phi|\] \[\times|Q_{1}\bar{Q}_{2}...Q_{N-1}\bar{Q}_{N}\rangle_{i}\rho_{ij} \langle Q_{1}\bar{Q}_{2}...Q_{N-1}\bar{Q}_{N}|_{j}\bigg{)}\] \[=\bigg{|}\langle\Phi|Q_{1}\bar{Q}_{2}...Q_{N-1}\bar{Q}_{N}\rangle_{ i}\bigg{|}^{2}\rho_{ii}, \tag{21}\]
where \(\rho_{ij}\) is the density matrix element of the \(N-\)body density matrix \(\rho_{N}\) and the eigenstates \(|\Phi\rangle\) are taken for a basis of the \(c\bar{c}\) states.
The Remler formalism starts from the same equation (1). Therefore we can test this formula in a thermal box, which is completely controllable and for which explicit and analytical solutions of Eq. (19) can be obtained. That the numerical realization of the Remler formalism gives the correct result in a box is a prerequisite for its application in numerical simulation of heavy-ion collisions.
## IV Box Simulations
In this section the Remler formalism is tested in a box in which the heavy (anti)quarks are in thermal equilibrium as well as for three scenarios in which their initial momentum distribution and/or their initial spatial distribution does not correspond to the equilibrium distribution.
### static thermal box
We prepare a box of size \(100^{3}\) fm\({}^{3}\) in which we place charm quarks and charm antiquarks in thermal equilibrium at \(T=200\) MeV. Charm (anti)quarks scatter off artificial partons which have a thermal momentum of \(T=200\) MeV and are not affected by the scattering. The interaction rate is fixed to 1.0 c/fm. To remove boundary effects, we extended the box by 3 fm in each direction but for the analysis we exclude the extended volume.
Figure 1 compares the charmonium multiplicity as a function of time from statistical model calculations, assuming that the quarkonium mass is twice the heavy quark mass, as in Eq. (19), from Eq. (14) and from Eq. (13), which are named in the figure "statistical model", "Wigner density" and "time integration of \(\Gamma\)", respectively. The charm quark mass is assumed to be 1.5 GeV and we take a charmonium radius of 0.5 fm. The results are an ensemble average of 200 events. The colored band indicates the statistical error. One can see a slight difference between the statistical model and the Wigner projection, which originates from the approximations in Eqs. (17) and (19).
One can see that the time integration of \(\Gamma_{\rm coll}\) does not deviate from the dashed and solid blue lines within the statistical error. We will now explain the reason.
We define the Wigner projection at \(t\),
\[{\cal W}(r_{1}^{*},r_{2}^{*},p_{1}^{*},p_{2}^{*};t)\equiv\sum_{i= 1,2}\sum_{j\geq 3}\int d^{3}r_{1}d^{3}p_{1}...d^{3}r_{N}d^{3}p_{N}\] \[\times(2\pi)^{3N}\rho_{\Phi}(r_{1},p_{1};r_{2},p_{2})\rho_{N}(t), \tag{22}\]
where \(r_{1}^{*},r_{2}^{*},p_{1}^{*},p_{2}^{*}\) characterize the trajectories of the (anti)charm quarks in Eq. (10), projected onto the quarkonium state, see Eq. (9).
Since the projection is carried out in a homogeneous box, one can separate the spatial dependence, which will be constant in time, such that
\[{\cal W}(r_{1}^{*},r_{2}^{*},p_{1}^{*},p_{2}^{*};t)={\cal W}_{p}(p_{1}^{*},p_{ 2}^{*};t){\cal W}_{r}. \tag{23}\]
Then Eq. (13) is expressed as
\[P_{\phi}(t^{\prime})={\cal W}_{p}(p_{1}^{*},p_{2}^{*};0){\cal W}_ {r}\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{1}+\varepsilon){\cal W}_{r}-{ \cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{1}-\varepsilon){\cal W}_{r}\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{2}+\varepsilon){\cal W}_{r}-{ \cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{2}-\varepsilon){\cal W}_{r}\] \[...\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t^{\prime}+\varepsilon){\cal W}_ {r}-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t^{\prime}-\varepsilon){\cal W}_{r}, \tag{24}\]
where \(t=0\) is the initial projection time where only the production term appears, which corresponds to \(P_{\phi}(0)\) in Eq. (12). We number the scatterings of heavy quarks or antiquarks by "\(i\)". \(t_{i}\) is the time of the \(i\)'th scattering in which either the charm quark or the anticharm quark is involved and both, production and destruction term, appear. We note that \({\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)\) in the above equation means \({\cal W}_{p}(p_{1}^{*}(t),p_{2}^{*}(t))\).
Since there is no scattering of charm quarks with a light quark between \(t=0\) and \(t=t_{1}-\varepsilon\),
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*};0)={\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{1}- \varepsilon). \tag{25}\]
The same applies to each following time interval between collisions:
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i}+\varepsilon)={\cal W}_{p}(p_{1}^{*},p_{ 2}^{*};t_{i+1}-\varepsilon). \tag{26}\]
Thus, most of the terms cancel and only the Wigner projection at the end of the last collision remains:
\[P_{\phi}(t^{\prime})={\cal W}_{p}(p_{1}^{*},p_{2}^{*};t^{\prime}+\varepsilon){ \cal W}_{r}, \tag{27}\]
where \(p_{1}^{*}\) and \(p_{2}^{*}\) are still the thermal momenta of the charm quark and anticharm quark. The time integration of \(\Gamma_{\rm coll}\) fluctuates around the Wigner projection for a system in thermal equilibrium at T=200 MeV.
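In the Monte Carlo realization, this telescoping structure of Eq. (24) amounts to simple bookkeeping: at every scattering of a charm or anticharm quark the old projection is subtracted and the new one is added. A minimal sketch of that bookkeeping is given below; it assumes a single \(c\bar{c}\) pair, omits degeneracy factors, and uses illustrative function names rather than the actual simulation code.

```python
import numpy as np

def wigner_S(r_rel, p_rel, sigma):
    # Gaussian Wigner density of the S-state, Eq. (9),
    # with r_rel and p_rel taken in the pair rest frame (hbar = c = 1).
    return 8.0 * np.exp(-np.dot(r_rel, r_rel) / sigma**2
                        - sigma**2 * np.dot(p_rel, p_rel))

def remler_multiplicity(initial_pair, scatterings, sigma):
    """Accumulate P_Phi(t') as in Eqs. (13) and (24): the initial projection
    plus, for every scattering, the projection just after minus the
    projection just before the collision."""
    r0, p0 = initial_pair
    P = wigner_S(r0, p0, sigma)                      # P_Phi(0)
    for (r_before, p_before), (r_after, p_after) in scatterings:
        P += wigner_S(r_after, p_after, sigma) \
             - wigner_S(r_before, p_before, sigma)   # Gamma_coll contribution
    return P
```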
One can see that the statistical error in Fig. 1 increases with time. The reason is as follows: whenever a scattering happens, a new Wigner projection is added and the old Wigner projection is subtracted, as in Eq. (24). Since the box is already in thermal equilibrium, both the addition and the subtraction are random fluctuations. As a result, like for all random walks, some events deviate far away from the thermal average as time passes. That is why the statistical error increases with time, while the average value stays near the thermal equilibrium.
Figure 1: (Color online) Charmonium multiplicity as a function of time in a box of \(100^{3}\) fm\({}^{3}\) at \(T=200\) MeV calculated in the statistical model, from Eq. (14) and from Eq. (13). Charm quark mass and charmonium radius are taken as 1.5 GeV and 0.5 fm, respectively.
### cooling down of (anti)charm quarks
Now we apply the Remler formalism to charm and anticharm quarks in a box of the same size in which the initial temperature of (anti)charm quarks is 400 MeV, but their number is the same as in the previous subsection. In other words, only their thermal momentum changes. Initially it is given by a thermal distribution at \(T=400\) MeV and then cools down to 200 MeV through the scattering with the artificial partons, which are assumed to have a thermal distribution with \(T=200\) MeV.
In this case Eq. (26) is still valid. The only difference is that the initial momentum distributions of \(p_{1}^{*}\) and \(p_{2}^{*}\) correspond to the thermal distribution of \(T=400\) MeV. Then they gradually change with time, through scattering with the artificial partons, to the distribution corresponding to \(T=200\) MeV. Schematically (though not strictly accurately), Eq. (24) then reads
\[P_{\phi}(t^{\prime})={\cal W}_{p}(T=400\ {\rm MeV}){\cal W}_{r}\] \[+{\cal W}_{p}(T=399\ {\rm MeV}){\cal W}_{r}-{\cal W}_{p}(T=400\ {\rm MeV}){\cal W}_{r}\] \[+{\cal W}_{p}(T=398\ {\rm MeV}){\cal W}_{r}-{\cal W}_{p}(T=399\ {\rm MeV}){\cal W}_{r}\] \[...\] \[+{\cal W}_{p}(T=200\ {\rm MeV}){\cal W}_{r}-{\cal W}_{p}(T=201\ {\rm MeV}){ \cal W}_{r}. \tag{28}\]
However, if the radius or \(\sigma\) in the Wigner function of Eq. (9) depends on the temperature, Eqs. (25) and (26) are not valid any more:
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{i}+\varepsilon)\neq{\cal W}_{p}(p_{1}^{* },p_{2}^{*},T;t_{i+1}-\varepsilon), \tag{29}\]
because the temperature at \(t=t_{i}+\varepsilon\) is different from that at \(t=t_{i+1}-\varepsilon\). Therefore, it is necessary to add the rate \(\Gamma_{\rm local}\) as in Eqs. (3) and (13), which is expressed by [13; 14]
\[\Gamma_{\rm local}(t)={\rm Tr}\biggl{(}\frac{d\rho_{\Phi}}{d\sigma(T)}\frac{d \sigma(T)}{dt}\rho_{N}\biggr{)}. \tag{30}\]
Then Eq. (24) changes to
\[P_{\phi}(t^{\prime})={\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;0){\cal W }_{r}\] \[+{\cal W}_{r}\int_{0}^{t_{1}}dt\ (\partial{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)/ \partial\sigma)(\partial\sigma/\partial t)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{1}+\varepsilon){\cal W}_{r }-{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{1}-\varepsilon){\cal W}_{r}\] \[+{\cal W}_{r}\int_{t_{1}}^{t_{2}}dt\ (\partial{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)/ \partial\sigma)(\partial\sigma/\partial t)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{2}+\varepsilon){\cal W}_{r }-{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{2}-\varepsilon){\cal W}_{r}\] \[...\] \[+{\cal W}_{r}\int^{t^{\prime}}dt\ (\partial{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)/ \partial\sigma)(\partial\sigma/\partial t)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t^{\prime}+\varepsilon){\cal W }_{r}-{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t^{\prime}-\varepsilon){\cal W}_{r}. \tag{31}\]
Since nothing changes between \(t=0\) and \(t=t_{1}-\varepsilon\) except the temperature-dependent \(\sigma\)
\[\int_{0}^{t_{1}}dt\ (\partial{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)/ \partial\sigma)(\partial\sigma/\partial t)\] \[={\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{1}-\varepsilon)-{\cal W}_{p }(p_{1}^{*},p_{2}^{*},T;0), \tag{32}\]
ignoring the temperature change between \(t=t_{1}-\varepsilon\) and \(t=t_{1}\). Therefore, one arrives at the same result as in Eq. (27):
\[P_{\phi}(t^{\prime})={\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t^{\prime}+\varepsilon) {\cal W}_{r}. \tag{33}\]
The upper panel of Fig. 2 shows the time evolution of the effective temperature, which is defined by
\[T_{\rm eff}=\frac{2}{3}\langle E_{kin}\rangle, \tag{34}\]
where \(\langle E_{kin}\rangle\) is the mean value of the (anti)charm kinetic energy. We note that due to the approximation of Eq. (34) the initial and the final effective temperatures are a bit higher than the real initial and final temperatures, which are, respectively, 400 MeV and 200 MeV. One can see that the temperature reaches its final value before \(t=100\) fm/c. The radius of quarkonium is simply
Figure 2: (Color online) (upper) Effective temperature of (anti)charm quarks as a function of time and (lower) the same as figure 1 but with and without the contributions from \(\Gamma_{\rm local}\).
modeled as
\[\sqrt{\langle r^{2}\rangle}=0.5\bigg{(}\frac{T_{\rm eff}}{0.2\ {\rm GeV}}\bigg{)}^{2}\ [{\rm fm}] \tag{35}\]
such that \(\sqrt{\langle r^{2}\rangle}=0.5\) fm at \(T=200\) MeV, a reasonable approximation to the lattice results [16]. The lower panel of Fig. 2 corresponds to Fig. 1. We have added the magenta line, which is the result if we apply only the collisional rate, \(\Gamma_{\rm coll}\), whereas the orange line is obtained if we take the sum of both rates, \(\Gamma_{\rm local}\) and \(\Gamma_{\rm coll}\). The multiplicity of charmonia starts from a lower value than in equilibrium at \(T=200\) MeV because a larger thermal momentum lowers the Wigner projection. The orange line recovers the equilibrium multiplicity at about the same time when the temperature of the box reaches its final value. One can clearly see that the inclusion of \(\Gamma_{\rm local}\) is necessary to obtain results consistent with the assumed equilibrium.
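For completeness, the quantities entering \(\Gamma_{\rm local}\) in this scenario are simple to evaluate. The sketch below encodes the radius model of Eq. (35), the corresponding Gaussian width \(\sigma^{2}=8/3\langle r^{2}\rangle\), and the analytic derivative of the Wigner density of Eq. (9) with respect to \(\sigma\); multiplied by \(d\sigma/dt\) it gives the per-pair integrand of Eq. (30). Function names and the unit convention (\(\hbar=c=1\)) are our own choices.

```python
import numpy as np

def r_rms(T_eff):
    # Temperature-dependent charmonium radius, Eq. (35): 0.5 fm at T_eff = 0.2 GeV
    return 0.5 * (T_eff / 0.2) ** 2                     # [fm]

def sigma_of_T(T_eff):
    # Gaussian width of the Wigner density, sigma^2 = 8/3 <r^2>
    return np.sqrt(8.0 / 3.0) * r_rms(T_eff)            # [fm]

def dWigner_dsigma(r2, p2, sig):
    # Derivative of W_S = 8 exp(-r^2/sigma^2 - sigma^2 p^2), Eq. (9), with
    # respect to sigma; r2 = r.r and p2 = p.p in natural units (hbar = c = 1).
    W = 8.0 * np.exp(-r2 / sig**2 - sig**2 * p2)
    return W * (2.0 * r2 / sig**3 - 2.0 * sig * p2)
```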
### expanding (anti)charm quarks
Now we confine the initial (anti)charm quarks within a smaller box of size \(50^{3}\) fm\({}^{3}\) in the center of the large box, assuming a momentum distribution of the (anti)charm quarks corresponding to \(T=200\) MeV. The density of the charm and anticharm quarks in this smaller box is therefore 8 times higher than in the configuration discussed above.
As time passes the charm density decreases and the number of charmonia converges to that expected for a system in equilibrium in the full volume. One can see in Fig. 3 that the multiplicity, which is given by the solid blue line, indeed converges to the number expected for a system in equilibrium. The time integration of \(\Gamma_{\rm coll}\), however, does not catch up with the decrease and remains higher. The reason is that \({\cal W}_{r}\) in Eq. (24) now depends on time and we obtain
\[P_{\phi}(t^{\prime})={\cal W}_{p}(p_{1}^{*},p_{2}^{*};0){\cal W}_ {r}(0)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{1}+\varepsilon){\cal W}_{r}( t_{1}+\varepsilon)\] \[-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{1}-\varepsilon){\cal W}_{r}( t_{1}-\varepsilon)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{2}+\varepsilon){\cal W}_{r}( t_{2}+\varepsilon)\] \[-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{2}-\varepsilon){\cal W}_{r}( t_{2}-\varepsilon)\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t^{\prime}+\varepsilon){\cal W}_ {r}(t^{\prime}+\varepsilon)\] \[-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t^{\prime}-\varepsilon){\cal W}_ {r}(t^{\prime}-\varepsilon), \tag{36}\]
Eqs. (25) and (26) are still valid, but the two terms do not cancel any longer:
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i}+\varepsilon){\cal W}_{r}( t_{i}+\varepsilon)\] \[\neq{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i+1}-\varepsilon){\cal W} _{r}(t_{i+1}-\varepsilon), \tag{37}\]
because \({\cal W}_{r}(t_{i}+\varepsilon)\) is larger than \({\cal W}_{r}(t_{i+1}-\varepsilon)\) due to the spatial diffusion of (anti)charm quarks. This is why the equilibrium distribution of Eq. (27) cannot be achieved in this scenario.
### cooling and expanding (anti)charm quarks
Now we imitate heavy-ion collisions by combining the two previous scenarios, in other words, heavy (anti)quarks are initially at a high temperature and densely populated in a small volume. As before the initial temperature is given by \(400\) MeV and the initial volume by \(50^{3}\) fm\({}^{3}\). Then they cool down and expand in space.
Therefore the projection probability first increases with time due to the momentum loss, as in Fig. 2, and
Figure 4: (Color online) Combination of figure 2 and figure 3 for the initial conditions of (anti)charm quarks
Figure 3: (Color online) Same as figure 1 but initial spatial distribution of (anti)charm quarks is restricted to a half-sized box.
then decreases due to the spatial diffusion, as in Fig. 3. However, one can see in Fig. 4 that the multiplicity obtained by the time integration of \(\Gamma_{\rm coll}\) differs from that obtained by applying the Wigner projection directly and from that given by the statistical model.
## V Spatial diffusion term
The discrepancies between the statistical model and the time integration of \(\Gamma\) in Figs. 3 and 4 originate from the calculations of \(d\rho_{N}/dt\). Since \(\rho_{N}\) is the density operator of \(N\) (anti)charm quarks, it includes both spatial and momentum information. However, only momentum-space information has been taken into account in the comparison between Eq. (8) and Eq. (11). In other words, only the interaction terms are taken into account and the kinetic terms (the free streaming) are neglected in Eq. (11). The spatial diffusion is attributed to the kinetic terms \(K_{1}\) and \(K_{2}\) of Eq. (5). In principle, it must not contribute to \(d\rho_{N}/dt\) because the heavy quark and heavy antiquark are bound by the potential \(V_{12}\) and move together as shown in Eq. (7). However, in standard cascade simulations we have only free propagation and instant scatterings [12] and it is still challenging to properly implement microscopic potentials in numerical simulations [13; 29].
We therefore propose that the following term should be added to Eq. (3) if a standard cascade approach is employed
\[\Gamma_{\rm diff}(t)={\rm Tr}\biggl{(}\frac{d\rho_{\Phi}}{d\vec{r}}\cdot\frac{ d\vec{r}}{dt}\rho_{N}\biggr{)}, \tag{38}\]
and for an S-state it will be (see Eq. (9)) of the form
\[\Gamma_{\rm diff}(t)\sim-\frac{2}{\sigma^{2}}\vec{r}\cdot\vec{v}\ W_{S}(r,p), \tag{39}\]
where \(\vec{v}=d\vec{r}/dt\) with \(\vec{r}\) being \(\vec{r}_{Q}-\vec{r}_{\bar{Q}}\) in their center-of-mass frame. Then the inequality of Eq. (37) can be removed through
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i}+\varepsilon){\cal W}_{r}( t_{i}+\varepsilon)\] \[-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i+1}-\varepsilon){\cal W}_{r }(t_{i+1}-\varepsilon)\] \[+\int_{t_{i}}^{t_{i+1}}dt\ \Gamma_{\rm diff}=0. \tag{40}\]
In fact, if the bound state is perfectly described in the simulations, even \(\Gamma_{\rm local}\) is not needed, because the equality of Eq. (29) will dynamically be restored:
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*},T;t_{i}+\varepsilon)={\cal W}_{p}(p_{1}^{*}, p_{2}^{*},T;t_{i+1}-\varepsilon). \tag{41}\]
As temperature decreases, the binding of quarkonium will be stronger so that the relative distance \(r\) decreases and the relative momentum \(p\) increases, which compensates the change of \(\sigma\) with temperature.
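A possible numerical form of the diffusion rate of Eq. (39) is sketched below for a single \(Q\bar{Q}\) pair; the function name and the argument conventions (relative coordinates in the pair rest frame, natural units) are illustrative choices, not the code actually used in the simulations.

```python
import numpy as np

def gamma_diff(r_rel, v_rel, p_rel, sigma):
    # Spatial-diffusion rate of Eq. (39): -(2/sigma^2) (r.v) W_S(r,p),
    # with r the Q-Qbar separation and v = dr/dt in the pair rest frame.
    W = 8.0 * np.exp(-np.dot(r_rel, r_rel) / sigma**2
                     - sigma**2 * np.dot(p_rel, p_rel))
    return -(2.0 / sigma**2) * np.dot(r_rel, v_rel) * W
```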
Figure 5 shows the time evolution of the multiplicity of \(\Phi\) including \(\Gamma_{\rm diff}\), in comparison with the cases already studied in figures 3 and 4. One finds that, including \(\Gamma_{\rm diff}\), the time integration of \(\Gamma\) is now in good agreement with the statistical model predictions. Considering the results of our study of the \(\Phi\) multiplicity in a box, the best and simplest method to obtain the asymptotically correct values is to add to \(\Gamma\) a diffusion rate \(\Gamma_{\rm diff}\)
\[\Gamma(t)=\Gamma_{\rm local}(t)+\Gamma_{\rm coll}(t)+\Gamma_{\rm diff }(t)\] \[=\sum_{i=1,2}\sum_{j\geq 3}\sum_{\nu}\int d^{3}r_{1}d^{3}p_{1}...d^{3 }r_{N}d^{3}p_{N}(2\pi)^{3N}\] \[\times\rho_{\Phi}(r_{1},p_{1};r_{2},p_{2})\biggl{\{}\delta\biggl{(} t-t_{ij}(\nu)\biggr{)}\rho_{N}(t+\varepsilon)\] \[-\delta\biggl{(}t-t_{ij}(\nu-1)\biggr{)}\rho_{N}(t+\varepsilon) \biggr{\}}, \tag{42}\]
where \(\nu\) denotes the \(\nu\)-th scattering of particle \(i=1\) or \(i=2\).
Consequently, to be consistent with statistical model predictions, one has to supplement the old projection
Figure 5: (Color online) Same as figures 3 and 4 but including \(\Gamma_{\rm diff}\) of Eq. (39)
probability of particles \(i=1,2\) (the second term in the bracket) by a new one (the first term in the bracket) whenever a scattering happens to \(i=1\) or to \(i=2\). Then the change of the temperature, reflected in the change of \(\sigma\), and of the spatial separation of (anti)charm quarks between \(t=t_{ij}(\nu-1)\) and \(t=t_{ij}(\nu)\) will completely be canceled. This approach is also more natural than Eq. (12), which assumes an instant interaction between \(t-\varepsilon\) and \(t+\varepsilon\), because it increases \(\varepsilon\) to the time between two scatterings. We note that the combination of Eq. (32) and Eq. (40) yields
\[{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i}+\varepsilon){\cal W}_{r}(t_ {i}+\varepsilon)\] \[-{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i+1}-\varepsilon){\cal W}_{r }(t_{i+1}-\varepsilon)\] \[+{\cal W}_{r}(t_{i})\int_{t_{i}}^{t_{i+1}}dt\ \frac{\partial{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t)}{ \partial\sigma}\frac{\partial\sigma}{\partial t}\] \[+{\cal W}_{p}(p_{1}^{*},p_{2}^{*};t_{i})\int_{t_{i}}^{t_{i+1}}dt \ \frac{\partial{\cal W}_{r}(t)}{\partial\tilde{r}}\cdot\frac{\partial \tilde{r}}{\partial t}=0. \tag{43}\]
## VI Summary
The Remler formalism has been advanced to study the production of \(J/\psi\) in a thermalized expanding system. In this study we have tested the Remler approach for \(J/\psi\) production in thermalized and thermalizing boxes, composed of c and \(\bar{c}\) quarks. The goal was to verify whether the numerical, Monte Carlo based, realization of the Remler algorithm gives the right asymptotic solution for \(t\to\infty\), which can be calculated analytically.
We started out with calculations in a completely thermalized box, where we find that the Remler formula produces results which are consistent with those of a statistical model calculation. Then three different types of thermalizing boxes have been investigated.
In the first scenario the initial temperature of the charm quarks is high. They cool down with time through collisions with background particles and finally reach thermal equilibrium at a lower temperature. The multiplicity of the charm quarks is kept constant. We have found that the temperature derivative of the Wigner function is required to assure that the Remler approach agrees for large times with statistical model calculations.
In the second scenario charm and anticharm quarks are initially concentrated in a smaller box and then diffuse in space. The last scenario, which we studied, is the combination of the first and second one: The initial temperature of the charm quark is higher than that of the background particles and (anti-)charm quarks are concentrated initially in the central region of the box. Then they cool down and diffuse in space. This is a simple model for the expansion of the midrapidity QGP, which is created in heavy-ion collisions. We have found that in the second and third scenarios the discrepancy between the multiplicities, calculated in the Remler approach and in a statistical model, does not disappear even for \(t\to\infty\).
We identified the origin of this discrepancy: it is caused by the fact that in the numerical realization of the Remler algorithm, as presented in [12] for deuterons, the expansion of the system between two subsequent collisions is not taken properly into account. Neglecting the potential, it treats the c and \(\bar{c}\) quarks as freely moving particles between two collisions whereas in reality they are bound when they form a quarkonium. Introducing a diffusion rate, which adds to the local rate and the collision rate, this discrepancy disappears. We note that recently also an approach has been advanced which includes the \(c\bar{c}\) potential in an approximate way [13]. It would be interesting to verify whether equilibrium is obtained there without a spatial diffusion rate. In conclusion, we have found that also for an expanding system, which cools down, the Monte Carlo realization of the Remler formalism describes correctly the approach to equilibrium.
## Acknowledgements
The authors acknowledge valuable discussions with P.-B. Gossiaux. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the grant CRC-TR 211 'Strong-interaction matter under extreme conditions' - Project number 315477589 - TRR 211. This work is supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 824093 (STRONG-2020). The computational resources have been provided by the LOEWE-Center for Scientific Computing and the "Green Cube" at GSI, Darmstadt and by the Center for Scientific Computing (CSC) of the Goethe University.
|
2303.03654 | MPool: Motif-Based Graph Pooling | Graph Neural networks (GNNs) have recently become a powerful technique for
many graph-related tasks including graph classification. Current GNN models
apply different graph pooling methods that reduce the number of nodes and edges
to learn the higher-order structure of the graph in a hierarchical way. All
these methods primarily rely on the one-hop neighborhood. However, they do not
consider the higher- order structure of the graph. In this work, we propose a
multi-channel Motif-based Graph Pooling method named (MPool) captures the
higher-order graph structure with motif and local and global graph structure
with a combination of selection and clustering-based pooling operations. As the
first channel, we develop node selection-based graph pooling by designing a
node ranking model considering the motif adjacency of nodes. As the second
channel, we develop cluster-based graph pooling by designing a spectral
clustering model using motif adjacency. As the final layer, the result of each
channel is aggregated into the final graph representation. We perform extensive
experiments on eight benchmark datasets and show that our proposed method shows
better accuracy than the baseline methods for graph classification tasks. | Muhammad Ifte Khairul Islam, Max Khanov, Esra Akbas | 2023-03-07T05:21:15Z | http://arxiv.org/abs/2303.03654v1 | # MPool: Motif-Based Graph Pooling
###### Abstract
Graph Neural networks (GNNs) have recently become a powerful technique for many graph-related tasks including graph classification. Current GNN models apply different graph pooling methods that reduce the number of nodes and edges to learn the higher-order structure of the graph in a hierarchical way. All these methods primarily rely on the one-hop neighborhood. However, they do not consider the higher-order structure of the graph. In this work, we propose a multi-channel Motif-based Graph Pooling method named (MPool) that captures the higher-order graph structure with motifs, as well as the local and global graph structure, with a combination of selection- and clustering-based pooling operations. As the first channel, we develop node selection-based graph pooling by designing a node ranking model considering the motif adjacency of nodes. As the second channel, we develop cluster-based graph pooling by designing a spectral clustering model using motif adjacency. As the final layer, the result of each channel is aggregated into the final graph representation. We perform extensive experiments on eight benchmark datasets and show that our proposed method shows better accuracy than the baseline methods for graph classification tasks.
Keywords:Graph Neural Network Graph Classification Pooling Motif.
## 1 Introduction
Given a graph with node attributes as input and propagating messages along the edges, some GNN models [11] learn node-level representations for node classification [9, 11, 26], while others learn graph-level representations for graph classification [4, 7, 15]. Graph classification is the task of predicting graph labels by considering node features and graph structure. Motivated by the pooling layer in CNNs [12], graph pooling methods have been used to reduce the number of nodes and edges to capture the local and global structural information of the graph in the graph representation. Current pooling methods usually coarsen the graph in a hierarchical way by reducing the size of the graph in multiple steps and then utilize every pooled graph representation for the final graph representation.
There are mainly two types of hierarchical pooling methods for the graph in the literature: clustering-based and selection-based methods. While clustering-based methods merge similar nodes into super nodes using a cluster assignment
matrix, selection-based methods calculate a score for every node, which represents its importance, and select the top \(k\) nodes based on the score, discarding the other nodes from the graph. All these methods primarily rely on Graph Convolution Networks (GCNs) with layer-wise propagation based on the one-hop neighbors to calculate the assignment matrix in the clustering-based method and the score in the selection-based method. Despite the success of these models, there are some limitations. Selection-based models mainly focus on preserving the local structure of the node, while clustering-based methods basically focus on the global structure of the graph. Moreover, while selection-based models may lose information by selecting only some portion of the nodes, clustering-based models may include some redundant information, including noise and over-smoothing.
Further, the current methods fail to incorporate the higher-order structure of the graph in pooling. There are different ways to model higher-order graph structures [3] such as hypergraphs, simplicial complexes [1], and motifs [17]. Among them, motifs (graphlets) are small, frequent, and connected subgraphs that are mainly used to measure the connectivity patterns of nodes [6] (see Figure 1 for a preview). They capture the local topology around the vertices, and their frequency can be used as the global fingerprints of graphs. Although motifs have been used for different graph mining tasks, including classification [13], and community detection [16], to the best of our knowledge, they have not been used in graph pooling operations. On the other hand, utilizing these structures for pooling provides crucial information about the structure and the function of many complex systems that are represented as graphs [21, 23].
In this paper, to address these problems, we propose a multi-channel Motif-based Graph Pooling method named (MPool) that captures the higher-order graph structure with motifs, as well as the local and global graph structure, with a combination of selection- and clustering-based pooling operations. We utilize motifs to model the relation between nodes and use this model for message passing and pooling in the GNN. We develop two motif-based graph pooling models (\(\texttt{MPool}_{S}\) and \(\texttt{MPool}_{C}\)), selection-based and clustering-based, and then combine these models into one (\(\texttt{MPool}_{cmb}\)) to learn both local and global graph structure. For the selection-based graph pooling model, we design a node ranking model considering motif-based relations of nodes. Based on the ranks, we select the top \(k\) nodes to create the pooled graph for the next layer. For clustering-based graph pooling, we design a motif-based clustering model that learns a differentiable soft assignment based on the embedding learned from the convolution layer. We jointly optimize this function by minimizing the usual supervised loss and also an unsupervised loss given by a relaxation of the normalized mincut objective. However, instead of defining the mincut objective on the regular adjacency matrix, we define it on the motif adjacency matrix. After learning the assignment matrix, we group the nodes in the same cluster to create a coarsened graph. Both models take motifs into consideration, hence incorporating higher-order graph structure in the graph pooling operation. We further demonstrate detailed experiments on eight benchmark datasets. Our results
show that the proposed pooling methods achieve better accuracy than the current baseline pooling methods for graph classification tasks.
## 2 Related Work
**Graph Pooling:** Recent pooling methods learn graph representations hierarchically and capture the local substructures of graphs. There are two different hierarchical pooling methods in the literature: clustering-based and selection-based pooling. Clustering-based pooling methods [24, 28, 2, 4] do the pooling operation by calculating the cluster assignment matrix using node features and graph topology. After calculating the cluster assignment matrix, they build the coarse graph by grouping the nodes in the same cluster. For example, while DiffPool [28] calculates the cluster assignment matrix using a graph neural network, MinCutPool [4] calculates the cluster assignment matrix using a multi-layer perceptron.
Selection-based pooling methods [29, 7, 8, 15] compute the importance scores of nodes, select the top \(k\) nodes based on their scores, and drop the other nodes from the graph to create the pooled graph. For example, while gPool [7] calculates the score using the node features and a learnable vector, SAGPool [15] uses an attention mechanism to calculate the scores. SUGAR [25] uses a subgraph neural network to calculate the score and selects the top-\(K\) subgraphs for the pooling operation. All these methods use the classical graph adjacency matrix to propagate information and calculate the score.
**Motifs in Graph Neural Network** Motifs are the most common higher-order graph structure used in various graph mining problems. A few works have used motif structures in GNNs as well [18, 22, 27, 14]. MCN [14] creates a motif attention mechanism for the graph convolution layer to learn node representations. All these methods employ motifs for learning node or subgraph representations. In our proposed method, we use motifs for the graph classification task.
## 3 Methodology
In this section, first, we discuss the problem formulation of graph classification and preliminaries. Then we present our motif-based pooling models.
### Preliminaries and Problem Formulation
We denote a graph as \(G(V,A,X)\) where \(V\) is the node set, \(A\in\mathbb{R}^{N\times N}\) is the adjacency matrix, and \(X\in\mathbb{R}^{N\times d}\) is the feature matrix with \(d\)-dimensional node features, where \(N\) is the number of nodes in the graph. We denote a graph collection as \((\mathcal{G},Y)\) where \(\mathcal{G}=\{G_{0},G_{1},...,G_{n}\}\), the \(G_{i}\) are graphs, and \(Y\) is the set of the graph labels.
**Graph Neural Network for Graph Classification:** GNN for graph classification has two modules: message-passing and pooling. For message-passing operation, Graph convolution network (GCN) [11] is the most widely used model.
GCN is a multilayer neural network that combines the features of each node with those of its neighbors by propagating the information through the edges as follows:
\[H^{(l+1)}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l) }\theta^{(l)}) \tag{1}\]
where \(H^{(l+1)}\) is the node representation matrix for layer \((l+1)\), \(\sigma\) is an activation function, \(\tilde{A}=A+I\) is the adjacency matrix with self-loops, \(\tilde{D}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(\tilde{A}\), \(\theta^{(l)}\) is the trainable weight for the \(l^{th}\) layer, and \(H^{(l)}\) is the input node representation matrix for the \(l^{th}\) layer obtained from the previous layer. \(H_{0}=X\) is the initial input node feature matrix of the input graph. We utilize GCN for the message-passing operation in our model.
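As a concrete illustration, a single propagation step of Eq. (1) can be written in a few lines. The sketch below is a plain NumPy rendering of the formula, not the implementation used in the paper; the activation function and variable names are illustrative.

```python
import numpy as np

def gcn_layer(A, H, theta, act=np.tanh):
    """One GCN propagation step, Eq. (1):
    H^(l+1) = act( D~^{-1/2} (A + I) D~^{-1/2} H^(l) theta^(l) )."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return act(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ theta)
```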
The second module of GNNs for graph classification is the pooling operation that helps to learn the graph features. The main idea behind graph pooling is to coarsen the graph by reducing the number of nodes and edges to encode the information of the whole graph. In the literature, there are two types of hierarchical graph pooling methods: selection-based and clustering-based methods. Selection-based methods calculate a score (attention) using a scoring function for every node that represents their importance. Based on the calculated scores, the top \(k\) nodes are selected to construct a pooled graph. They use a classical graph adjacency matrix to propagate information and calculate the score.
Clustering-based pooling methods learn a cluster assignment matrix \(S\in R^{N\times K}\) using graph structure and/or node features. Then, they reduce the number of nodes by grouping them into super nodes by \(S\in R^{N\times K}\) to construct the pooled graph at \((l+1)^{th}\) layer as follows
\[A^{(l+1)}=S^{(l)^{T}}A^{(l)}S^{(l)},\qquad H^{(l+1)}=S^{(l)^{T}}H^{(l)}. \tag{2}\]
**Motifs and Motif-based Adjacency Matrix**: Motifs (graphlets) are small, frequent, and connected subgraphs that are mainly used to measure the connectivity patterns of nodes [6]. Motifs of sizes 2-4 are shown in Figure 1. To include higher-order structural information between nodes, we create the motif adjacency matrix \(M_{t}\) for a motif \(t\), where \((M_{t})_{i,j}\) represents the number of instances of motif \(t\) that contain both nodes \(i\) and \(j\).
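For the triangle motif, this matrix has a particularly simple closed form on an undirected graph: the number of triangles containing an edge \((i,j)\) equals the number of common neighbours of \(i\) and \(j\). The sketch below uses that identity; it covers only the triangle motif of Figure 1, and other motifs require their own counting rules.

```python
import numpy as np

def triangle_motif_adjacency(A):
    """(M)_{ij} = number of triangles containing both i and j, for an
    undirected, unweighted adjacency matrix A (a triangle requires i-j to be
    an edge, so entries off the edge set are zero)."""
    A = (A > 0).astype(int)
    M = (A @ A) * A            # 2-step paths between i and j, masked to edges
    np.fill_diagonal(M, 0)
    return M
```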
### Motif Based Graph Pooling Models
We propose a hierarchical pooling method based on motif structure. As the first layer, graph convolution (GCN) takes the adjacency matrix \(A\) and feature matrix \(X\) of the graph as input and updates the feature matrix by propagating features through the neighbors and aggregating features coming from adjacent nodes. After getting the updated feature matrix from the convolution layer, our proposed graph pooling layer, MPool, coarsens the graph. These
Figure 1: Motif Networks with size 2-4.
steps are repeated for \(l\) layers, and the outputs of each pooling layer are aggregated with a readout function [5] to obtain a fixed-sized graph representation. The concatenated readout results are then fed to a multi-layer perceptron (MLP) for the graph classification task. We propose three types of motif-based graph pooling methods: (1) \(\texttt{MPool}_{S}\) is the selection-based method, (2) \(\texttt{MPool}_{C}\) is the clustering-based method, and (3) \(\texttt{MPool}_{cmb}\) is the combined model. These are illustrated in Figure 2.
In this paper, we adopt the model architectures from SAGPool [15] as the selection-based model and MinCutPool [4] as the clustering-based model. However, our method is compatible with any graph neural network, as we show later in the experiment section.
**A. Selection-based Pooling via Motifs:**\(\texttt{MPool}_{S}\). Previous selection-based methods [7, 15] perform the pooling operation using the classical adjacency matrix. However, higher-order structures like motifs show great performance in graph convolution networks [13] and are also important structures for graph classification. Therefore, while calculating attention scores of nodes, considering motif-induced neighborhoods could provide more relevant information about the graph structure [13]. In our selection method, we first calculate the motif adjacency matrix for a certain motif type, e.g., triangle, from the original graph as discussed in Section 3.1. Then, we apply a graph convolution network to the motif adjacency matrix and calculate the motif attention score for each node. Based on these scores, we select the top \(k\) nodes for pooling and construct the coarsened graph using the pooling function. Figure 2 presents the overview of our selection-based graph pooling method.
**Motif attention** We calculate a motif attention score for each node to decide which nodes to drop and which to retain. We use graph convolution to calculate the attention, where we use node attributes together with motif-based graph topological information instead of pairwise edge information. The _motif attention_ score is defined as follows
\[\begin{split} X=GNN(X,\tilde{A};\theta_{GNN})\\ Z=\sigma(D^{\prime-\frac{1}{2}}\tilde{M}D^{\prime-\frac{1}{2}}X\theta_{att})\end{split} \tag{3}\]
Figure 2: An illustration of our motif-based pooling methods.
where \(\tilde{A}\) is the normalized adjacency matrix, \(\theta_{GNN}\) is the learnable parameter of the convolution, \(\sigma\) is an activation function, \(\tilde{M}\in\mathbb{R}^{N\times N}\) is the motif adjacency matrix with self-loops, i.e., \(\tilde{M}=M+I_{N}\), \(D^{\prime}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(\tilde{M}\), and \(\theta_{att}\in\mathbb{R}^{d\times 1}\) is the parameter matrix of the pooling layer. Since we use graph features and the motif adjacency matrix with convolution for the motif attention score, the output of the pooling is based on higher-order graph structures as well as features.
**Pooling:** Based on the motif attention score, we select the top \(k\) nodes from the graph following the node selection method in [7]. The top \(k=\alpha\times N\) nodes are selected based on the \(Z\) values, where \(\alpha\) is the pooling ratio between 0 and 1. Thus, we obtain the pooled graph as follows
\[idx=topK(Z,[\alpha\times N]),\qquad X_{out}=X_{idx,:}\odot Z_{idx},\qquad A_{out}=A_{idx,idx} \tag{4}\]
where \(idx\) denotes the indices of the top \(k\) nodes of the input graph returned by the \(topK\) function, \(X_{idx,:}\) contains the features of the selected \(k\) nodes, and \(Z_{idx}\) holds the motif attention values of those nodes. \(\odot\) is the element-wise broadcasted product, \(A_{idx,idx}\) is the row- and column-indexed adjacency matrix, and \(A_{out}\) and \(X_{out}\) are the adjacency matrix and the feature matrix of the pooled graph, respectively.
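Putting Equations 3 and 4 together, the following sketch illustrates the selection step. The attention computation is simplified to a single linear scoring layer (with `tanh` standing in for the activation \(\sigma\) and no degree normalization), so it is an illustrative assumption rather than the exact GNN used in \(\texttt{MPool}_{S}\).

```python
import numpy as np

def motif_topk_pool(A, X, Z, ratio=0.5):
    """Keep the top ceil(ratio*N) nodes ranked by the motif attention score Z."""
    N = A.shape[0]
    k = max(1, int(np.ceil(ratio * N)))
    idx = np.argsort(-Z)[:k]                     # indices of the top-k nodes
    X_out = X[idx, :] * Z[idx, None]             # gate features by their scores
    A_out = A[np.ix_(idx, idx)]                  # induced adjacency of the kept nodes
    return A_out, X_out, idx

N, d = 6, 4
A = (np.random.rand(N, N) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T
X = np.random.rand(N, d)
M = A @ A * A                                    # triangle motif adjacency (see above)
theta_att = np.random.rand(d)
Z = np.tanh((M + np.eye(N)) @ X @ theta_att)     # toy motif attention scores
A_out, X_out, kept = motif_topk_pool(A, X, Z, ratio=0.5)
```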
**B. Clustering-based Pooling via Motifs:**\(\texttt{MPool}_{C}\). As the base for our clustering-based pooling method, we use MinCutPool [4], which builds on spectral clustering (SC) by relaxing the normalized minCUT problem, i.e., minimizing the total weight of the edges cut between clusters. MinCutPool uses a GNN with a custom loss function to compute the cluster assignment. However, it considers only the regular edge-based adjacency matrix to find clusters, and considering only edge-level relations between nodes may ignore higher-order relations. Including higher-order relations like motifs in the clustering may produce better groups for pooling.
In our clustering-based method, we calculate the cluster assignment matrix \(S\) utilizing motif adjacency information. We adopt the spectral clustering method of [4], where a multi-layer perceptron (MLP) takes the node feature matrix \(X\) as input and a softmax on the output layer maps each node feature \(X_{i}\) to the \(i^{th}\) row of a soft cluster assignment matrix \(S\)
\[S=MLP(X;\theta_{MLP})=softmax(ReLU(XW_{1})W_{2}) \tag{5}\]
However, as seen in Equation 5, the assignment does not use the adjacency directly but only the node attributes obtained from the convolution part. Therefore, to include motif information in the pooling layer, we use the motif adjacency matrix in the convolution layer while passing messages to neighbors, i.e., \(X=GNN(X,\tilde{M};\theta_{GNN})\), where \(\tilde{M}\) is the normalized motif adjacency matrix introduced above, and \(\theta_{GNN}\) and \(\theta_{MLP}\) are learnable parameters.
We also incorporate motif information in the optimization. The parameters of the convolution and pooling layers are optimized by minimizing a loss function \(\mathcal{L}\) that combines the usual supervised loss \(\mathcal{L}_{s}\) with an unsupervised loss \(\mathcal{L}_{u}=\mathcal{L}_{c}+\mathcal{L}_{o}\), where
\[\mathcal{L}_{c}=-\frac{Tr(S^{T}MS)}{Tr(S^{T}DS)}\quad\text{and}\quad\mathcal{L }_{o}=\left\|\frac{S^{T}S}{||S^{T}S||_{F}}-\frac{I_{K}}{\sqrt{K}}\right\|_{F} \tag{6}\]
\(\mathcal{L}_{c}\) is the cut loss that encourages strongly connected nodes in the motif adjacency to be clustered together. \(\mathcal{L}_{o}\) is the orthogonality loss, which encourages the clusters to be of similar size; the target \(I_{K}/\sqrt{K}\) can be interpreted as a (rescaled) clustering matrix \(\widehat{S}^{T}\widehat{S}\), where \(\widehat{S}\) assigns exactly \(N/K\) points to each cluster. After calculating the cluster assignment matrix, we compute the coarsened graph adjacency matrix and attribute matrix using Equation 2.
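To make the unsupervised terms explicit, the sketch below evaluates the cut loss and the orthogonality loss of Equation 6 for a given soft assignment `S` and motif adjacency `M`; the random inputs and the omission of the training loop are simplifying assumptions.

```python
import numpy as np

def mincut_losses(M, S):
    """Cut loss L_c and orthogonality loss L_o of Equation 6 on a motif adjacency M."""
    D = np.diag(M.sum(axis=1))                                  # degree matrix of M
    cut = -np.trace(S.T @ M @ S) / np.trace(S.T @ D @ S)
    K = S.shape[1]
    StS = S.T @ S
    ortho = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(K) / np.sqrt(K))
    return cut, ortho

N, K = 8, 3
M = (np.random.rand(N, N) > 0.7).astype(float)
M = np.triu(M, 1); M = M + M.T                                  # toy motif adjacency
logits = np.random.rand(N, K)
S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
L_c, L_o = mincut_losses(M, S)
L_u = L_c + L_o                                                 # unsupervised loss
```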
**C. Combined model:**\(\texttt{MPool}_{cmb}\). Selection-based models mainly focus on preserving the local structure of nodes by selecting the top-\(K\) representative nodes, while clustering-based methods focus on the global structure of the graph by assigning nodes to \(K\) clusters. To utilize the benefits of both at the same time, we combine our selection-based and clustering-based motif pooling models into one model. As a result, the graph representation from the combined model encodes local structural information from the selection-based model and global structural information from the clustering-based model. In this model, we concatenate the graph-level representations from the selection-based and clustering-based motif pooling methods into one final representation as follows:
\[X_{cmb}=X_{S}\oplus X_{C} \tag{7}\]
where \(X_{S}\) is the graph-level representation from the \(\texttt{MPool}_{S}\) model, \(X_{C}\) is that from the \(\texttt{MPool}_{C}\) method, and \(\oplus\) is the concatenation operation.
### Readout Function and Output Layer
To get a fixed-sized representation from the pooled graphs of different layers, we apply a readout function [15] that aggregates the node features as follows: \(Z=\frac{1}{N}\sum_{i=1}^{N}x_{i}\,\|\,\max_{i=1}^{N}x_{i}\), where \(N\) is the number of nodes, \(x_{i}\) is the feature vector of the \(i^{th}\) node, and \(\|\) denotes concatenation. The results of all readout functions are concatenated into a graph representation \(Z\), which is given as input to a multilayer perceptron with a softmax function to predict the graph label as \(\hat{Y}=softmax(MLP(Z))\). For graph classification, the parameters of the GNN and pooling layers are optimized with the supervised cross-entropy loss \(\mathcal{L}_{s}=-\sum_{i=1}^{L}\sum_{j=1}^{C}Y_{i,j}\log\hat{Y}_{i,j}\), where \(Y\) denotes the actual graph labels, \(L\) the number of training graphs, and \(C\) the number of classes.
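A minimal sketch of the readout and output layer described above; the two-layer MLP, its dimensions, and the random weights are illustrative assumptions, and in the full model the readouts of all pooling layers would be concatenated before the MLP.

```python
import numpy as np

def readout(X):
    """Concatenate mean- and max-pooled node features into one graph vector."""
    return np.concatenate([X.mean(axis=0), X.max(axis=0)])

def predict(Z, W1, W2):
    """Two-layer MLP followed by a softmax over the graph representation Z."""
    h = np.maximum(Z @ W1, 0.0)
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(y_onehot, y_hat):
    """Supervised loss L_s for a single graph."""
    return -np.sum(y_onehot * np.log(y_hat + 1e-12))

X_pooled = np.random.rand(5, 8)                  # node features after pooling
Z = readout(X_pooled)                            # 16-dimensional graph vector
W1, W2 = np.random.rand(16, 32), np.random.rand(32, 2)
y_hat = predict(Z, W1, W2)
loss = cross_entropy(np.array([1.0, 0.0]), y_hat)
```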
## 4 Experiment
We evaluate the performance of our models on graph classification problems and compare our results with selection-based and clustering-based baseline methods on different datasets. We also report results for variations of our model utilizing different message-passing models. Further, we analyze the effect of the motif types on the pooling results. More experiments can be found in the supplementary material.
**Dataset**: We use eight benchmark graph datasets in our experiments commonly used for graph classification [19]. Among these, three datasets are social networks (SN): IMDB-BINARY, REDDIT-BINARY, and COLLAB; the other five are biological and chemical networks (BN): D&D, PROTEINS (PROT), NCI1, NCI109, and Mutagenicity (MUTAG).
**Baseline**: We use five graph pooling methods as baselines. Among them, gPool [7] and SAGPool [15] are selection-based methods, and MinCutPool (MCPool) [4], DiffPool [28], and ASAP [24] are clustering-based methods.
**Experimental setup:** To evaluate our models on the graph classification task, we randomly split the data of each dataset into three parts: 80% for the training set, 10% for the validation set, and 10% for the test set. We repeat the splitting process 10 times using 10 random seed values. We implement our model using PyTorch and the PyTorch Geometric library. For optimizing the model, we use the Adam optimizer [10]. In our experiments, we set the node representation size to 128 for all datasets. Our hyperparameter ranges are as follows: learning rate in {1e-2, 5e-2, 1e-3, 5e-3, 1e-4, 5e-4}, weight decay in {1e-2, 1e-3, 1e-4, 1e-5}, and pooling ratio in {1/2, 1/4}. We find the optimal hyperparameters using grid search. We train each model for a maximum of 100K epochs with early stopping if the validation loss does not improve for 50 epochs. Our model architecture consists of three blocks, each containing one graph convolution layer and one graph pooling layer, as in [15]. We use the same model architecture and hyperparameters as the MinCutPool and SAGPool models.
### Overall Evaluation
**Performance on Graph Classification:** In this part, we evaluate our proposed graph pooling methods on the graph classification task on the given eight datasets. Each dataset contains a number of input graphs and their corresponding labels; the task is to predict the label of each input graph. We use the node features of the graphs as the initial features of the model. If a dataset does not contain any node features, we use one-hot encoded node degrees as initial features. Table 1 and Table 2 show the average graph classification accuracy, standard deviation, and ranking of our models and the baseline models for all datasets. We can observe from the tables that our motif-based pooling methods consistently outperform the other state-of-the-art models, and our models obtain the first rank for almost all datasets.
Table 1 shows the results for our motif-based models and other graph pooling models on the biochemical datasets. We take the reported results for gPool and DiffPool from the SAGPool paper, since our model architecture and hyperparameters are the same as SAGPool; for ASAP, we take the results from the original publication ("-" means that results are not available for that dataset). As we see from the table, \(\texttt{MPool}_{cmb}\) gives the highest result for all biochemical networks. In particular, \(\texttt{MPool}_{cmb}\) achieves an average accuracy of 81.2% on D&D and 77.4% on NCI1, which is around a 4% improvement over the \(\texttt{MPool}_{C}\) method as the second-best model. \(\texttt{MPool}_{cmb}\) also gives very good accuracy compared to the baseline models for all biochemical datasets; for the D&D, NCI1, and NCI109 datasets it yields 5.8%, 5.8%, and 3.9% improvements over the best baseline model, respectively. From these results, we can say that incorporating the global and local structures of the graph in the combined model gives better results for graph classification on biochemical data. We further calculate the average rank
for all models, where our model \(\texttt{MPool}_{cmb}\) has the lowest average rank of 1 and our model \(\texttt{MPool}_{C}\) has the second lowest.
Table 2 shows the performance comparison of our models and the baseline models on the social network datasets. As we see from the table, our proposed methods outperform all baseline methods for all datasets except REDDIT-BINARY, where our model is the second best and very close to the best one, SAGPool. For IMDB-BINARY and REDDIT-BINARY, the \(\texttt{MPool}_{cmb}\) model gives better accuracy than \(\texttt{MPool}_{S}\) and \(\texttt{MPool}_{C}\), while for the COLLAB dataset \(\texttt{MPool}_{C}\) gives much higher accuracy than our other two models. For both types of datasets, our selection-based method \(\texttt{MPool}_{S}\) gives better accuracy than the selection-based baselines SAGPool and gPool for most of the datasets. In particular, \(\texttt{MPool}_{S}\) achieves an average accuracy of 77.21% on D&D and 76.42% on Mutagenicity, which is around a 2% improvement over SAGPool, our base model. Similarly, our clustering-based model outperforms the clustering-based baselines for most of the datasets. In particular, \(\texttt{MPool}_{C}\) achieves an average accuracy of 83.62% on COLLAB, which is around a 5% improvement over ASAP as the second-best model and around a 14% improvement over MinCutPool, our base model.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Model** & **IMDB-B** & **REDDIT-B** & **COLLAB** & **Avg. Rank** \\ \hline \hline
gPool & 73.40\(\pm\)3.7 (3) & 74.70\(\pm\)4.5 (6) & 77.58\(\pm\)1.6 (3) & 4 \\
SAGPool & 73.00\(\pm\)4.06 (4) & **84.66**\(\pm\)5.4 (1) & 70.1\(\pm\)2.5 (6) & 3.6 \\
MinCutPool & 70.78\(\pm\)4.7 (7) & 75.67\(\pm\)2.7 (5) & 69.91\(\pm\)2.3 (7) & 6.3 \\
DiffPool & 68.40\(\pm\)6.1 (8) & 66.65\(\pm\)7.7 (7) & 74.83\(\pm\)2.0 (4) & 6.3 \\
ASAP & 72.74\(\pm\)0.9 (5) & - & 78.95\(\pm\)0.7 (2) & 3.5 \\ \hline \hline
\(\texttt{MPool}_{cmb}\) & **74.20**\(\pm\)2.8 (1) & 84.10\(\pm\)5.0 (2) & 74.13\(\pm\)2.3 (5) & 2.6 \\
\(\texttt{MPool}_{S}\) & 73.44\(\pm\)3.9 (2) & 83.89\(\pm\)4.3 (3) & 68.95\(\pm\)2.7 (8) & 4.3 \\
\(\texttt{MPool}_{C}\) & 71.44\(\pm\)4.0 (6) & 78.77\(\pm\)5.0 (4) & **83.62**\(\pm\)5.2 (1) & 3.6 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Comparison of our models with baseline pooling methods for Social Network Dataset.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Model** & **D\&D** & **NCI1** & **NCI109** & **PROT** & **Mutag** & **Rank** \\ \hline \hline
gPool & 75.0\(\pm\)0.9/7 & 67.0\(\pm\)2.3/7 & 66.1\(\pm\)1.6/7 & 71.1\(\pm\)0.9/7 & 71.9\(\pm\)3.7/8 & 7.2 \\
SAGPool & 75.7\(\pm\)3.7/6 & 68.7\(\pm\)3.0/6 & 71.0\(\pm\)3.4/4 & 72.5\(\pm\)4.0/6 & 74.9\(\pm\)3.9/7 & 5.8 \\
MCPool & 76.7\(\pm\)3.0/5 & 73.1\(\pm\)1.4/3 & 71.5\(\pm\)2.7/3 & 76.3\(\pm\)3.6/3 & 75.9\(\pm\)2.7/5 & 3.8 \\
DiffPool & 66.9\(\pm\)2.4/8 & 62.2\(\pm\)1.9/8 & 62.0\(\pm\)2.0/8 & 68.2\(\pm\)2.0/8 & 77.6\(\pm\)2.6/3 & 7.2 \\
ASAP & 76.9\(\pm\)0.7/4 & 71.5\(\pm\)0.4/4 & 70.1\(\pm\)0.6/6 & 74.2\(\pm\)0.8/4 & - & 4.5 \\ \hline \hline
\(\texttt{MPool}_{cmb}\) & **81.2**\(\pm\)2.1/1 & **77.4**\(\pm\)1.9/1 & **73.5**\(\pm\)2.5/1 & **79.3**\(\pm\)3.3/1 & **79.6**\(\pm\)3.7/1 & 1 \\
\(\texttt{MPool}_{S}\) & 77.2\(\pm\)4.6/3 & 71.0\(\pm\)3.4/5 & 70.8\(\pm\)2.1/5 & 72.7\(\pm\)4.2/5 & 76.4\(\pm\)3.1/4 & 4.4 \\
\(\texttt{MPool}_{C}\) & 78.5\(\pm\)3.3/2 & 74.4\(\pm\)1.8/2 & 73.1\(\pm\)2.5/2 & 78.1\(\pm\)3.3/2 & 78.8\(\pm\)2.1/2 & 2 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of our models with baseline pooling methods for Biochemical Dataset.
Furthermore, when we compare the results of our selection-based model \(\texttt{MPool}_{S}\) and clustering-based model \(\texttt{MPool}_{C}\) from the tables, we can see that \(\texttt{MPool}_{C}\) outperforms \(\texttt{MPool}_{S}\) for all biochemical datasets. While \(\texttt{MPool}_{S}\) gives better accuracy for two social networks, IMDB-BINARY and REDDIT-BINARY, \(\texttt{MPool}_{C}\) has around 15% better accuracy than \(\texttt{MPool}_{S}\) on the COLLAB dataset.
**Ablation Study:** While we use GCN as the base model for message passing, our pooling model can integrate other GNN architectures. In order to see the effects of different GNN models in our methods, we utilize four widely used graph convolutional models: graph convolution network (GCN) [11], GraphSAGE [9], GAT [26], and GraphConv [20]. Table 3 shows the average accuracy of these GNN models with \(\texttt{MPool}_{S}\), \(\texttt{MPool}_{C}\), and \(\texttt{MPool}_{cmb}\) on the NCI1 and IMDB-BINARY datasets. As there is no dense version of the graph attention network (GAT), we use it only for the selection-based model \(\texttt{MPool}_{S}\). For this experiment, we use triangle motifs for the motif adjacency matrix calculation. As we see in the table, the effect of the GNN model and which model gives the best result depend on the dataset. For the NCI1 dataset, GraphSAGE gives the highest accuracy for \(\texttt{MPool}_{S}\) and \(\texttt{MPool}_{cmb}\), while GraphConv gives the highest accuracy for \(\texttt{MPool}_{C}\). For IMDB-BINARY, all graph convolutional models give very close results for all of our pooling models: for \(\texttt{MPool}_{C}\) and \(\texttt{MPool}_{cmb}\), GraphSAGE gives better accuracy than the other GNN models, while GAT gives the highest accuracy for the \(\texttt{MPool}_{S}\) model.
We further study the effect of the motif type on pooling. In this experiment, we use 2-star, triangle, and a combination of 2-star and triangle motifs, as these motifs are observed most often in real-world networks. We present the graph classification accuracy for different motifs using \(\texttt{MPool}_{S}\), \(\texttt{MPool}_{C}\), and \(\texttt{MPool}_{cmb}\)
\begin{table}
\begin{tabular}{c|c|c c c c} \hline
Model & Motif & D\&D & NCI1 & Mutagenicity & IMDB-B \\ \hline
\multirow{3}{*}{\(\texttt{MPool}_{S}\)} & 2-star & 77.21 & 69.48 & 70.11 & 73.00 \\
 & Triangle & 75.63 & 70.98 & 76.42 & 73.44 \\
 & 2-star+triangle & 75.63 & 69.82 & 72.39 & 69.64 \\ \hline
\multirow{3}{*}{\(\texttt{MPool}_{C}\)} & 2-star & 78.48 & 73.56 & 73.56 & 71.20 \\
 & Triangle & 75.80 & 74.44 & 78.77 & 71.44 \\
 & 2-star+triangle & 74.21 & 74.20 & 76.00 & 70.96 \\ \hline
\multirow{3}{*}{\(\texttt{MPool}_{cmb}\)} & 2-star & 81.20 & 77.36 & 79.60 & 74.20 \\
 & Triangle & 80.50 & 76.09 & 77.90 & 73.90 \\
 & 2-star+triangle & 79.95 & 76.75 & 78.42 & 73.40 \\ \hline
\end{tabular}
\end{table}
Table 4: Performance of \(\texttt{MPool}_{S}\), \(\texttt{MPool}_{C}\), and \(\texttt{MPool}_{cmb}\) with different motifs.
in Table 4. As we see in the table, we get the highest accuracy for \(\texttt{MPool}_{S}\) and \(\texttt{MPool}_{C}\) with the triangle motif for three datasets: NCI1, Mutagenicity, and IMDB-BINARY. For D&D, we get the highest accuracy with 2-star motif adjacency for both \(\texttt{MPool}_{S}\) and \(\texttt{MPool}_{C}\). We also observe that for D&D, the accuracy of the selection-based model does not vary much compared to the clustering-based model. For Mutagenicity, different motifs have a large effect on the accuracy, where triangle motif adjacency gives around 4% and 3% higher accuracy than 2-star motif adjacency for the selection-based and the clustering-based model, respectively. For IMDB-BINARY, 2-star and triangle motifs give similar accuracy for both methods, and 2-star+triangle motif adjacency gives lower accuracy for the clustering-based method. For our combined model \(\texttt{MPool}_{cmb}\), the 2-star motif gives the highest accuracy for all datasets, whereas the other motifs give results very close to the 2-star motif.
## 5 Conclusion
In this work, we propose a novel motif-based graph pooling method, MPool, that captures higher-order graph structures for graph-level representation learning. We develop graph pooling methods for both types of hierarchical graph pooling models, namely selection-based and clustering-based, and we also develop a model that combines the selection-based and clustering-based models. In our selection-based pooling method, we use a motif attention mechanism, whereas in the clustering-based method, we use motif-based spectral clustering with the minCUT loss function. In both methods, we utilize node feature information and graph structure information together while learning the graph feature vector. We show that our proposed methods outperform baseline methods for most of the datasets.
|
2303.15036 | Retrievability in an Integrated Retrieval System: An Extended Study | Retrievability measures the influence a retrieval system has on the access to
information in a given collection of items. This measure can help in making an
evaluation of the search system based on which insights can be drawn. In this
paper, we investigate the retrievability in an integrated search system
consisting of items from various categories, particularly focussing on
datasets, publications and variables in a real-life Digital Library
(DL). The traditional metrics, that is, the Lorenz curve and Gini coefficient,
are employed to visualize the diversity in retrievability scores of the
three retrievable document types (specifically datasets, publications,
and variables). Our results show a significant popularity bias with certain
items being retrieved more often than others. Particularly, it has been shown
that certain datasets are more likely to be retrieved than other datasets in
the same category. In contrast, the retrievability scores of items from the
variable or publication category are more evenly distributed. We have observed
that the distribution of document retrievability is more diverse for datasets
as compared to publications and variables. | Dwaipayan Roy, Zeljko Carevic, Philipp Mayr | 2023-03-27T09:33:16Z | http://arxiv.org/abs/2303.15036v1 | # Retrievability in an Integrated Retrieval System: An Extended Study
###### Abstract
Retrievability measures the influence a retrieval system has on the access to information in a given collection of items. This measure can help in making an evaluation of the search system based on which insights can be drawn. In this paper, we investigate the retrievability in an integrated search system consisting of items from various categories, particularly focussing on datasets, publications and variables in a real-life Digital Library (DL). The traditional metrics, that is, the Lorenz curve and Gini coefficient, are employed to visualize the diversity in retrievability scores of the three retrievable document types (specifically datasets, publications, and variables). Our results show a significant popularity bias with certain items being retrieved more often than others. Particularly, it has been shown that certain datasets are more likely to be retrieved than other datasets in the same category. In contrast, the retrievability scores of items from the variable or publication category are more evenly distributed. We have observed that the distribution of document retrievability is more diverse for datasets as compared to publications and variables.
Retrievability, Dataset Retrieval, Interactive IR, Diversity
## 1 Introduction
In the present era of information, we are generating a colossal amount of data that needs to be handled and processed efficiently for quick look-ups. The expeditious advancement of technology has made data generation even more complex, with diversified forms of information coming from divergent sources. This necessitates a federated or integrated system (Adali and Emery, 1995; Arguello, 2012) that searches and assimilates results from assorted sources. Textual data still remains the predominant type among them, and significant research has been conducted in the domain of textual document retrieval. Among the rest, recent research on dataset retrieval (Kunze and Auer, 2013) has become increasingly important in the (interactive) information retrieval and digital library communities. One of the reasons is undoubtedly the enormous number of research datasets available. However, the underlying characteristics of dataset retrieval also contribute to the attention in this area. One often-mentioned characteristic
is the increased complexity of datasets over traditional document retrieval. While the latter is well-known and adequately studied, datasets often include more extensive material and structures that are relevant for retrieval. This may involve the raw data, descriptions of how the data was collected, taxonomic information, questionnaires, codebooks, etc. Recently, numerous studies have been conducted to further identify the characteristics of dataset retrieval. These studies include the observation of data retrieval practices (Kramer et al, 2021), interviews and online questionnaires (Kern and Mathiak, 2015; Friedrich, 2020) and transaction log analysis (Kacprzak et al, 2017; Carevic et al, 2020).
In this paper, we follow a system-oriented approach for studying dataset retrieval. By employing the measure of _retrievability_(Azzopardi and Vinay, 2008), we aim to gain insights into the particularities of dataset retrieval in comparison to traditional document retrieval. The measure of retrievability was initially developed to quantify the influence that a retrieval system has on access to information. In a simplified way, retrievability represents the ease with which a document can be retrieved given a particular IR system (Azzopardi and Vinay, 2008). The measure of retrievability can be utilised for several use cases.
As an extension of our prior work (Roy et al, 2022), we investigate the retrievability of various types of documents in an integrated digital library _GESIS Search_ (see Section 3), focusing on various types of data, particularly datasets, publications and variables. The assumption followed here is that in an ideal ranking system1, the retrievability of each indexed item (dataset or other publication) is equally distributed. Likewise, a discrepancy to this assumption may reveal an inequality between the items in a collection caused by the system. By employing a measure of retrievability, we expect to gain further insight into the characteristics of dataset retrieval compared to traditional document retrieval.
Footnote 1: In this paper, by _ranking system_ or, _IR system_, we refer to _a system_ containing a corpus together with the retrieval model to be used to search on that corpus.
### Research questions
We revisit the research questions put forward and discussed by Roy et al (2022) in the updated system, with a greater variety of item types and tested with more queries (see Section 4). Similar to the previous work, we investigate the following research questions on the integrated search system _GESIS Search_, focusing on an additional item type, _Variables_, together with _Publication_ and _Dataset_:
* **RQ1:** In the integrated search system with various types of items, can we observe any prior bias of accessibility of documents from a particular type?
* **RQ2:** Can we formalize this type-accessibility bias utilizing the concept of document retrievability?
* **RQ3:** How diverse are the retrievability score distributions in the different categories of documents in our integrated search system?
Our previous study (Roy et al, 2022) was designed to take all queries in the query log into account. This had the benefit of being as close to real search behaviour as possible. At the same time, this design choice introduced a popularity bias caused by recurring queries, which positively influence the retrievability scores of documents in the corresponding result sets. In this work, in contrast, the popularity bias of queries is excluded by considering only unique queries. Thus, contrasting with the previously reported results, we address the following research question:
* **RQ4:** In a real-life search system, does popularity bias of queries influence the inequality in any way?
In sum, our contributions are as follows: 1) we utilize the retrievability measure to better understand the diversity of accessing datasets in comparison to publications with real-life queries from a search log; 2) building on retrievability, we propose to employ the measurement of _usefulness_, which represents implicit relevance signals observed for datasets and publications. Our understanding of bias follows the argumentation provided in Wilkie and Azzopardi (2017) where bias denotes the inequality between documents in terms of their retrievability within the collection. Bias can be observed when a document is overly or unduly favoured due to some document features
(e.g. length, term distribution, etc.) (Wilkie and Azzopardi, 2014).
The rest of the paper is organized as follows. We first present background and related work in Section 2 together with formally introducing the concept of retrievability. The integrated search system _GESIS Search_ along with the motivation of our retrievability study is presented in Section 3. Section 4 discusses the empirical results and analysis of the outcome of the experimentation before introducing the novel concept of usefulness in Section 5 along with the experimental study of usefulness. We conclude the paper in Section 6 highlighting the contributions and findings of the paper with directions to extend the work.
## 2 Background and related work
Considering a collection of items, the retrievability of an item can be defined as how accessible or findable the item is through some search technique. In the context of document retrieval, the concept was developed and proposed in Azzopardi and Vinay (2008). Informally, the retrievability of a document in a collection indicates the expectation of the document being selected by some retrieval model within a rank cutoff. Mathematically, the retrievability of any document \(d\) in a collection \(C\) is defined as:
\[r(d)=\sum_{q\in Q}w_{q}\cdot f(rank(d,q,M),c) \tag{1}\]
where,
* \(Q\) - the set of all queries which are answerable by the collection;
* \(w_{q}\) - the weight of the query \(q\);
* \(rank(d,q,M)\) - the rank of the document \(d\) when retrieval is performed with query \(q\) using retrieval model \(M\);
* \(c\) - the rank cutoff.
The function \(f(rank(d,q,M),c)\) is an _indicator function_ that returns either 1 or 0 depending on whether the rank (\(rank(d,q,M)\)) of document \(d\) is within the rank cutoff \(c\) or not. The indicator function can be mathematically defined as the following:
\[f(rank(d,q,M),c)=\begin{cases}1,&\text{if }rank(d,q,M)\leq c.\\ 0,&\text{otherwise.}\end{cases} \tag{2}\]
In Equation 1, the retrievability of a document is computed based on retrieval performed with the set of all queries Q addressable by the collection. Considering a sizeable collection of documents, there can be infinitely many distinct queries that can be answered by various documents in the collection. One practical approach to obtain this set of all queries Q is to use a query log; however, acquiring such a log is not always feasible. In its absence, a query-based sampling method (Callan and Connell, 2001) can be applied to randomly populate Q. In Azzopardi and Vinay (2008), the authors considered generating queries from unigrams and bigrams whose collection frequency is above a threshold. This approach may result in an enormous number of queries if a large collection of documents is considered; to keep the experimental setup tractable, one approach is to truncate the list based on a certain threshold value (e.g. 2 million queries as selected by Azzopardi and Vinay). Hence, constructing Q based on either a query log or random sampling of terms from the collection are practical approximations that we can adopt in order to realize the concept of retrievability of documents in a collection.
The query weight \(w_{q}\) in Equation 1 may be used to incorporate a bias (such as popularity, importance, etc.) in the retrievability computation. Ignoring these biases, this weight is considered uniform for all queries in earlier works (Azzopardi and Vinay, 2008; Bashir and Rauber, 2009a, c). The approximated retrievability score \(\hat{r}(d)\) of document \(d\) is then a discrete value indicating the number of queries for which \(d\) is retrieved within the top \(c\) ranks. Certainly, this is a simplifying assumption, and the queries submitted to a search system in practice vary vastly both in terms of _popularity_ and _difficulty_ (Carmel et al, 2006).
The second factor of the per-query component in Equation 1 is a boolean function that depends solely on the rank at which document \(d\) is retrieved. Increasing the value of the rank cut-off
(\(c\)) broadens the set of documents considered retrieved, which positively influences the retrievability scores of more documents. Note that being selected by a retrieval model for some queries does not ensure the relevance of the document, which can only be assessed by human judgements.
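A minimal sketch of Equations 1 and 2, assuming uniform query weights and a `retrieve` function that stands in for the ranked output of an arbitrary retrieval model; the toy index and document ids are purely illustrative.

```python
from collections import defaultdict

def retrievability(queries, retrieve, c=100, weights=None):
    """r(d) = sum_q w_q * 1[rank(d, q, M) <= c], following Equations 1 and 2."""
    r = defaultdict(float)
    for q in queries:
        w_q = 1.0 if weights is None else weights[q]
        for rank, doc_id in enumerate(retrieve(q), start=1):
            if rank > c:            # indicator function f(rank, c) is 0 beyond c
                break
            r[doc_id] += w_q
    return r

# toy retrieval function standing in for a ranked retrieval model (an assumption)
def retrieve(query):
    toy_index = {"social survey": ["d1", "d2", "d3"],
                 "election data": ["d2", "d4"]}
    return toy_index.get(query, [])

r = retrievability(["social survey", "election data"], retrieve, c=100)
# r == {"d1": 1.0, "d2": 2.0, "d3": 1.0, "d4": 1.0}
```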
Retrievability as a measure was proposed in Azzopardi and Vinay (2008), where the authors experiment on two TREC collections with queries generated using a query-based sampling technique (Callan and Connell, 2001). Since then, retrievability has primarily been used to detect bias in ranking systems. For instance, Samar et al (2018) employ retrievability to research the effect of bias across time for different document versions (treated as independent documents) in a web archive. Their results show a ranking bias for different versions of the same document. Furthermore, the study confirms a relationship between retrievability and findability measured by Mean Reciprocal Rank (MRR). They follow the assumption that the lower a document's retrievability score, the more difficult it is to find the document. Another application of the retrievability measure can be found in patent or legal document retrieval, which provides a unique use case due to its recall-oriented nature. In both studies (Bashir and Rauber, 2009a, c), the authors look at document retrievability measurements and argue that a single retrievability measure has several limitations in terms of interpretability. In Bashir and Rauber (2009a), they try to improve accessibility measurement by considering sets of relevant and irrelevant queries for each document. In this way, they try to simulate recall-oriented users. In addition, they plot different retrievability curves to better spot the gaps between an optimally retrievable system and the tested system. The other work (Bashir and Rauber, 2009c) analyzes the bias impact of different retrieval models and query expansion strategies. Their experiments show that clustering-based document selection for pseudo-relevance feedback is an effective approach for increasing the findability of individual documents and decreasing the bias of a retrieval system. Further research on patent retrieval reported in Bashir and Rauber (2009b) and Bache and Azzopardi (2010) identifies content-based features that can be used to classify a set of documents based on their retrievability. Experiments on various patent collections show that these features can achieve more than 80% classification accuracy.
A study on the query list generation phase for determining the measure of retrievability is presented in Bashir and Rauber (2011). The study addresses two central problems when determining retrievability: 1) query selection and 2) query characteristics identification. It is argued that the query selection phase is usually performed individually, without well-accepted criteria for query generation. Hence their goal is to evaluate how far the selection of query subsets provides an accurate approximation of retrieval bias. The second shortcoming is addressed by determining retrievability bias under different query characteristics. In their experiments, they recognise that query characteristics influence the increase or decrease of retrievability scores. A topic-centric query generation technique, tested on the Associated Press (AP) document collection, is proposed in Wilkie and Azzopardi (2016); a significant correlation is reported between the traditional estimate of Gini and the estimate produced by this topic-centric query generation method. As recognised in Bashir and Rauber (2011), the majority of retrievability experiments employ simulated queries to determine retrievability. To study the ability of the retrievability measure to detect a potential retrievability bias using real queries issued by users, Traub et al (2016) conducted an experiment on a newspaper corpus. Their study confirms the ability to expose retrievability bias within a more realistic setting using real-world queries. A comparison of simulated and real queries with regard to retrievability scores further shows considerable differences, which indicates a need for improved construction of simulated queries. To see if there is any correlation between retrievability bias and performance measurement, in another study, Wilkie and Azzopardi (2014b) examine the relationship between retrieval bias and ten retrieval performance measures. Experimentation on TREC ad-hoc data demonstrates that the retrievability bias hypothesis tends to hold for most of the performance measures.
Retrievability of documents indicates the chance of selection by a retrieval model for various queries submitted. However, the selection of a document does not mean that the document is indeed _useful_ in addressing the information need
generating the query. This can only be realised by using document consumption signals (e.g. in the form of relevance judgements). This concept was first introduced in Cole et al (2009) as a criterion to determine how well a system is able to solve a user's information need. In their work, Cole et al denoted this notion as _usefulness_. In Hienert and Mutschke (2016), it has been operationalised within a log-based evaluation approach to determine the usefulness of a search term suggestion service. The usefulness has been further operationalised in Carevic et al (2018) to determine the effects of contextualised stratagem browsing on the success of a search session.
Recently, a considerable amount of research has been carried out concerning the characteristics of dataset retrieval. A comprehensive literature review on dataset retrieval is provided in Gregory et al (2019), focusing on dataset retrieval practices in different disciplines. Research in this area covers, for instance, the analysis of information-seeking behaviour during dataset retrieval through observations (Kramer et al, 2021), questionnaires and interviews (Kern and Mathiak, 2015; Friedrich, 2020), and transaction-log studies (Kacprzak et al, 2017; Carevic et al, 2020). In Kern and Mathiak (2015), the authors investigated the requirements that users have for a dataset retrieval system. Their findings on dataset retrieval practices suggest that users invest greater effort during relevance assessment of a dataset. They conclude that the selection of a dataset is a much more important decision compared to the selection of a piece of literature. This results in high demands on metadata quality during dataset retrieval. The complexity of assessing the relevance of a dataset is also highlighted in Kramer et al (2021): besides topical relevance, access to metadata as well as documentation about the dataset plays a crucial role. A query log analysis from four open data portals is presented in Kacprzak et al (2017). Their study indicates differences between queries issued towards a dataset retrieval system and queries in web search. In a subsequent study (Kacprzak et al, 2018), the extracted queries are further compared to queries generated from a crowdsourcing task. The intuition and focus of this work is to determine whether queries issued towards a data portal differ from those collected in a less constrained environment (crowdsourcing).
## 3 Retrievability in an integrated retrieval system
We define an _integrated search system_ as a system that searches multiple sources of different types and integrates the output in a unified framework2. The retrieval in such a system requires sophisticated decision-making considering the various modalities in documents in the collection of data.
Footnote 2: This is similar to the concepts of aggregated search (Lalmas, 2011) or federated search (Arguello, 2012).
Following Equation 1, the retrievability score of a document depends on the other documents in the collection3: considering a rank cut-off \(c\), the rank of a document under consideration can be pushed beyond the cut-off (\(>c\)) because the documents occupying the top \(c\) positions are more relevant or are duplicates (Nikkhoo, 2011). Another factor that can influence the retrievability score of a document is its popularity; a popular document will be retrieved multiple times by users over time. In the case of an integrated search engine, where the documents belong to various categories, some particular types could have higher chances than others of being retrieved. In general, there can be a disparity in the number of documents of various categories being retrieved, which can be a result of popularity bias in the collection. This type of popularity bias can impede the satisfaction of the information need of a user and, in turn, can affect the performance of the system. The satisfaction of a user can only be realised via direct feedback from them. In the absence of such explicit information, it is difficult, if at all possible, to understand whether the information need is fulfilled or not. In this article, we present an extended study of the diversity in retrievability scores for different categories of documents in the integrated search system _GESIS Search4_ (Hienert et al, 2019).
Footnote 3: Here, we are considering the employed retrieval function as constant.
Figure 1: Screenshot of GESIS Search showing result sets for research data, publications, and variables.
Figure 2: Screenshot of the variable description of variable QD3_1 in the GESIS Search.
## 4 Experimental Study
As presented in Section 3, we use the integrated search system with various categories of documents in this work. In this section, we start by describing the data that we have used in the work along with different statistics of the data; this will be followed by the experimental evaluation of the study.
### Datasets
We conduct our experimentation on the integrated search system _GESIS Search_, containing a total of 860K indexed records (as of November 2022) in different categories such as research data, publications, variables, instruments, etc. Social science publications that are indexed in _GESIS Search_ use and reference survey datasets containing hundreds or thousands of questions. These questions use so-called survey variables (variables in the following). From an information retrieval perspective, variables in GESIS Search are information objects like datasets, with specific metadata elements such as question text, answer categories, and frequency tables.
A screenshot showing the interface of GESIS Search is presented in Figure 1. See an example of a variable description in Figure 2 and the corresponding link to the variable record QD3_1 in GESIS Search6. The indexed records in GESIS Search are divided into six categories based on their types, covering more than 122K _publications_, 64K _research data_ (also referred to as _datasets_), and more than 520K _variables_. Given a query, the system returns six search result pages (SERP) corresponding to each of the categories (see Figure 1). The segregation of the SERP enables us to study the retrievability of the different types. In this study, we specifically focus on the three categories having the largest number of entries, that is, _datasets_, _publications_, and _variables_.
Footnote 5: [https://search.gesis.org/variables/exploredata-ZA5876_Varoqd3_1](https://search.gesis.org/variables/exploredata-ZA5876_Varoqd3_1)
Footnote 6: Further explanation and examples of Social Science variables and its utilisation for information retrieval can be found in (Tserterci et al, 2022).
In the integrated search system, the interactions of the users with the system are logged and stored in a database. A total of more than 40 different interaction types are stored, covering, for instance, searches (queries), record views, and export interactions (Hienert et al, 2019). The export of a record is an umbrella category comprising interactions such as bookmarking, downloading, or citing a record. These interactions are specifically useful for the application of implicit relevance feedback as they indicate a relevance of a record that goes beyond a simple record view. The interaction log of the search system provides the basis for our analysis in Section 4.4 (and later in Section 5.2). These real-user queries form the basis for determining the retrievability of documents. This ensures realistic queries in \(\mathsf{Q}\) of Equation 1, as opposed to the simulated queries used in Azzopardi and Vinay (2008) or Traub et al (2016). The data used in this study is an extended version of that of our previous work (Roy et al, 2022); in this log, all the interactions of real users with the search system were recorded for a period of more than five years, specifically between July 2017 and July 2022. The log records more than 2.3 million queries submitted to the integrated search system. Detailed statistics regarding the extracted interactions utilized in our study can be found in Table 1. Together with the previous observations for the record types Publication and Dataset, we report results for another category, the Variables.
Repeated queries can influence the retrievability score of a document. Formally, the set of all queries \(Q\) in Equation 1 may contain the same query more than once. For synthetically generated queries (used by Azzopardi and Vinay (2008) and Bashir and Rauber (2009c)), this can be avoided by keeping track of the already generated queries. However, the query log of a real-life search system records all instances where the same queries are issued multiple times by users. This factor additionally introduces a popularity bias into the retrievability of documents in the form of query popularity. The results and observations reported in our earlier study (Roy et al, 2022) were based on this type of interaction log. In order to study retrievability without the query popularity factor, we have only considered unique queries in this work.
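The two settings can be made explicit as follows: repeated queries are either collapsed to unique strings with uniform weights (the setting of this work) or kept and folded into the query weight \(w_{q}\) of Equation 1 as a popularity count. A small sketch, assuming the log is available as a plain list of query strings:

```python
from collections import Counter

query_log = ["social survey", "election data", "social survey", "social survey"]

# Option 1: unique queries with uniform weights (the setting used in this study)
unique_queries = sorted(set(query_log))
uniform_weights = {q: 1.0 for q in unique_queries}

# Option 2: unique queries weighted by their popularity in the log
counts = Counter(query_log)
popularity_weights = {q: float(n) for q, n in counts.items()}
# e.g. popularity_weights["social survey"] == 3.0
```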
### Measuring retrievability in a collection
One way of quantifying the information coverage of a collection is by the count of queries that can
be addressed (or answered) by the items in the collection. From the traditional point of view of web search, the most common way of composing queries is free text, where vocabulary terms are used to represent an information need. In a moderate-sized document collection, an intractable number of queries formed using a free-text format is possible. Also, due to the significant number of documents that can match a free-text query, a boolean matching algorithm is not sufficient; this leads to ranked retrieval, which returns an ordered list of items sorted based on their relevance.
Considering a traditional document collection C, not all documents are equally important to a query, hence the need for ranked retrieval. Now, given the set of all possible queries Q, some documents in C will be relevant to more queries than others (depending on the topical coverage of the document), which can be measured by the concept of retrievability (see Section 2). Formally, with the notion of retrievability, some documents will have higher \(r(d)\) values in a collection, resulting in an unequal distribution of retrievability scores.
Similar types of inequalities are observed in economics and social sciences, and they are traditionally measured using the Gini coefficient or Lorenz curve (Gastwirth, 1972) which measures the statistical dispersion in a distribution7.
Footnote 7: Lorenz curve and Gini coefficient are popular in economics to measure wealth disparity in a community/country.
Mathematically, the Gini coefficient (G) over the values \(v\) of a population \(\mathcal{P}\) can be defined as:
\[G=\frac{\sum_{i=1}^{N}(2i-N-1)\,v(i)}{N\sum_{j=1}^{N}v(j)} \tag{3}\]
where \(N\) is the size of the population and \(v(i)\) is the value of the \(i^{th}\) item of \(\mathcal{P}\) when the values are sorted in non-decreasing order. The Gini coefficient lies between 0 and 1 and is proportional to the inequality inherent in the population: a higher value of \(G\) indicates greater disparity and vice versa. In other words, a value of \(G\) equal to 0 in Equation 3 indicates that all the items in the population are equally likely to be selected, whereas higher values of \(G\) indicate a bias, implying that only certain items are likely to be selected.
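A minimal sketch of Equation 3; the values are sorted in non-decreasing order before the summation, and the same cumulative shares can be reused to draw the Lorenz curve. The toy inputs are illustrative only.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a list of non-negative values (Equation 3)."""
    v = np.sort(np.asarray(values, dtype=float))   # non-decreasing order
    N = len(v)
    i = np.arange(1, N + 1)
    return np.sum((2 * i - N - 1) * v) / (N * np.sum(v))

def lorenz(values):
    """Cumulative share of the total, used to plot the Lorenz curve."""
    v = np.sort(np.asarray(values, dtype=float))
    return np.insert(np.cumsum(v) / np.sum(v), 0, 0.0)

print(gini([1, 1, 1, 1]))              # 0.0  -> all items equally retrievable
print(round(gini([0, 0, 0, 10]), 2))   # 0.75 -> highly unequal retrievability
```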
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Record type** & **Size** & **\#queries (unique)** & **avg. query length** & **\#exports** \\ \hline
**Publication** & 113K & 1,028,485 (345,144) & 2.6 & 63,577 \\
**Dataset** & 64K & 1,208,108 (268,208) & 2.3 & 142,184 \\
**Variables** & 523K & 79,221 (23,909) & 2.1 & 18,832 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Statistics of the extracted information belonging to the three selected record types.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline
**Rank** & \multicolumn{4}{c}{**Publication**} & \multicolumn{4}{c}{**Research data**} & \multicolumn{4}{c}{**Variables**} \\ \cline{2-13}
**cutoff** & \(\mu\) & g-\(\mu\) & \(\sigma^{2}\) & \(\sigma\) & \(\mu\) & g-\(\mu\) & \(\sigma^{2}\) & \(\sigma\) & \(\mu\) & g-\(\mu\) & \(\sigma^{2}\) & \(\sigma\) \\ \hline
10 & 27.46 & 7.20 & 6554.97 & 80.96 & 28.16 & 6.45 & 12582.64 & 112.17 & 2.52 & 1.77 & 12.57 & 3.55 \\
20 & 37.56 & 10.49 & 9983.99 & 99.92 & 39.28 & 9.11 & 20022.23 & 141.50 & 2.77 & 1.91 & 15.05 & 3.88 \\
30 & 46.13 & 13.65 & 12666.31 & 112.54 & 48.49 & 11.63 & 27404.71 & 165.54 & 2.97 & 2.03 & 16.98 & 4.12 \\
40 & 53.34 & 16.88 & 14975.97 & 122.38 & 56.3 & 14.17 & 33835.99 & 183.95 & 3.13 & 2.12 & 18.47 & 4.30 \\
50 & 59.66 & 20.15 & 16821.35 & 129.70 & 63.52 & 16.97 & 40087.10 & 200.22 & 3.25 & 2.20 & 19.52 & 4.42 \\
100 & 66.80 & 26.09 & 17923.59 & 133.88 & 90.81 & 32.88 & 63517.06 & 252.03 & 3.67 & 2.48 & 22.68 & 4.76 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The mean (both arithmetic and geometric), variance, and standard deviation of the retrievability values when the rank cut-off is varied.
### Experimentation
As explained in Section 2, the retrievability of a document is a measurement of how likely the document is to be retrieved by _any_ query submitted to the system8. Hence, the study of retrievability in a collection of documents requires retrieval with a diversified set of queries covering all topics discussed in the collection. In other words, the retrievability of the documents should be calculated considering all sorts of queries submitted to the system. However, a virtually infinite number of free-text queries can be answered by a collection. To cover all the topics, a traditional approximation is to simulate a set of queries randomly, accepting the risk of erratic queries not aligned with the real scenario (Azzopardi and Vinay, 2008; Traub et al, 2016). With the availability of a query log, the process of query generation can be made more formalized and streamlined by considering the actual queries submitted by real users. For the study reported in this article, we utilize the query log presented in Section 4.1.
Footnote 8: By a system, we are referring to the organization of the collection, along with a retrieval model to be used for retrieval for a given query.
As reported in the earlier study, the retrievability distribution in a collection depends on the employed retrieval model (Azzopardi and Vinay, 2008). Following the findings by Azzopardi and Vinay, we use BM25 as the retrieval model (Sparck Jones et al, 2000). Particularly, we use the implementation available in Elasticsearch9, which uses Lucene10 as the underlying retrieval library. Following Equation 1, the retrievability of a document depends on the selection of the rank cut-off value (\(c\)) - a rank threshold indicating how deep in the ranked list we explore before finding that document. Considering the employed retrieval model and the set of all queries \(Q\) as fixed, \(c\) is the only parameter in calculating the retrievability. For a query \(q\), setting a lower value of \(c\) reduces the number of documents considered retrievable, because \(f(rank(d,q,M),c)\) is 1 only if \(rank(d,q,M)\leq c\) (see Equation 2). A higher value of \(c\) allows more documents to be considered retrievable, reducing the overall inequality. In this study, we have varied the value of \(c\) in the range 10 to 100 in steps of 10 and analyzed the observations, which are reported in the next section11.
Footnote 9: [https://www.elastic.co/](https://www.elastic.co/)
Footnote 10: [https://www.lucene.apache.org/](https://www.lucene.apache.org/)
Footnote 11: All codes are available here: [http://u.pc.cd/vzKctalK](http://u.pc.cd/vzKctalK)
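Since the largest cut-off is 100, one retrieval run per query to depth 100 suffices to evaluate every smaller cut-off on the same ranked lists; a minimal sketch of this bookkeeping, assuming the per-query ranked lists have already been obtained from the BM25 index (the toy lists below are placeholders):

```python
from collections import defaultdict

cutoffs = range(10, 101, 10)

def retrievability_per_cutoff(ranked_lists, cutoffs):
    """Compute r(d) for several rank cut-offs from one retrieval run per query.

    ranked_lists maps each query to its top-100 document ids in retrieval order.
    """
    r = {c: defaultdict(int) for c in cutoffs}
    for q, docs in ranked_lists.items():
        for rank, doc_id in enumerate(docs, start=1):
            for c in cutoffs:
                if rank <= c:
                    r[c][doc_id] += 1
    return r

# toy ranked lists standing in for the Elasticsearch/BM25 output (an assumption)
ranked_lists = {"q1": [f"d{i}" for i in range(1, 101)],
                "q2": [f"d{i}" for i in range(50, 150)]}
r = retrievability_per_cutoff(ranked_lists, cutoffs)
```

Applying the Gini coefficient of Equation 3 to each resulting \(r(d)\) distribution then yields one value per cut-off, as reported in Table 3.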
### Observation and analysis
We start this section by describing different statistical properties of the retrievability distribution of items (from all three document types that we experimented with) when the value of \(c\) is varied. The mean (\(\mu\)), geometric mean (g-\(\mu\)), variance (\(\sigma^{2}\)), and standard deviation (\(\sigma\)) of the retrievability score distributions of the different types (publication, dataset, and variable) are given in Table 2. In general, it can be noticed that all the statistical measures for datasets are far more diverse than for the other categories. On varying the value of \(c\) from 10 to 100, we observe a change
Figure 3: Graphical representation of the change in various statistical measures of the observed distribution of retrievability scores. The mean, geometric-mean, variance and standard deviation of the distribution of retrievability scores of publication (in blue), dataset (in orange), and variables (in green) are presented.
of more than 140% and 220% in mean retrievability scores in the case of publications and datasets respectively, while only a 45% change is noticed in the case of variables. In comparison to our earlier work (Roy et al, 2022), these changes in the retrievability scores are moderate and not as substantial as seen before. Note that we have excluded repeated queries from the interaction log in this work, which were considered in Roy et al (2022). This indicates that a significant number of repeated queries submitted to the system contributed to the change in moments reported earlier, resulting in a vast diversity in retrievability scores (see Roy et al (2022), Table 2). Similar trends are recorded for the variance and standard deviation as well when computed using the distribution of \(r(d)\) for all three categories with different \(c\) values. From Table 2, we can conclude that most of the statistical measurements (specifically mean, variance, and standard deviation) are higher for datasets than for publications. In comparison, the geometric mean (g-\(\mu\) in Table 2) is higher for publications than for datasets at the lower rank cut-offs; however, the geometric mean of retrievability of datasets surpasses that of publications at rank cut-off 100. Combining the observations drawn from the geometric mean with the other statistics, we can perceive that for some dataset items the retrievability values are very large (popular datasets retrievable by many queries); at the same time, there are datasets with poor \(r(d)\) values that are rarely retrieved through the submitted queries. The first category of datasets contributes to the high mean of \(r(d)\), which is consistent across different \(c\) values, while the datasets of the second category cause the geometric mean to fall. For the variables, all these measures are noticeably smaller than for publications and datasets. The reason is the relatively small number of queries of the variable category compared to the other types; as a result, the variables are in general selected for fewer queries in comparison to the other categories. These variations are presented graphically in Figure 3.
As proposed in Azzopardi and Vinay (2008) and used in our earlier work (Roy et al, 2022), we utilize the Gini coefficient (\(G\)) to quantify the variation in retrievability scores, and the Lorenz curve to graphically represent the disparity in retrievability among the items in the different categories. Figure 4 plots the Lorenz curves with the \(r(d)\) scores computed separately for publications, datasets and variables. To consider the highest coverage, we set the rank cut-off \(c\) to 100 while plotting the \(r(d)\) values12. From Figure 4, it is seen that the retrievability of datasets (presented in Figure 3(b)) is more imbalanced than the other two types, with a Gini coefficient of 0.7000. Also, variables are seen to be the closest to equality (in Figure 3(c)), attaining a Gini coefficient of 0.4806.
Footnote 12: Similar trends are observed with \(c\) set to lower values.
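As an illustration, the Gini coefficient used above can be computed directly from the list of retrievability scores. The following is a small Python sketch under the assumption that the scores are given as non-negative numbers; the function and variable names are our own.

```python
def gini(scores):
    """Gini coefficient of a list of non-negative retrievability scores.
    0 means every document is equally retrievable; values close to 1 mean
    retrievability is concentrated on a few documents (cf. the Lorenz curve)."""
    xs = sorted(scores)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Equivalent to twice the area between the Lorenz curve and the equality line.
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n
```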
As discussed in Section 2, the retrievability score of documents increases with higher values of \(c\); consequently, the overall retrievability balance of the collection also improves, bringing the curve closer to the line of equality. To empirically see this variation, the Gini coefficients attained at different rank cut-offs are presented in Table 3 which
Figure 4: The Lorenz curves of the retrievability scores (rank cutoff set to 100). The straight line going through the origin (in black) indicates the equality, that is, when all the documents are equally retrievable.
is also graphically displayed in Figure 5. From the table, it can be noticed that the fall in \(G\) for variables (green curve in Figure 5) is more than 45%. From a severely unequal distribution with \(G\) at 0.8281 for rank cut-off 10 (the highest among all the categories), the Gini value falls sharply to 0.4806 when the rank cut-off is set to 100. This indicates that more variables become discernible if the ranked list is explored beyond the top positions.
Additionally, we report the percentage of total items retrieved while changing \(c\) in Table 3. Note that more than 92% of publications are retrieved within the top 10 positions, while only 58% and 10% of the items from the dataset and variable categories, respectively, are retrieved within the same rank cut-off. Increasing the value of \(c\), more than 98% of the documents of both the publication and dataset types are retrievable within the top 100 ranked documents over all queries. The significant change in the percentage of retrieved documents of type dataset indicates that searching for datasets is more complex than searching for publications; a deeper traversal of the ranked list might be essential to find a relevant dataset. Note that only about half of the items from the variable category (specifically 50.43%) are retrieved within the top 100 positions, although the Gini value indicates more balance in retrievability (\(G=0.4806\)). This leads to an interesting observation: as reported in Table 2, the average retrievability scores for variables are significantly smaller (\(r(d)=3.67\) at cut-off 100), so the difference between not being retrieved at all (\(r(d)=0\)) and being retrieved with an average retrievability score is small. Due to this seemingly inconsequential difference in \(r(d)\) scores, the Gini coefficient is not affected significantly. However, the variables that are not retrieved at all lower the percentage of retrieved items.
### Comparing influence of query popularity bias
Considering a real-life query log, there is an obvious possibility of having more than one entry for popular queries. While computing retrievability, the items retrieved by those repeated queries get a boost in their retrievability score due to the popularity bias of the queries. To understand the influence of this query popularity bias, in this section we report the relationship between the retrievability scores of the items computed with \(i)\) \(Q_{r}\) - the interaction log containing _repeated_ queries, and \(ii)\) \(Q_{u}\) - the query log with only the _unique_ queries13. Particularly, we report how disjoint the documents with the highest retrievability scores are when the retrievabilities are computed with the two types of query sets separately. Ordering the documents by their retrievability scores yields one ranked list of documents for \(Q_{r}\) and one for \(Q_{u}\). In order to compare and contrast the lists produced by the two query sets, we adopt three ways to quantify the difference:
Footnote 13: Note that as the system may evolve with new documents being added into the index, the exact ranked list produced for the same query submitted at two different times may differ. However, we have ignored the evolving nature of the index and have considered the latest snapshot of the index to perform the retrieval.
* **Set-based:** We compute the Jaccard's coefficient between the two lists ranked by their retrievability scores at different rank cut-offs. Particularly, the first 1K, 5K, 10K, 20K and 50K top-ranked items are considered and their set-based overlap is computed. The results are reported in Table 4. From the results, we can see that the overlap among the items having the highest 1K retrievability scores is 10% and 12% respectively for the publication and dataset categories. However, around 31% overlap is observed for the variable category among the top 1K items. The Jaccard's coefficient increases quickly for all the categories when a higher number of items
Figure 5: The change in Gini coefficient when the rank cut-off is varied in the range from 10 to 100. The blue line indicates publications, the orange curve datasets, and the green curve variables.
are considered. This indicates that the diversity between the two types of ranked lists is significant for all three categories of items.
* **Correlation-based:** Further, we compare the two ranked lists in terms of their correlations. Based on the concordant and discordant pairs, we compute the Kendall's \(\tau\) correlation coefficient. Additionally, the Spearman's rank correlation is also assessed and reported in Table 5 for all three categories. Considering these measures, we note that the rank correlations indicate only a very weak relation between the two lists for all of the types, with the most diverse results observed for the publication category. For variables, the correlations are higher than for the other types, for which they are negligible.
* **Rank overlap-based:** The correlation-based measures suffer from certain limitations: the lists need to be conjoint, and the measures do not consider the positions at which the disagreements occur; that is, they do not discriminate between a mismatch at the top positions and one at later positions. As an alternative, Webber et al (2010) proposed the rank-biased overlap measure (RBO) that weights the differences according to the position at which they occur. Mathematically, the RBO between two ranked lists \(S\) and \(T\) is computed as: \[RBO(S,T,p)=(1-p)\sum_{d=1}^{\infty}p^{d-1}\cdot A_{d}\] (4) In the equation, \(d\) is the depth of the list, \(p\) is a weighting factor (between 0 and 1) and \(A_{d}\) is the number of common items at depth \(d\) divided by the depth \(d\) itself. Following Webber et al, we set the weight parameter \(p\) to 0.9. The RBO-based similarity between the two types of results is reported in Table 5 (a small implementation sketch of the overlap measures is given after this list). Again, it is prominent from the results that the dissimilarities between the rankings of the items based on their retrievability scores are noteworthy, particularly for the publication and dataset categories.
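The sketch below illustrates, in Python, how the set-based and rank-overlap measures above can be computed for two rankings. The truncation depth and all names are our own choices, and the RBO value is the truncated prefix sum of Equation 4 rather than an extrapolated estimate.

```python
def jaccard_at_k(S, T, k):
    """Set-based overlap (Jaccard's coefficient) of the top-k items of two rankings."""
    a, b = set(S[:k]), set(T[:k])
    return len(a & b) / len(a | b)

def rbo(S, T, p=0.9, depth=1000):
    """Rank-biased overlap (Equation 4), truncated at `depth`.
    A_d is the size of the overlap of the two length-d prefixes divided by d;
    deeper ranks receive geometrically smaller weight (1 - p) * p^(d - 1)."""
    score, seen_s, seen_t = 0.0, set(), set()
    for d in range(1, min(depth, len(S), len(T)) + 1):
        seen_s.add(S[d - 1])
        seen_t.add(T[d - 1])
        a_d = len(seen_s & seen_t) / d
        score += (1 - p) * p ** (d - 1) * a_d
    return score
```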
From the dissimilarities between the two ranked lists for all three categories, it can be concluded that the popularity bias of queries affects the retrievability irrespective of the type. Out of the three categories, the least influence of this bias is observed for items belonging to the variable category. The retrievability of items from the publication and dataset categories is the most impacted, with less than 13% common items observed among the top 1K.
Table 4: The Jaccard’s coefficient (set-based similarity) between the ranked lists of items obtained with the different query sets \(Q_{r}\) and \(Q_{u}\). The first column indicates the number of top retrievable items considered to compute the similarity.

| **Top items considered** | **Publication** | **Dataset** | **Variable** |
| --- | --- | --- | --- |
| 1000 | 0.1025 | 0.1287 | 0.3199 |
| 5000 | 0.2917 | 0.2606 | 0.4319 |
| 10000 | 0.3896 | 0.3546 | 0.5473 |
| 20000 | 0.4584 | 0.4821 | 0.6353 |
| 50000 | 0.5756 | 0.8383 | 0.8897 |
Table 3: Change in Gini coefficient when the rank cut-off is increased. Also, the number and percentage of retrieved documents of type publication, dataset and variable are presented.

| **Rank cutoff** | **Gini (Publication)** | **Gini (Dataset)** | **Gini (Variable)** | **Retrieved Publication** | **%** | **Retrieved Dataset** | **%** | **Retrieved Variable** | **%** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.8281 | 0.8800 | 0.8892 | 110666 | 92.15 | 37554 | 58.31 | 53799 | 10.28 |
| 20 | 0.7460 | 0.8438 | 0.8194 | 116322 | 96.86 | 46437 | 72.10 | 89959 | 17.20 |
| 30 | 0.7276 | 0.8201 | 0.7640 | 118050 | 98.30 | 51160 | 79.44 | 118961 | 22.74 |
| 40 | 0.7112 | 0.7996 | 0.7155 | 118819 | 98.94 | 54503 | 84.63 | 144393 | 27.60 |
| 50 | 0.6961 | 0.7813 | 0.6701 | 119259 | 99.31 | 56761 | 88.13 | 167691 | 32.06 |
| 100 | 0.6632 | 0.7000 | 0.4806 | 119847 | 99.80 | 63735 | 98.96 | 263801 | 50.43 |
## 5 From Retrievability to Usefulness
Usefulness was introduced in Cole et al (2009) and was initially designed as a criterion for the evaluation of interactive search systems. The _usefulness_ of a document can be defined as how often the document is retrieved and _exported_ (see Section 4.1) by the end user. Of course, the concept of usefulness can only reliably be captured through relevance judgements submitted by the user for a given query, and the relevance of a document may also depend on the perspective of the user, which may vary across users and over time. Without explicit relevance judgements, the usefulness of documents cannot be reliably approximated. Considering the availability of the export and utilisation information from the query log, we can define the usefulness of a document (\(u(d)\)) by the following equation:
\[u(d)=\sum_{q\in\mathbb{Q}}w_{q}\cdot g(d,q) \tag{5}\]
In Equation 5, the weight of the query (\(w_{q}\)) can be defined in a similar way as in retrievability (Equation 1). The usefulness of a document may also depend on the _difficulty_ of the query (Carmel et al, 2006; Carmel and Yom-Tov, 2010)14. A document \(d\) that is retrieved and consumed following a query \(Q\) should be considered more useful than another document, say \(d^{\prime}\), consumed for an associated query \(Q^{\prime}\) which is relatively easier than \(Q\) (i.e. \(difficulty(Q)>difficulty(Q^{\prime})\)). Hence, we extend the definition of the query weight by taking a difficulty factor into account in Equation 6.
Footnote 14: A query can be considered _difficult_ if the top ranked documents are mostly non-relevant; in this scenario, the user has to go deep down the ranked list to find a document addressing the query (Carmel and Yom-Tov, 2010).
\[w^{\prime}_{q}=w_{q}*h(q) \tag{6}\]
where the function \(h(q)\) represents the difficulty of the query \(q\). The function \(g(\cdot)\) in Equation 5 indicates usefulness in terms of relevance of the document \(d\) for the query \(q\). Mathematically, \(g(\cdot)\) can be defined as follows:
\[g(d,q)=rel(d,q) \tag{7}\]
The function \(rel(d,q)\) in Equation 7 indicates the relevance of \(d\) for the query \(q\). It works in the same way as \(f(k(d,q),c)\) in Equation 2, considering binary relevance (that is, \(d\) can be either relevant, \(rel(d,q)=1\), or non-relevant, \(rel(d,q)=0\), to the query \(q\)).
Informally speaking, the usefulness of a document can be stated as the number of queries for which it is exported (i.e. consumed) by the user. Considering a SERP without any duplicate documents, the usefulness can be further simplified to the number of times the document is exported.
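The following Python sketch illustrates this simplified computation of \(u(d)\) under the assumptions made in this study (binary relevance from export events, \(w_{q}=h(q)=1\)); the log format and the identifier names are our own illustration.

```python
from collections import Counter

def usefulness_scores(export_log):
    """Usefulness u(d) following Equation 5 with w_q = h(q) = 1: the number
    of distinct queries for which the document was exported by a user.

    `export_log` is assumed to be an iterable of (query, doc_id) pairs,
    one per export interaction recorded in the interaction log."""
    u = Counter()
    for query, doc_id in set(export_log):  # count each (query, document) pair once
        u[doc_id] += 1
    return u
```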
### Experimentation
As presented and argued earlier in Section 5, a signal of document consumption by the user is essential in order to compute the usefulness of documents. We utilize the information stored in the interaction log of the integrated search system _GESIS Search_ as the indication of document consumption by the user. Particularly, the usefulness is determined on the basis of implicit relevance feedback from the _export_ interactions (see Section 4.1). The difficulty of the query is kept constant in this study (\(h(q)\) in Equation 6 set to 1) and a further study in this regard is left as future work.
### Observation and analysis
The experimental results on usefulness are graphically presented in Figure 6, where Lorenz curves are displayed for the usefulness of the documents of type publication, dataset and variable.
Table 5: The rank-correlation-based (Kendall’s \(\tau\) and Spearman’s \(r\)) and rank-bias-based (RBO) similarities between the ranked lists of items obtained with the different query sets \(Q_{r}\) and \(Q_{u}\).

| **Measure** | **Publication** | **Dataset** | **Variable** |
| --- | --- | --- | --- |
| **Kendall’s \(\tau\)** | 0.0279 | 0.0789 | 0.1275 |
| **Spearman’s \(r\)** | 0.0390 | 0.1179 | 0.2267 |
| **RBO** | 0.4594 | 0.6211 | 0.7119 |
From Figure 5(c), we can observe that the usefulness distribution of variables is close to being equally distributed compared to the other types. In comparison, the corresponding distribution of datasets (presented in Figure 5(b)) is more skewed, with an evident inclination towards certain items being more useful. The corresponding Gini coefficients of the distributions are presented in Table 6, where the value of \(G\) for the usefulness distribution of datasets is almost three times greater than that of variables. The difference between publications and datasets is also evident. This observation clearly highlights that a few datasets are more useful than the rest, whereas the usefulness distribution of the variables is considerably close to being uniform. For publications, the distribution is also observed to be similar to that of the variables, that is, close to uniformity.
## 6 Conclusion and future work
In Roy et al (2022), we reported a significant difference in the retrievability of items belonging to various categories in the integrated search system _GESIS Search_. We particularly focused on the types _publications_ and _datasets_ and concluded that the retrievability scores differ significantly depending on whether an item belongs to the publication or the dataset category. As an extension of that work, we have included another category, _variables_, in the study of retrievability. Along with that, we have used a newer and larger version of the interaction logs for our experimentation. A noticeable difference in the experimental setup from our earlier work is that we have used a deduplicated version of the log. That is, only the unique queries from the interaction log are considered, excluding any repeated entries. This bypasses the query popularity bias, which may influence the retrievability of the items.
In this extended study, we observe similar phenomena on the newer data as well as on the variable type. In response to **RQ1**, we have seen a significant popularity bias, with certain items being retrieved more often than others. Particularly, it has been shown that certain items from the dataset category are more likely to be retrieved than other items in the same category. In contrast, the retrievability scores of items from the variable or publication types are more evenly distributed. For **RQ2**, the intra-document selection bias is formalized using the common measures of the Lorenz curve and the Gini coefficient. In response to **RQ3**, we have observed that the distribution of document retrievability is more diverse for datasets as compared to publications. This can
Table 6: The Gini coefficient computed with the distribution of usefulness of the publications, datasets and variables. A higher Gini coefficient (upper bound 1.0) indicates an uneven distribution of usefulness.

|  | **Publication** | **Dataset** | **Variables** |
| --- | --- | --- | --- |
| **Gini coefficient** | 0.3160 | 0.8031 | 0.2876 |
Figure 6: Lorenz curves for the usefulness values. The straight line going through the origin (in black) indicates the _equality_, that is, when all the documents are equally useful. The blue (Figure 5(a)) and orange (Figure 5(b)) curves respectively represent publications and datasets, while variables are indicated by the green curve (Figure 5(c)).
be attributed again to the popularity bias of certain items in the dataset category. The earlier study used an interaction log not employing any deduplication of queries; as a result, the items retrieved for those popular queries (occurring frequently in the log) gain a boost in the computed retrievability scores. In this paper, we have further included an explicit discussion and comparison of the retrievability scores of items in different categories when the query popularity bias is factored out by the deduplication of the queries. In this connection, as a response to **RQ4**, we showed that there can be a positive influence of the query popularity bias on the distribution of the retrievability scores.
Further study on the measurement of usefulness (proposed in our earlier work (Roy et al, 2022)) reveals a prominent diversity in the nature of consumption of items among the different types. We notice that variables are close to having equality in usefulness, which is significantly more disparate in the publication and dataset categories. Additionally, we have proposed a measurement of the _usefulness_ of documents based on the signal of document consumption by the users after submitting a query to the system. Experimenting with the variables, we observe that the usefulness of items in this category is closer to equality than that of items in the other categories.
The proposed usefulness metric indicates a document's popularity in terms of being consumed by the users. Hence, one possible extension of this work is to test the applicability of usefulness for improving retrieval performance. Incorporating the usefulness of documents as a feature in a learning-to-rank framework could boost retrieval effectiveness. In terms of presenting the results (SERP) to end users, usefulness can be used as a sorting criterion to organise the retrieved items by popularity. Specifically, in addition to presenting the results sorted by recency or relevance, the system can be extended to provide an ordering based on how often a document is consumed by the users.
#### Acknowledgment
This work was funded by DFG under grant MA 3964/10-1, the "Establishing Contextual Dataset Retrieval - transferring concepts from document to dataset retrieval" (ConDATA) project at GESIS. Dwaipayan Roy wants to acknowledge a research grant provided by the GESIS Research Gateway EUROLAB in summer 2022.
#### Conflict of interest
Philipp Mayr is on the Editorial Board of the "International Journal on Digital Libraries" and guest co-editor of the special issue "JCDL 2022". In this case, the co-editors are handling the review process.
|
2309.00306 | On the Aggregation of Rules for Knowledge Graph Completion | Rule learning approaches for knowledge graph completion are efficient,
interpretable and competitive to purely neural models. The rule aggregation
problem is concerned with finding one plausibility score for a candidate fact
which was simultaneously predicted by multiple rules. Although the problem is
ubiquitous, as data-driven rule learning can result in noisy and large
rulesets, it is underrepresented in the literature and its theoretical
foundations have not been studied before in this context. In this work, we
demonstrate that existing aggregation approaches can be expressed as marginal
inference operations over the predicting rules. In particular, we show that the
common Max-aggregation strategy, which scores candidates based on the rule with
the highest confidence, has a probabilistic interpretation. Finally, we propose
an efficient and overlooked baseline which combines the previous strategies and
is competitive to computationally more expensive approaches. | Patrick Betz, Stefan Lüdtke, Christian Meilicke, Heiner Stuckenschmidt | 2023-09-01T07:32:11Z | http://arxiv.org/abs/2309.00306v1 | # On the Aggregation of Rules for Knowledge Graph Completion
###### Abstract
Rule learning approaches for knowledge graph completion are efficient, interpretable and competitive to purely neural models. The rule aggregation problem is concerned with finding one plausibility score for a candidate fact which was simultaneously predicted by multiple rules. Although the problem is ubiquitous, as data-driven rule learning can result in noisy and large rule sets, it is underrepresented in the literature and its theoretical foundations have not been studied before in this context. In this work, we demonstrate that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules. In particular, we show that the common Max-aggregation strategy, which scores candidates based on the rule with the highest confidence, has a probabilistic interpretation. Finally, we propose an efficient and overlooked baseline which combines the previous strategies and is competitive to computationally more expensive approaches.
Machine Learning, Knowledge Graph Completion
## 1 Introduction
A knowledge graph (KG) is a collection of _relation(subject, object)_ facts which can be used to compactly describe certain domains. KGs can be utilized for various downstream applications such as drug repurposing (Liu et al., 2021) or visual relationship detection (Baier et al., 2017). Most of the real-world KGs are incomplete, which means that absent facts are not necessarily false. The problem of knowledge graph completion (KGC) aims to derive the missing facts by using the information in the existing graph (Ruffinelli et al., 2020; Rossi et al., 2021). The proposed model classes in the literature are data-driven, e.g., a model might learn the regularity that people who appear in movies tend to be actors and use it to make new predictions. Although the dominating paradigm in the literature is built on models based on latent representations, a KG is symbolic by its nature.
Symbolic machine learning approaches for KGC employ rule mining techniques and represent the KG with the raw predicates which makes them inherently interpretable. In regard to predictive performance they are shown to be competitive to latent based approaches (Rossi et al., 2021) and can achieve state-of-the-art results on large graphs (Meilicke et al., 2023). To perform KGC with a symbolic approach, a previously learned set of rules has to be applied to the KG to derive plausibility scores for unseen target facts. Whenever multiple rules predict a candidate fact, the question arises of how to aggregate individual rules, as demonstrated in the following running example.
**Example 1.1**.: _Consider the following clauses or rules._
\[c_{1}\ [0.64]:\ worksFor(X,Y)\gets internsAt(X,Y)\]
\[c_{2}\ [0.44]:\ worksFor(X,Y)\gets studentAt(X,Z),\ locatedIn(Z,L),\ locatedIn(Y,L)\]
\[c_{3}\ [0.41]:\ worksFor(X,Y)\gets studentAt(X,Z),\ cooperatesWith(Y,Z)\]
_The values in brackets denote the confidences of the rules._
Such rules can be mined at scale from a given KG with existing data-driven rule learning systems (e.g., Chen et al., 2016; Ortona et al., 2018; Fan et al., 2022).
The predictive quality of a mined rule set depends to a large extent on the aggregation decision, and surprisingly there exists a theoretical and empirical gap in the recent KGC literature between techniques to learn rules and their successful application. To the best of our knowledge, there exist only two recent works which are primarily concerned with the aggregation problem for KGC (Ott et al., 2021; Betz et al., 2022b). While they improve upon simple strategies, the approaches are computationally expensive and their theoretical foundations are not discussed.
The goal of this work is to close this gap and to inspire new research in this direction. We aim to achieve this by developing the formal foundations of the problem and by empirically analysing the practicality of existing approaches. We present a probabilistic model in which the aggregation reduces to performing marginal inference over a joint distribution of the rules when rule marginals are approximated with confidences (Section 4.1 and 4.3). With this formulation we are able to show that the common Max-aggregation strategy can be recovered from the model when the correlation matrix of the rules is set to the upper Frechet-Hoeffding bound for the correlation of random variables (Section 4.4). We then search for the simplest and most efficient way to combine the assumptions made by common aggregation strategies. This leads to an efficient baseline, Noisy-or top-\(h\), which is competitive when taking into account the performance-runtime trade-off (Section 5). Moreover, our experiments show that the choice of the aggregation function has significant performance impacts and therefore it deserves more attention in the context of rule-based KGC.
## 2 Related Work
While data-driven rule learning approaches for KGC are often evaluated in comparison to embedding models, the focus of this work is rule aggregation and we therefore refer to the recent literature for an overview of latent-based KGC (Rossi et al., 2021).
Rule mining approaches learn datalog rules from a KG. In the context of **association rule mining**, AMIE (Galarraga et al., 2013) and the respective improved versions AMIE+ (Galarraga et al., 2015) and AMIE3 (Lajus et al., 2020) show how to mine rules when data is incomplete. AnyBURL (Meilicke et al., 2019) is the successor of RuleN (Meilicke et al., 2018). It is shown to be competitive to neural approaches (Rossi et al., 2021; Meilicke et al., 2023) and it can be utilized to explain predictions made by embedding models (Betz et al., 2022a). Other approaches are tailored towards large graphs (Fan et al., 2022; Chen et al., 2016) or to learn negative rules (Ortona et al., 2018). There also exist attempts to improve rule quality by providing more advanced confidence computations (Galarraga et al., 2013; Pellissier Tanon et al., 2017; Zupanc & Davis, 2018). The rule quality is evaluated by calculating the precision of the individual rules, independently of the remaining rules, on a gold standard KG. For the resulting metrics, the aggregation problem is irrelevant. In this work we regard rule quality from the viewpoint of the predictions made by the rules, which also allows comparisons to other model classes.
Related branches of work combine latent and symbolic models in hybrid approaches (Guo et al., 2016; Garcia-Duran & Niepert, 2018; Wu et al., 2022; Meilicke et al., 2021). Moreover, some works propose **differentiable rule learning**, i.e., learning rules by solving a smooth optimization problem (Yang et al., 2017; Sadeghian et al., 2019). Rule mining and the aggregation are arguably coalesced in one forward pass of a neural module. It has been shown, nevertheless, that the rules extracted from the models might not derive the same facts as the models themselves and achieve a lower predictive performance (Tena Cucala et al., 2022). Therefore, these approaches might benefit from separating rule learning and the aggregation. A step in this direction is made by RNNlogic (Qu et al., 2021), in which a neural rule generator and a reasoning predictor operate independently. The predictive performance of the resulting model, when not augmented with embeddings, lags, however, behind purely symbolic models.
The combination of logic and uncertainty has a rich history in the **statistical relational learning** literature. For instance, Stochastic Logic Programs (Muggleton et al., 1996; Sato & Kameya, 1997) and Bayesian Logic Programs (BLP) (Kersting & De Raedt, 2001) augment inductive logic programming (Muggleton & De Raedt, 1994) with probability semantics. Rules are represented as conditional probabilities and a joint probability distribution is modelled over the least Herbrand base of the logic program. Here, the aggregation problem becomes explicit. In particular, when multiple conditionals have the same effect variable, they are collapsed into one by the use of a _combining rule_. Nevertheless, this heuristic is applied on top of the formal framework whereas in this work we model the problem directly. A difficulty for BLPs is that the probability distribution is only well defined when the underlying graph does not contain cycles which is quite unlikely in the context of KGC when millions of rules are learned. Markov Logic Networks (MLNs) (Richardson & Domingos, 2006) are proposed to overcome the cycle problem as well as the requirement to define the _ad hoc_ combining rule. MLNs subsume many of the approaches from the statistical learning literature. Each possible ground fact is associated with a binary random variable and every possible grounding of
every rule with a weight and a binary feature. The aggregation of clauses is performed implicitly for MLNs and cannot be modelled easily. We show an example regarding MLNs in the appendix of this work.
The focus of this work is on settings where model theoretic entailment is not feasible. For instance, an MLN would need to define \(15k^{2}\cdot 237\) random variables on the dataset FB15k-237 (Toutanova & Chen, 2015) and a feature for every possible rule grounding with a ruleset size of 5 million. Even if we would just calculate the immediate predictions of the rules on this dataset, including storing some indices for further processing, this would already take more than 600GB of memory. A similar note can be made for neural theorem proving, where the forward-chaining algorithm is relaxed to a smooth differentiable function (Evans & Grefenstette, 2018; Rocktaschel & Riedel, 2017; Minervini et al., 2020a;b). To the best of our knowledge, these approaches have not yet been shown to scale to datasets of the size used in our experiments. This also holds for ProbLog (De Raedt et al., 2007) which combines probabilistic inference with model theoretic entailment and has the strongest resemblance to our approach. We discuss the details in Section 4.4 and in the appendix of this work.
The rule aggregation problem is discussed explicitly by SAFRAN (Ott et al., 2021), where a clustering of the rules is learned, and by Betz et al. (2022), who represent rules with embeddings. These works show improvements over simple strategies but they do not provide a fundamental treatment of the problem, and the models are inefficient to use, as will be demonstrated in the experimental section.
## 3 Background
### Knowledge Graph Completion
A KG \(\mathcal{G}\) is a set of _relation(subject, object)_ triples or facts with \(\mathcal{G}\subseteq\mathcal{E}\times\mathcal{P}\times\mathcal{E}\) where \(\mathcal{E}\) denotes a set of entities and \(\mathcal{P}\) a set of binary predicates which we term relations. KGC is concerned with finding unknown facts, given an input or training KG \(\mathcal{G}\). In this work, we focus on the most commonly used evaluation protocols, which are defined by ranking-based evaluation metrics. The derivations of this work are, however, independent of the evaluation protocol as long as scalar scores for candidate predictions are required.
The common practice is to split the graph into disjoint training, validation, and testing sets. After the training or mining phase, a model is evaluated by proposing answers to queries formed from the facts in the test set. For each of these evaluation facts a head query and a tail query are formed. For example, from \(worksFor(Anna,Google)\) the queries \(worksFor(Anna,?)\) and \(worksFor(?,Google)\) are formed, where \(worksFor\) is a relation and \(Anna\) and \(Google\) are entities. A model has to propose candidate facts for the tail query, e.g., \(worksFor(Anna,e_{1})\), and candidate facts for the head query, \(worksFor(e_{2},Google)\), for multiple \(e_{1},e_{2}\in\mathcal{E}\). Each candidate fact is assigned a score such that for each direction a ranking of answers can be formed. The metrics are usually presented in their filtered versions, e.g., if \(e_{2}\neq Anna\) but \(worksFor(e_{2},Google)\) exists in one of the data splits, then it is removed from the ranking of the current query so as not to penalize the model when it correctly ranks true answers at top positions. Performance is measured by the ranking position of the respective true candidate \(worksFor(Anna,Google)\) in both directions, with the mean reciprocal rank (MRR) and Hits@X being the most common evaluation metrics. The definitions of the metrics can be found in the appendix.
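As an illustration of this protocol, the following Python sketch computes a filtered rank for one query direction and the resulting MRR and Hits@k; the data structures and names are our own assumptions, not part of any specific library.

```python
def filtered_rank(scores, true_entity, known_answers):
    """Filtered rank of the true answer for one query (head or tail direction).
    `scores` maps candidate entities to model scores; candidates that are known
    true answers for this query (from train/valid/test) other than `true_entity`
    are ignored, so they do not penalize the model."""
    target = scores[true_entity]
    rank = 1
    for entity, score in scores.items():
        if entity != true_entity and entity not in known_answers and score > target:
            rank += 1
    return rank

def mrr_and_hits(ranks, k=10):
    """Mean reciprocal rank and Hits@k over a list of filtered ranks."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits
```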
### Rules and Application
We let a \(c\in\tilde{\mathcal{C}}\) denote a logical clause, which we will term rule throughout the work, where \(\tilde{\mathcal{C}}\) is a collection of clauses. The \(c\) will later be indexed and represented by separate random variables. The rules that we consider in this work are of the form as given in the running example. They are composed of variables and relations and they additionally can contain entities as shown in the following example.
\[speaks(X,English)\gets livesIn(X,London)\]
We call \(speaks(X,English)\) the head of the rule and \(livesIn(X,London)\) the body of the rule. The rules and the KG can be described with a subset of Prolog, where entities are constants, relations are predicates, rules are clauses, and the facts of the KG are ground atoms where we do not consider negation. We will use the rule learners AnyBURL (Meilicke et al., 2019) and AMIE3 (Lajus et al., 2020) in our experimental section and we refer to the respective works for further details, nevertheless, the descriptions and derivations in this work are independent of the particular syntax.
We define a substitution to be the expression obtained when replacing the variables of the rules with entities from \(\mathcal{E}\). For instance, for the first rule from the running example with \((X{=}Anna,Y{=}Google)\) we obtain the substitution \(worksFor(Anna,Google){\leftarrow}internsAt(Anna,Google)\). A detailed formalization is suppressed here for brevity.
Rule application refers to predicting previously unseen facts given a set of rules and the input or training KG. We can describe it compactly with the recently introduced concept of one-step-entailment (Betz et al., 2022). Let \(\tilde{\mathcal{C}}\) be a set of rules and \(\mathcal{G}\) a KG.
**Definition 3.1** (One-step entailment \(\models_{1}\)).: _The fact \(t\) is **one-step entailed** by \(\tilde{\mathcal{C}}\cup\mathcal{G}\), written as \(\tilde{\mathcal{C}}\cup\mathcal{G}\models_{1}t\), iff there
is a rule in \(\tilde{\mathcal{C}}\) for which a substitution exists such that the resulting body facts are in \(\mathcal{G}\) and the head is equal to \(t\)._
Clearly, one-step entailment is weaker but more efficient than model theoretic entailment. As mentioned before, we focus on settings where general entailment is not feasible. One-step entailment implies entailment but not vice versa.1 In the context of KGC, the less formal notion of an individual rule predicting a candidate is often used, which we can now describe precisely.2
Footnote 1: Note that \(\models_{1}\) is different to \(\bar{k}\)-entailment which limits the number of constants used in entailment (Kuzelka et al., 2018).
**Definition 3.2** (Prediction).: _A rule \(c\in\tilde{\mathcal{C}}\)**predicts** a fact \(t\) iff it individually one-step entails \(t\), i.e., iff \(\{c\}\cup\ \mathcal{G}\models_{1}t\)._
For simplicity, we will write \(c\models_{1}t\) instead of \(\{c\}\cup\ \mathcal{G}\models_{1}t\), where from the context the reference to the facts \(\mathcal{G}\) will be clear. The section concludes with an example.
**Example 3.3** (cont.).: _Let \(e_{d}\), \(e_{u}\), and \(e_{g}\) be entities in \(\mathcal{E}\). Let \(t=wf(e_{d},e_{g})\) and assume that_
\[\mathcal{G}=\left\{\begin{array}{c}cooperatesWith(e_{g},e_{u})\\ internAt(e_{d},e_{g})\\ studentAt(e_{d},e_{u})\end{array}\right\}.\]
_Consider the three rules from the running example. Then the joint set of rules and every pairwise set of rules one-step entail \(t\) while only the first and the third rule predict \(t\)._
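To make Definitions 3.1 and 3.2 concrete, the following is a small Python sketch of a naive check whether a single rule predicts a fact. The triple representation, the convention that variables are single upper-case letters, and all names are our own illustrative assumptions.

```python
def predicts(rule, t, G):
    """Naive check of Definition 3.2: does `rule` alone one-step entail fact t?
    A fact is a (relation, subject, object) tuple; a rule is a (head, body)
    pair of atoms whose arguments are entities or variables (single upper-case
    letters such as "X"). We unify the head with t and then search for a
    grounding of the body atoms in G by backtracking."""
    def is_var(x):
        return isinstance(x, str) and len(x) == 1 and x.isupper()

    def unify(atom, fact, sub):
        if atom[0] != fact[0]:
            return None
        sub = dict(sub)
        for arg, val in zip(atom[1:], fact[1:]):
            arg = sub.get(arg, arg)   # apply current bindings
            if is_var(arg):
                sub[arg] = val
            elif arg != val:
                return None
        return sub

    def ground_body(body, sub):
        if not body:
            return True
        for fact in G:
            new_sub = unify(body[0], fact, sub)
            if new_sub is not None and ground_body(body[1:], new_sub):
                return True
        return False

    head, body = rule
    sub = unify(head, t, {})
    return sub is not None and ground_body(body, sub)

# Mirroring Example 3.3 (predicate and entity names are illustrative):
G = {("cooperatesWith", "e_g", "e_u"), ("internsAt", "e_d", "e_g"), ("studentAt", "e_d", "e_u")}
c1 = (("worksFor", "X", "Y"), [("internsAt", "X", "Y")])
predicts(c1, ("worksFor", "e_d", "e_g"), G)  # True
```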
### Rule Aggregation
For the remainder of the work we assume that \(\tilde{\mathcal{C}}\) is a given ruleset that has been learned from the training graph \(\mathcal{G}\). Furthermore, for a target triple \(t\notin\mathcal{G}\) we let \(\mathcal{C}_{t}(\mathcal{G})\) denote the set of rules that predicted \(t\) with respect to the KG \(\mathcal{G}\). For performing KGC under any evaluation protocol a model has to assign plausibility scores to candidate facts. For rule-based KGC this requires the introduction of two additional concepts, rule confidences and aggregation strategies.
#### 3.3.1 Confidences
Rule confidences originate from the context of association rule mining and we will now assume that each rule in \(\tilde{\mathcal{C}}\) is assigned with a confidence which can be calculated as follows.
\[\textit{conf}(c)=\frac{\big{|}\{t^{\prime}\mid c\models_{1}t^{\prime}\wedge t ^{\prime}\in\mathcal{G}\}\big{|}}{\big{|}\{t^{\prime}\mid c\models_{1}t^{ \prime}\}\big{|}} \tag{1}\]
Equation (1) is the vanilla confidence definition described in many works (e.g., Galarraga et al., 2013). The confidence divides the number of all true predictions a rule makes by the number of all predictions of the rule. Intuitively, we could interpret this as the probability that the rule is true, which will be discussed in later sections.
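A minimal Python sketch of Equation (1), assuming a helper that enumerates all facts a rule predicts with respect to \(\mathcal{G}\) (the helper and its name are our own assumption):

```python
def confidence(rule, G, predict_all):
    """Equation (1): the fraction of a rule's predictions that are facts in G.
    `predict_all(rule, G)` is assumed to return the set of all facts that the
    rule one-step entails on its own with respect to G."""
    predictions = predict_all(rule, G)
    if not predictions:
        return 0.0
    return len(predictions & G) / len(predictions)
```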
#### 3.3.2 Aggregation Strategies
In practical scenarios it rarely occurs that a candidate fact is predicted by only one rule, i.e., typically \(|\mathcal{C}_{t}(\mathcal{G})|>1\). The rule aggregation problem, also termed joint prediction (Galarraga et al., 2015), is concerned with defining a function that maps the confidences of the rules that predicted the candidate to a real-valued score.
Note that the number of rules that predict a candidate fact simultaneously can be large, as mentioned before, such that rules are to some extent redundant. For instance, if the second rule from the running example predicts \(Anna\) to work for _Google_, the question arises whether the third rule provides additional evidence for this prediction. The rules make the prediction for seemingly similar reasons, as it is more likely for an university and a company to cooperate when they are located in the same location. In the following the two most common aggregation strategies are defined.
**Definition 3.4** (Max-Aggregation).: _The Max-Aggregation score \(s^{M}\) is calculated according to the rule with the highest confidence from the rules that predicted the candidate, \(s^{M}(t)=\max\{\textit{conf}(c)\mid c\in\mathcal{C}_{t}(\mathcal{G})\}\)._
Max-aggregation was first used in the context of KGC by Galarraga et al. (2015) and was later adapted to _Max+ aggregation_ (Meilicke et al., 2019), which allows for tie handling: when the predicting rules with the highest confidence are identical for two candidates, the candidates are compared according to the rules with the second highest confidence, and this is continued until the candidates can be discriminated.
**Definition 3.5** (Noisy-or aggregation).: _The Noisy-or score \(s^{NO}\) is calculated as the noisy-or product over the predicting rules, \(s^{NO}(t)=1-\prod_{c\in\mathcal{C}_{t}(\mathcal{G})}(1-conf(c))\)._
The Noisy-or product originates from Bayesian networks where it is used to express independent causes (Pearl, 1988) and it was proposed by Galarraga et al. (2015) for KGC.
**Example 3.6** (cont).: _Let us assume that \(Anna\) is predicted by all rules from the starting example to work for \(Google\), while \(Lisa\) is predicted by only the second and third rule to work for \(Google\). The Max-aggregation and Noisy-or scores for \(Anna\) are \(0.64\) and \(0.88\), respectively. For \(Lisa\) they are \(0.44\) and \(0.67\)._
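The two strategies are easy to state in code; the following Python sketch (names are ours) reproduces the numbers of Example 3.6.

```python
def max_aggregation(confs):
    """Definition 3.4: score of the single strongest predicting rule."""
    return max(confs)

def noisy_or(confs):
    """Definition 3.5: Noisy-or product over all predicting rules."""
    prod = 1.0
    for c in confs:
        prod *= 1.0 - c
    return 1.0 - prod

anna = [0.64, 0.44, 0.41]  # Anna is predicted by all three rules
lisa = [0.44, 0.41]        # Lisa is predicted by c2 and c3 only
max_aggregation(anna), round(noisy_or(anna), 2)  # (0.64, 0.88)
max_aggregation(lisa), round(noisy_or(lisa), 2)  # (0.44, 0.67)
```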
While the aggregation functions have the purpose of merging the various confidences into a final score, this value also should be meaningful in the sense that a higher value for one prediction should mean it is more likely than another prediction.
## 4 Probabilistic and Efficient Rule Aggregation
In the following section we present the notation for the probabilistic representation, subsequently we introduce the inference model and show how the introduced rule aggregation functions can be recovered from the framework when making certain dependency assumptions. Finally, we will present an efficient baseline, that combines these assumptions.
### Representation
First, we enumerate the rules in \(\tilde{\mathcal{C}}\) with an index set \(\tilde{I}=\{1,...,N\}\) such that \(c_{i}\in\tilde{\mathcal{C}}\) for \(i\in\tilde{I}\). Each rule \(c_{i}\) is represented by a binary random variable \(\tilde{R}_{i}\) which is also indexed by \(\tilde{I}\) and has realisations \(\tilde{r}_{i}\in\{0,1\}\). We let \(\mathbf{\tilde{R}}\) denote the random vector representing all rules and likewise \(\mathbf{\tilde{r}}=\left(\tilde{r}_{i}\right)_{i\in\tilde{I}}\in\{0,1\}^{N}\) is the vector of realisations. For brevity we write \(p(\mathbf{\tilde{r}})\) for \(p(\mathbf{\tilde{R}}\)=\(\mathbf{\tilde{r}})\), that is, the probability that \(\mathbf{\tilde{R}}\) takes value \(\mathbf{\tilde{r}}\).
For the rule aggregation problem the set of rules \(\mathcal{C}_{t}(\mathcal{G})\subseteq\tilde{\mathcal{C}}\) that predict a target fact \(t\) based on \(\mathcal{G}\) is of particular relevance. Therefore, similarly as above, \(\mathcal{C}_{t}(\mathcal{G})\) is enumerated by \(I=\{1,...,k\}\) and the random vector \(\mathbf{R}\) with realisations \(\mathbf{r}=(r_{j})_{j\in I}\in\{0,1\}^{k}\) represents the rules that predict the target. Note that \(\mathbf{R}\) represents a subset of all the rules and depends on \(t\); however, to not clutter notation we do not make this explicit and the reference to \(t\) will be clear from the context.
Moreover, we write \(p_{j}\) or \(p_{i}\) for the probability that a rule is true, i.e., for the marginals \(p(R_{j}\)=\(1)\) or \(p(\tilde{R}_{i}\)=\(1)\). We assume that index sets are ordered according to the marginals, e.g., \(p_{m}\geq p_{n}\) when \(m\leq n\) with \(m,n\) being indices. Facts \(t\) are likewise represented as binary variables, here we overload notation for brevity and write \(p(t)\) for the probability of a query triple to be true. For an observed triple \(t\in\mathcal{G}\) we set \(p(t)=1\).
### Dealing with Uncertainty
To incorporate uncertainty into the prediction of new facts we take the following approach. If we are certain that a rule is true, then we deduce that a prediction it makes must be also true. We can model this for all the learned rules with a conditional distribution that conditions on the truth values of the rules and the data.
\[p(t|\mathbf{\tilde{r}},\mathcal{G})=\left\{\begin{array}{ll}1,\;\text{if} \;L(\mathbf{\tilde{r}})\models_{1}t\\ 0,\;\text{else},\end{array}\right. \tag{2}\]
Here, \(L\) is a simple mapping that collects all rule objects in \(\tilde{\mathcal{C}}\) whose realisation are one in \(\mathbf{\tilde{r}}\) and takes the union with \(\mathcal{G}\), i.e.,
\[L_{\tilde{I}}^{\mathcal{G}}:\mathbf{\tilde{r}}\mapsto L_{\tilde{I}}^{\mathcal{ G}}(\mathbf{\tilde{r}})=\{c_{i}\mid\tilde{r}_{i}=1\;\text{and}\;i\in\tilde{I}\}\cup \mathcal{G}. \tag{3}\]
We drop, as shown in equation (2), the reference to the index set \(\tilde{I}\) and to \(\mathcal{G}\) from \(L\) for readability. Clearly, if the rules were not associated with uncertainty, evaluating equation (2) would boil down to performing rule application with respect to the correct rules. However, the truth values of the rules cannot be observed from the data.
We have, on the other hand, an estimate that statistically quantifies the uncertainty of the rules: the previously defined rule confidences. A confidence may serve as an approximation for the marginal probability that the respective rule is true, i.e., \(p(\tilde{R}_{i}{=}1)\). However, we have to acknowledge that it is only the marginal \(\sum_{\mathbf{\tilde{r}}_{-i}}p(\tilde{R}_{i}{=}1,\mathbf{\tilde{r}}_{-i})\), which sums over all realisations of the remaining rules, where \(\mathbf{\tilde{r}}_{-i}\) is the vector of realisations with \(\tilde{r}_{i}\) dropped.
The last paragraph makes the difference to the viewpoint of association rule mining explicit. In fact, we assume that \(p(\tilde{R}_{i}\)=\(1)\) is potentially influenced by an underlying joint distribution. For instance, the confidence of the rule \(c_{2}\) of the running example might be influenced by the confidence of \(c_{3}\) through the second term in the sum \(p(\tilde{R}_{2}\)=\(1)=p(\tilde{R}_{2}\)=\(1,\tilde{R}_{3}\)=\(0)+p(\tilde{R}_{2}\)=\(1,\tilde{R}_{3}\)=\(1)\). Therefore, for fact prediction associated with uncertainty we have to take into account the joint distribution over the rules which will be discussed in the next section.
### Inference for Target Facts
We want to calculate the probability that an unknown target fact \(t\notin\mathcal{G}\) is true, given the known triples, i.e., we seek to compute \(p(t|\mathcal{G})\). However, we cannot observe the truth values \(\mathbf{\tilde{r}}\) of the rules from the data and we therefore choose a standard approach regarding such settings, i.e., we marginalize over all possible rule realisations,
\[p(t|\mathcal{G})=\sum_{\mathbf{\tilde{r}}\in\{0,1\}^{N}}p(t|\mathbf{\tilde{r} },\mathcal{G})p(\mathbf{\tilde{r}}|\mathcal{G}). \tag{4}\]
Where we set \(p(t|\mathbf{\tilde{r}},\mathcal{G})\) to equation (2). We can simply calculate \(p(t|\mathbf{\tilde{r}},\mathcal{G})\) by collecting all rules that are one in \(\mathbf{\tilde{r}}\) and subsequently evaluate if one of these rules predicts the target, i.e., performing rule application. The distribution \(p(\mathbf{\tilde{r}}|\mathcal{G})\) seems to be more problematic. It defines the joint distribution over all \(N\) rules, given the data, including the rules that did not predict \(t\). Rule aggregation, however, was defined with only the \(k\) rules that predicted a candidate. We will argue in the following proposition that under one-step entailment for calculating \(p(t|\mathcal{G})\) it is indeed sufficient also under the probabilistic model to exclusively take into account the rules \(\mathbf{R}\) with realisations \(\mathbf{r}\) that predicted \(t\).
**Proposition 4.1**.: _Under a one-step entailment regime, i.e., using equation (2) for \(p(t|\mathbf{\tilde{r}},\mathcal{G})\), and a global distribution
\(p(\mathbf{\tilde{r}}|\mathcal{G})\) we have that_
\[p(t|\mathcal{G})=\sum_{\mathbf{r}\in\{0,1\}^{k}}p(t|\mathbf{r},\mathcal{G})p( \mathbf{r}|\mathcal{G}). \tag{5}\]
The proof is in the appendix. Instead of using the global distribution we can focus directly on performing marginal inference \(p(\mathbf{r}|\mathcal{G})\) with respect to the rules that predicted \(t\). Although marginal inference can equally be expensive, the complexity can be reduced if the joint distribution is specified accordingly and if some parameters of the joint are known such as the individual rule marginals. Additionally, it might even be beneficial to model \(p(\mathbf{r}|\mathcal{G})\) directly.
Note that Proposition (4.1) would not hold if we would consider general model theoretic entailment. Finally, by the definition of equation (2) and one-step entailment it is easy to see that the query probability is the probability that at least one rule from \(\mathbf{R}\) is true.
**Proposition 4.2**.: _For the query probability it holds that_
\[p(t|\mathcal{G})=p\big{(}\sum_{j\in I}R_{j}\geq 1\mid\mathcal{G}\big{)}. \tag{6}\]
Proof.: We write out \(p(t|\mathbf{r},\mathcal{G})\) in equation (5) and then drop the one term that is zero. The proposition follows from the definition of one-step-entailment as \(L(\mathbf{r})\) one-step entails the target if at least one component of \(\mathbf{r}\) is one. That means the probabilities of all realisations where at least one rule is true are summed up.
We will henceforth refer to calculating \(p(t|\mathcal{G})\) under the previous derivations when mentioning the inference model and we conclude the section with an example.
**Example 4.3** (cont).: _Lisa is predicted by the two rules \(c_{2}\) and \(c_{3}\) to work for Google. Assuming that we know the joint distribution over all rules, we can calculate the probability that Lisa works for Google by querying the joint distribution for the probability that at least one of \(c_{2}\) and \(c_{3}\) is true._
### Recovering Aggregation Functions
We will demonstrate in this section that the inference model leads to the different aggregation strategies depending on the assumed joint distribution when marginals are approximated with the rule confidences. Therefore we assume for the following derivations that \(p(\tilde{R}_{i}{=}1)=\textit{conf}(c_{i})\) for \(i\in\tilde{I}\).
#### 4.4.1 Probabilistic Max-Aggregation
Max-aggregation was introduced in the literature as a computational heuristic (Galarraga et al., 2015); it was further described as accounting for strong rule dependencies without a detailed treatment (Meilicke et al., 2019), or even as assuming fact independence (Svatos et al., 2020). We will now introduce the Frechet-Hoeffding bound which will help us achieve a formal derivation. It limits the possible association, expressed as correlation, of two random variables (Joe, 1997). Let \(p_{i}\) and \(p_{j}\) be the marginal probabilities of two Bernoulli variables; then it holds for the correlation \(\rho_{ij}\) that \(\rho_{ij}\leq U(i,j)\) where
\[U(i,j)=\min\bigg{\{}\bigg{(}\frac{p_{i}(1-p_{j})}{p_{j}(1-p_{i})}\bigg{)}^{1 /2},\bigg{(}\frac{p_{j}(1-p_{i})}{p_{i}(1-p_{j})}\bigg{)}^{1/2}~{}\bigg{\}}. \tag{7}\]
**Example 4.4** (cont).: _Let \(p_{1}=0.64\) and \(p_{2}=0.44\) then \(U(1,2)\approx 0.66\). Whereas for \(p_{3}=0.41\), \(U(2,3)\approx 0.94\)._
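A small Python check of Equation (7) and of the numbers in Example 4.4 (the function name is ours):

```python
import math

def frechet_upper(p_i, p_j):
    """Upper Frechet-Hoeffding bound on the correlation of two Bernoulli
    variables with marginals p_i and p_j (Equation 7)."""
    ratio = (p_i * (1 - p_j)) / (p_j * (1 - p_i))
    return min(math.sqrt(ratio), math.sqrt(1.0 / ratio))

round(frechet_upper(0.64, 0.44), 2)  # 0.66, as in Example 4.4
round(frechet_upper(0.44, 0.41), 2)  # 0.94
```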
While the configuration of the marginals in Example 4.4 allows for complex dependencies in regard to the joint distribution, they are not compatible with complete dependence as this would require unit correlation. Interestingly, equation (7) suffices to specify a joint distribution \(p(\mathbf{\tilde{r}}|\mathcal{G})\) such that the inference model from Section 4.3 performs Max-aggregation.
**Theorem 4.5**.: _If for the correlation matrix \(\Omega\in[-1,1]^{(N,N)}\) with entries \(\rho_{ij}\) for all \(i,j\) it holds that \(\rho_{ij}=U(i,j)\) then a unique distribution for \(p(\mathbf{\tilde{r}}|\mathcal{G})\) is induced such that \(p(t|\mathcal{G})=s^{M}(t)\)._
We briefly show the proof for the case where \(k=2\) rules predicted the candidate; the general case can be found in the appendix. Let \(p_{\tilde{i}}=1-p_{i}\) and let, e.g., \(p_{\tilde{i}j}=p(R_{i}{=}0,R_{j}{=}1|\mathcal{G})\) and likewise for the remaining realisations. Further note that the correlation is \(\rho_{ij}=\frac{p_{ij}-p_{i}p_{j}}{\sigma_{i}\sigma_{j}}\), where \(\sigma_{i}\) and \(\sigma_{j}\) are the respective standard deviations.
Proof (k=2).: Following Propositions (4.1) and (4.2), \(p(t|\mathcal{G})\) is equivalent to querying the joint distribution marginally for \(p(r_{i}+r_{j}\geq 1)\), assuming \(c_{i}\) and \(c_{j}\) predicted the target. We assume here that the global distribution exists and is unique. It therefore suffices to show that
\[\max\left\{p_{i},p_{j}\right\}=p_{ij}+p_{i\tilde{j}}+p_{\tilde{i}j}\;.\]
Assume w.l.o.g. that \(p_{i}\geq p_{j}\). Then after plugging in \(U(i,j)\) into \(\rho_{ij}\) and solving for \(p_{ij}\), we obtain \(p_{ij}=p_{j}\). However, by definition of the marginal it holds that \(p_{j}=p_{ij}+p_{\tilde{i}j}\) and therefore \(p_{\tilde{i}j}=0\). Then we have,
\[\max\left\{p_{i},p_{j}\right\} =\max\left\{p_{ij}+p_{i\tilde{j}},\;p_{ij}+p_{\tilde{i}j}\right\}\] \[=\max\left\{p_{ij}+p_{i\tilde{j}},\;p_{ij}\right\}\] \[=p_{ij}+p_{i\tilde{j}}\] \[=p_{ij}+p_{i\tilde{j}}+p_{\tilde{i}j}.\qed\]
**Example 4.6** (cont).: _For \(p_{1}=0.64\) and \(p_{2}=0.44\) we obtain \(p_{12}=0.44\), \(p_{\tilde{1}2}=0\), and \(p_{1\tilde{2}}=p_{1}-p_{2}=0.2\), leading to \(p(t|\mathcal{G})=0\cdot p_{\tilde{1}\tilde{2}}+1\cdot p_{12}+1\cdot p_{1\tilde{2}}+1\cdot p_{\tilde{1}2}=0.64\)._
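For two predicting rules this can be verified directly in Python; the sketch below (our own illustration) constructs the joint distribution induced by the bound, assuming \(p_{1}\geq p_{2}\), and recovers the Max-aggregation score.

```python
def max_corr_joint(p1, p2):
    """Joint distribution of two Bernoulli rule variables under the upper
    Frechet-Hoeffding bound (Theorem 4.5), assuming p1 >= p2."""
    p_11 = min(p1, p2)           # both rules true
    p_10 = p1 - p_11             # only rule 1 true
    p_01 = p2 - p_11             # only rule 2 true (zero under the bound)
    p_00 = 1.0 - p_11 - p_10 - p_01
    return {"11": p_11, "10": p_10, "01": p_01, "00": p_00}

joint = max_corr_joint(0.64, 0.44)
p_t = joint["11"] + joint["10"] + joint["01"]  # P(at least one rule is true)
# p_t == 0.64 == max(0.64, 0.44), reproducing Example 4.6
```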
We have specified a unique multivariate Bernoulli distribution \(p(\mathbf{\tilde{r}}|\mathcal{G})\) by simply defining a correlation matrix. Clearly, setting the \(N^{2}\) values of a correlation matrix is in general not sufficient for defining a distribution that has \(2^{N}\) parameters, and not every correlation matrix is admissible in the first place (Huber & Maric, 2019).
#### 4.4.2 Noisy-or Aggregation
To derive Noisy-or aggregation we have to make an assumption about the joint distribution that goes beyond pairwise interactions.
**Proposition 4.7**.: _If the \(N\) rules in \(p(\mathbf{\tilde{R}}|\mathcal{G})\) are mutually independent then \(p(t|\mathcal{G})=s^{NO}(t)\)._
It is trivial to derive the Noisy-or product from the inference model under the independence assumption and the proof is shown in the appendix for completeness.
The independence assumption of Noisy-or aggregation reveals the connection of the model from Section 4.3 to ProbLog (De Raedt et al., 2007). ProbLog assigns probabilities to logic programs, and inference is performed by aggregating all programs that logically entail a query under the assumption that the individual probabilities are independent. Two results are shown in the appendix that make the connection to the derivations here explicit. First, if the logical semantics of ProbLog were substituted with one-step entailment, then it would perform Noisy-or aggregation. Second, if we set up a ProbLog program with the rules \(\tilde{\mathcal{C}}\), the fact probability would always be equal to or larger than the Noisy-or probability. Note that the computational complexity of reasoning, as discussed earlier, also applies here. Finally, aggregating all the predicting rules with the Noisy-or product might not be optimal in the context of data-driven rule learning, where millions of rules can be partially redundant, as will be shown in the experimental section.
### Mixing Assumptions
Both of the aggregation approaches derived in Section 4.4 make strong assumptions in regard to the dependence structure of the joint distribution over the rules. Clearly this can lead to an overestimation or underestimation of the final probability when the assumptions fail. Intuitively, this gives rise to mixture distributions that make assumptions between mutual independence and maximal correlation. Along these lines, previous work proposes models that can express both approaches as special cases. These models are expensive to use, however, as they learn a clustering of all rules (Ott et al., 2021) or represent rules with latent embeddings (Betz et al., 2022). We will now present a simple approach that has been overlooked in the literature so far and likewise operates in between both assumptions.
**Definition 4.8**.: _(Noisy-or top-h) Let \(I^{*}\subseteq I\) be the subset of indices for the \(h\) predicting rules with the highest marginals. The Noisy-or top-h aggregation strategy calculates the final score according to \(s(t)^{NO_{h}}=1-\prod_{j\in I^{*}}(1-conf(c_{j}))\)._
The correlation assumption is revealed when considering that for decreasing \(h\) the approach converges to Max-aggregation which is stated more compactly in the final proposition of this section.
**Proposition 4.9**.: _For the score calculated with noisy-or top-h we have that \(s^{M}(t)\leq s^{NO_{h}}(t)\leq s^{NO}(t)\) where the equalities are achieved for \(h=1\) and \(h=k\), respectively._
The proposition immediately follows from the definitions of the approaches. Furthermore, instead of setting one value for \(h\) we can exploit the mixture property more fine-grained and set the value independently for relations and query-directions which will be discussed in the next section.
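A sketch of Definition 4.8 in Python (names are ours); it also illustrates Proposition 4.9, since \(h{=}1\) reduces to Max-aggregation and \(h{=}k\) to the full Noisy-or product.

```python
def noisy_or_top_h(confs, h):
    """Definition 4.8: Noisy-or product over the h predicting rules with the
    highest confidences."""
    prod = 1.0
    for c in sorted(confs, reverse=True)[:h]:
        prod *= 1.0 - c
    return 1.0 - prod

confs = [0.64, 0.44, 0.41]
noisy_or_top_h(confs, 1)           # 0.64  -> Max-aggregation
noisy_or_top_h(confs, len(confs))  # ~0.88 -> full Noisy-or
```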
## 5 Experiments
The goal of our experimental section is to analyse the predictive performance of the existing aggregation approaches, to evaluate how to efficiently exploit the overlooked Noisy-or top-\(h\) approach, and to give a potential user an overview of the performance-speed trade-off of more complex approaches. We abstain from comparing against the general KGC literature, which is not the focus of this work. The competitiveness of rule-based approaches is discussed in many works, and we refer to the recent literature for a summary Rossi et al. (2021); Sadeghian et al. (2019); Meilicke et al. (2023).
### Experimental Settings
We evaluate the aggregation techniques on the most common KGs from the KGC community. We use FB15k-237 Toutanova and Chen (2015), WNRR Dettmers et al. (2018), Codex-M Safavi and Koutra (2020), and Yago3-10 Dettmers et al. (2018). The datasets are downloaded from the LibKGE library Broscheit et al. (2020), and we use the same train, validation, and test splits as used throughout the literature, as well as the exact same evaluation protocol Rossi et al. (2021), which is described in Section 3.1.
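For readers who want to reproduce such numbers, the sketch below shows the generic shape of filtered MRR and Hits@k; it is our own simplification, assumes optimistic tie handling, and may differ in details from the exact protocol of Section 3.1.

```python
def filtered_metrics(test_queries, score_fn, known_answers, ks=(1, 10)):
    """Filtered MRR and Hits@k (a generic sketch, not the exact protocol of Section 3.1).

    test_queries: list of (query, true_answer) pairs
    score_fn: query -> dict mapping candidate answers to aggregated scores
    known_answers: query -> set of all answers known to be true (train/valid/test)
    """
    mrr, hits = 0.0, {k: 0 for k in ks}
    for query, answer in test_queries:
        scores = score_fn(query)
        true_score = scores.get(answer, 0.0)
        # Filtering: other known true answers do not count as competitors.
        rank = 1 + sum(
            1
            for cand, s in scores.items()
            if s > true_score and cand != answer and cand not in known_answers[query]
        )
        mrr += 1.0 / rank
        for k in ks:
            hits[k] += int(rank <= k)
    n = len(test_queries)
    return mrr / n, {k: v / n for k, v in hits.items()}
```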
We use AnyBURL Meilicke et al. (2019) and AMIE3 Lajus et al. (2020) to mine the rules \(\tilde{\mathcal{C}}\). For AnyBURL we use the same rulesets as used by Meilicke et al. (2021). For AMIE3 we searched for the hyperparameter configuration that yields the best results (see appendix).
We compare Max (MAX), Max+ (MAX+), Noisy-or (NO), and Noisy-or top-h aggregation (NO top-\(h\)). For Noisy-or top-h we investigate how one global value \(h=5\) performs over all datasets, and we additionally search for the best parameter on the validation set independently for relations and query directions (NO top-\(h^{*}\)), as described in Section 4.5. For AnyBURL we search over the values \(h\in\{1,4,\ldots,10\}\), where for \(h{=}1\) we use MAX+. For AMIE3 we additionally include \(h{=}k\), as AMIE3 learned smaller rule sets and overall a smaller number of rules predict the query candidates. We also include the two works concerned with the aggregation problem, SAFRAN (Ott et al., 2021) and the supervised sparse aggregator (SV) proposed by Betz et al. (2022b). We provide wall-clock times (Table 2) of the approaches for the larger datasets and the rulesets of AnyBURL. Further experimental details, the server architecture used, dataset statistics, and the overall number of learned rules can be found in the appendix.
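A minimal sketch of how the per-relation and per-direction values for NO top-\(h^{*}\) can be selected on the validation set is shown below; the function names are our own and only illustrate the search described above.

```python
H_GRID = [1, 4, 5, 6, 7, 8, 9, 10]  # search space used for the AnyBURL rules; h=1 falls back to MAX+

def select_h_per_bucket(valid_queries_by_bucket, scorer_for_h, validation_mrr):
    """Pick the best h independently for every (relation, query-direction) bucket.

    valid_queries_by_bucket: dict mapping (relation, direction) -> validation queries
    scorer_for_h: h -> scoring function that aggregates rule confidences with NO top-h
    validation_mrr: (queries, scoring function) -> filtered MRR on those queries
    """
    return {
        bucket: max(H_GRID, key=lambda h: validation_mrr(queries, scorer_for_h(h)))
        for bucket, queries in valid_queries_by_bucket.items()
    }
```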
### Results
Table 1 shows performance results and Table 2 shows runtimes for the rules from AnyBURL. Despite the fact that the datasets are quite different, NO top-5 performs surprisingly well: for the rules from AnyBURL it only falls short of MAX+ on the h@1 and MRR metrics on Yago3-10, while being faster on average and 1.6PP better on FB15k-237. Nevertheless, we observe in general that the best-performing specification might be dataset specific; e.g., for the rules from AMIE3, NO performs best on FB15k-237, although the results for these rule sets are significantly worse in general. A pragmatic approach is to simply learn the best value for \(h\) on the validation set, which, not surprisingly, always performs as well as or better than the second-best configuration, although the improvement is sometimes marginal.
Although SAFRAN and SV are superior on average in terms of performance, they are significantly slower. For instance, SAFRAN is outperformed on Codex-M by NO top-\(h^{*}\) while running approximately 55 times longer, and it is 0.8PP better on FB15k-237, where it runs more than 100 times longer. SV performs 0.3PP better on FB15k-237 while being 180 times slower, and it performs 0.9PP better on Codex-M with a running time that is 13 times slower.
To conclude, we observe that the aggregation method can have a significant impact on the overall performance of the mined rule sets. Furthermore, when runtime is a consideration, a simple approach might be the preferred choice of aggregation.
| Rules | Approach | FB15k-237 | WNRR | Codex-M | Yago3-10 |
|---|---|---|---|---|---|
| AnyBURL | MAX | 0.236 / 0.496 / 0.321 | 0.442 / 0.561 / 0.482 | 0.240 / 0.443 / 0.309 | 0.394 / 0.640 / 0.477 |
| AnyBURL | MAX+ | 0.246 / 0.506 / 0.331 | 0.457 / 0.574 / 0.497 | 0.248 / 0.452 / 0.317 | 0.498 / 0.691 / 0.566 |
| AnyBURL | NO | 0.251 / 0.499 / 0.333 | 0.391 / 0.560 / 0.446 | 0.219 / 0.427 / 0.290 | 0.367 / 0.628 / 0.456 |
| AnyBURL | NO top-5 | 0.260 / 0.524 / 0.347 | 0.458 / 0.578 / 0.499 | 0.243 / 0.461 / 0.317 | 0.486 / 0.697 / 0.560 |
| AnyBURL | NO top-\(h^{*}\) | 0.263 / 0.524 / 0.349 | 0.459 / 0.578 / 0.499 | 0.253 / 0.464 / 0.326 | 0.498 / 0.698 / 0.568 |
| AnyBURL | SAFRAN | 0.272 / 0.524 / 0.357 | 0.459 / 0.578 / 0.502 | 0.254 / 0.458 / 0.325 | 0.491 / 0.693 / 0.564 |
| AnyBURL | SV | 0.266 / 0.526 / 0.352 | 0.459 / 0.574 / 0.499 | 0.266 / 0.467 / 0.335 | - / - / - |
| AMIE3 | MAX | 0.167 / 0.384 / 0.236 | 0.414 / 0.511 / 0.445 | 0.191 / 0.383 / 0.255 | 0.350 / 0.592 / 0.431 |
| AMIE3 | MAX+ | 0.178 / 0.394 / 0.247 | 0.419 / 0.514 / 0.450 | 0.198 / 0.395 / 0.263 | 0.395 / 0.622 / 0.473 |
| AMIE3 | NO | 0.209 / 0.430 / 0.284 | 0.377 / 0.513 / 0.424 | 0.190 / 0.390 / 0.257 | 0.345 / 0.615 / 0.439 |
| AMIE3 | NO top-5 | 0.199 / 0.425 / 0.273 | 0.380 / 0.513 / 0.426 | 0.197 / 0.401 / 0.266 | 0.360 / 0.622 / 0.452 |
| AMIE3 | NO top-\(h^{*}\) | 0.217 / 0.439 / 0.292 | 0.419 / 0.514 / 0.450 | 0.199 / 0.407 / 0.269 | 0.401 / 0.625 / 0.479 |

Table 1: Results for the joint filtered MRR and Hits@X with rules from AnyBURL or AMIE3; each dataset column reports h@1 / h@10 / MRR.

| Approach | FB15k-237 | Codex-M | Yago3-10 |
|---|---|---|---|
| MAX | 1.1m | 5.5m | 4.1m |
| MAX+ | 3.1m | 10.4m | 4.2m |
| Noisy-or | 5.4m | 25.0m | 12.2m |
| Noisy-or top-5 | 1.5m | 6.6m | 4.3m |
| NO top-\(h^{*}\) | 13.9m | 1.27h | 1.01h |
| SAFRAN | \(\approx\)24h | \(\approx\)72h | \(>\)72h |
| SV | \(\approx\)42h | \(\approx\)16.5h | - |

Table 2: Runtimes in minutes (m) or hours (h) with rules from AnyBURL.

## 6 Conclusion

We have shown that the problem of rule aggregation for KGC can be expressed as marginal inference over a joint distribution over the rules, and we provided probabilistic interpretations for previously defined aggregation functions. Subsequently, we proposed a baseline that is slightly superior to previous simple methods while being efficient, and we found that more advanced models are expensive to use while only providing a small boost in predictive performance. Future work might build on these foundations by finding suitable ways of modelling the joint distribution over the rules. For instance, rules could be grouped according to syntactic similarity, distributions might be estimated from more advanced statistics such as pairwise confidences, or marginals could be approximated more rigorously.